Symmetry Analysis and Conservation Laws of a Generalized Two-Dimensional Nonlinear KP-MEW Equation

Lie symmetry analysis is performed on a generalized two-dimensional nonlinear Kadomtsev-Petviashvili-modified equal width equation. The symmetries and adjoint representations for this equation are given, and an optimal system of one-dimensional subalgebras is derived. Similarity reductions and exact solutions are obtained with the aid of the (G'/G)-expansion method, based on the optimal system of one-dimensional subalgebras. Finally, conservation laws are constructed by using the multiplier method.

The purpose of this paper is to study one such NLEE, namely, the generalized two-dimensional nonlinear Kadomtsev-Petviashvili-modified equal width (KP-MEW) equation [26], given by

(u_t + α(u^n)_x + β u_{xxt})_x + γ u_{yy} = 0. (1)

Here, in (1), α, β, γ, and n > 1 are real-valued constants. The solutions of (1) have been studied in various aspects; see, for example, the recent papers [26-28]. Wazwaz [26] used the tanh method and the sine-cosine method for finding solitary wave and periodic solutions. Saha [27] used the theory of bifurcations of planar dynamical systems to prove the existence of smooth and nonsmooth travelling wave solutions. Wei et al. [28] used the qualitative theory of differential equations and obtained peakon, compacton, cuspon, loop soliton, and smooth soliton solutions.

In this paper we obtain symmetry reductions of (1) using Lie group analysis [19-24], based on the optimal system of one-dimensional subalgebras. Furthermore, the (G'/G)-expansion method is employed to obtain some exact solutions of (1). In addition, conservation laws are derived for (1) using the multiplier method [29].

Symmetry Reductions and Exact Solutions of (1)

A vector field of the form

X = ξ^1(t, x, y, u) ∂_t + ξ^2(t, x, y, u) ∂_x + ξ^3(t, x, y, u) ∂_y + η(t, x, y, u) ∂_u, (2)

where ξ^i, i = 1, 2, 3, and η depend on t, x, y, and u, is a Lie point symmetry of (1) if

pr^(4) X [(u_t + α(u^n)_x + β u_{xxt})_x + γ u_{yy}] = 0 (3)

whenever (u_t + α(u^n)_x + β u_{xxt})_x + γ u_{yy} = 0. Here pr^(4) X [20] denotes the fourth prolongation of X. Expanding (3) and splitting on the derivatives of u, we obtain an overdetermined system of linear partial differential equations. Solving this system, one obtains four Lie point symmetries X_1, X_2, X_3, and X_4.

2.1. One-Dimensional Optimal System of Subalgebras. We now calculate the optimal system of one-dimensional subalgebras for (1) and use it to find the optimal system of group-invariant solutions of (1). We follow the method given in [20]. Recall that the adjoint transformations are given by the Lie series

Ad(exp(ε X_i)) X_j = X_j − ε [X_i, X_j] + (ε²/2!) [X_i, [X_i, X_j]] − ⋯,

where [X_i, X_j] is the commutator defined by [X_i, X_j] = X_i X_j − X_j X_i. We present the commutator table of the Lie symmetries and the adjoint representations of the symmetry group of (1) on its Lie algebra in Tables 1 and 2, respectively. These two tables are then used to construct the optimal system of one-dimensional subalgebras for (1). As a result, after some calculations, one can obtain an optimal system of one-dimensional subalgebras given by {νX_1 + aX_2 + bX_3, δX_1 + X_4}, where a, b ∈ R and ν, δ = 0, ±1.

2.2. Symmetry Reductions and Exact Solutions of (1). In this subsection we use the optimal system of one-dimensional subalgebras calculated above to obtain symmetry reductions and exact solutions of the KP-MEW equation.

Case 1. The symmetry X_1 + aX_2 + bX_3 gives rise to three invariants.

Table 1: Commutator table of the Lie algebra of equation (1).
Table 2: Adjoint table of the Lie algebra of equation (1).
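For completeness, the fourth prolongation pr^(4) X entering the determining equation (3) is generated by the standard recursive formula (see, e.g., [20]); restated here in LaTeX, with D_i the total derivative operator and J a multi-index over the independent variables t, x, y:

\mathrm{pr}^{(4)}X = X + \sum_{1\le |J| \le 4} \zeta_J \,\frac{\partial}{\partial u_J},
\qquad
\zeta_i = D_i\eta - (D_i\xi^j)\,u_j,
\qquad
\zeta_{J,i} = D_i\zeta_J - (D_i\xi^j)\,u_{J,j}.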
Now treating w as the new dependent variable and the remaining two invariants as new independent variables, the KP-MEW equation (1) transforms to a nonlinear PDE (8) in two independent variables. We now use the Lie point symmetries of (8) to transform it into an ordinary differential equation (ODE). Equation (8) has two translational symmetries, Γ_1 and Γ_2. The combination Γ_1 + Γ_2 of the two symmetries yields two invariants, which give rise to a group-invariant solution w = F(z). Consequently, using these invariants, (8) is transformed into a fourth-order nonlinear ODE (11). Integrating (11) twice and taking the constants of integration to be zero, we obtain a second-order ODE (12). Multiplying (12) by F', integrating once, and taking the constant of integration to be zero, we obtain a first-order ODE (13). One can integrate (13) by separating the variables. After integrating and reverting to the original variables, we obtain group-invariant solutions (14) of the KP-MEW equation (1) for arbitrary values of n, involving a constant of integration. For a particular choice of the constants in (14), with n = 2, the profile of the solution is given in Figure 1.

Case 2. The symmetry X_1 + X_4 gives rise to three invariants. Treating w as the new dependent variable and the remaining two invariants as new independent variables, the KP-MEW equation (1) transforms to equation (17). Equation (17) has a single Lie point symmetry, which yields two invariants giving rise to a group-invariant solution w = F(z). Consequently, using these invariants, (17) is transformed to a second-order Cauchy-Euler ODE (20), whose general solution involves two constants of integration, C_1 and C_2.

Exact solutions of (11) using the (G'/G)-expansion method. Let us consider solutions of (11) in the form

F(z) = Σ_{i=0}^{M} A_i (G'(z)/G(z))^i,

where G(z) satisfies the auxiliary equation

G''(z) + λ G'(z) + μ G(z) = 0, (23)

λ and μ are constants, and the homogeneous balance between the highest-order derivative and the highest-order nonlinear term appearing in (11) determines the value of M; A_0, ..., A_M are constants to be determined.

Consider n = 2. Application of the balancing procedure to the fourth-order ODE (11) yields M = 2, so the solution of (11) is of the form

F(z) = A_0 + A_1 (G'/G) + A_2 (G'/G)². (24)

Substituting (23) and (24) into (11) leads to an overdetermined system of algebraic equations. Solving this system of algebraic equations with the aid of Maple, we obtain the values of A_0, A_1, and A_2. Now using the general solution of (23) in (24), we have the following three types of travelling wave solutions of the KP-MEW equation (1). When λ² − 4μ > 0, we obtain the hyperbolic function solution (26), where z is the travelling wave variable, δ_1 = (1/2)√(λ² − 4μ), and C_1 and C_2 are arbitrary constants. The profile of the solution (26) is given in Figure 2. When λ² − 4μ < 0, we obtain the trigonometric function solution (27).

Consider n = 3. Again the application of the balancing procedure to the fourth-order ODE (11) yields M = 1, so the solution of (11) is of the form

F(z) = A_0 + A_1 (G'(z)/G(z)). (29)

Solving the resulting system of algebraic equations with the aid of Maple, we obtain the corresponding values of A_0 and A_1.
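For reference, the three branches used above follow from the general solution of the auxiliary equation (23). With δ_1 = (1/2)√(λ² − 4μ), δ_2 = (1/2)√(4μ − λ²), and C_1, C_2 arbitrary constants, the standard (G'/G)-expansion formulas read, in LaTeX form:

\frac{G'}{G} =
\begin{cases}
-\frac{\lambda}{2} + \delta_1\,\frac{C_1\sinh(\delta_1 z) + C_2\cosh(\delta_1 z)}{C_1\cosh(\delta_1 z) + C_2\sinh(\delta_1 z)}, & \lambda^2 - 4\mu > 0,\\
-\frac{\lambda}{2} + \delta_2\,\frac{-C_1\sin(\delta_2 z) + C_2\cos(\delta_2 z)}{C_1\cos(\delta_2 z) + C_2\sin(\delta_2 z)}, & \lambda^2 - 4\mu < 0,\\
-\frac{\lambda}{2} + \frac{C_2}{C_1 + C_2 z}, & \lambda^2 - 4\mu = 0.
\end{cases}

Substituting each branch into (24) (respectively (29)) yields the hyperbolic solution (26), the trigonometric solution (27), and a rational solution.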
A Combined Model Based on Feature Selection and WOA for PM2.5 Concentration Forecasting

As people pay more attention to the environment and health, PM2.5 receives more and more consideration. Establishing a high-precision PM2.5 concentration prediction model is of great significance for monitoring and controlling air pollutants. This paper proposes a hybrid model based on feature selection and the whale optimization algorithm (WOA) for the prediction of PM2.5 concentration. The proposed model includes five modules: a data preprocessing module, a feature selection module, an optimization module, a forecasting module, and an evaluation module. Firstly, the signal processing technique CEEMDAN-VMD (Complete Ensemble Empirical Mode Decomposition with Adaptive Noise and Variational Mode Decomposition) is used to decompose, reconstruct, identify, and select the main features of the PM2.5 concentration series in the data preprocessing module. Then, the AutoCorrelation Function (ACF) is used to extract the variables that have a relatively large correlation with the predictor, so as to select input variables according to the order of their correlation coefficients. Finally, a Least Squares Support Vector Machine (LSSVM) is applied to predict the hourly PM2.5 concentration, and the parameters of the LSSVM are optimized by WOA. Two experimental studies reveal that the performance of the proposed model is better than that of benchmark models, such as a single LSSVM model with default parameters, single BP neural networks (BPNN), the general regression neural network (GRNN), and some other recently reported combined models.

Introduction

In recent years, with the improvement of people's living standards, the problem of air pollution has also been increasing. This is especially serious in China [1,2]. In the north, industrial development has resulted in serious deterioration of air quality over the past several decades [3-5]. A recent report by the State Environmental Protection Administration stated that two out of every five cities in China failed to meet the residential-area air quality standard, exposing their populations to the risk of adverse health effects. As a major pollutant, PM2.5 has caused widespread concern across the country. PM2.5 refers to fine particles with a diameter not larger than 2.5 μm, which are extremely harmful to public health. There are two main sources of PM2.5 in the air. On the one hand, it mainly comes from the burning of fossil fuels, as in smelting, metal processing, and transportation [6,7]. On the other hand, it comes from the chemical reactions of NO2, CO, and SO2 in the atmosphere [8].

PM2.5 can also adsorb a variety of toxic pollutants, including heavy metals, volatile organic compounds, and carbonaceous materials. It has been reported that exposure to high concentrations of PM2.5 leads to an increase in cardiovascular and pulmonary diseases (e.g., [9,10]). According to the American Heart Association, in the United States alone, air contaminated with PM2.5 particles causes approximately 60,000 deaths per year. In addition, many epidemiological and panel studies have shown that a relationship exists between particulate matter (PM) in the air and the emergence of diseases such as impaired short-term cardiopulmonary function [11], cerebrovascular disease [12], respiratory disease [13], and lung cancer (e.g., [14]).
Furthermore, particles smaller than 0.1 μm are referred to as "ultrafine particles" or "nanoparticles". Experts from Nanjing University of Information Science and Technology have found that the concentration of ultrafine particles with a diameter of 0.01 to 0.1 μm is significantly increased in Nanjing. Most of the particles floating in the air can stay in the lungs and enter the bloodstream, which is also an important reason for the recurrence of asthma and chronic bronchitis [13]. Therefore, the study and control of PM2.5 is an urgent issue.

Many countries have established PM monitoring systems to monitor PM2.5 concentrations in real time, which provide early warnings through the analysis and prediction of data and help us adopt regulatory measures. However, due to the huge resource cost of establishing a monitoring site, or because a completed site is damaged by rain, human factors, and so on, monitoring data may be incomplete or flawed. Therefore, it is necessary to use methods and tools to analyze and model PM concentrations. For these reasons, this paper attempts to propose a combined model to accurately predict PM2.5 concentration.

In order to achieve high accuracy, the previous literature has proposed many predictive tools and methods for PM2.5 or other air pollutant concentrations [15,16]. These methods can be divided into two categories: deterministic methods described by the chemical transport model (CTM) [17], and statistically based predictive methods. CTM is the most conventional method for PM2.5 concentration prediction and requires the acquisition of meteorological factors. The data acquisition for CTM is difficult and costly, and its prediction accuracy is not satisfactory. Therefore, statistical methods [18] and machine learning [19] are widely used in the field of air pollutant prediction. The basic statistical methods mainly originate from multiple linear regression (MLR) and autoregressive integrated moving average (ARIMA) models [20]. However, due to the complex nonlinear relationship between PM2.5 and air quality [21], these two models cannot fit such nonlinearities, which causes the predicted values to differ from the actual values. With the rapid development of computer technology, a combined model using artificial intelligence methods not only has the advantages of low cost and high prediction accuracy, but also has nonlinear fitting capability, and so is well suited to the prediction of PM2.5.

Artificial neural networks (ANN) (e.g., [22-25]), grey models (GM), generalized linear regression models, and support vector regression (SVR) [26,27] are widely used artificial intelligence models for the prediction of PM concentration. In addition, the parameters in these models, such as those of ANN and SVR [26,27], have a great influence on the models' prediction performance [28]. Therefore, swarm intelligence optimization algorithms, such as the genetic algorithm (GA), particle swarm optimization (PSO) [29], the grey wolf optimizer (GWO), and cuckoo search (CS), have been used to optimize the parameters. After using these algorithms to optimize the model parameters, the models' accuracy increases and their robustness improves. Paschalidou A.K. et al. [30] used the multilayer perceptron (MLP) and radial basis function (RBF) techniques to forecast hourly PM10 concentrations in four urban areas of Cyprus (Larnaca, Limassol, Nicosia, and Paphos). Feng X. et al.
[31] proposed a hybrid model combining air mass trajectory analysis and wavelet transformation to improve the ANN forecast accuracy for daily average PM2.5 concentrations. Shi F. et al. [32] proposed a neural network model based on GWO, using PM2.5 data from 1 November to 22 November 2016 in Shanghai; the results show that it is much better than neural networks based on PSO, BPNN, and SVR. Yali F.U. et al. [33] proposed a hybrid model using an improved particle swarm optimization algorithm (IPSO) to optimize the number of hidden layer nodes and the weights of an extreme learning machine (ELM). Wang L. et al. [34] proposed a rolling statistical prediction scheme for PM2.5 concentration (DC-SVR) based on the distance correlation coefficient and SVR. Dai L. et al. [35] combined SVM and the PSO algorithm to construct an hourly PM2.5 concentration rolling prediction model, using the rolling model to predict the nighttime average, daytime average, and daily average concentrations of the next day. Gan K. et al. [36] proposed a new method based on the secondary-decomposition-ensemble learning paradigm; this model decomposes and reconstructs the raw data before prediction, and then predicts via an LSSVM model optimized by the chaotic particle swarm optimization algorithm (CPSO). Data collected over seven years in a city of northern Spain were analyzed using four different models (vector autoregressive moving average (VARMA), ARIMA, MLP neural networks, and SVM with regression), and simulations showed that the SVM model performs better than the other models when forecasting one month ahead and the following seven months [37]. Gualtieri G. et al. [38] forecasted PM10 hourly concentrations in northern Italy through self-organizing maps. Zhou Q. et al. [39] proposed a hybrid ensemble empirical mode decomposition-general regression neural network (EEMD-GRNN) model based on data preprocessing for one-day-ahead prediction of PM2.5 concentrations. Li W. et al. [40] used a hybrid model, cointegration theory-flower pollination algorithm-support vector machine (CI-FPA-SVM), to predict PM2.5 and PM10 concentrations. Ping G. et al. [41] proposed a framework, termed HML-AFNN, to analyse and forecast PM2.5 concentrations for a selected number of forward time steps, and so on [42]. From the above analysis, it is clear that prediction using hybrid models is already the trend in PM concentration forecasting. However, in prediction with a hybrid model, the computational cost is high because the input data are too large. If some technique could significantly reduce the input data without affecting the prediction performance, this would be a big breakthrough.

Most researchers do not focus on optimizing the input-output features or performing feature selection when they start to build their model [43]. Such models are unlikely to learn the essence of the time series, and thus there is a large gap between the predicted and actual values. We hope to learn the mapping between the best inputs and the best outputs in the PM2.5 series by adopting a fully automated machine learning method, so as to avoid artificially selecting the training relationship and to establish a more stable and accurate PM2.5 concentration forecasting model.
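For contrast with the nonlinear models surveyed above, a linear ARIMA baseline of the kind later used as a benchmark takes only a few lines of Python; the (p, d, q) order below is an illustrative assumption, not a tuned choice from this paper:

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

pm25 = np.loadtxt("pm25_hourly.csv")        # hypothetical input file
train, test = pm25[:2520], pm25[2520:]      # train/test split used in the paper

fit = ARIMA(train, order=(2, 1, 2)).fit()   # assumed order for illustration
forecast = fit.forecast(steps=len(test))    # linear model: cannot fit nonlinearity
mape = 100 * np.mean(np.abs((test - forecast) / test))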
ACF can find the dependence relationship between one time and other times in a time series. Hopefully this hidden input-output relationship can be given automatically by the ACF feature selection technique. LSSVM has a strong learning ability for nonlinear relations. The WOA is used to optimize the parameters of the LSSVM. Finally, the de-noised data are used to train the model. Therefore, we focus on using ACF and LSSVM, combined with WOA, to select good features for building a strong model. The established model can then be used for the prediction of PM2.5 concentration.

What is new about this paper is that feature selection is added to the general hybrid model, so that the computer can automatically select as few inputs as possible for any data set without affecting the final prediction performance.

The rest of this paper is organized as follows: Section 2 describes the basic methods (CEEMDAN, VMD, ACF, WOA, and LSSVM). Section 3 describes the two data sets and the experimental settings. Section 4 presents the comparative results, and Section 5 gives the conclusions and further study.

Methods

The basic structure of the proposed model, VCEEMDAN-SF-WOA-LSSVM, is presented in Figure 1. First, the signal processing technique CEEMDAN-VMD is used to decompose and reconstruct the PM2.5 concentration series; then ACF is used to extract the input variables. Finally, LSSVM is applied for prediction, and the parameters of LSSVM are optimized by WOA. The methods applied in the combined model are introduced as follows.

Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN)

In general, most data denoising methods perform well only when the signal meets certain requirements. For example, the wavelet decomposition approach requires non-stationary linear data, while the Fourier transform approach is mainly used to deal with smooth and cyclic data. EMD, developed by Huang et al. [44], is employed to decompose original signals into intrinsic mode functions (IMFs). Unfortunately, EMD suffers from mode mixing. Therefore, Wu and Huang [45] proposed the ensemble empirical mode decomposition (EEMD) method instead. Although EEMD achieves pronounced improvements and more stability, it is difficult to entirely neutralize the added noise. To overcome this drawback, Torres et al. [46] introduced an additional noise factor to adjust the noise level at each decomposition, making the reconstruction completely noise-free at a lower cost than EMD and EEMD. Details of CEEMDAN are given by Torres et al. [46].

Variational Mode Decomposition (VMD)

VMD can decompose a complex signal into K amplitude-modulated FM signals; it is a non-stationary signal processing method with a preset number of scales. In contrast to the recursive screening mode of EEMD [45] and EMD, the center frequency and bandwidth of each mode function are determined by iteratively searching for the optimal solution of the variational model. Finally, the frequency band of the signal is adaptively decomposed, and K band-limited intrinsic mode functions are obtained. VMD is therefore a completely non-recursive signal decomposition method. In addition, VMD has better noise robustness, and the number of components is much smaller than for EEMD and EMD through reasonable control of the convergence conditions. The basic principles of VMD can be found in Dragomiretskiy K. et al. [47].
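As an illustration of the CEEMDAN-VMD preprocessing chain, the following Python sketch decomposes the series with CEEMDAN and re-decomposes its highest-frequency IMF with VMD; the package names (PyEMD, vmdpy), the file name, the VMD parameters, and the choice of which components to drop are assumptions, since the paper does not spell out these details here.

import numpy as np
from PyEMD import CEEMDAN          # pip install EMD-signal
from vmdpy import VMD              # pip install vmdpy

pm25 = np.loadtxt("pm25_hourly.csv")   # hypothetical input file

imfs = CEEMDAN()(pm25)             # IMFs, ordered high -> low frequency

# VMD of the noisiest IMF: alpha = bandwidth penalty, tau = noise tolerance,
# K = number of modes, DC/init/tol as in the vmdpy reference implementation.
u, u_hat, omega = VMD(imfs[0], alpha=2000, tau=0.0, K=5, DC=0, init=1, tol=1e-7)

# Recombine everything except the component treated as noise; which component
# to drop is a modeling choice, not specified by the paper at this point.
denoised = imfs[1:].sum(axis=0) + u[1:].sum(axis=0)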
Autocorrelation Function (ACF)

Autocorrelations are statistical measures that indicate how a time series is related to itself over time. Autocorrelation coefficients are key statistics in time series analysis, used to evaluate the relationships among series values. The autocorrelation at lag 1 represents the correlation between the original series x_t and the same series moved forward by one period. The autocorrelation at lag k is defined by

ρ(k) = E[(x_t − μ)(x_{t+k} − μ)] / E[(x_t − μ)²], (1)

where μ is the true mean of the stochastic process.

Whale Optimization Algorithm (WOA)

Whales are the largest mammals in the world, and humpback whales are one of them. When a humpback whale seeks its target, it creates a bubble net that rises along a spiral path, swimming upward toward the water surface to capture the food in the center of the spiral bubble net. Inspired by this unique foraging behavior of the humpback whale, S. Mirjalili and A. Lewis [48] first proposed the meta-heuristic optimization algorithm WOA. The position update behavior of the WOA algorithm is divided into three kinds of behavior: (1) wandering foraging, in which artificial whales use the position of a random individual in the population to navigate for food; (2) encircling contraction, in which the spatial position is updated toward the best individual; and (3) spiral predation, in which the artificial whale swims toward the optimal individual X_best while following the trajectory of a logarithmic spiral, and its spatial position is updated again. The algorithm is shown in Figure 1C.

The specific steps of the WOA optimization algorithm are as follows (a code sketch is given after these steps):

1. Given a random number p ∈ (0, 1), if p < 0.5 and |A| ≥ 1, proceed to wandering for prey. Artificial whales use the position of a random individual in the population to navigate for food, and their spatial position is updated by

X_{t+1} = X_rand − A · D, (2)

where X is the position of the individual, t is the current number of iterations, and D = |C · X_rand − X_t| represents the distance of the current individual from a randomly chosen individual X_rand before the position update. The parameter A is a random number on the interval [−2, 2]. Furthermore, C is a random number on the interval [0, 2], which controls the influence of the random individual X_rand on the distance of the current individual X.

2. If p < 0.5 and |A| < 1, proceed to encircling prey. After the artificial whale finds the food, its spatial position is updated by

X_{t+1} = X_best − A · D, with D = |C · X_best − X_t|, (3)

where the position of the food is the position of the global optimal individual in the population, X_best.

3. If p ≥ 0.5, proceed to spiral catching of prey. While the artificial whale swims toward the optimal individual X_best, it also follows the trajectory of a logarithmic spiral, and its spatial position is updated by

X_{t+1} = D' · e^{bl} · cos(2πl) + X_best, (4)

where X_{t+1} is the position of the artificial whale after the current iteration update, D' = |X_best − X_t| indicates the distance of the individual X from the best individual X_best before the position update, b is a constant shaping the spiral trajectory, and l is a random number on the interval [−1, 1].

4. Substitute the optimized model parameters into the main model to calculate the fitness value.
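The three update rules can be condensed into a short sketch. The following Python implementation is a minimal illustration under stated assumptions: the linear decrease of a from 2 to 0 and the spiral constant b = 1 follow the standard WOA of Mirjalili and Lewis [48], and the fitness function is left abstract.

import numpy as np

def woa(fitness, dim, n_whales=30, iters=200, lb=-10.0, ub=10.0, b=1.0):
    # Whale Optimization Algorithm (minimization), following Eqs. (2)-(4).
    X = np.random.uniform(lb, ub, (n_whales, dim))
    best = min(X, key=fitness).copy()
    best_score = fitness(best)
    for t in range(iters):
        a = 2.0 * (1 - t / iters)                 # a decreases linearly 2 -> 0
        for i in range(n_whales):
            r1, r2, p = np.random.rand(3)
            A, C = 2 * a * r1 - a, 2 * r2         # |A| <= a, C in [0, 2]
            if p < 0.5:
                # encircling the best (Eq. (3)) or wandering to a random
                # individual (Eq. (2)), chosen by |A| as in standard WOA
                ref = best if abs(A) < 1 else X[np.random.randint(n_whales)]
                X[i] = ref - A * np.abs(C * ref - X[i])
            else:
                l = np.random.uniform(-1.0, 1.0)  # spiral predation, Eq. (4)
                X[i] = np.abs(best - X[i]) * np.exp(b * l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lb, ub)          # amend out-of-range agents
            s = fitness(X[i])
            if s < best_score:
                best, best_score = X[i].copy(), s
    return best, best_score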
Least Squares Support Vector Machines (LSSVM)

The support vector machine (SVM) is traditionally a two-class classification model. Its basic form is a linear classifier that maximizes the margin in the feature space. The SVM also includes the kernel technique, which makes it a substantially nonlinear classifier. The learning strategy of SVM is to maximize the margin, which can be formalized as a convex quadratic programming problem and is equivalent to the minimization of a regularized loss function. LSSVM, proposed by Suykens and Vandewalle, is a modification of the standard SVM. Compared to SVM, LSSVM uses a least-squares cost function, which results in solving a system of linear equations instead of a quadratic programming problem and thereby reduces the computational complexity [49].

For LSSVM, two parameters, c and σ², are considered the most important factors for forecasting accuracy.

LSSVM Optimized by WOA

In order to overcome the shortcomings of a single algorithm and improve the accuracy and stability of the prediction, this section uses the new optimization algorithm WOA to optimize the parameters of the LSSVM; its pseudocode is shown in Algorithm 1. The steps of the hybrid WOA-LSSVM model are as follows (a code sketch is given after the performance criteria below).

1. Initialize the parameters of the WOA and determine the objective function

f = (1/M) Σ_{i=1}^{M} (y_i − ŷ_i)², (5)

where M is the number of samples, and y_i and ŷ_i are the observed and predicted values of PM2.5, respectively.
2. Use WOA to iteratively optimize the parameters of the LSSVM.
3. Check whether the maximum iteration count or the preset error has been reached. If yes, go to step 4; otherwise, continue with step 2.
4. Set the optimal values obtained by WOA as c and σ² of the LSSVM. Finally, the preprocessed data are used as the input of the LSSVM to obtain the predicted values ŷ_i.

Algorithm 1: WOA-LSSVM, optimizing the parameters c and σ² of the LSSVM with WOA (the cleaned pseudocode is reproduced with the end matter).

Data Collection and Experimental Analysis

In order to verify the performance of the developed hybrid prediction model, two experiments are conducted in this section, and the related experimental data sets, evaluation indicators, and experimental designs are introduced.

Performance Estimation

In this subsection, five common performance criteria of forecast accuracy, including absolute error (AE), mean absolute error (MAE), mean square error (MSE), mean absolute percent error (MAPE), and the index of agreement (IA), are listed in Table 2, where N is the number of test samples, and y_i and ŷ_i represent the i-th observed and predicted values, respectively. In addition, ȳ is the average value of the sample. The roles of these error metrics are as follows: AE reflects positive and negative errors between predicted and observed values; MAE reflects the average level of error; MSE is the average of the squared prediction errors, which can be used to estimate the variability of forecasting models; MAPE is a statistical measure of the prediction accuracy of a forecasting method; IA is a useful measure of model performance that is sensitive to differences between observed and predicted sequences as well as to proportionality changes [50].
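To make the "linear system instead of quadratic program" point concrete, here is a hedged Python sketch of RBF-kernel LSSVM regression together with a WOA fitness function for tuning (c, σ²) against the objective of Eq. (5). It assumes the woa() helper sketched above; searching in log-space is an implementation choice, not part of the paper.

import numpy as np

def rbf(X, Z, sigma2):
    # RBF kernel matrix between the rows of X and Z
    d = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d / (2 * sigma2))

def lssvm_fit(X, y, c, sigma2):
    # LSSVM training: one (n+1) x (n+1) linear system for (b, alpha)
    n = len(y)
    A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                  [np.ones((n, 1)), rbf(X, X, sigma2) + np.eye(n) / c]])
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    b, alpha = sol[0], sol[1:]
    return lambda Xq: rbf(Xq, X, sigma2) @ alpha + b

def fitness(params, Xtr, ytr, Xval, yval):
    c, sigma2 = np.exp(params)                # keep both parameters positive
    pred = lssvm_fit(Xtr, ytr, c, sigma2)(Xval)
    return np.mean((yval - pred) ** 2)        # objective of Eq. (5)

# best, _ = woa(lambda p: fitness(p, Xtr, ytr, Xval, yval), dim=2, lb=-5, ub=5)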
Table 2. Forecasting evaluation criteria: IA, the index of agreement of the forecasting results; AE, the average forecasting error; MAE, the mean absolute forecasting error; and so on.

Testing Method

Although the above-mentioned criteria are recognized as important in assessing forecasting performance, statistical tests are used to assess the forecasting performance of a model from a statistical perspective. The main statistical testing methods include parametric tests [50] and non-parametric tests [51,52]. As a type of parametric test, the DM (Diebold-Mariano) test [50] is often used to compare prediction accuracy.

The hypothesis tests are

H0: E[d_i] = 0, (6)
H1: E[d_i] ≠ 0, (7)

where d_i = L(ε_i^(1)) − L(ε_i^(2)) is the loss differential between the two compared models. The DM test statistic equals

DM = d̄ / √(S²/N), (8)

where ε_i denotes the forecast error, N denotes the total number of predicted samples, d̄ denotes the mean of the d_i, S² denotes an estimate of the variance of d_i, and L denotes the loss function used to measure forecasting accuracy. Here, the loss function we use is the squared error loss.

The test statistic DM converges to the standard normal distribution. The null hypothesis is rejected if

|DM| > z_{α/2}, (9)

where z_{α/2} is the critical z-value and α is the significance level.

Experimental Setup

In order to validate the newly proposed model, two experiments are set up for comparative analysis. Firstly, Experiment I analyzes the newly proposed model VCEEMDAN-SF-WOA-LSSVM longitudinally by comparing it with seven benchmark models, to elaborate on the advantages of the newly proposed model. Then, Experiment II is designed to compare it with better-performing previous models for the prediction of PM2.5 concentration (VCEEMDAN-SF-CS-LSSVM, VCEEMDAN-SF-BPNN, VCEEMDAN-SF-GRNN, VCEEMDAN-CS-LSSVM [53], VCEEMDAN-BPNN [22,54], VCEEMDAN-GRNN (Zhou Q. et al. 2014) [39], BPNN, GRNN, ARIMA [55]). It is found that, after feature selection, only a small number of input features need to be selected to obtain higher prediction accuracy; it is also found that the WOA used in our model is better than some other meta-heuristic optimization algorithms, such as CS, for PM2.5 concentration prediction.

Experiment I

In this subsection, the performance of the newly proposed model is verified by comparing seven models (SF-WOA-LSSVM, VCEEMDAN-WOA-LSSVM, VCEEMDAN-SF-LSSVM, VCEEMDAN-LSSVM, WOA-LSSVM, SF-LSSVM, and LSSVM) as benchmark models with the newly proposed model on the two PM2.5 concentration data sets of Beijing and Yibin. The forecasting results are shown in Tables 3 and 4. According to the results of the eight different prediction models in Tables 3 and 4, the developed prediction model not only has high prediction performance (measured by the error criteria) but also achieves the highest accuracy in the direction measurement (IA). Therefore, we can conclude that our hybrid prediction model based on feature selection (SF) and WOA is more suitable for PM2.5 concentration forecasting than the other seven models that do not use these techniques.
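A direct transcription of the test as described above (squared-error loss, simple variance estimate, normal critical values) might look as follows in Python; scipy is assumed.

import numpy as np
from scipy.stats import norm

def dm_test(e1, e2, alpha=0.05):
    # loss differential d_i with L = squared error, as in the text
    d = np.asarray(e1) ** 2 - np.asarray(e2) ** 2
    n = len(d)
    dm = d.mean() / np.sqrt(d.var(ddof=1) / n)    # DM statistic of Eq. (8)
    reject = abs(dm) > norm.ppf(1 - alpha / 2)    # two-sided rule of Eq. (9)
    return dm, reject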
Feature Selection

The results of PACF feature selection on the Beijing data set are shown in Figure 3. It can be seen from Figure 3 that the strongest lag variable is the first-order lag, whose partial correlation coefficient reaches 0.9822. Next is the second-order lag, whose partial correlation coefficient drops to 0.5358. Figure 3b shows the PACF scores for 480 lag variables, but only the first 34 lags exceed the minimum limit. Ranking the absolute values from large to small, the seventh partial correlation coefficient has already dropped to 0.0686, where the partial correlation is very weak. Therefore, we choose the first seven lag variables with the highest partial correlations: lag1, lag2, lag3, lag64, lag65, lag4, and lag24.

The results and process of ACF feature selection in Beijing are shown in Figure 4. Figure 4a shows the autocorrelation values of the initial candidate variables in Beijing. We can see that the first linear correlation is the strongest and the others are relatively weak. The strongest linear correlation is at lag1, and the second strongest is at lag2. Since the peak at lag1 is the highest, the first peak is important, and we should choose this variable as an input variable. In addition, the ACF graph also reflects daily and weekly cycles, which confirms the importance of feature selection for predicting future PM2.5 concentrations.

The results of PACF feature selection in Yibin are shown in Figure 5. Figure 5a shows that the lag variable with the strongest partial correlation is the first-order lag, whose partial correlation coefficient reaches 0.9894. The second partial correlation is also strong, with a coefficient of 0.6511. The third strongest is the third-order lag, with a partial correlation coefficient of 0.1098. Figure 5b shows the PACF scores for 480 lag features, but only the first 45 lags exceed the limit. Ranking the absolute values from large to small, the sixth partial correlation coefficient has dropped to 0.0881, and the partial correlation becomes weak after that. Therefore, we choose the top eight lag variables with the highest partial correlations as the input variables: lag1, lag2, lag3, lag4, lag17, lag8, lag6, and lag15.

The results and process of ACF feature selection in Yibin are shown in Figure 6. Figure 6a shows the autocorrelation values of the initial candidate variables in Yibin. We can see that Yibin's data are relatively stable. The first linear correlation is the strongest, and the others are relatively weak. Similarly, the strongest linear correlation is at lag1, and the second strongest is at lag2. Since the peak at lag1 is the highest, the first peak is very important, which suggests that we should choose this variable as an input variable.
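The ACF/PACF-based input selection described here amounts to ranking lags by absolute coefficient and keeping the top few. A sketch, under the assumption that statsmodels is available:

import numpy as np
from statsmodels.tsa.stattools import acf, pacf

def select_lags(series, n_lags=480, k=7, method="acf"):
    # rank lags 1..n_lags by |coefficient| and keep the k strongest
    coefs = acf(series, nlags=n_lags) if method == "acf" else pacf(series, nlags=n_lags)
    order = np.argsort(-np.abs(coefs[1:])) + 1   # skip lag 0
    return order[:k]                             # e.g. lag1, lag2, ... for Beijing

def make_supervised(series, lags):
    # build the lagged input matrix X and the target vector y
    T = max(lags)
    X = np.column_stack([series[T - l: len(series) - l] for l in lags])
    return X, series[T:]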
Forecast Results and Analysis

In order to show the efficiency of the newly proposed model, we remove individual modules to construct comparison models and predict the PM2.5 concentration in Beijing and Yibin for the coming week. The prediction results for Beijing are shown in Figure 7. We can see that the prediction curve of our new model closely follows the original data curve. The specific evaluation results are shown in Table 3. It can be seen from Table 3 that the proposed model VCEEMDAN-SF-WOA-LSSVM achieves a MAPE of 11.34, far lower than that of the other models. Its accuracy is improved by 43.77% compared with that of VCEEMDAN-WOA-LSSVM, which lacks feature selection. In addition, comparing the newly proposed model with SF-WOA-LSSVM shows that denoising the original data yields a certain improvement in prediction accuracy, although the effect is not particularly pronounced. Furthermore, the results of the newly proposed model and VCEEMDAN-SF-LSSVM show that the optimization algorithm WOA has a large impact on the model prediction: after using WOA, the prediction accuracy of the model improves by 12.84%. Finally, compared with the prediction results of VCEEMDAN-LSSVM, the evaluation index MAPE improves by 59.11%, which is enough to demonstrate the high importance of feature selection and optimization algorithms for the model's prediction results. The prediction results for Yibin are shown in Figure 8. We can see that the predictions of our newly proposed model essentially coincide with the real data. The specific evaluation results are shown in Table 4. The quantitative indicators show that our newly proposed model is quite accurate for the 168 data points of the next week: its MAPE reaches 6.15, surpassing the prediction accuracy of the models proposed in the existing literature. IA is the index of agreement of the predictions, which is better the closer it is to 1; the IA of our proposed model reaches 0.9940, which means that our new model is very well suited to predicting PM2.5 concentration. In terms of MAPE, our proposed model VCEEMDAN-SF-WOA-LSSVM improves by 3.91%, 53.90%, 21.65%, 67.94%, and 68.28% compared with the models SF-WOA-LSSVM, VCEEMDAN-WOA-LSSVM, VCEEMDAN-SF-LSSVM, VCEEMDAN-LSSVM, and LSSVM, respectively. It can be seen that the benchmark models with the largest MAPE gaps do not adopt feature selection techniques or optimization algorithms, which further reflects the contribution of feature selection and the optimization algorithm to the model's prediction accuracy.

The statistical test results of Experiment I are shown in Table 5. As can be seen from Table 5, the p-value for VCEEMDAN-SF-WOA-LSSVM versus VCEEMDAN-WOA-LSSVM is less than 0.025. Therefore, with probability greater than 95% we reject the null hypothesis, and there is a significant difference between the two models. This result demonstrates once again, from a statistical perspective, the importance of feature selection. When WOA is removed, the p-value comparing VCEEMDAN-SF-WOA-LSSVM and VCEEMDAN-SF-LSSVM is also much less than 0.025, which reflects the importance of the optimization algorithm. The experiments show that our proposed hybrid model with feature selection and an optimization algorithm has the best predictive performance and strong stability.
Experiment II

By comparing with some of the best PM2.5 prediction models, we conducted this experiment on two completely different data sets to show that our newly proposed model (VCEEMDAN-SF-WOA-LSSVM) is superior to the best-performing models for PM2.5 prediction. The prediction results on the Beijing data set are shown in Figure 9. It can be seen intuitively that our newly proposed model fits the real data best, while ARIMA is the worst among the comparison models. It can be seen from Figure 9a that, after feature selection, the prediction results of the models based on BPNN and GRNN are greatly improved, which indicates the high importance of feature selection in the modeling. It is found from Figure 9b that when the optimization algorithm WOA is replaced by CS, there is a significant difference between the prediction curve and the real-value curve, and the model's predictive ability is even worse without feature selection. The quantitative evaluation indicators are shown in Table 6. It can be seen that when the optimization algorithm is replaced by cuckoo search (CS), the MAPE value increases to 13.38, which means that WOA performs better than CS and is more suitable for PM2.5 concentration prediction. In addition, BPNN is also serviceable for predicting PM2.5 concentration; in particular, after adding the feature selection technique, the MAPE value decreases by 56.13% when comparing VCEEMDAN-SF-BPNN with BPNN, which is also illustrated in Figure 9a. Similarly, when predicting with GRNN, the MAPE value improves by 38.42% after adding feature selection. In general, the MAPE of our newly proposed model VCEEMDAN-SF-WOA-LSSVM improves by 15.24%, 60.54%, 46.31%, 65.06%, 50.19%, and 71.25% compared with the models VCEEMDAN-CS-LSSVM, VCEEMDAN-BPNN, VCEEMDAN-GRNN, BPNN, GRNN, and ARIMA, respectively. The prediction results on the Yibin data set are shown in Figure 10 and Table 7. As can be seen from Figure 10, the ARIMA model again gives the worst prediction, which further indicates that a linear model is not suitable for the prediction of PM2.5 concentration. Our new model is much more effective than the other models. Figure 10a again demonstrates the high performance of feature selection. It can be seen from Figure 10b that the prediction curves of the models with feature selection technology and WOA lie near the real data curve, which further confirms the high performance of the feature selection technique and the optimization algorithm. From Table 7, it can be seen that the MAPE values of the first four models, all of which use feature selection, are lower than 15, which fully indicates the importance of feature selection for the prediction results. Finally, the MAPE of our newly proposed model VCEEMDAN-SF-WOA-LSSVM for the prediction of PM2.5 concentration in Yibin for the next week reaches 6.15, far lower than the other seven benchmark models, making it the model with the best predictive performance so far. Table 8 gives the statistical test results of the proposed model against the benchmark models. It can be seen from Table 8 that after replacing WOA with CS, the performance of the two models differs significantly on both data sets, which indicates that the optimization performance of WOA is better than that of CS for PM2.5 prediction. From the comparisons of VCEEMDAN-SF-WOA-LSSVM vs. VCEEMDAN-SF-BPNN, VCEEMDAN-SF-WOA-LSSVM vs. VCEEMDAN-SF-GRNN, VCEEMDAN-SF-WOA-LSSVM vs.
BPNN, and VCEEMDAN-SF-WOA-LSSVM vs. GRNN, it is found that the p-values are all less than 0.05, which indicates that there are significant differences between the compared models. Together with the comparison of the evaluation indicators, we can conclude that our proposed model has the better forecasting performance for PM2.5 prediction.

Conclusions and Future Study

This paper proposes a new combined LSSVM-based forecasting model that integrates the CEEMDAN, VMD, SF, and WOA algorithms with the LSSVM model, namely VCEEMDAN-SF-WOA-LSSVM. In the empirical studies from the two different perspectives of Experiment I and Experiment II, the proposed model achieved the best prediction results compared with single AI models and other hybrid models. To overcome the inherent shortcomings of LSSVM parameter selection, a new optimization algorithm (WOA) is adopted for parameter optimization. Through feature selection, the input variables used to build the model are chosen so that the whole model runs faster and at lower operating cost. Finally, the proposed VCEEMDAN-SF-WOA-LSSVM model achieves significant prediction accuracy on the two data sets.

The most important contribution of this research is the newly proposed hybrid model, which can be used for accurate PM2.5 prediction. Some interesting findings are also worth stating here. Firstly, given the great influence of the model parameters c and σ² on the prediction performance, using a swarm intelligence optimization algorithm to optimize the parameters of the LSSVM is a good idea; this paper combines the powerful optimization ability of WOA with the good prediction performance of LSSVM, further reducing the prediction error of the model. Secondly, in view of the shortcomings of general combined models, such as long running times and high cost, a feature engineering method is used to reduce the number of input variables, which reduces the running time and computational cost of the whole model. Finally, the proposed model can perform automatic prediction if the computer's computing ability allows: as long as the corresponding raw data are given, our model can automatically find the highly relevant variables as input variables and predict automatically, without manual intervention. In other words, our model VCEEMDAN-SF-WOA-LSSVM is a fully automated machine learning model. PM2.5 prediction is very useful for human health and environmental management, but it is a challenging job. Abandoning the traditional linear model and noting that nonlinear relationships exist inside the data time series, nonlinear fitting is the necessary choice for prediction. However, the factors causing PM2.5 are extremely complex, including geographical factors, climate, temperature, rainfall, humidity, etc.; predicting the concentration of PM2.5 based solely on historical data will inevitably limit the accuracy of the forecast. Therefore, future research could start from the PM2.5 generation process and take into account the various factors causing the increase of PM2.5. In other words, studying how to explore suitable and reasonable components to build a model may be a future research direction. In addition, although our model achieves the best prediction accuracy so far, it requires more computation and longer training time than a single model; however, with the rapid development of computers, this problem can now be overcome. Finally, an interesting potential direction is to further improve and optimize performance by applying this new hybrid
model on other complex real problems.

Algorithm 1: WOA-LSSVM (cleaned pseudocode)
Input: x, the training time series. Output: the forecasting data x(t+1), ..., x(t+q+d); the LSSVM parameters.
Iter_Max: the maximum number of iterations; n: the number of whales; F_i: the fitness of the i-th whale; x_i: the position of the i-th whale; it: the current iteration number; dim: the number of dimensions.
/* Set the parameters of WOA. */
/* Initialize the population of n whales. */
while it < Iter_Max do
  for each whale do
    Update a, A, C, l and p
    if p < 0.5 then
      if |A| < 1 then
        /* Update the position of the current search agent (encircling). */
        X_{t+1} = X*_t − A · D
      else
        Select a random search agent X_rand
        /* Update the position of the current search agent (wandering). */
        X_{t+1} = X_rand − A · D
      end if
    else
      /* Update the position of the current search agent (spiral). */
      X_{t+1} = D' · e^{bl} · cos(2πl) + X*(t)
    end if
  end for
  /* Check if any search agent goes beyond the search space and amend it. */
end while
Set the parameters of the LSSVM according to X*; use x to train the LSSVM and update its parameters; input the historical data into the LSSVM to obtain the forecasting value ŷ.

Figure 1. The flowchart and components of the proposed combined model to forecast PM2.5 in the next week.

3.1. Data Description. Data sets from two locations in Beijing and Yibin, China, were used to verify the performance of the proposed model. Beijing (116° E, 40° N) is located in northern China, with less rainfall and relatively poor air quality. Yibin (104.62° E, 28.77° N) is located in central China, with adequate rainfall and good air quality. The curves of the original PM2.5 concentration data in the two areas are shown in Figure 2. It can be seen from Figure 2 that the PM2.5 concentration values in the two regions differ significantly, but both are periodic. Using the PM2.5 data from these two places to verify the performance of the model is therefore more representative. These two data sets consist of hourly PM2.5 data from 5 January 2015 to 26 April 2015, 2688 points in total, of which the first 2520 are used as the training set. After feature selection, the seven most relevant variables are chosen as model inputs to predict PM2.5 concentrations at the 168 points of the following week. See Table 1 for basic information on the data sets.

Figure 2. Raw data in Beijing and Yibin, China.
Figure 3. PACF for each time lag variable and ranked PACF in Beijing. (a) PACF for each time lag variable; (b) ranked PACF. The closer the value is to 1, the greater the partial correlation; conversely, the closer the value is to 0, the smaller the partial correlation.
Figure 4. ACF of time lag variables and ranked ACF result in Beijing. (a) ACF for each time lag variable; (b) ranked ACF. The closer the value is to 1, the greater the autocorrelation; conversely, the closer the value is to 0, the smaller the autocorrelation.
Figure 5. PACF for each time lag variable and ranked PACF in Yibin. (a) PACF for each time lag variable; (b) ranked PACF. Its interpretation is similar to Figure 3.
Figure 6. ACF of time lag variables and ranked ACF result in Yibin. (a) ACF for each time lag variable; (b) ranked ACF. Its interpretation is similar to Figure 4.
Figure 7. The forecast results for Beijing in the next week (20 April-26 April 2015), highlighting the prediction accuracy of the hybrid model compared with the models without VCEEMDAN, SF, or WOA.
Figure 8. The forecast results for Yibin in the next week (20 April-26 April 2015), highlighting the prediction accuracy of the hybrid model compared with the models without VCEEMDAN, SF, or WOA.
Figure 9. The forecast results in Beijing: (a) demonstrates the SF effects on the general regression neural network (GRNN) and BP neural network (BPNN) models; (b) illustrates the superiority of SF and WOA in the proposed combined model.
Figure 10. The forecast results in Yibin: (a) demonstrates the SF effects on the GRNN and BPNN models; (b) illustrates the superiority of SF and WOA in the proposed combined model.

Table 1. The basic statistical information of the PM2.5 raw data in Beijing and Yibin, China.
Table 3. Experiment I forecasting results in Beijing.
Table 4. Experiment I forecasting results in Yibin.
Table 5. Results of the DM test for Experiment I. * indicates that the test rejects the null hypothesis at α = 0.025.
Table 6. Experiment II forecasting results in Beijing.
Table 7. Experiment II forecasting results in Yibin.
Table 8. Results of the DM test for Experiment II. ** indicates that the test rejects the null hypothesis at α = 0.025.
Development of a robot that recognizes active landmarks by image processing and performs autonomous movement

We propose a robot that recognizes active landmarks by image processing for autonomous movement in a warehouse. With the proposed method, an autonomous mobile system can be introduced easily and at low installation cost. Specifically, we designed the landmarks that guide the autonomous movement to be easily installed anywhere, rather than fixed in place as in conventional systems. The robot recognizes and follows the landmarks using image processing technology. We developed two landmarks and one autonomous mobile robot and conducted experiments. The experiments showed that the maximum distance at which the robot could recognize and follow a landmark was 60 m. Moreover, it was confirmed that the robot can move autonomously and accurately regardless of how the landmarks and the robot are arranged.

Introduction

In recent years, the use of Internet mail-order sales has increased worldwide, giving us various benefits in everyday life. People all over the world can easily order what they want using the Internet, and a distribution system that delivers to the house within a few days has been established. In order to support these convenient systems, huge amounts of goods must be managed and transported accurately in the warehouses that hold inventory. Therefore, in the warehouse, robots move autonomously on behalf of human beings and carry the goods.

Two methods can be cited as representative approaches to conventional autonomous movement. The first is a technique called line tracing [1]. This enables autonomous movement by sticking tape along the path the robot moves and having the robot recognize that tape. The second method is a robot that performs autonomous movement by scanning the surroundings with a laser to create a map [2]. The first method, line tracing, is the simplest; its merit is that the robot can follow the route accurately. Its disadvantage is that installing the system or changing the route is costly, since tape must be attached to and peeled off the floor. There is also the restriction that obstacles must not be placed on the route. The disadvantage of the second method is that data must be acquired again with the laser whenever the route changes. The common disadvantage of these two methods is that the place of use is limited. It is also difficult with current methods to move autonomously in arbitrary environments, such as scenes where humans and robots work together.

Therefore, we focused on a point common to the two conventional methods in order to develop a new autonomous mobile system: in both methods, the tapes, buildings, etc.
that serve as markers for the autonomous movement are passive markers that do not emit light or signals themselves. Here we define such a passive sign as a passive landmark. Because it is passive, a landmark cannot send a signal to the robot by itself, so the route cannot be changed freely. We therefore thought that markers which can send and receive signals themselves could overcome the problems of the conventional methods; we define such active marks as active landmarks, in contrast to passive landmarks. In this research, we develop an autonomous mobile robot that can follow active landmarks, which light up in red and exchange information by wireless communication, by applying image processing to the camera image; the aim is to reduce the cost of introducing the system.

Principle

This section explains the principle by which the robot recognizes the red light and follows the landmark.

OpenCV

In this research, a library called OpenCV was used for image processing. OpenCV is an open-source library that contains many algorithms related to image processing, computer vision, mathematical processing, and machine learning, and it implements the various functions necessary for processing images and videos. The image processing described below was executed using OpenCV [3], [4], [5].

Conversion of RGB color space to HSV color space

The conversion from the RGB color space to the HSV color space is expressed by Equation (1). The hue H varies from 0.0 to 360.0 and is expressed as an angle along the color wheel on which the hue is indicated. The saturation S and the lightness V vary within the range 0.0 to 1.0. Converting to the HSV color space has the merit that setting the threshold for discriminating colors becomes easy.

Binarization processing

Processing that generates a binary image from a grayscale or color image based on appropriate conditions is called binarization processing. Fig. 1 shows the original image, and Fig. 2 shows the binarized image. By performing binarization processing in this way, it is possible to extract only the target color in the image. A connected region of pixels in this image is called a blob. Next, the processing applied to the blob is described.

Finding the center of a blob

If we regard the blob as a single uniform plate, we can find its center of gravity. With (x̄, ȳ) the coordinates of the center of gravity, (x, y) the coordinates of a white pixel, f(x, y) the pixel value of the input image at (x, y), and A the area of the blob, the center of gravity is

x̄ = (1/A) Σ x f(x, y),  ȳ = (1/A) Σ y f(x, y).

The center of gravity can be used when tracking the target object.

Particle filter

The particle filter is a method for predicting time-series data by a probability distribution, also called the sequential Monte Carlo method. In this research, the red color is detected using a particle filter and is followed in real time. Briefly, a number of possible subsequent states of the current state are represented by a large number of particles, and the weighted average calculated according to the likelihoods of all the particles is taken as the next state. Real-time object recognition becomes possible by using a particle filter.
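The detection steps of this section (HSV conversion, binarization, blob centroid) and the direction decision used later by the robot can be sketched in Python with OpenCV as follows. The HSV thresholds, frame width, dead band, and serial port are assumptions to be tuned for the actual LED lantern and hardware, and the particle filter is omitted here for brevity.

import cv2
import serial

port = serial.Serial("/dev/ttyUSB0", 9600)  # assumed serial link to the Arduino

def find_landmark(frame_bgr):
    # HSV conversion and binarization; red wraps around H = 0 in OpenCV's
    # 0-179 hue range, so two bands are combined
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 120, 80), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 120, 80), (179, 255, 255))
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None                          # no red blob in view
    return m["m10"] / m["m00"], m["m01"] / m["m00"]   # blob center of gravity

def steer(cx, width=640, band=0.2):
    # decide left / forward / right from the blob center, as described in the
    # "Two-wheeled robot" subsection below
    if cx < width / 2 * (1 - band):
        port.write(b"L")
    elif cx > width / 2 * (1 + band):
        port.write(b"R")
    else:
        port.write(b"F")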
System configuration

The conceptual diagram of the system in this research is shown in Fig. 3. The model proposed in this research consists of three devices in total: two landmarks that can communicate with each other and a two-wheeled robot that follows them. Landmark 1 serves as the master unit, playing the central role in the wireless communication, and the other devices act as slave units, forming a star-type network.

Two-wheeled robot

The configuration of the two-wheeled robot is divided into five major parts: the control unit, the motor unit, the communication unit, the power supply unit, and the detection unit. The control unit consists of an m-stick and an Arduino UNO 3. The motor unit consists of a DC motor and a motor driver. The communication unit uses wired serial communication and wireless communication by XBee. The power supply unit uses a mobile battery. The detection unit uses a USB camera. The detailed operation of each part is explained next.

The m-stick mainly performs the image processing. First, HSV conversion is applied to the video obtained from the camera. Landmarks are detected in the video by applying a particle filter. Then, depending on the blob center coordinates, the robot decides whether the landmark is on its right side, its left side, or in front of it, transmits the direction data to the Arduino, and performs motor control. Through the communication unit, the robot can also inform the landmark when it has come close to it, so that it can head to the next landmark.

Landmark 1

The structure of landmark 1 is divided into four major parts: the control unit, the communication unit, the lighting unit, and the power supply unit. The control unit is composed of an Arduino UNO 3. The communication unit uses wireless communication by XBee. In the lighting unit, red cellophane is wrapped around a commercially available LED lantern. The power supply unit uses a mobile battery. Landmark 1 plays the central role in this autonomous mobile system.

Landmark 2

The structure of landmark 2 is divided into the same four major parts: the control unit, the communication unit, the lighting unit, and the power supply unit. The control unit is composed of an Arduino UNO 3. The communication unit uses wireless communication by XBee. In the lighting unit, red cellophane is wrapped around a commercially available LED lantern. The power supply unit uses a mobile battery. Landmark 2 acts as a mobile device operating on signals from landmark 1.

Experiments and results

Experiments were conducted using the autonomous mobile robot and the two landmarks.

Measurement of the trackable distance

This experiment uses the two-wheeled robot and landmark 1. When the robot reaches landmark 1 and landmark 1 turns off, the measurement is judged to be complete. Figure 5 shows the experiment.

Fig. 3. Overview of the system.
Fig. 4. A complete view of the device.

As the experimental result, the robot followed the landmark in a straight line up to 60 m. It was found that a considerable distance can be covered with only one landmark.

Follow-up experiments with various patterns

These experiments use the two-wheeled robot and landmarks 1 and 2. Tracking is judged to be complete when the robot arrives at the first landmark and then at the second landmark. First, experiments were conducted to confirm accurate tracking when the three devices were placed in a triangle. This was done both when landmark 1 was lit first and when landmark 2 was lit first. Figure 6 shows the layout of the devices.
Experiments and results
Experiments were carried out using the autonomous mobile robot and the two landmarks.

Measurement experiment of trackable distances
This experiment uses the two-wheeled robot and landmark 1. When the robot reaches landmark 1 and landmark 1 turns off, the measurement is judged to be complete. Figure 5 shows the experiment.
Fig. 3. Overview of the system. Fig. 4. A complete view of the device.
As a result, the robot followed the landmark in a straight line for up to 60 m. It was found that a considerable distance can be followed with only one landmark.

Follow-up experiment with various patterns
This experiment uses the two-wheeled robot and landmarks 1 and 2. Tracking is judged to be complete when the robot arrives at the first landmark and then arrives at the second landmark. First, experiments were conducted to confirm accurate tracking of the three devices when they were placed in a triangle; this was done both when landmark 1 was lit first and when landmark 2 was lit first. Figure 6 shows the layout of the devices. In both cases, (a) and (b), the robot was found to follow accurately; it followed exactly regardless of which landmark lit up first. In this way, the landmarks themselves transmit signals and decide the route flexibly, and the robot follows accordingly. Next, an experiment was carried out with a pattern in which the devices were arranged randomly. Figure 7 shows the layout of the devices. Even in this case the robot followed exactly. As these results show, the robot keeps track of the landmarks in any arrangement, and the tracking distance can be extended by using several landmarks. From the above experimental results, it was found that flexible movement, which is not possible with the conventional method, can be achieved.

Examination of results
The experimental results showed that the landmarks themselves operate as active landmarks, exchanging signals with each device and issuing commands. In this research we conducted experiments with three devices, but more complex movements can be expected if more devices are added. We also found that the robot can follow the red light and reach the landmark. The experiment of 5.2.1 showed that tracking can be performed with only one landmark for up to 60 m, a result that will be useful for the future development of this research. In this experiment, a program was written to detect the red light of the landmark, calculate the area of the red region frame by frame, and use a threshold on the size of the area so that the robot follows the correct landmark; with that algorithm alone, however, there were situations where it was insufficient. Accurate tracking of the landmark becomes possible by blinking the light of the landmark and recognizing the blinking by image processing. Therefore, this study showed that we could develop a mobile system using an autonomous mobile robot based on active landmarks and image processing.

Conclusion
In this research, we have aimed at the development of active landmarks, which actively transmit and receive signals themselves, and of an autonomous mobile robot with two wheels. As a result of device development and operation experiments, we showed that the landmarks turn on and off by transmitting and receiving signals, and that the two-wheeled robot can follow the red light. Moreover, we showed that landmark 1 acts as the master unit and is able to gather information from all the devices and issue instructions. The system developed here becomes more effective as the number of devices increases. We showed that we were able to develop a flexible autonomous mobile system, not achieved by the conventional method, in which the robot follows the devices in any arrangement.
Identifying QCD Transition Using Deep Learning

In this proceeding we review our recent work using supervised learning with a deep convolutional neural network (CNN) to identify the QCD equation of state (EoS) employed in hydrodynamic modeling of heavy-ion collisions, given only the final-state particle spectra ρ(pT, Φ). We showed that there is a traceable encoder of the dynamical information from the phase structure (EoS) that survives the evolution and exists in the final snapshot, which enables the trained CNN to act as an effective “EoS-meter” in detecting the nature of the QCD transition.

Introduction
The QCD phase structure and the search for the critical end point are the central and primary motivations for high-energy heavy-ion collisions, which also allow the birth of the universe to be studied in a laboratory on Earth. Large international collaborations, both in theory and in experiment, have been searching for signals of this phase structure at large accelerator centres worldwide, constructed specifically for this exciting view into the distant past. The forthcoming program at FAIR (GSI) and the current beam energy scan project at RHIC (BNL) aim at locating the critical end point in the QCD phase diagram. This critical end point separates the crossover transition and the conjectured first-order phase transition from hadrons to deconfined quark-gluon matter [1,2]. Critical fluctuations [3,4] are usually used in experiments to locate this critical end point. However, the signals currently observed in experiments are too weak to pin down its location. Moreover, it is rather involved to disentangle the different physical factors (such as initial-state fluctuations, transport coefficients, freeze-out and the further hadronic cascade) in the evolution of a heavy-ion collision given only the final-state information. Thus, it is difficult to cleanly extract physics about the bulk properties of QCD matter from the experimental raw data. We thereby lack a direct and reliable bridge between the bulk properties of the matter produced during the collisions and the raw experimental observables.

Deep learning (DL) is a branch of machine learning which aims at exploring high-level representations of data using a deep structure of multiple processing layers. Recently, the application of DL to physics research has been growing rapidly, for example in particle physics [9-12], nuclear physics [13] and condensed matter physics [14-16]. DL has been shown to be very powerful in exploring pertinent hidden features, especially for complex non-linear systems with high-level correlations beyond the capability of conventional techniques. This suggests that DL could be adopted to help uncover hidden physical information from the highly implicit raw data of heavy-ion collision experiments.

In a recent work [17], we give an exploratory study directly connecting QCD bulk properties and raw data from heavy-ion collisions using state-of-the-art deep-learning techniques. Relativistic hydrodynamic models are utilized to generate raw data of final-state pion spectra ρ(pT, Φ) in heavy-ion collisions, in which different QCD transition types embedded in the EoS can be applied directly. Then supervised learning using convolutional neural networks (CNNs) is performed with the labeled spectra, through which we reveal unique and exclusive encoders of the bulk EoS inside ρ(pT, Φ). Here in this proceeding we review this exploratory study.
Training and testing datasets
The evolution of strongly coupled QCD matter in heavy-ion collisions can be well described by second-order dissipative hydrodynamics. The EoS of the medium is a crucial ingredient in solving the hydrodynamic equations, via which the nature of the QCD transition (first order or crossover) strongly affects the hydrodynamic evolution. The input for our CNN training is set to be the final charged-pion spectra ρ(pT, Φ) at mid-rapidity, which can be obtained from the Cooper-Frye formula in the hydrodynamic simulation:

\[
\rho(p_T,\Phi) = \left.\frac{dN_i}{dY\, p_T\, dp_T\, d\Phi}\right|_{Y=0}
= \frac{g_i}{(2\pi)^3}\int p^{\mu}\, d\sigma_{\mu}\, f_i .
\]

Here N_i is the particle number density, Y is the rapidity, g_i is the degeneracy, dσ_µ is the freeze-out hypersurface element and f_i is the thermal distribution. In the following, we employ the lattice-EoS parametrization [18] (dubbed EOSL) for the crossover transition and the Maxwell construction [19] (dubbed EOSQ) for the first-order phase transition.

The testing dataset contains two groups of samples. In the first group, we generate 7343 ρ(pT, Φ) events using the second-order event-by-event hydrodynamic package iEBE-VISHNU [23] with the MC-Glauber initial condition. In the second group, we generate 8917 ρ(pT, Φ) events using the CLVisc package with the IP-Glasma-like initial condition [24]. The testing datasets are constructed to explore very different regions of parameters (different set-ups for η/s, τ0 and the freeze-out temperature) compared to the training dataset. The details are listed in Tab. 2 of Ref. [17]. Note that all the training and testing ρ(pT, Φ) are preprocessed as ρ = ρ/ρmax − 0.5 to normalize the input data, and each is accompanied by its label of EoS type: EOSQ is labeled by (0, 1) and EOSL by (1, 0).

Convolutional Neural Network
Inspired by the excellent performance of CNNs [26,27] in tasks such as image and video recognition, for our purpose we construct the CNN with the architecture shown in Fig. 3 of Ref. [17]. We use two convolutional layers, each followed by batch normalization, dropout and PReLU activation. Brief introductory information about these technical terms can be found in the supplementary materials of Ref. [17]. In a convolutional layer, each neuron connects only locally to a small chunk of neurons in the previous layer through a convolution operation; this is a key reason for the success of the CNN architecture. Such an architecture works efficiently to prevent overfitting, which may generate model-dependent features from the training dataset and thus hinder the generalizability of the method. The final output layer is a fully connected layer with softmax activation and 2 neurons to indicate the type of the EoS. Supervised learning with the above CNN structure is performed on the targeted binary classification problem: EOSQ (0, 1) versus EOSL (1, 0). The difference between the true label and the predicted label from the two output neurons, quantified by the cross entropy [28], serves as the loss function l(θ), where θ are the trainable parameters of the neural network. Training attempts to minimize the loss function by updating θ → θ − δθ, with δθ = α ∂l(θ)/∂θ, where α is the learning rate, set to an initial value of 0.0001 and adaptively changed by the AdaMax method [29].
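As an illustration of the architecture just described, the following is a minimal PyTorch sketch of such an EoS classifier: two convolutional layers, each followed by batch normalization, dropout and PReLU, a final fully connected layer with two output neurons, a cross-entropy loss, and AdaMax with an initial learning rate of 0.0001. The channel counts, kernel sizes and dropout rate are assumptions chosen for readability, not the exact settings of Ref. [17]; the input is a single-channel 15 × 48 (pT, Φ) spectrum normalized as ρ/ρmax − 0.5.

```python
import torch
import torch.nn as nn

class EoSMeter(nn.Module):
    """Two conv blocks (conv -> batch norm -> dropout -> PReLU), then a 2-way classifier."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # input: 1 x 15 x 48 spectrum
            nn.BatchNorm2d(16),
            nn.Dropout(0.2),
            nn.PReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.Dropout(0.2),
            nn.PReLU(),
        )
        self.classifier = nn.Linear(32 * 15 * 48, 2)     # two neurons: EOSQ vs. EOSL

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))             # logits; softmax is folded into the loss

model = EoSMeter()
loss_fn = nn.CrossEntropyLoss()                          # cross entropy, as in the text
optimizer = torch.optim.Adamax(model.parameters(), lr=1e-4)

# One training step on a dummy batch of normalized spectra (stand-ins for real events).
spectra = torch.rand(8, 1, 15, 48) - 0.5                 # mimics the rho/rho_max - 0.5 scaling
labels = torch.randint(0, 2, (8,))                       # class indices (label encoding assumed)
optimizer.zero_grad()
loss = loss_fn(model(spectra), labels)
loss.backward()
optimizer.step()
```

The gradient step performed by optimizer.step() realizes the update θ → θ − δθ described above, with the learning rate adapted per parameter by AdaMax.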
Results and Conclusion
After training and validating the network, it is tested on the testing dataset of ρ(pT, Φ) events. The percentage of events for which the deep CNN correctly identifies the input EoS during testing is defined as the accuracy and indicates the learning performance. As shown in Tab. 1, high prediction accuracies, on average larger than 95 %, are achieved for the three groups of testing datasets, which indicates that our method is highly independent of the initial conditions. The network is robust against shear viscosity and τ0 due to the inclusion of events with different η/s and τ0 in the training. In the testing stage the neural network identifies the type of the QCD transition solely from the spectrum of each single event. Furthermore, only one freeze-out temperature is used in the training, while the network is tolerant to a wide range of freeze-out temperatures during the testing. For simplicity, the exploratory study did not include pions from resonance decays (the hadronic transport module UrQMD is switched off in iEBE-VISHNU to exclude contributions from resonance decays in the testing data).

The present method yields a novel perspective on identifying the nature of the QCD transition in heavy-ion collisions. By applying state-of-the-art deep CNNs, we firmly demonstrate that there do exist discriminative and traceable encoders of the dynamical information from the phase structure (EoS) inside the final snapshot of the collision evolution, the final-state ρ(pT, Φ) in heavy-ion collisions, which survive even though they may not be intuitive and thus are not well captured by conventional observables. Meanwhile, the deep CNN can exclusively and efficiently decode this EoS information directly from the implicit ρ(pT, Φ) after the hydrodynamic evolution. In this way, the high-level representations, which help identify the QCD transition inside the EoS in the present method, act as an “EoS-meter” for the QCD matter created in heavy-ion collisions. Our study might provide a key to the success of the experimental determination of the QCD EoS and the search for the critical end point.

Table 1. Testing accuracies for three groups (CLVisc with the AMPT initial state, iEBE-VISHNU, and CLVisc with the IP-Glasma-like initial condition) of the testing dataset for QCD transition classification. The input ρ(pT, Φ) consists of 15 pT-bins and 48 Φ-bins.
Diastereoselective Synthesis of Dispiro[Imidazothiazolotriazine-Pyrrolidin-Oxindoles] and Their Isomerization Pathways in Basic Medium

Highly diastereoselective methods for the synthesis of two series of regioisomeric polynuclear dispiroheterocyclic compounds with five or six chiral centers, comprising moieties of pyrrolidinyloxindole and imidazo[4,5-e]thiazolo[3,2-b]-1,2,4-triazine of linear structure or imidazo[4,5-e]thiazolo[2,3-c]-1,2,4-triazine of angular structure, have been developed on the basis of a [3+2] cycloaddition of azomethine ylides to functionalized imidazothiazolotriazines. Depending on the structure of the ethylenic component, cycloaddition proceeds as an anti-exo process for the linear derivatives, while cycloaddition to the angular ones results in a syn-endo diastereomer. Novel pathways of isomerization of the synthesized anti-exo products upon treatment with sodium alkoxides have been found, which result in two more series of diastereomeric dispiro[imidazothiazolotriazine-pyrrolidin-oxindoles] inaccessible via the direct cycloaddition reaction. For the first series, the inversion of the configuration of one stereocenter, i.e., the C-4′ atom of the pyrrolidine cycle (epimerization), was established. For the second one, the configuration of the obtained diastereomer formally corresponded to the syn-endo approach of the azomethine ylide in the case of cycloaddition to the ethylenic component.

Introduction
Recent trends in organic and medicinal chemistry consist in constructing rigidly oriented spiroheterocyclic structures with high solubility and bioavailability as well as the ability to interact effectively with various biological targets [1]. Special attention is paid to oxindoles spiro-linked with the pyrrolidine cycle, which became popular after the discovery, at the end of the XX century, of the valuable pharmacological properties of a number of natural alkaloids, such as spirotryprostatin B [2], horsfiline [3] and mitraphylline [4]. The antitumor activity of synthetic spiropyrrolidineoxindoles is actively investigated [5-11] (Figure 1). For example, inhibitors of actin [8] and tubulin [9] polymerization, as well as of the MDM2-p53 protein-protein interaction [10-14], have been obtained.

At the same time, spiropyrrolidineoxindole 1, prepared using cycloaddition, appeared to be a less active inhibitor of the MDM2-p53 protein-protein interaction than MI-888, prepared via base-induced isomerization of compound 1 (Scheme 1A) [12]. Therefore, the development of methods for the isomerization of spiropyrrolidineoxindoles prepared via a cycloaddition reaction is relevant.

Scheme 1. Background and purpose of this work. (A) Zhao Y. et al. [12]; (B) our previous work [28]; (C) this work.

Earlier, we discovered the skeletal rearrangement of dispiropyrrolidineoxindoles 2 into isomers 3 upon treatment with KOH (see Scheme 1B) [28]. Herein, we carried out the cycloaddition of ylidene derivatives of imidazothiazolotriazine with azomethine ylides generated from amino acids and isatins, and studied various isomerization pathways of the synthesized cycloaddition products 4 in basic medium (see Scheme 1C).
Results and Discussion
Target dispiro[imidazothiazolotriazine-6,3′-pyrrolidin-2′,3″-oxindoles] 4a-k were prepared according to the earlier elaborated procedure [28,29]. The highest yields of cycloadducts 4a, 4b, 4f and 4g (91, 84, 93 and 95 %, respectively) were observed when using N-methyl- and N-ethylglycines 7a,b as the amino acid and unsubstituted isatin 8a, while the application of the more sterically hindered N-isopropylglycine 7c and substituted isatins 8b,c led to some decrease in the yields of the corresponding dispirocyclic structures 4c-e and 4h-j, to 74-82 %. The introduction into the reaction of the N-methyl norvaline derivative 7d, which has an additional substituent at the α-carbon atom in comparison with sarcosine, was accompanied by a significant decrease in the yield of the corresponding product 4k, to 31 %. The relative configuration of compound rel-(2′R,3aS,4′S,6R,9aR)-4f was unambiguously assigned via single-crystal X-ray diffraction and corresponded to the configuration of previously obtained similar compounds [28,29]. The configuration of all other products was assigned by analogy. The configuration of the C-5′ atom of the pyrrolidine cycle in structure 4k is adopted by analogy with examples known in the literature [31,32]. The absence of signals of other isomers in the 1H NMR spectra of the evaporated reaction mixtures indicates the high selectivity of the reaction and the formation of a single regioisomer and diastereomer 4.
At the same time, the formation of the pyrrolidine cycle in the process of [3+2] cycloaddition of azomethine ylides to the double bond of the ethylenic components is accompanied by the generation of three new stereocenters. Together with the stereocenters available in the initial compounds (C-3a and C-9a), the number of chiral centers could theoretically determine the formation of 16 diastereomers (16 enantiomeric pairs). The reasons for the decrease in the number of possible diastereomers can be the following: (i) the use, as ethylenic components, of compounds with a Z-configuration of the double bond and a rigid cis-junction at the C-3a-C-9a bond; (ii) the synchronicity of the cyclocondensation process; and (iii) the nonequivalence of the modes of approach of the azomethine ylide to the ethylenic component.
The azomethine ylide is generated in situ via condensation of the isatin and the amino acid (for example, N-methylglycine), followed by thermal decarboxylation of the intermediate lactone (Scheme 3). Meanwhile, the nitrogen-containing three-atom components involved in the [3+2] cycloaddition reaction can be attributed to the dipolar, zwitterionic, pseudo(mono)radical or pseudodiradical type [33-38]. However, recent experimental and theoretical data obtained for azomethine ylides indicate their pseudoradical nature [33-35].

Theoretically possible mechanisms of the cycloaddition reaction could include both one-step and stepwise pathways for the formation of the pyrrolidine ring (Scheme 4). In the stepwise reaction of azomethine ylides of both dipolar (Path I) and pseudodiradical (Path II) character, free rotation around the bond brought from the ethylenic component to the target pyrrolidine ring is unlocked within the resulting intermediates. Therefore, a stepwise process should limit the stereoselectivity and lead to the formation of two stereoisomeric products, 4a and 5a. The absence of signals of other isomers (including the epimeric structures 5) in the 1H NMR spectra of the evaporated reaction mixtures is evidence of a one-step process (for example, Path III or IV). Recent theoretical investigations (using topological analysis of the electron localization function (ELF) at the B3LYP/6-31G(d) level of theory) of the cycloaddition reaction of a symmetric azomethine ylide prove a synchronous concerted transition-state structure, and the process may be electronically classified as a pseudodiradical [2n + 2π] process (Path IV) [34]. However, the presence of an electron-withdrawing carbonyl C=O group in the isatin derivative can modify its reactivity. Taking into account the presence of the conjugated C=O group in the ethylenic component, it can be assumed that the cycloaddition takes place through a polar non-concerted two-stage one-step mechanism associated with the nucleophilic attack of the least substituted carbon of the azomethine ylide on the β-conjugated position of the ethylenic component 9 [27,35-37].
Additionally, the complicated structures of both the three-atom component and the ethylenic component suggest nonequivalent modes of approach of the azomethine ylide (Scheme 5). The addition of sterically bulky azomethine ylides occurs from the less sterically loaded anti-side relative to the imidazolidine cycle and proceeds via an exo-transition state, in which the carbonyl groups of the oxindole fragment and the thiazolidinone ring end up on different sides relative to the pyrrolidine cycle.

Scheme 5. Modes of approach of the azomethine ylide.

One example shows that the introduction of the chiral (R)-2-[(1-phenylethyl)amino]acetic acid 7e into the reaction leads to the formation of a mixture of two diastereomers, 4l and 4m, in approximately equal amounts instead of a racemate (Scheme 6). With the increase in the bulk of the substituent in the reagent, the use of a two-fold excess of the amino acid and the isatin, as well as a longer reaction time (36 h), is required, and the total yield of the mixture of products 4l and 4m decreased to 41 %. It was shown that diastereomers 4l and 4m have different retention times in the chromatographic column and can be isolated individually (see Supplementary Materials).
It was noted above that the treatment of the structurally related dispiro[imidazothiazolotriazine-pyrrolidin-oxindoles] 2, having an aryl substituent in the pyrrolidine cycle, with KOH is accompanied by hydrolysis of the amide bond and a skeletal rearrangement of the thiazolotriazine system, which results in the regioisomeric products 3 [28]. In turn, boiling esters 4a-j in alcohols in the presence of sodium alkoxides mainly led to the formation of a mixture of two new diastereomers, 5 and 6, in different ratios (Scheme 7). 1H NMR monitoring of the reaction showed that complete conversion of the initial compounds 4 into products 5 and 6 was achieved with 0.25 equivalents of sodium alkoxide in 4 h.
Herein, an increase in the reaction time had little effect on the yields and the ratio of the products formed, while an increase in the amount of base can accelerate the reaction and slightly change the ratio of products, nevertheless reducing their total yield. In some cases, compounds 5c,d,h were isolated from the reaction mixture without impurities of other diastereomers; the signals of their isomers 6c,d,h were observed in the 1H NMR spectra of the evaporated reaction mixtures in trace amounts, and these were not isolated in these cases. Each of the isomers 5a-j and 6a,b,e-g,i,j was isolated individually via fractional crystallization from the reaction mixtures or MeCN. The relative configurations of the chiral centers of diastereomers rel-(2′R,3aS,4′R,6R,9aR)-5 and rel-(2′R,3aS,4′R,6S,9aR)-6 were unambiguously determined using single-crystal X-ray diffraction for compounds 5b and 6a. For compounds 5, inversion of the configuration of one stereocenter, i.e., the C-4′ atom of the pyrrolidine cycle (epimerization), was established in comparison with compounds 4, which had previously been described for related spiropyrrolidineindoles in a single example [39]. The configuration of diastereomers 6 indicated the inversion of two stereocenters compared to the starting compounds 4 (the C-3′ (C-6) and C-4′ atoms of the pyrrolidine cycle) and formally corresponded to the syn-endo approach of the azomethine ylide in the case of cycloaddition to dipolarophile 9 (Scheme 5).
We assumed that the presence of the electron-withdrawing ester group at the 4′ position of compounds 4 makes the hydrogen atom at the corresponding α-carbon atom acidic. Therefore, sodium alkoxide causes the primary deprotonation of structures 4 and the formation of carbanion A (Scheme 8), which transforms into the more stable anion B due to elimination of the thiolate anion. As a result of the opening of the thiazolidine cycle, free rotation of the spiropyrrolidineoxindole fragment of the molecule around a single C-C bond becomes possible, followed by re-addition of the thiolate anion to the double bond of the Michael acceptor and, finally, formation of the spiro node in the new syn-endo diastereomers 6, inaccessible via the direct cycloaddition reaction. The processes occurring in this case do not affect the other asymmetric centers present in the molecule (C-3a, C-9a and C-2′); therefore, the corresponding carbon atoms in the isomeric structures 4, 5 and 6 have the same configuration. The anti-exo epimer 5 can be formed from carbanion A and the alcohol.

The structures of the prepared compounds were also proven using spectral methods. In the 1H NMR spectra, a characteristic signal that allows the resulting compound to be assigned to one of the diastereomeric products 4, 5 or 6 is the signal of the 4′-CH proton of the pyrrolidine ring, which experiences different deshielding effects from the neighboring carbonyl groups. Due to its closer spatial arrangement to the 4′-CH atom, the deshielding effect of the carbonyl group of the oxindole fragment is higher than that of the carbonyl group of the thiazolidinone ring. As a result, the corresponding signal for the epimeric products 5 is downfield shifted (4.40-4.45 ppm) compared to its location in the spectra of the starting structures 4 (4.03-4.10 ppm) (Figure 2). In the syn-endo diastereomers 6, the carbonyl groups of the oxindole fragment and the thiazolidine ring are on the same side relative to the pyrrolidine ring, which leads to maximum deshielding of the 4′-CH hydrogen atom by both groups and a strong downfield shift of its signal to the region of 5.02-5.07 ppm.
To obtain skeletal isomers of compounds 4, a three-component [3+2] cycloaddition reaction of azomethine ylides with the functionalized imidazothiazolotriazines 10a,b [30] of angular structure was also carried out by boiling the starting compounds in acetonitrile. The previously unknown regioisomeric dispirocyclic structures 11a-f were synthesized in 41-73 % yields (Scheme 9). The relative configuration of the structure rel-(2′S,3aR,4′S,7R,9aS)-11e was determined via X-ray diffraction analysis and appeared to correspond to a syn-endo diastereomer. The chemical shifts (4.72-4.82 ppm) of the signal of the 4′-CH hydrogen atom of the pyrrolidine ring in the 1H NMR spectra of compounds 11a-f allow all the compounds to be assigned to diastereomers of the same structure.

General Information
All standard reagents were purchased from Aldrich or Acros Organics and used without further purification. Melting points were determined on a Stuart SMP20 apparatus (Stuart (Bibby Scientific), Stone, UK). The starting dipolarophiles 9a,b and 10a,b were prepared according to a procedure described in the literature [30]. Amino acid 7e was prepared according to a procedure mentioned in the literature [40].

General Procedure for the Synthesis of Compounds 4a-m
A mixture of the corresponding compound 9a,b (1 mmol), amino acid 7a-d (1.5 mmol) and isatin 8a-c (1.5 mmol) in MeCN (20 mL) was refluxed with stirring for 8 h (24 h for 4e,j). After cooling, the precipitate of compounds 4a-k was filtered off, washed with methanol and dried at 50 °C. To obtain the target compounds as mixtures of diastereomers 5 and 6, the solvent was evaporated under reduced pressure, and the dry residue was triturated with a small amount of MeCN. The resulting suspension was filtered, and the filter cake was washed with MeCN and dried at 50 °C.
To obtain the individual diastereomers 5 and 6, the resulting precipitate was dissolved in boiling MeCN and the resulting solution was left in an open flask to effect slow crystallization of the precipitate. As the volume of the solution decreased, the crystallizing precipitates were filtered, washed with MeCN, and dried. The filtrate was left in an open flask for further crystallization. This procedure was repeated at least 3-4 times. If necessary, a product contaminated with another isomer could be purified via recrystallization from MeCN.

Scheme 3. The proposed formation mechanism and structure of the azomethine ylide.
Scheme 7. Isomerization of dispirocompounds 4 into diastereomers 5 and 6. a The ratio of compounds 5 and 6 was determined from the 1H NMR spectrum of the mixture. b Isolated yield.
Scheme 8. Plausible mechanism of the isomerization of compounds 4 into isomers 6.
Transcriptomic analysis of cyanobacterial alkane overproduction reveals stress-related genes and inhibitors of lipid droplet formation

The cyanobacterium Nostoc punctiforme can form lipid droplets (LDs), internal inclusions containing triacylglycerols, carotenoids and alkanes. LDs are enriched for a 17 carbon-long alkane in N. punctiforme, and it has been shown that the overexpression of the aar and ado genes results in increased LD and alkane production. To identify transcriptional adaptations associated with increased alkane production, we performed comparative transcriptomic analysis of an alkane overproduction strain. RNA-seq data identified a large number of highly upregulated genes in the overproduction strain, including genes potentially involved in rRNA processing, mycosporine-glycine production and synthesis of non-ribosomal peptides, including nostopeptolide A. Other genes encoding helical carotenoid proteins, stress-induced proteins and those for microviridin synthesis were also upregulated. Construction of N. punctiforme strains with several upregulated genes or operons on multi-copy plasmids resulted in reduced alkane accumulation, indicating possible negative regulators of alkane production. A strain containing four genes for microviridin biosynthesis completely lost the ability to synthesize LDs. This strain exhibited wild-type growth and lag-phase recovery under standard conditions, and slightly faster growth under high light. The transcriptional changes associated with increased alkane production identified in this work will provide the basis for future experiments designed to use cyanobacteria as a production platform for biofuel or high-value hydrophobic products.

INTRODUCTION
The use of fossil fuels is an unsustainable method of energy production with negative long-term environmental impacts, necessitating progress on alternative fuels that are compatible with existing technologies. Production of biofuels by photosynthetic organisms is one such alternative, with great potential to recycle carbon dioxide released by burning fossil fuels back into energy-rich compounds. Bacterial- or algal-generated lipids can be hydrolyzed to fatty acids and glycerol and then converted to biodiesel by methylating the fatty acids to form fatty acid methyl esters. Lipid production is induced in algae by nitrogen starvation, which triggers the accumulation of photosynthate into triacylglycerols concentrated in lipid droplets (LDs). Cyanobacteria such as Nostoc punctiforme are one type of bacteria that can produce LDs, but more research is needed to increase their level of production. LDs in cyanobacteria increase during stationary phase and, unlike in algae, do not require nitrogen starvation for their production [1]. Cyanobacterial LDs such as those found in N. punctiforme are unique in that they also contain ~17 % alkanes (relative to total extracted fatty acids) of a length typically found in jet or diesel fuel, mixed with triacylglycerols [1], giving added value to the use of cyanobacteria for biofuel production. C15-C19 alka(e)ne production is common among cyanobacteria [2,3], and synthesis occurs via one of two different pathways [4]. The first pathway uses a multi-domain polyketide synthase enzyme (Ole) catalyzing 2-carbon elongation of fatty acyl-acyl carrier protein (acyl-ACP) and subsequent decarboxylation to produce odd-carbon alka(e)nes one carbon longer than the C16 and C18 carbon-long fatty acids typically present in cyanobacteria [5].
The second alka(e)ne-producing pathway starts from the same acyl-ACP precursor, but uses the sequential action of two enzymes that first activate and then remove a formyl group to produce odd-length alka(e)nes one carbon shorter than the fatty acid substrate [6,7]. All cyanobacteria with a sequenced genome possess one of these pathways for alkane production, indicating a conserved physiological importance [3].

Recent studies have been conducted to determine the role of alkanes in cyanobacteria. Both Synechocystis sp. PCC 6803 and Synechococcus sp. PCC 7002 mutants deficient in alkane production exhibited reduced growth, as well as enlarged cell size and increased division defects, likely caused by reduced membrane flexibility and curvature [8]. Berla et al. [9] showed that a Synechocystis 6803 alkane mutant grew poorly at low temperatures and had enhanced cyclic electron flow, especially at low temperatures. Thus it appears that alkanes may be essential for proper membrane fluidity, and may aid in regulation of the ATP : NADPH energy/reductant balance required for cyanobacterial adaptation to daily environmental changes.

In N. punctiforme, Npun_R1711 and Npun_R1710 encode acyl-ACP reductase (Aar) and aldehyde deformylating oxygenase (Ado), respectively, which act sequentially to produce only 17 carbon-long alkanes [10]. Orthologues of these enzymes from other cyanobacteria, especially when expressed in an Escherichia coli host, produce alkanes and alkenes of a variety of lengths [6], indicating the possibility of tailoring alka(e)ne production to fit economic needs. When aar and ado were present on a multi-copy plasmid along with the putative lipase gene Npun_F5141, N. punctiforme displayed a 16-fold increase in C17 alkane production, which also stimulated increased LD formation [10]. Alkanes were found to be highly enriched in LDs, and not in pelleted membranes and cell debris, leading to the hypothesis that the increased LDs were formed as a way to sequester excess alkanes and keep them from interfering with the normal functioning of the photosynthetic and cell membranes [1].

To better understand the physiological response to increased hydrophobic compound production in cyanobacteria, we initiated a transcriptomic study to identify changes associated with increased alkane production. Although alkanes are relatively low-value products in our current economy, the transcriptional response, especially of upregulated genes related to stress responses, may be of benefit to future researchers wishing to optimize the production of higher-value hydrophobic compounds. Just as was found for alkanes [10], high-value compounds or metabolites overproduced in N. punctiforme will likely partition into LDs, enabling separation from other cell components by flotation following cell lysis. In addition, it may be possible to combine the data presented here with advances in alkane production using metabolic engineering [11-17], or to enable shorter-chain alkane production in cyanobacteria for direct production of biofuels that can be secreted into the media to make production more economically viable [18]. Overall, these results will aid studies to further utilize cyanobacteria as a production platform for alkanes and other hydrophobic compounds in the future.

Impact Statement
Nostoc punctiforme is a filamentous nitrogen-fixing cyanobacterium that forms internal lipid droplets containing diacylglycerols, carotenoids and 17 carbon-long alkanes.
All cyanobacteria can produce small amounts of alkanes, well below the production levels found in oleaginous algae, but their physiological function remains elusive. To better understand the cellular adaptations associated with overproduction of alkanes, which could lead to the use of cyanobacteria as a feedstock for biodiesel or for production of hydrophobic biomolecules, we determined the transcriptomic changes associated with alkane overproduction. We found many highly upregulated genes in the overproduction strain involved with cellular stress and the production of unique secondary metabolites. When we reintroduced upregulated genes and operons, several were found to reduce alkane overproduction, and one operon resulted in complete loss of lipid droplet formation. These results indicate potential negative regulators of alkane production and lipid droplet formation. The results of this study are useful for understanding the cellular response to alkane overproduction, which is important not only for the development of cyanobacteria as a feedstock for biofuels, but also as production platforms for high-value hydrophobic biomolecules.

Liquid cultures were shaken at 120 r.p.m. Plates were grown statically under the same illumination and temperature in a CO2-enriched (5000 p.p.m.) growth chamber. Altered parameters were the temperature (15 °C) for cold-growth experiments and the illumination (110 μmol photons m−2 s−1) for high-light experiments. E. coli DH5α-MCR was grown in Luria-Bertani (LB) broth and on agar plates at 37 °C, using 30 µg ml−1 kanamycin for plasmid selection.

Plasmid and strain construction
The two-gene Npun_F1710/11 expression (2 g) plasmid was made by PCR-amplifying the adjacent genes Npun_R1711 and Npun_R1710 together with the upstream intergenic region, and subsequently cloning them into pSCR119, a shuttle plasmid capable of replication in both E. coli and N. punctiforme, as described previously [10]. PCR fragments of various genes or operons found to be upregulated during comparative transcriptomic analysis of the 2 g strain were generated using the appropriate upstream P1 and downstream P2 primers (Table S1) with Herculase II Fusion DNA Polymerase (Agilent) or Phusion High Fidelity Polymerase (Thermo Scientific), and cloned into the KpnI/SacI sites of pSCR119 to create the 'single-gene' plasmids. These same fragments were similarly cloned into a previously constructed 'three-gene' plasmid, consisting of pSCR119 containing Npun_F1710/11 and Npun_F5141 [10], to create a set of 'four-gene' plasmids. The pSCR119 plasmid does not contain a promoter to drive transcription of inserted DNA, and so upstream intergenic regions were included in all gene inserts to allow transcription from the native promoters. All inserted genes in the four-gene plasmids were cloned in the same transcriptional orientation, downstream from the existing three genes. All plasmids were verified by Sanger sequencing and transformed into N. punctiforme by electroporation [20].

RNA preparation, RNA-seq library preparation and sequencing
Triplicate cultures of log-phase wild-type plasmid-only control (WTC) and 2 g strains were harvested by a 2 min centrifugation at 6000 g. The pellet was suspended in 700 µl of media prior to flash freezing and storage at −80 °C. RNA was harvested as described previously [21]. The samples were then further purified using an RNeasy Mini Kit (Qiagen) with on-column DNase treatment. Samples were aliquoted and stored at −80 °C until used.
Five micrograms of each sample was cleaned and concentrated further using an RNA Clean and Concentrator-5 kit (Zymo). rRNA depletion was then performed using Terminator 5′-Phosphate-Dependent Exonuclease (Epicentre) following the manufacturer's protocol with buffer A. Strand-specific RNA libraries were prepared and barcoded using the NEBNext Ultra Directional RNA Library Prep kit for Illumina and NEBNext Multiplex Oligos for Illumina (New England Biolabs) per the manufacturer's instructions. Library quality and size distribution were determined using an Experion 1K DNA analysis kit (Bio-Rad), and each cDNA library was then quantified using a Qubit fluorometer and dsDNA High Sensitivity reagents (Invitrogen) according to the manufacturer's instructions. Each library was quantified via qPCR prior to pooling using the Library Quantification kit for Illumina platforms (KAPA), and then each library was normalized to 10 nM and pooled for sequencing. The multiplexed pooled libraries containing all six samples were sequenced as single-end 100 bp reads on an Illumina HiSeq 2500 system at the UC Irvine Genomics High-Throughput Facility.

RT-qPCR analysis
RNA was independently extracted from cells grown under the same growth conditions as described above for the transcriptomic (RNA-seq) analysis. First-strand cDNAs were synthesized using SuperScript II Reverse Transcriptase (Invitrogen) with gene-specific P2 primers (Table S1). Following multiplex reverse transcription, qPCR was performed using FastStart Universal SYBR Green Master with ROX (Roche), with five fivefold serial dilutions of each cDNA sample used to determine the PCR efficiency and a favourable dilution factor. The manufacturer's protocol was modified for 20 µl reactions. Four technical replicates per duplicate or triplicate biological replicate were subjected to qPCR. The gene-specific primer sets used for qPCR are listed in Table S1 (available in the online version of this article).

Lipid extraction and analysis
For alkane and fatty acid analysis from whole cells and LDs, triplicate samples of exponential and stationary cultures were subjected to lipid extraction, saponification and methyl esterification to produce fatty acid methyl esters (FAMEs) as described previously [10]. LDs were harvested as described previously [1]. Due to the unavailability of 12 % BCl3 in methanol from the suppliers used to prepare the methyl esters for the above samples, LD samples were converted to FAMEs using the protocol of Ichihara and Fukubayashi [27]. Analysis of all FAMEs was performed using a Shimadzu QP2010S gas chromatography mass spectrometer (GC-MS). The GC was equipped with an SHRX1-5MS column (30 m × 0.25 mm I.D., 0.25 µm film thickness). The oven temperature was held at 180 °C for 1 min and increased to 300 °C at 12 °C min−1, with the final temperature of 300 °C then maintained for 2 min. Helium was used as the carrier gas, and 1 µl of sample was injected in split mode (1 : 75). The MS detector voltage was set at 0.25 kV, and compounds were identified using the NIST11 and NIST11s libraries. FAME standards (RESTEK cat #35066) were used to confirm identifications.
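As a side note to the RT-qPCR analysis above, the standard-curve arithmetic behind the dilution series can be summarized in a few lines. The sketch below fits Ct against the log10 of the relative template amount for five fivefold dilutions and converts the slope into an amplification efficiency; the Ct values are hypothetical numbers for illustration only.

```python
import numpy as np

# Hypothetical Ct values from five fivefold serial dilutions of one cDNA sample.
dilution_factors = np.array([1, 5, 25, 125, 625], dtype=float)
ct = np.array([18.2, 20.6, 22.9, 25.3, 27.7])

# Standard curve: Ct versus log10(relative template amount).
log_amount = -np.log10(dilution_factors)
slope, intercept = np.polyfit(log_amount, ct, 1)

# Amplification efficiency from the slope (slope = -3.32 corresponds to 100 %).
efficiency = 10.0 ** (-1.0 / slope) - 1.0
print(f"slope = {slope:.2f}, efficiency = {efficiency:.1%}")
```

A slope near −3.32, and hence an efficiency near 100 %, indicates that the chosen dilution range is suitable for relative quantification.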
LD staining and analysis
Screening strains for altered LD phenotypes was accomplished by fluorescence microscopy using the non-toxic fluorescent nonpolar dye BODIPY 505/515 (Molecular Probes cat #D3921). One microlitre of a working stock solution (235 µg ml−1 in DMSO) was added to 20 µl of a cell culture and incubated for 5 min in the dark before the observation of LDs as a wet mount on a Zeiss Axiolab epifluorescence microscope equipped with a blue (475±20 nm) excitation filter and a green (535±23 nm) emission filter.

Comparative transcriptomics and verification of results
Comparative transcriptomic analysis of the samples identified 421 genes that were significantly regulated between the wild-type plasmid-only control strain and the alkane-overexpressing '2 g strain' bearing pSCR119 containing Npun_R1710 and Npun_R1711. The volcano plot in Fig. 1a depicts the expression levels of all genes present in the N. punctiforme PCC 73102 genome, including significantly altered gene expression (red circles) between the two groups. FPKM values for each replicate were similar and normally distributed, as required for analysis by the Tuxedo suite (Fig. 1b). Using a 2-fold or greater cutoff on significantly expressed genes, we identified 177 upregulated and 121 downregulated genes that responded to the overproduction of alkanes. Their expression is visualized as a heat map (Fig. 1c), and a complete list can be found in Tables S2 and S3. Table 1 contains the genes mentioned in the text. As an internal control, it was found that Npun_R1710 and Npun_R1711 were upregulated 15- and 23-fold, respectively. This level of induction is in general agreement with the previously found copy number of ~14 copies per chromosome for this plasmid [20]. Most highly regulated genes were found to be upregulated rather than downregulated; only 11 downregulated genes displayed a greater than 4-fold change, whereas 85 upregulated genes changed 4-fold or more. Genes encoding proteins of unknown function make up ~29 % of the upregulated genes and ~19 % of the downregulated genes. To verify the validity of the comparative transcriptomic data, several up- and downregulated genes were tested using RT-qPCR on independently isolated RNA (Table 2). The RNA-seq transcriptomic data in general exhibited larger changes in gene expression, but overall the trends and relative changes among the genes tested were confirmed by RT-qPCR on independently isolated samples.
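For readers reproducing this kind of screen, the sketch below applies the significance and 2-fold cutoffs used here to a differential-expression table. It is a generic illustration: the file name and column names (gene identifier, log2 fold change of the 2 g strain over the control, and adjusted p-value) are hypothetical, not the actual output files of this study.

```python
import pandas as pd

# Hypothetical Tuxedo-style results table: one row per gene with columns
# "gene", "log2_fc" (2 g strain vs. control) and "q_value".
de = pd.read_csv("cuffdiff_results.csv")

significant = de[de["q_value"] < 0.05]
up = significant[significant["log2_fc"] >= 1.0]     # at least 2-fold up in the 2 g strain
down = significant[significant["log2_fc"] <= -1.0]  # at least 2-fold down

print(f"{len(up)} upregulated and {len(down)} downregulated genes")
print(up.sort_values("log2_fc", ascending=False)[["gene", "log2_fc"]].head(10))
```

Ranking the filtered table by log2 fold change immediately surfaces the most strongly induced genes, such as the >60-fold change discussed in the next subsection.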
Identification of regulated genes
Upregulated genes in the alkane overproduction strain
The most highly upregulated gene in this study encoded a protein of unknown function, Npun_R1332, differentially expressed >60-fold in the 2 g strain. It contains a NYN domain (Nedd4-BP1 and YacP Nuclease; Pfam01936), predicted to be involved in processing tRNAs and ribosomal RNAs [28]. The gene encoding ribonuclease 3 in N. punctiforme, Npun_R1331, is adjacent to Npun_R1332. In Oscillatoria acuminata, an orthologue of this gene is fused to ribonuclease 3, which is involved in the processing of primary rRNA transcripts [29], further supporting a role for Npun_R1332 in ribosomal assembly. To the best of our knowledge, it is unknown whether alkanes interfere with ribosomal processing or assembly in a way that might be alleviated by such large increases in transcription of this protein. The only other predicted RNA-processing protein, Npun_R2514, was only twofold upregulated in the 2 g strain.

Mycosporines are UV-absorbing secondary metabolites, and mycosporine-glycine has been found to quench singlet oxygen and absorb UV light to protect against photodamage [30]. These small water-soluble cyclic molecules can also function as compatible solutes and nitrogen storage compounds, and in defence against thermal, desiccation and other stress conditions [31,32]. Npun_R5598 encodes a ligase that catalyses the condensation of glycine onto demethyl 4-deoxygadusol (DDG) to produce mycosporine-glycine [33]. This response likely indicates that mycosporine-glycine ameliorates stress caused by alkane accumulation in cyanobacteria, and that its induction is triggered by alkane-induced membrane or photosynthetic signals that overlap with photodamage.

The third most upregulated gene in the 2 g strain, Npun_F4819, is similar to general stress-induced protein B (GsiB) from Bacillus subtilis [34] and was induced ~19-fold in the 2 g strain. The exact function of GsiB-like proteins has not yet been determined; however, an increase of this magnitude implies that this protein may be induced to cope with alkane overproduction and alleviate alkane-induced stress. As such, it may be a good target gene to co-express in order to increase alkane yields and/or cyanobacterial vigour in alkane overexpression strains. The upstream gene, Npun_F4818, encoding a protein of unknown function, showed a fourfold increase in the 2 g strain; it has four predicted transmembrane domains with some structural similarity to transporter proteins. The association of these two genes is not conserved in other organisms in the STRING interaction database, indicating that they may have non-associated functions [29].

Several proteins belonging to the CsbD family of stress-response proteins were upregulated in the 2 g strain. csbD gene expression has been shown to be induced as part of the sigma B general stress response in B. subtilis, although the protein's exact role in the stress response is unknown [35]. The CsbD-like proteins Npun_F0469, Npun_R0959 and Npun_R3254 increased 15-, 5.7- and 2.6-fold in the 2 g strain, respectively. SigB2, encoded by Npun_R4091, was also upregulated twofold in the 2 g strain and, in line with control by a sigma factor in Bacillus, may represent a potential regulator of their transcription. It is interesting to note that all three CsbD family proteins co-occur in the genomes of other bacteria along with the AvaK-like PRC-barrel domain-containing proteins Npun_F5452 and Npun_F5451 in the STRING database [29]. These AvaK homologues are also upregulated in the 2 g strain and are presented below.

Three upregulated genes encode proteins containing a conserved N-terminal carotenoid-binding domain (NTD) similar to that found in the orange carotenoid-binding protein (OCP). All three, however, lack the C-terminal domain that regulates NTD binding to the phycobilisome, which results in quenching of excess excitation energy during high-light stress. This NTD-only protein class has been termed 'helical carotenoid proteins' (HCPs), and such proteins occur commonly in cyanobacteria [36]. Orthologous HCPs have been studied in Nostoc sp. strain PCC 7120, and some have defined functions. Npun_F5913 and Npun_R5130, induced 5.9- and 4.7-fold in the 2 g strain, are orthologues of All3221 and Alr4783, which have been found to quench singlet oxygen [37]. The third protein, Npun_F6242, was upregulated 8.4-fold in the 2 g strain and is orthologous to All1123, an HCP with no determined function [37].

A large gene cluster encoding a 19-gene hybrid polyketide synthase (PKS) and non-ribosomal peptide synthesis (NRPS) assembly line, referred to as the pks4 gene cluster [38], was upregulated 2- to 13-fold in the 2 g strain.
These include Npun_R3425-3426, Npun_R3429-3436, Npun_R3438, 3440, 3442, 3445-46, and Npun_R3449-52. Among these are 13 encoded proteins containing a variety of PKS and non-ribosomal peptide synthesis (NRPS) domains [39]. These are interspersed with genes encoding predicted dehydrogenases, amino- and glycosyl-transferases and a dioxygenase. The pks4 gene cluster was shown to be expressed in a regularly spaced pattern by single or neighbouring cells within a filament using a Npun_R3452-GFP reporter, and this gene cluster was strongly induced when cultures were grown to ultrahigh density [40]. The exported product of the pks2 gene cluster has been shown to affect cellular differentiation [38], although the product of the pks4 gene cluster remains unknown. The bioactive compounds produced by these clusters have a wide range of activities, including cytotoxicity, enzyme inhibition, and antibacterial and antifungal activities [41]. It will be interesting to see if these can be extended to alkane tolerance in future work.

A second locus encoding NRPSs upregulated five- to eightfold in the 2 g strain includes orthologues of NosA, NosC and NosD, encoded by Npun_F2181, Npun_F2183 and Npun_F2184, respectively. The orthologous proteins in Nostoc sp. GSV224 are the non-ribosomal peptide synthases that form the peptide backbone of nostopeptolide A [42]. Interestingly, nosB and four additional downstream genes conserved in this locus, encoding peptides with polyketide synthase, dehydrogenase, reductase and transporter activities, were not upregulated in the 2 g strain. Nostopeptolide A was found to be exported into the extracellular polysaccharide matrix, and to be an important regulator of hormogonium development in N. punctiforme [43]. The finding of upregulation for only the peptide backbone-forming genes without similar upregulation of the associated transporter may indicate that the peptide backbone likely accumulates in cells for adaptation to alkane stress. We hope future work by others will determine if non-ribosomal peptides or polyketides are capable of sequestering alkanes in vitro, providing evidence for a possible mechanism for this adaptation that would explain the upregulation of the associated genes and operons discovered here.

Table 2. Validation of RNA-seq by qPCR. Average fold change from the control strain for 11 selected genes showing 2-fold or higher change in the 2 g alkane overproduction strain was confirmed by independent RT-qPCR. ± standard error, n=3 unless indicated by bold numbers, where n=2. Note that the Npun_F2818-Npun_F2819 qPCR primer set spanned the 26 bp intergenic region between these two genes and was therefore used to measure both genes.

Several upregulated genes with potential direct involvement in ameliorating stress in lipid membranes were identified. Npun_F3787 was upregulated 4.4-fold in the 2 g strain and has an N-terminal signal sequence, as identified by SPOCTOPUS [44], as well as a BON domain in the C-terminal portion of the protein. BON domains are thought to bind phospholipids and aid in stabilizing membranes [45], similar to OsmY, an osmotically inducible periplasmic protein providing protection against osmotic shock [46]. This may provide a protective mechanism to cope with increased membrane fluidity caused by alkanes that would interfere with cell and photosynthetic membrane functions. The downstream gene, Npun_F3786, was also upregulated 2.5-fold and encodes a putative signal transduction histidine kinase that may be involved in regulating this stress response.
Several genes involved in microviridin synthesis were upregulated. Microviridins are tricyclic members of the ribosomally synthesized and post-translationally modified peptides that act as protease inhibitors. Npun_F2189 was over 10-fold upregulated in the 2 g strain and encodes an ATP-grasp peptide maturase, similar to MvdC (MdnB). This gene is preceded by the gene for the precursor peptide MvdA, with the amino acid sequence MPTN TVKT VDVV AVPF FARF LEEQ ATEG TEVP WTYK FPSD LEDR [47]. Analysis of the novel microviridins N3-N9 from N. punctiforme identified a core region in this peptide that contained tricyclic linkages, with a variable 1-6 amino acid extension instead of the acylation normally present in microviridins [40]. Normally, MvdC and the downstream MvdD (MdnC), encoded by Npun_F2190, act to form the intramolecular lactam and lactone linkages, respectively, in the MvdA peptide, resulting in a tricyclic peptide with a unique cage-like structure. However, the downstream gene encoding MvdD was not upregulated, indicating that an altered form of this microviridin, with a single amide and lacking ester bonds, may be formed due to alkane overproduction. The next two downstream genes, Npun_F2191, encoding a putative lipase, and Npun_F2192, encoding a predicted transmembrane protein of unknown function, were both induced ~sixfold in the 2 g strain. As presented below, the presence of these four genes (Npun_F2189-2192) on a multi-copy plasmid resulted in loss of LD production. The reason for this is unclear, but the core region of microviridins N3-N9 contains 43 % hydrophobic and 21 % neutral amino acids. We speculate that, when overexpressed, these microviridins may sequester hydrophobic compounds normally found in LDs, thus interfering with LD formation. Our results differ from the RNA-seq results for the high-density growth used to induce production of microviridins N3-N9. A downstream ABC transporter encoded by Npun_F2193, with 61/79 % identity/similarity to MdnE and thought to export this peptide, was not upregulated in the 2 g strain. This indicates a potential for intracellular localization of this microviridin, similar to the results for nostopeptolide A mentioned above.

Three adjacent genes encoding PRC barrel domain-containing proteins were induced 3.6- to 9.5-fold in the 2 g strain. These include the akinete marker protein, AvaK (Npun_F5452), and its adjacent upstream and downstream genes. AvaK and its upstream paralogous protein, Npun_F5451, were also found to accumulate after butachlor exposure in three different cyanobacteria, and were hypothesized to be involved in tolerance to stress associated with exposure to this herbicide [48]. Both contain a PRCH domain (photosynthetic reaction centre subunit H) that functions to regulate electron passage between quinones in the photosynthetic reaction centre of purple bacteria [49]. Transcriptional upregulation in the 2 g strain may therefore indicate photosynthetic stress associated with alkane accumulation in thylakoid membranes. Interestingly, the four other genes whose proteins were similarly upregulated by butachlor in three strains of Anabaena were also transcriptionally upregulated in the 2 g strain. The N. punctiforme homologues of these were: Npun_R0971, an HHE cation-binding protein; Npun_F3786, a signal transduction histidine kinase homologue containing four predicted transmembrane domains; Npun_F3789, a high light inducible protein; and Npun_R4582, a manganese-containing catalase homologue [48].
No parallels between gene expression in the 2 g strain and comparable proteomic experiments testing for responses to oxidative stress were apparent [50]. As butachlor is a hydrophobic compound, these gene responses point to parallels between exposure to this herbicide and internal production of alkanes that may be unique to stress associated with hydrophobic compounds.

In addition to genes encoding potential structural and enzymatic proteins, several genes encoding regulatory proteins were also upregulated. In addition to SigB2 mentioned above, Npun_BF041, encoded on one of the five naturally occurring plasmids in N. punctiforme, belongs to the AraC family of transcriptional regulators and was found to be 8-fold upregulated in the 2 g strain. Two kinases were also identified: Npun_F1277 is a sensor signal transduction histidine kinase, and Npun_F2818 is an ABC1 kinase similar to ABC1K1 associated with Arabidopsis thaliana chloroplast plastoglobules. In A. thaliana, the latter regulates photosynthetic activity and photoprotection by controlling chloroplast tocopherol and plastoquinone production, and may be involved in integrating sugar/starch metabolism with photosynthetic processes [51]. The Npun_F2818 ABC1 kinase, as in plastoglobules, may be an LD-associated protein and be upregulated in response to the increased number of LDs in the 2 g strain. Only 26 bp downstream is Npun_F2819, similarly upregulated due to co-transcription with Npun_F2818 based upon RT-qPCR results (Table 2). Npun_F2819 has three predicted transmembrane domains and, when co-expressed with Npun_F2818, caused large decreases in the alkane content of whole cells and LDs, but only when present in a strain that already produced high levels of alkane (see below). These results indicate that Npun_F2818-2819 are potential negative regulators of alkane production.

Table 3. Percentage changes in area under the curve of FAME and alkane (C17) peaks produced during analysis by GC-MS of single and 4 g overexpressing strains relative to their respective controls. For single-gene strains, the control is the wild-type bearing pSCR119. For the 4 g strains (genes overexpressed in conjunction with Npun_R1710-1711 and Npun_F5141), the control is the 3 g strain. Numbers highlighted and in bold denote a P-value of <0.05.

In general, the putative roles of many upregulated gene products indicate that alkane overproduction is indeed stressful to cyanobacteria, and identify potential novel adaptations to alkane stress. Based upon the large transcriptional increases, processing of rRNA required for ribosome assembly may be a target of alkane toxicity. Increased expression of genes also found after exposure of other (cyano)bacteria to a range of stress conditions may indicate general stress responses that overlap with alkane stress. These include those responsible for the production of mycosporine-glycine, a GsiB-like general stress-induced protein, multiple CsbD-like and HCP proteins, a BON domain-containing protein, and PRC-barrel proteins including AvaK. Novel adaptations to alkane overproduction suggested by this comparative transcriptomic study include accumulation of polyketides and/or non-ribosomal peptides, nostopeptolide A and a potentially internal microviridin-like cyclic peptide. Although not addressed in this study, future work will be required to determine if these compounds are indeed present in alkane-stressed cells, and their role in alkane tolerance in cyanobacteria.
Identification of upregulated genes encoding a sigma factor, a DNA-binding protein and kinase proteins offers insights into potential regulatory elements involved in controlling the transcriptional response to alkane overproduction.

Downregulated genes in the alkane overproduction strain

Genes encoding proteins that may have direct effects on LD composition or abundance may be transcriptionally downregulated in the alkane overproducing strain in an attempt to restore the number of LDs to normal levels. Alternatively, downregulation could be a response to stress associated with excess alkane production, or simply a response to the slower growth rate exhibited by the 2 g strain. Genes showing the largest decrease in the 2 g strain are photosynthetic, and include phycobilisome linker proteins and porphyrin synthesis enzymes; their downregulation is likely a response to the slower growth of the 2 g strain. Also in this growth-related category are multiple subunits of NAD(P)H-quinone oxidoreductase, several cell envelope synthesis proteins, and enzymes involved in amino acid biosynthesis, nucleic acid precursor synthesis and protein polymerization.

An example of a downregulated gene that may be directly involved in lipid droplet formation is Npun_F0518, a SpoVK vesicle-fusing ATPase with reduced expression in the 2 g strain. Reduced vesicle fusion could explain the high number of small LDs seen during exponential growth in the 2 g strain. Later, during stationary phase, these fuse into abundant large LDs, higher in abundance than in the control strain [10], indicating that the fusion process is only delayed, not eliminated, in the 2 g strain.

In the stress-associated gene category are three genes: Npun_R0404, one of the carotenoid-binding domain-containing proteins, which exhibited a large 11-fold downregulation in the 2 g strain; Npun_R0279, which contains a universal stress protein domain; and Npun_R4883, a low temperature-requirement A-like protein. This indicates that these proteins likely have a specialized function in the cell that is not required for dealing with alkane overproduction.

Regulatory proteins with reduced transcription in the 2 g strain include the sigma factor SigC (Npun_F0996). This likely accounts for the downregulation of the PII protein (GlnB; Npun_F4466), since this nitrogen-regulatory protein has been shown to be in the regulon of SigC [52]. The Npun_R1304 gene, encoding a PadR-like transcriptional regulatory protein, was also lower in the 2 g strain. Based on transcriptomic studies of other PadR-like repressors, this repressor protein may be responsible for regulating genes related to the cell envelope [53], and its 2.4-fold reduced expression may explain the increased transcription of several glycosyl-transferases and other envelope-related genes in the 2 g strain. Interestingly, downregulated plasmid genes were largely confined to plasmid D, one of the five naturally occurring plasmids in this strain, whereas upregulated genes were predominantly on plasmid B. Other plasmid-encoded genes include histidine kinases and putative DNA-binding proteins that may have effects on chromosomal gene transcription as well as plasmid-specific effects.
Overexpression of selected genes and operons

To see if overexpression of genes that were upregulated in the comparative transcriptomic data resulted in enhanced alkane or LD production, 16 different multi-copy shuttle plasmids were constructed containing genes or operons alone (single gene), or in conjunction with the 3 g plasmid bearing Npun_R1710, Npun_R1711 and Npun_F5141 to form four-gene (4 g) plasmids. The base plasmid chosen for this was the multi-copy shuttle plasmid pSCR119, which has 12-14 copies per genome. The 3 g plasmid was chosen for the second expression platform since this gene combination was previously shown to produce the highest quantities of alkanes [10], and we wanted to see if additional genes could further boost production. The third gene in the 3 g plasmid, Npun_F5141, encodes a putative lipase that was hypothesized to increase free fatty acids and promote production of the fatty acyl-ACP substrate for the Aar/Ado enzymes. We hypothesized that if additional genes or operons found to be upregulated in the alkane-overexpressing strain were added to this 3 g plasmid to create 4 g plasmid-bearing strains, further increases in alkane production relative to whole-cell lipids would result. The resulting set of N. punctiforme plasmid-bearing strains, termed either single-gene or 4 g strains, was analysed for changes in fatty acid and alkane content in both exponential and stationary phases of growth (Table 3). It should be noted that DNA fragments containing genes upregulated in this study sometimes contained several adjacent genes that could possibly be transcribed as an operon; for simplicity, these were nevertheless classified in Table 3 as single-gene strains, or as four-gene strains when added to the 3 g expression platform plasmid.

Alkane and lipid profiles

By harvesting total lipids along with the co-extracted alkanes, and generating fatty acid methyl esters (FAMEs) for the lipids, we had a way of determining the relative amount of alkane production in these strains in addition to changes in fatty acid composition. FAME analysis of whole-cell lipids of several single-gene strains harbouring Npun_R0971, Npun_F1545, Npun_F5066 and Npun_F2818-19 showed a modest but significant decrease in unsaturated fatty acids during exponential phase. Of these, Npun_R0971 showed the largest alteration in fatty acid composition, with increased saturated C18 fatty acids and decreased unsaturated species. Npun_F5066 also increased the proportion of saturated fatty acids, but C16:0 rather than C18:0 was significantly affected in this strain. During stationary phase, these effects were mostly lost, and were replaced by many small, but significant, fatty acid changes in single-gene strains. It is important to note that the strain containing Npun_F2189-92, which does not produce stainable LDs, had decreased C18:0, the fatty acid precursor for synthesis of the 17-carbon alkanes normally produced exclusively in this strain [1]. The only single-gene strain with increased alkane production harboured Npun_F1545, resulting in only a modest 4 % increase in alkane accumulation during stationary phase.

Instead of further enhancing alkane production as anticipated, increased expression of several upregulated genes/operons in 4 g strains resulted in decreased alkane production. Among the 4 g strains, the only significant decrease in alkane production seen in exponentially growing cultures was in the strain harbouring the Npun_F2818-19 operon, resulting in a 9.2 % decrease in alkane content relative to the control strain.
During stationary phase, three 4 g strains had a significant reduction in alkane production. The Npun_F2818-19 strain continued to exhibit reduced alkane content, as did two additional strains harbouring Npun_R6442 and F2189, with stationary-phase reductions in alkane content ranging from 10.0 to 13.9 % below controls (Table 3). To see if LD composition paralleled the reduced alkane production observed in whole cells, LDs were isolated from these 4 g strains and compared to LDs from 3 g controls. LDs from the 4 g Npun_F2818-19 strain in exponential phase showed a 15.3 % decrease in alkane content, paralleling the 9.2 % decrease in whole-cell alkane content under this growth condition. However, the large 13.9 % decrease in alkanes during stationary phase for whole cells was not apparent in stationary-phase LDs from the Npun_F2818-19 4 g strain, indicating that although the total cellular ratio of alkanes to total lipids was reduced in stationary phase, their proportion in LDs returned to normal. This indicates that normal levels of the ABC1 protein kinase encoded by Npun_F2818, and/or the protein of unknown function encoded by Npun_F2819, may be required for proper alkane trafficking between LDs and cellular membranes during exponential growth, but not in stationary phase. There was no significant decrease in the ratio of alkanes to total lipids in the LD-less Npun_F2189-92 single-gene strain, indicating that alkane production does not require the presence of LDs.

Phenotypic changes due to gene overexpression

Most single and 4 g overexpression strains tested did not produce any visible changes in LD size, localization or abundance. However, the plasmid-bearing strain containing Npun_F2189 alone appeared to have BODIPY staining indicative of neutral lipid rafts in the envelope, with reduced numbers of LDs during exponential phase (Fig. 2). When this DNA fragment was further extended to contain three additional upregulated genes, Npun_F2190, Npun_F2191 and Npun_F2192, the resulting single-gene strain containing Npun_F2189-2192 exhibited no stainable LDs (Fig. 2). As discussed above, these are structural genes encoding proteins with predicted or proven enzymatic functions likely associated with microviridin production, not gene regulatory proteins, making it unlikely that they directly repress genes associated with LD production. However, since microviridins can act as protease inhibitors, it is possible that increased microviridin accumulation in this single-gene strain increases the stability of regulatory proteins controlled by proteolysis. Experiments were initiated to determine if there was a growth phenotype associated with the loss of LDs. The LD-less Npun_F2189-92-bearing strain exhibited growth rates identical to a wild-type control strain at both normal and cold temperatures under standard light conditions. Recently it was determined that cyanobacterial alkanes are required for normal photosynthetic cyclic electron flow and growth at colder temperatures [9]. This study, however, used long-term selection for the mutation, so the phenotype might have been due to other secondary mutations. The mutant of Lea-Smith et al. [8], which was isolated quickly in the same gene, exhibited photosystem II O2 production activity similar to the wild-type, but slower growth, increased cell size and division defects. Since the LD-less strain produced amounts of alkanes similar to the controls (Table 3), this could explain why the phenotypes seen in those earlier studies were not exhibited here.
To see if the LDs that normally accumulate in stationary phase are used to supply lipids for recovery from stationary phase, late stationary cultures of the LD-less and control strains were used to inoculate fresh cultures, which were monitored for alterations in lag-phase recovery. No differences in lag phase between the strains were detected, indicating that the LDs serve some alternative function. To see if LDs might be used to cope with high light stress, as has been suggested for plant chloroplast plastoglobules, the LD-less and control strains were subjected to high light (HL). The absence of LDs led to a small (11 %) but significant (P=0.014) increase in cell biomass after 8 days of HL that was not detected after only 4 days of HL (Fig. 3). There were no observable differences in bleaching after 8 days of HL, supporting our hypothesis that the differences in growth rates resulted from the lack of LDs. This increased growth is likely due to relief of the metabolic drain imposed on the control strains by the metabolites used to form these inclusions. The physiological role of LDs therefore remains enigmatic, and we hope continued work in this area will elucidate their cellular function in the future. The identification of upregulated genes causing loss of LD production or reduced alkane production provides targets for future mutagenesis studies that we anticipate will increase alkane or LD production in cyanobacteria.

Funding information

This material is based upon work supported by the National Science Foundation under grant no. MCB-1413583 to M.L.S. K.A.G.P. was supported by National Institutes of Health BUILD-PODER programme grant TL4GM118977.
Taxonomic review of the genus Tambinia Stål (Hemiptera, Fulgoromorpha, Tropiduchidae) with descriptions of four new species from the Pacific region

Abstract

Four new species of Tambinia Stål (Hemiptera: Fulgoromorpha: Tropiduchidae), Tambinia conus sp. n. (Papua New Guinea), Tambinia macula sp. n. (Malaysia: Borneo), Tambinia robustocarina sp. n. (Malaysia: Sabah) and Tambinia sexmaculata sp. n. (Australia: Kuranda), are described and illustrated from the Pacific region. The diagnostic characters of this genus are redefined. A checklist and a key to the known species of Tambinia are provided.

Only a few papers have provided valuable information about Tambinia: Wilson (1986) stated that the Oriental and Australasian genera Nesotaxila and Kallitaxila appear to be most closely related to Tambinia, and Asche and Wilson (1989) indicated that some similarity exists in the aedeagal structure between Tambinia species and Ommatissus Fieber, 1875 (Trypetimorphini). A cladistic analysis is needed, but is beyond the scope of this paper. While sorting and identifying Tropiduchidae from material on loan from the California Academy of Sciences, San Francisco, California, USA (CAS), the National Museum of Natural History, Smithsonian Institution, Washington, DC, USA (USNM) and elsewhere, we found four new species of Tambinia from Papua New Guinea, Malaysia (Borneo, Sabah) and Australia (Kuranda). A revised generic diagnosis and a checklist of all known species of Tambinia are provided. A key to the known species is also updated.

Materials and methods

Dry pinned specimens were used for the descriptions and illustrations. External morphology was observed under a stereoscopic microscope and characters were measured with an ocular micrometer. Abdomens were removed and macerated in cold 10% KOH overnight. Precise dissections and cleaning of genitalic structures were carried out in distilled water. Observations and drawings were made in glycerine under a compound light microscope. Photographs of the types were taken with a Nikon Coolpix 5400 digital camera. The digital images were then imported into Adobe Photoshop 8.0 for labeling and plate composition. Line figures were drawn with the aid of a camera lucida mounted on a Zeiss Stemi SV-11 stereomicroscope. Specimens examined during the course of this study are deposited in the CAS, USNM and Bernice P. Bishop Museum, Honolulu, Hawaii, USA (BPBM). The terminology follows Bourgoin and Huang (1990) and Wang et al. (2009).

Discussion. The genus Tambinia comprises twenty-four species and is distributed in the Oriental, Australasian and Afrotropical regions (Distant 1906, 1916, Fennah 1956, 1970, 1982, Ghauri 1976, Matsumura 1914, Melichar 1914, Metcalf 1946, 1954, Muir 1931, Wilson 1986, Wilson and Malenovský 2007). The tropiduchid planthoppers are usually weak fliers and have poor ability for long-distance migration by themselves, so we suggest that the new species have formed through geographical isolation, given the disjunct distribution of the genus across widely separated island groups. In external appearance, the genus Tambinia is similar to the Oriental and Australasian genera Nesotaxila, Kallitaxila and Kallitambinia. These four genera form a distinct group within the tribe Tambiniini. They can be distinguished from the other known genera in the tribe by the head relatively dorsoventrally depressed, produced in front of the eyes but not extremely produced into a linguiform prolongation, the apex not broadly rounded to the base of the frons, and the hind tibia with two lateral spines.
The four genera can be distinguished as follows: frons about as long as broad, forewings with two black elongate spots near bases of sutural margins, nodal line marked with several fuscous spots (see Distant, 1906: 278, Fig. 3) …

Tambinia conus sp. n.

… a pair of short sublateral carinae basally between median carina and lateral margins; posterior margin straight. Frons (Fig. 2C) longer in middle than the widest breadth (1.4:1), disc flat and smooth, sparsely covered with microsetae (Fig. 2B); lateral margins sinuous, diverging from apex, slightly concave at level of eyes, then diverging further to reach their widest point before converging to the clypeus; median carina slender, gradually thinning and becoming obsolete posteriorly, almost reaching frontoclypeal suture. Clypeus (Fig. 2C) triangular, with broad median carina. Pronotum (Figs 1A, 2A) distinctly shorter than mesonotum in midline (0.4:1), carinae strongly ridged, lateral carinae diverging posteriorly, median carina distinct, reaching posterior margin. Pronotum and mesonotum together medially 2.2 times as long as median length of vertex. Hind tibiae each with 2 distinct lateral spines; spinal formula of hind leg 5-5-2. Forewings (Figs 1A, 2D) relatively elongate and narrow, 2.7 times as long as maximum breadth, with corium smooth, not granulate, Sc+R forking at apical 2/5, Cu1 forking after level of junction of claval veins, cell Sc with a short cross vein at its apical angle, with 13 apical cells and 6 subapical cells, claval veins uniting basad of middle of clavus.

Male genitalia. Pygofer (Figs 2F-H) narrow and relatively high, wider ventrally than dorsally, anterior margin moderately concave, posterior margin nearly straight on ventral half in lateral view. Anal tube (Figs 2F, 2G) distinctly elongate, surpassing apex of gonostylus, ventral margin slightly bent ventrad in lateral view; lateral margins narrowing distad, apical margin distinctly forked in dorsal view; anal styles relatively short and stout, not surpassing apex of anal tube in dorsal view. Gonostylus (Figs 2F, 2H) very narrow, apical part directed dorsoposteriorly in lateral view; median conical process distinctly elongate and strong, sclerotized, nearly reaching middle part of gonostylus in ventral view. Periandrium (Fig. 2F) distinctly short, ring-shaped, with a long process directed caudad on ventral side, surrounding aedeagus medially. Aedeagus (Fig. 2F) with shaft sinuate and apical half directed dorsoposteriorly in lateral view, apical part forking at endosoma, forming two processes, the dorsal one distinctly longer than the ventral one; endosoma membranous, slightly expanded.

Etymology. This new species is named for the presence of a strong median conical process at the apical inner margin of the gonostylus (Figs 2F, 2H).

Distribution. Papua New Guinea.

Remarks. This species is similar to T. languida Stål, 1859, collected from Sri Lanka, but can be distinguished from the latter by the vertex with two short reddish stripes, the pronotum with a pair of orange spots outside the lateral carinae, the carinae of the vertex and pronotum orange, the mesonotum with a pair of orange spots beside the lateral carinae near the posterior margin, the forewings with many reddish spots from the basal part to the nodal line, and the frons with a ratio of median length to widest breadth of 1.4:1 (in T. languida, vertex and pronotum without pigmentation, mesonotum sometimes suffused with ochraceous, frons with ratio of median length to widest breadth 2:1; see Stål, 1859: 317; Melichar, 1914: 85).

Tambinia macula sp. n.

Colour.
General colour ocherous; vertex (Figs 1B, 3A) with median carina suffused reddish, the reddish extending from the sides, forming two long reddish stripes with irregular outer margins; pronotum (Figs 1B, 3A) with a pair of reddish spots at disc depression between median and lateral carinae; frons (Fig. 3C) suffused with pale reddish; forewings (Figs 1B, 3D) with basal portion ocherous, with two red elongate marks near bases of sutural margins, many orange or red spots from basal part to nodal line, nodal line suffused with one transverse orange to red band; tips of spines on hind tibiae and tarsi black.

Head and thorax. Head (Figs 1B, 3A) projecting before eyes approximately the median length of an eye, strongly dorsoventrally depressed. Vertex (Figs 1B, 3A) about as long as broad, two times as long as median length of pronotum, anterior margin projected at an obtuse angle in dorsal view, lateral margins ridged and converging anteriorly; median carina thin and percurrent; posterior margin straight. Frons (Fig. 3C) longer in middle than the widest breadth (1.3:1), disc slightly depressed, sparsely covered with microsetae (Figs 3B, 3C); lateral margins sinuous, diverging from apex, slightly concave at level of eyes, then diverging further to reach their widest point before converging to the clypeus; without median carina. Clypeus (Fig. 3C) triangular, without median carina. Pronotum (Figs 1B, 3A) distinctly shorter than mesonotum in midline (0.3:1), carinae strongly ridged, lateral carinae diverging posteriorly, median carina distinct, reaching posterior margin. Pronotum and mesonotum together medially 2.1 times as long as median length of vertex. Hind tibiae each with 2 distinct lateral spines; spinal formula of hind leg 5-5-2. Forewings (Figs 1B, 3D) relatively broad, with basal portion semihyaline, thicker than apical portion, without granulation, 2.7 times as long as maximum breadth, Sc+R forking about medially, Cu1 forking after level of junction of claval veins, cell Sc with a short cross vein at its apical angle, with 12 apical cells and 5 subapical cells, claval veins uniting distad of middle of clavus.

Distribution. Malaysia (Borneo).

Remarks. This species is similar to T. atrosignata Distant, 1906, but can be distinguished from the latter by the vertex with two long reddish stripes, the pronotum with a pair of reddish spots, and the forewings with basal portion ocherous, two red elongate marks near bases of sutural margins, many orange or red spots from basal part to nodal line, and nodal line suffused with one transverse orange to red band.

Tambinia robustocarina sp. n.

Colour. General colour tawny yellow, forewings (Figs 1D, 4D) with two fuscous elongate marks near bases of sutural margins, nodal line suffused with pale brown marks, many fuscous spots from nodal line to apex, tips of spines on hind tibiae and tarsi black.

Male genitalia. Pygofer (Figs 4E-G) irregularly subquadrate in lateral view, anterior margin concave on dorsal 1/3, posterior margin produced caudad in lateral view. Anal tube (Figs 4E, 4F) relatively elongate, ventral margin slightly bent ventrad in lateral view; lateral margins convex medially then narrowing distad, apical margin slightly concave in dorsal view; anal styles relatively long and narrow, surpassing apex of anal tube in dorsal view. Gonostylus (Figs 4E, 4G) elongate, but not surpassing apex of anal tube, apical half narrow and basal half broad in lateral view; median conical process very small, sclerotized in ventral view. Periandrium (Fig. 4E) …

Etymology.
This new species is named for the presence of a robust median carina on the vertex (Figs 1D, 4A).

Remarks. Based on the following combination of characters: head relatively short, not strongly dorsoventrally depressed, broadly produced anteriorly; vertex with median carina strongly thickened and broad; pronotum with median carina relatively broad; and frons with basal part of median carina strongly broadened and thickened, this species and the four previously described species T. menglunensis, T. rubrolineata, T. similis and T. theivora form a very distinct group within Tambinia. In external appearance, this species is similar to T. similis (Fig. 1C), but differs from the latter in the median carina on the vertex being long and percurrent, thickened and broad but not spatula-like, the forewings relatively broad, the nodal line relatively near the middle, and cell Sc without a short cross vein at its apical angle. This species is also similar to T. menglunensis (see Men and Qin, 2009: 263, Figs 1, 2), but differs from the latter in the obsolete spots and markings on the vertex, pronotum, mesonotum and forewings, the median carinae of the vertex, pronotum and frons strongly thickened and broad, and the gonostylus with a very small median conical process.

Tambinia sexmaculata sp. n.
urn:lsid:zoobank.org:act:56274E10-6B5F-41CC-9DB7-563446EC4CD2
http://species-id.net/wiki/Tambinia_sexmaculata
Figs 1E, 5A-H

Description. Body length (from apex of vertex to tip of forewings): ♂ 6.2 mm (N=1), ♀ 6.6-6.8 mm (N=2).

Colour. General colour tawny yellow, vertex (Figs 1E, 5A) with six red spots, genae (Fig. 5B) with orange patch between eye and lateral margin of frons, forewings (Figs 1E, 5D) with two pairs of red spots near bases of sutural margins and distad of level of union of claval veins, respectively; tips of spines on hind tibiae and tarsi black.

Etymology. This new species is named for the presence of six reddish markings on the vertex (Figs 1E, 5A).

Distribution. Australia (Kuranda).

Remarks. This species is similar to T. conus but can be distinguished from the latter by the vertex with six red spots, the forewings with two pairs of red spots, and the male genitalia structure (Figs 5F-H), especially the shape of the anal tube, the relatively small median conical process of the gonostylus, the relatively long periandrium with a long, sinuate, dorsoposteriorly directed process on the left side, and the apical part of the aedeagal shaft abruptly curved through approximately 30°, directed to the right.
Second-Order Unsupervised Neural Dependency Parsing

Most unsupervised dependency parsers are based on first-order probabilistic generative models that only consider local parent-child information. Inspired by second-order supervised dependency parsing, we propose a second-order extension of unsupervised neural dependency models that incorporates grandparent-child or sibling information. We also propose a novel design of the neural parameterization and optimization methods of the dependency models. In second-order models, the number of grammar rules grows cubically with the vocabulary size, making it difficult to train lexicalized models that may contain thousands of words. To circumvent this problem while still benefiting from both second-order parsing and lexicalization, we use the agreement-based learning framework to jointly train a second-order unlexicalized model and a first-order lexicalized model. Experiments on multiple datasets show the effectiveness of our second-order models compared with recent state-of-the-art methods. Our joint model achieves a 10% improvement over the previous state-of-the-art parser on the full WSJ test set.

Introduction

Dependency parsing is a classical task in natural language processing. The head-dependent relations produced by dependency parsing can provide an approximation to the semantic relationships between words, which is useful in many downstream NLP tasks such as machine translation, information extraction and question answering. Nowadays, supervised dependency parsers can reach a very high accuracy (Dozat and Manning, 2017; Zhang et al., 2020). Unfortunately, supervised parsing requires treebanks (annotated parse trees) for training, which are very expensive and time-consuming to build. On the other hand, unsupervised dependency parsing requires only unannotated corpora for training, though the accuracy of unsupervised parsing still lags far behind that of supervised parsing. We focus on unsupervised dependency parsing in this paper.

Most methods in the literature of unsupervised dependency parsing are based on the Dependency Model with Valence (DMV) (Klein and Manning, 2004), which is a probabilistic generative model. A main disadvantage of the DMV and many of its extensions is that they lack expressiveness. The generation of a dependent token is conditioned only on its parent, the relative direction of the token to its parent, and whether its parent has already generated any child in this direction, hence ignoring other contextual information. To improve model expressiveness, researchers often turn to discriminative methods, which can incorporate more contextual information into the scoring or prediction of dependency arcs. For example, Grave and Elhadad (2015) uses the idea of discriminative clustering, Cai et al. (2017) uses a discriminative parser in the CRF-autoencoder framework, and Li et al. (2018) uses an encoder-decoder framework that contains a discriminative transition-based parser. For the DMV, Han et al. (2019) proposes the discriminative neural DMV, which uses a global sentence embedding to introduce contextual information into the calculation of grammar rule probabilities. In the literature of supervised graph-based dependency parsing, however, there exists another technique for incorporating contextual information and increasing expressiveness, namely high-order parsing (Koo and Collins, 2010; Ma and Zhao, 2012). A first-order parser, such as the DMV, only considers local parent-child information.
In comparison, a high-order parser takes into account the interactions between multiple dependency arcs. In this work, we propose the second-order neural DMV model, which incorporates second-order information (e.g., sibling or grandparent) into the original (neural) DMV model. To achieve better learning accuracy, we design a new neural architecture for rule probability computation and promote direct marginal likelihood optimization (Salakhutdinov et al., 2003; Tran et al., 2016) over the widely used expectation-maximization algorithm for training. One particular challenge faced by second-order neural DMVs is that the number of grammar rules grows cubically with the vocabulary size, making it difficult to store and train a lexicalized model containing thousands of words. Therefore, instead of learning a second-order lexicalized model, we propose to jointly learn a second-order unlexicalized model (whose vocabulary consists of POS tags instead of words) and a first-order lexicalized model based on the agreement-based learning framework (Liang et al., 2007). The jointly learned models have a manageable number of grammar rules while still benefiting from both second-order parsing and lexicalization. We conduct experiments on the Wall Street Journal (WSJ) dataset and seven languages from the Universal Dependencies (UD) dataset. The experimental results demonstrate that our models achieve state-of-the-art accuracies on unsupervised dependency parsing.

Dependency Model With Valence

The Dependency Model with Valence (DMV) (Klein and Manning, 2004) is a probabilistic generative model of a sentence and its parse tree. It generates a dependency parse tree from the imaginary root node in a recursive top-down manner. There are three types of probabilistic grammar rules in a DMV, namely ROOT, CHILD and DECISION rules, each associated with a set of multinomial distributions P_ROOT(c), P_CHILD(c|p, dir, val) and P_DECISION(dec|p, dir, val), where p is the parent token, c is the child token, dec is the continue/stop decision, dir indicates the direction of generation, and val indicates whether parent p has generated any child in direction dir. To generate a sequence of tokens along with its dependency parse tree, the DMV model first generates a token c from the ROOT distribution P_ROOT(c). Then, for each token p that has already been generated, it generates a decision from the DECISION distribution P_DECISION(dec|p, dir, val) to determine whether to generate a new child in direction dir. If dec is CONTINUE, then a new child c is generated from the CHILD distribution P_CHILD(c|p, dir, val). If dec is STOP, then p stops generating children in direction dir. The joint probability of the sequence and its corresponding dependency parse tree can be calculated by taking the product of the probabilities of all the generation steps.

Neuralized DMV Models

Neural DMV

One limitation of the DMV model is that it does not consider the correlation between tokens. Jiang et al. (2016) proposed the Neural DMV (NDMV) model, which uses continuous POS embeddings to represent discrete POS tags and calculates rule probabilities through neural networks based on these embeddings. In this way, the model can learn the correlation between POS tags and smooth grammar rule probabilities accordingly.

Lexicalized NDMV

The Neural DMV is still an unlexicalized model that is based on POS tags and does not use word information. Han et al. (2017) proposed the Lexicalized NDMV (L-NDMV), in which each token is a POS/word pair. The neural network that computes rule probabilities takes both the POS embedding and the word embedding as input. To reduce the vocabulary size, they replace low-frequency words with their POS tags.
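Before turning to the second-order extension, here is a minimal sketch of the first-order generative story from the DMV section above. This is our own illustrative code, not the authors' implementation; the dictionary-based rule tables and all names are assumptions for the example.

```python
# A minimal sketch of the DMV generative story; rule tables are assumed to be
# dictionaries mapping conditioning contexts to multinomial distributions.
import random

def sample_from(dist):
    """Sample a key from a non-empty dict of outcome -> probability."""
    r, acc = random.random(), 0.0
    for outcome, p in dist.items():
        acc += p
        if r < acc:
            return outcome
    return outcome  # guard against floating-point slack

def generate_tree(root_p, child_p, decision_p, max_depth=5):
    """Apply the ROOT rule once, then alternate DECISION and CHILD rules
    for each direction of each head, recursively expanding children."""
    def expand(head, depth):
        node = {"token": head, "left": [], "right": []}
        if depth == max_depth:
            return node
        for direction in ("left", "right"):
            val = "NOCHILD"  # has this head generated a child in this direction yet?
            while sample_from(decision_p[(head, direction, val)]) == "CONTINUE":
                child = sample_from(child_p[(head, direction, val)])
                node[direction].append(expand(child, depth + 1))
                val = "HASCHILD"
        return node
    return expand(sample_from(root_p), 0)
```

The joint probability of a sampled tree is the product of the probabilities of every `sample_from` draw, exactly as described above.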
Second-Order Parsing

In our proposed second-order NDMV, we calculate each rule probability based additionally on the information of the sibling or grandparent. We take sibling-NDMV as an example to demonstrate the generative story.

• We start with the imaginary root token, generating its only child c with probability P_ROOT(c).
• For each token p, we decide whether to generate a new child or not with probability P_DECISION(dec|p, s, dir, val), where s is the previous child token generated by p in direction dir. If p has not generated any child in direction dir yet, we use a special symbol NULL to represent s.
• If decision dec is CONTINUE, p generates a new child c with probability P_CHILD(c|p, s, dir, val). If decision dec is STOP, p stops generating children in direction dir.

For parsing, we design dynamic programming algorithms adapted from Koo and Collins (2010). Since the grandparent token is deterministic for each token, the parsing algorithm of our grand-NDMV model is similar to theirs. There are two options for determining the sibling token, since the generation process of child tokens can proceed either from the inside out or from the outside in. Koo and Collins (2010) make the inside-out assumption, but in this paper we make the outside-in assumption because it makes implementation easier and achieves better performance empirically. We provide the pseudocode of the second-order inside algorithm and the second-order parsing algorithm in the appendix.

Parameterization

In a neural DMV, we compute the probability of a grammar rule using a neural network. Below we formulate the computation of CHILD rule probabilities; the full architecture of the neural network is shown in Figure 1. ROOT and DECISION rule probabilities are computed in a similar way. In our second-order neural DMV, each CHILD rule P_CHILD(c|p, s, dir, val) involves three tokens: parent p, child c, and sibling (or grandparent) s. Denote the embeddings of the parent, child and sibling (or grandparent) by x_p, x_c, x_s ∈ R^d, which are retrieved from a shared token embedding layer. We use three different linear transformations to produce the representations of a token as a parent, child, and sibling (or grandparent):

e_c = W_c x_c,  e_p = W_p x_p,  e_s = W_s x_s.

We feed e_c, e_p, e_s to the same neural network, which consists of three consecutive MLPs. The first and second MLPs are used respectively to insert valence and direction information into the representations, and the last MLP is used to produce the final hidden representations h_c, h_p, h_s (see the appendix for the complete formulation). We use different parameters of the first and second MLPs for different values of the valence val and direction dir. We add skip-connections to the first and second MLPs because skip-connections have been found very useful in unsupervised neural parsing (Kim et al., 2019). We then follow Wang et al. (2019) and use a decomposed trilinear function to compute the unnormalized rule probability from the three vectors h_c, h_p, h_s; its parameters are three low-rank projection matrices, one per vector, whose projections are combined by scalar multiplication ×. Then we apply a softmax function over the candidate children c ∈ C to produce the final rule probability, where C is the vocabulary.
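The following PyTorch sketch pulls the pieces of this pipeline together: shared embeddings, separate parent/child/sibling projections, MLPs with skip-connections whose parameters are selected by valence and direction, and a decomposed trilinear scorer followed by a softmax. All class, method and dimension names are our own illustrative choices, not the paper's released code.

```python
import torch
import torch.nn as nn

class ChildRuleScorer(nn.Module):
    """Sketch of the CHILD-rule network: log P_CHILD(c | p, s, dir, val)."""
    def __init__(self, vocab_size, d=100, rank=30):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, d)
        self.W_p, self.W_c, self.W_s = (nn.Linear(d, d) for _ in range(3))
        # separate parameters per valence value and per direction
        self.val_mlp = nn.ModuleDict({v: nn.Linear(d, d) for v in ("NOCHILD", "HASCHILD")})
        self.dir_mlp = nn.ModuleDict({v: nn.Linear(d, d) for v in ("left", "right")})
        self.out_mlp = nn.Linear(d, d)
        # low-rank projections of the decomposed trilinear function
        self.U_p, self.U_c, self.U_s = (nn.Linear(d, rank, bias=False) for _ in range(3))

    def hidden(self, e, val, direction):
        h = e + torch.relu(self.val_mlp[val](e))        # MLP 1 + skip-connection
        h = h + torch.relu(self.dir_mlp[direction](h))  # MLP 2 + skip-connection
        return torch.relu(self.out_mlp(h))              # MLP 3 -> final representation

    def forward(self, p, s, val, direction):
        # p, s: parent and sibling (or grandparent) token ids
        h_p = self.hidden(self.W_p(self.emb(p)), val, direction)
        h_s = self.hidden(self.W_s(self.emb(s)), val, direction)
        h_c = self.hidden(self.W_c(self.emb.weight), val, direction)  # all children
        # decomposed trilinear score, then normalize over the vocabulary
        score = (self.U_c(h_c) * self.U_p(h_p) * self.U_s(h_s)).sum(-1)
        return torch.log_softmax(score, dim=-1)
```

In use, the ROOT and DECISION networks would share the same embedding layer, as described above; only the output spaces differ.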
Learning

The learning objective function L(θ) is the log-likelihood of the training sentences, L(θ) = Σ_x log p_θ(x), where θ denotes the parameters of the neural networks. The probability of each sentence x is defined as p_θ(x) = Σ_{z∈T(x)} p_θ(x, z), where T(x) is the set of all possible dependency parse trees for sentence x. We use c(r, x, z) to represent the number of times rule r is used in dependency parse tree z of sentence x. Then we have p_θ(x, z) = Π_{r∈R} p_θ(r)^{c(r,x,z)}, where R is the collection of all DECISION, CHILD and ROOT rules.

Learning via EM algorithm

We can rewrite the log-likelihood of sentence x as log p_θ(x) ≥ E_{q(z)}[log p_θ(x, z)] + H(q), where q(z) is an arbitrary distribution over parse trees and H is the entropy function; the bound is tight when q(z) equals the posterior. In the E-step, we fix θ and set q(z) = p_θ(z|x). In the M-step, we fix q(z) and update θ with the objective Σ_{r∈R} e(r, x) log p_θ(r), where e(r, x) is the expected count of grammar rule r in sentence x based on q(z), which can be obtained using the inside-outside algorithm. We can use gradient descent to update θ.

Learning via direct marginal likelihood optimization

We can also use gradient descent to maximize log p_θ(x) directly. Based on the derivation of Salakhutdinov et al. (2003), the gradient takes the form ∇_θ log p_θ(x) = Σ_{r∈R} e(r, x) ∇_θ log p_θ(r), where e(r, x) is the expected count of grammar rule r in sentence x based on p_θ(z|x). Traditionally, we use the inside-outside algorithm to obtain the expected count e(r, x). Eisner (2016) points out that we can use back-propagation to calculate the expected count e(r, x). So we only need to use the inside algorithm to calculate log p_θ(x) and then use back-propagation to update the parameters directly, without the need for the outside algorithm.

Mini-batch gradient descent as online EM

The gradient above contains the term e(r, x). If we use mini-batch gradient descent to optimize log p_θ(x), it is analogous to the online EM algorithm (Liang and Klein, 2009). To compute the gradient for each mini-batch, we first need to compute the expected counts from the training sentences in the mini-batch, which is exactly what the online E-step does; we then use the expected counts to compute the gradient and update the model parameters, which is similar to the M-step, except that here we only perform one update step, while in the EM algorithm multiple update steps may be taken based on the same expected counts. According to Liang and Klein (2009), online EM has a faster convergence speed and can even find a better solution. Empirically, we do find that direct marginal likelihood optimization outperforms the EM algorithm.

Agreement-Based Learning

In our second-order DMV model, the number of grammar rules is 4|V|^3 + 4|V|^2 + |V|, which is cubic in the vocabulary size |V|. When our model is lexicalized, the vocabulary may contain thousands of words or more, making the model size less manageable. Instead of learning a second-order lexicalized model, we propose to jointly learn a second-order unlexicalized model (whose vocabulary consists of POS tags instead of words) and a first-order lexicalized model based on the agreement-based learning framework (Liang et al., 2007). The jointly learned models have a manageable number of grammar rules while still benefiting from both second-order parsing and lexicalization. Empirically, we do find that the jointly trained models outperform lexicalized second-order models. Following Liang et al. (2007), we define the objective function for our jointly trained first-order L-NDMV and second-order NDMV as O_agree(θ_0, θ_1) = Σ_x log Σ_{z∈T(x)} p_{θ_0}(x, z) p_{θ_1}(x, z), where θ_0 denotes the parameters of the L-NDMV and θ_1 those of the second-order NDMV. Intuitively, the objective requires the two models to reach agreement on the probability distribution of dependency parse tree z.
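To make the agreement objective concrete, here is a schematic sketch for one sentence. It relies on a simplifying assumption, made for illustration only, that both models expose rule log-probabilities over the same tree events; the paper's joint inside algorithm (Algorithm 3 in the appendix) handles the actual mismatch between the first-order and second-order factorizations. `joint_inside` is a hypothetical inside pass over combined rule scores.

```python
# Because p0(x, z) * p1(x, z) factorizes rule-by-rule over the same tree z,
# the inner sum of O_agree can be computed by a single inside pass in which
# each rule's log-score is the sum of the two models' log-probabilities.
def agreement_log_score(model0, model1, sentence):
    def combined_rule_logp(rule):
        return model0.rule_logp(rule) + model1.rule_logp(rule)
    return joint_inside(sentence, combined_rule_logp)  # log sum_z p0(x,z) p1(x,z)
```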
We use joint decoding (parsing) to predict the dependency parse tree z_predict for sentence x. The inside and parsing algorithms for the jointly trained models can be found in the appendix.

Learning via product EM algorithm

Liang et al. (2007) propose to optimize the objective using the product EM algorithm, based on a lower bound L(θ, q) of the objective that is analogous to the single-model EM bound above, with p_θ(x, z) replaced by the product p_{θ_0}(x, z) p_{θ_1}(x, z). The product EM algorithm performs coordinate-wise ascent on L(θ, q). In the product E-step, we optimize L(θ, q) with respect to q; up to a constant that depends on neither θ nor q, the maximum is obtained by setting q(z) ∝ p_{θ_0}(z|x) p_{θ_1}(z|x). In the product M-step, the objective decomposes, up to a constant that does not depend on θ, into one term for each model. We update the parameters of each model separately based on the expected counts obtained from the product E-step, which can be calculated through the inside-outside algorithm.

Learning via direct marginal likelihood optimization

O_agree can be calculated through the inside algorithm. Similar to Section 3.3, we can benefit from both agreement-based learning and the online EM algorithm if we use gradient descent to optimize O_agree instead of using the product EM algorithm.

Setting

On the WSJ dataset, for fair comparison, we follow Han et al. (2017) and Han et al. (2019) and use HDP-DEP (Naseem et al., 2010) to initialize our models. Specifically, we train the unsupervised HDP-DEP model on WSJ, use it to parse the training corpus, and then use the predicted parse trees to perform supervised learning of our model for several epochs. On the UD dataset, we use the K&M initialization (Klein and Manning, 2004). We use direct marginal likelihood optimization (DMO) as the training method and use Adam (Kingma and Ba, 2015) as the optimizer with learning rate 0.001. The batch size is set to 64 for WSJ and 100 for UD. The hyperparameters of the neural networks, the setting of L-NDMV and more details can be found in the appendix. We apply early stopping based on the log-likelihood of the development data and report the mean accuracy over 5 random restarts.

Result

Result on WSJ

In Table 1, we compare our methods with previous unsupervised dependency parsers. Our sibling-NDMV model can outperform the previous state-of-the-art parser by 1.9 points on WSJ10 and 3.1 points on WSJ in the unlexicalized setting. Our lexicalized sibling-NDMV achieves further improvement over the unlexicalized sibling-NDMV. On the other hand, our grand-NDMV performs significantly worse than the sibling-NDMV and lexicalization hurts its performance. Why grandparent information is less useful than sibling information in unsupervised parsing is an intriguing question that we leave for future research. Joint training with a first-order L-NDMV can increase the performance of unlexicalized sibling-NDMV from 77.5 to 79.9 and that of unlexicalized grand-NDMV from 71.4 to 76.0 on WSJ10. The jointly trained models also outperform the lexicalized second-order models.

Table 1 (excerpt; UAS on WSJ10 / WSJ):
(Berg-Kirkpatrick et al., 2010): 63.0 / –
PR-S (Gillenwater et al., 2011): 64.3 / 53.3
E-DMV (Headden et al., 2009): 65.0 / –
TSG-DMV (Blunsom and Cohn, 2010): 65.9 / 53.1
UR-A E-DMV (Tu and Honavar, 2012): 71.4 / 57.0
CRFAE (Cai et al., 2017): 71.7 / 55.7
Neural DMV (Jiang et al., 2016): 72.5 / 57.6
HDP-DEP (Naseem et al., 2010): 73.8 / –
NVTP (Li et al., 2018): 54

Result on UD

In Table 2, we first compare our models with models which do not use the universal linguistic prior (UP).
The variational variant of D-NDMV (Han et al., 2019) is the recent state-of-the-art model without UP. Our method outperforms theirs on six of the eight languages and also on average. We then compare our second-order models with recent state-of-the-art discriminative models, which rely heavily on the universal linguistic prior to achieve good performance (for example, Li et al. (2018) reported bad results if they do not use the universal linguistic prior). We find that sibling-NDMV can outperform these discriminative models while grand-NDMV can achieve comparable results, even though we do not utilize the universal linguistic prior.

Table 2 legend: (Noji and Miyao, 2015). DV, VV: the deterministic and variational variants of D-NDMV (Han et al., 2019). +sibling: our second-order sibling-NDMV. +grand: our second-order grand-NDMV. NVTP: neural variational transition-based parser (Li et al., 2018). CM: Convex-MST (Grave and Elhadad, 2015).

Effect of Skip-Connections

From Tables 3 and 4, we find that using skip-connections can achieve higher log-likelihood and better parsing accuracy in most cases. On UD, the performance is much better when using skip-connections, except on Basque.

Comparison of Training Methods

In Table 3, we find that the EM algorithm significantly underperforms DMO. On the other hand, Table 4 shows that the EM algorithm performs comparably to DMO on WSJ. We also compare the learning curves of these two methods. For fair comparison, we use the same batch size for both methods. First we conduct an experiment using the joint L-NDMV and sibling-NDMV model on WSJ. In Figure 2, we find that DMO converges to a higher log-likelihood compared with EM and the convergence speed is roughly the same. In Figure 3, we find DMO can find a slightly better model compared with EM. Second, we conduct an experiment using the sibling-NDMV model on the UD French dataset. In Figure 4, we find DMO converges faster than EM and converges to a higher log-likelihood. In Figure 5, we find that the model accuracy of DMO is much higher than that of EM at the beginning, but it drops significantly after epoch 23, suggesting that early stopping is necessary. We also find similar phenomena for other languages on UD. It should be noted that we use HDP-DEP (Naseem et al., 2010) for initialization on WSJ and use the K&M initialization (Klein and Manning, 2004) on UD. We see that HDP-DEP initialization leads to a very high initial UAS of 75% (Figure 3), while K&M initialization leads to a low initial UAS of 38.5% (Figure 5). It can be seen that EM is more sensitive to the initialization while DMO can achieve good results even if the initialization is bad.

Effect of Joint Training and Parsing

In Table 5, we compare the performance with different training and parsing settings. We find that joint parsing is better than separate parsing in both training settings. With joint training, each individual model can achieve better performance compared with separate training, which shows the effectiveness of agreement-based joint learning.

Limitations

Our second-order NDMV model is more sensitive to the initialization compared with the first-order NDMV model. We fail to produce a good result under the K&M initialization on WSJ: only 58.5% UAS for sibling-NDMV on WSJ10, while the first-order NDMV model can achieve 69.7% UAS. We rely on the parsing result of HDP-DEP to initialize our model in order to reach the state-of-the-art result on WSJ. This is similar to the case of L-NDMV, which performs badly when using the K&M initialization according to Han et al. (2017).
Because of the bad performance of L-NDMV with the K&M initialization, as well as the time constraint that prevents us from running HDP-DEP on UD, we did not conduct experiments of agreement-based learning with L-NDMV on the UD datasets. We leave this for future work. Our second-order model is also quite sensitive to the design of the neural architecture, which is similar to the case of unsupervised constituency parsing reported by Kim et al. (2019). We also try the third-order NDMV model (grand-sibling or tri-sibling) but are not able to get better results compared with sibling-NDMV. Our second-order parsing algorithm has a theoretical time complexity of O(n^4), which is higher than the O(n) time complexity of transition-based unsupervised parsers (Li et al., 2018) and the O(n^3) complexity of first-order NDMV models, where n is the sentence length. However, transition-based parsers are hard to batchify, while our model can be parallelized efficiently following the methods introduced by Torch-Struct (Rush, 2020). In practice, our second-order parser runs very fast on GPU, requiring only several minutes to train.

Table 5: The effect of joint training and joint parsing.

Conclusion

We propose second-order NDMV models, which incorporate sibling or grandparent information. We find that sibling information is very useful in unsupervised dependency parsing. We use agreement-based learning to combine the benefits of second-order parsing and lexicalization, achieving state-of-the-art results on the WSJ dataset. We also show the effectiveness of our neural parameterization architecture with skip-connections and the direct marginal likelihood optimization method.

A.1 Inside Algorithm and Parsing Algorithm

We use the dynamic programming substructure proposed for second-order supervised dependency parsing. For the grandparent-child model, Koo and Collins (2010) augment both complete and incomplete spans with grandparent indices; they call the augmented spans g-spans. Formally, they denote a complete g-span as C^g_{h,e}, where C_{h,e} is a normal complete span in the Eisner algorithm and g is the grandparent's index, with the implication that (g, h) is a dependency. An incomplete g-span is defined similarly. For second-order NDMV, we further augment incomplete and complete g-spans with valence information. We distinguish the direction of a span explicitly, denoting our augmented complete v-span as C^{g,v}_{h,e,d}, where d is the direction, v is the valence, and h and e are the start and end indices of the span. An incomplete v-span is defined similarly.

For grand-NDMV, given sentence x, we suppose that x_0 is the imaginary root token and x_1, …, x_n are tokens. We denote D[i, g, d, v, a] = log P_DECISION(decision = a | parent = x_i, grand = x_g, direction = d, valence = v), S[i, c, g, d, v] = log P_CHILD(child = x_c | parent = x_i, grand = x_g, direction = d, valence = v), and R[i] = log P_ROOT(child = x_i). Given these definitions, the inside algorithm of grand-NDMV is shown in Algorithm 1.

For sibling-NDMV, g in C^g_{h,e} stands for the index of the sibling instead of the index of the grandparent. Given sentence x, we suppose that x_0 is a special NULL token which stands for no sibling and x_1, …, x_n are tokens. We denote D[i, g, d, v, a] = log P_DECISION(decision = a | parent = x_i, sibling = x_g, direction = d, valence = v), S[i, c, g, d, v] = log P_CHILD(child = x_c | parent = x_i, sibling = x_g, direction = d, valence = v), and R[i] = log P_ROOT(child = x_i). Given these definitions, the inside algorithm of sibling-NDMV is shown in Algorithm 2.
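For readers who want the shape of these chart recursions, below is a compact first-order simplification in log-space. This is our own sketch: the valence and decision terms, and the extra grandparent or sibling index that Algorithms 1 and 2 add as additional chart dimensions, are omitted for brevity.

```python
import numpy as np

NEG_INF = -1e18

def first_order_inside(arc_logp, root_logp):
    """Eisner-style inside pass over positions 0..n-1.
    arc_logp[h, d]: log-probability of the arc h -> d;
    root_logp[i]: log P_ROOT(x_i). Returns log p(x)."""
    n = len(root_logp)
    # C[i, j, 1]: head i, complete span [i..j]; C[i, j, 0]: head j, span [i..j]
    C = np.full((n, n, 2), NEG_INF)
    I = np.full((n, n, 2), NEG_INF)  # incomplete spans, same head convention
    for i in range(n):
        C[i, i, :] = 0.0
    for w in range(1, n):
        for i in range(n - w):
            j = i + w
            # build an arc between i and j over all split points m in [i, j)
            m = np.logaddexp.reduce(C[i, i:j, 1] + C[i + 1:j + 1, j, 0])
            I[i, j, 1] = m + arc_logp[i, j]  # arc i -> j
            I[i, j, 0] = m + arc_logp[j, i]  # arc j -> i
            # absorb a finished incomplete span into a complete one
            C[i, j, 1] = np.logaddexp.reduce(I[i, i + 1:j + 1, 1] + C[i + 1:j + 1, j, 1])
            C[i, j, 0] = np.logaddexp.reduce(C[i, i:j, 0] + I[i:j, j, 0])
    # attach the imaginary root: head h must span the whole sentence
    return np.logaddexp.reduce(root_logp + C[0, :, 0] + C[:, n - 1, 1])
```

The second-order algorithms follow the same attach/absorb pattern, with each chart cell additionally indexed by the g-index and the valence.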
Given these definitions, the inside algorithm of sibling-NDMV is shown in Algorithm 2. For the jointly trained L-NDMV and second-order NDMV model, we take the jointly trained L-NDMV and sibling-NDMV as an example. We denote by x the sequence of word/POS pairs, which starts indexing at 1. The inside algorithm of the jointly trained L-NDMV and sibling-NDMV model is shown in Algorithm 3. Following Eisner (2016), we use back-propagation to obtain the expected counts of grammar rules. For the parsing algorithm, we can replace logsumexp with max in Algorithms 1, 2, and 3 to get the Viterbi log-likelihood of the sentence, then use back-propagation to get the grammar rules used in the Viterbi parse tree, and finally reconstruct the parse tree based on these rules.

A.2 Full Parameterization

Denote the embeddings of the parent, child and sibling (or grandparent) by x_p, x_c, x_s ∈ R^d. We use three different linear transformations to produce the representations of each token as a parent, child, and sibling (or grandparent). We feed e_s, e_c, e_p to the same neural network, which consists of three MLPs with skip-connection layers. The first MLP encodes the valence information, where val ∈ {HASCHILD, NOCHILD}.

A.3 Hyperparameters

We set the dimension of the POS embedding to 100. The dimension of all linear layers used to calculate hidden representations is set to 100. We set the size of the decomposed trilinear function parameters to 30 for child and root rules and 10 for decision rules in the unlexicalized setting. For the lexicalized model, we set the dimension of the word embedding to 100. We concatenate the POS embedding and the word embedding as input. The dimension of all linear layers used to calculate hidden representations is set to 200. We set the size of the decomposed trilinear function parameters to 150 for child and root rules and 50 for decision rules. We use an additional dropout layer after the embedding layer to avoid over-fitting, since the vocabulary size of the lexicalized model is much larger than that of the unlexicalized model. The dropout rate is set to 0.5.

A.4 Setting of L-NDMV

The vocabulary consists of word/POS pairs that appear at least twice in the WSJ10 dataset. We use a random embedding to initialize the POS embedding and a FastText embedding to initialize the word embedding, which differs from the setting in the original paper (Han et al., 2017). We train FastText on the whole WSJ dataset for 100 epochs with window size 3 and embedding dimension 100.
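To make the parameterization of A.2 concrete, here is a minimal sketch, assuming PyTorch; the layer sizes follow the unlexicalized hyperparameters above, but the class and variable names are illustrative, the valence handling is simplified, and this is not the authors' released code:

```python
import torch
import torch.nn as nn

class SkipMLP(nn.Module):
    """One MLP block with a skip-connection: out = ReLU(W x + b) + x."""
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)
        self.act = nn.ReLU()

    def forward(self, x):
        # Residual (skip) connection around the non-linear layer.
        return self.act(self.linear(x)) + x

class TokenRepresentations(nn.Module):
    """Three separate linear maps give each token its parent/child/sibling
    (or grandparent) representation; a shared stack of three skip-MLP blocks
    then refines them (the first block would also receive valence info)."""
    def __init__(self, emb_dim=100, hid_dim=100, n_blocks=3):
        super().__init__()
        self.as_parent = nn.Linear(emb_dim, hid_dim)
        self.as_child = nn.Linear(emb_dim, hid_dim)
        self.as_sibling = nn.Linear(emb_dim, hid_dim)
        self.shared = nn.Sequential(*[SkipMLP(hid_dim) for _ in range(n_blocks)])

    def forward(self, x_p, x_c, x_s):
        e_p, e_c, e_s = self.as_parent(x_p), self.as_child(x_c), self.as_sibling(x_s)
        return self.shared(e_p), self.shared(e_c), self.shared(e_s)

# Example usage with a batch of 8 POS embeddings of dimension 100.
model = TokenRepresentations()
h_p, h_c, h_s = model(torch.randn(8, 100), torch.randn(8, 100), torch.randn(8, 100))
```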
Quantum SU$(2|1)$ supersymmetric $\mathbb{C}^N$ Smorodinsky--Winternitz system

We study quantum properties of the SU$(2|1)$ supersymmetric (deformed ${\cal N}=4$, $d=1$ supersymmetric) extension of the superintegrable Smorodinsky--Winternitz system on a complex Euclidean space $\mathbb{C}^N$. The full set of wave functions is constructed and the energy spectrum is calculated. It is shown that SU$(2|1)$ supersymmetry forces the bosonic and fermionic states to belong to separate energy levels, thus exhibiting the "even-odd" splitting of the spectra. The superextended hidden symmetry operators are also defined and their action on SU$(2|1)$ multiplets of the wave functions is given. An equivalent description of the same system in terms of superconformal SU$(2|1,1)$ quantum mechanics is considered and a new representation of the hidden symmetry generators in terms of the SU$(2|1,1)$ ones is found.

Introduction

The models of supersymmetric mechanics were initially introduced as toy models for supersymmetric field theories [1]. Accordingly, their study was mainly limited to models exhibiting one-dimensional Poincaré supersymmetry defined by the relations in which Q_A are N real supercharges and H is the Hamiltonian. About a decade ago, a wide activity started around field-theoretical models with "rigid supersymmetry on curved superspaces" (see, e.g., [2]). These studies motivated two of us (E. I. and S. S.) to investigate the one-dimensional (i.e. mechanical) analogs of these theories [3], [4]. The main subjects of our interest were systems in which d = 1, N = 4 Poincaré supersymmetry is deformed by a mass-dimension parameter m into su(2|1) supersymmetry, with the corresponding non-vanishing (anti)commutators (footnote 1). Here, the generator H is the Hamiltonian (coinciding with the U(1) generator), while the remaining bosonic generators I^i_j (i = 1, 2) define the SU(2) R-symmetry. As one of the results of this study, it was observed that the so-called "weak N = 4 supersymmetric Kähler oscillator" models (particle models living on Kähler spaces and interacting with a specific potential field and a constant magnetic field), suggested earlier in [5,6], supply nice examples of SU(2|1) supersymmetric mechanics (footnote 2), and they can be reproduced from the SU(2|1) superfield approach worked out in [4]. This observation further entailed the construction of a few novel SU(2|1) supersymmetric superintegrable oscillator-like models specified by the interaction with a constant magnetic field. They include superextensions of the isotropic oscillators on C^N and CP^N [5], [8], as well as of the C^N Smorodinsky-Winternitz system in the presence of a constant magnetic field [9] and the CP^N Rosochatius system [10]. In a recent paper [11] we noticed that switching on an interaction with a constant magnetic field in ordinary N = 4 supersymmetric mechanics on a Kähler manifold breaks either N = 4 supersymmetry or the isometries of the initial bosonic system, including its hidden symmetries (cf. [12]). We demonstrated that this drawback can be overcome by performing, instead of the d = 1 Poincaré supersymmetrization, its deformed variant, i.e. SU(2|1) supersymmetrization.
Examining the SU(2|1) supersymmetric models listed above, we observed that the SU(2|1) supersymmetrization preserves all kinematical symmetries of the initial systems and, in a number of cases, also the hidden ones (the explicit expressions for the "super-counterparts" of the hidden symmetry generators have so far been constructed for the C^N oscillator and the C^N Smorodinsky-Winternitz system in the presence of a constant magnetic field). It should be pointed out that the hidden symmetries play an important role in the quantum domain: e.g., in standard supersymmetric quantum mechanics they amount to an additional degeneracy of the (higher) energy levels as compared to N-Poincaré supersymmetry, which enhances it by the factor 2^(N/2). With these reasonings in mind, it is desirable to better understand, through concrete examples, how SU(2|1) supersymmetry affects the energy spectrum of the systems considered. The basic peculiar feature of the supersymmetric Kähler oscillator models is that their Hamiltonian is in fact identified with the U(1) generator in (1.2). As was shown in [4], an extra U(1) R-charge ("fermionic number") can be introduced in addition to the Hamiltonian in this class of models only in the limit m = 0. One can expect that the impossibility of separating the Hamiltonian from the fermionic number in such models at m ≠ 0 could have an essential impact on the structure of the relevant energy spectra.

[Footnote 1: These su(2|1) superalgebra relations differ from those in [11] by the rescaling Q → √2 Q. We mostly follow the conventions of [4]. Footnote 2: Another version of this kind of supersymmetric mechanics was studied in [7], with SU(2|1) termed "weak supersymmetry".]

In the present paper we construct the SU(2|1) supersymmetric extension of the C^N Smorodinsky-Winternitz (in what follows, S.-W.) quantum system in the presence of a constant magnetic field as an instructive and simple example of the general Kähler oscillator models. In particular, we focus on the interplay of SU(2|1) supersymmetry and hidden symmetry in forming the energy spectrum of this system. An analogous analysis of its purely bosonic sector was performed in [9]. An interesting feature of the SU(2|1) C^N S.-W. model, unique as compared to the other models of SU(2|1) supersymmetric Kähler oscillators (footnote 3), is its implicit superconformal SU(2|1, 1) symmetry. At the classical level this follows from the results of ref. [13], while here we extend the correspondence with a complex SU(2|1, 1) superconformal mechanics to the quantum domain. As a preamble, let us recall that the C^N S.-W. system considered in [9] (footnote 4) amounts to a sum of N two-dimensional isotropic oscillators with a ring-shaped potential, interacting with a constant magnetic field (orthogonal to the plane). It is defined by the Hamiltonian of [9], where B is the constant strength of the magnetic field. Besides the standard Liouville integrals of motion, the model possesses additional ones generated by the so-called Uhlenbeck tensor, and thus provides an example of a superintegrable system. The C^N S.-W. system interacting with a constant magnetic field belongs to the class of Kähler oscillators with the Kähler potential given in [11]. It admits an SU(2|1) supersymmetric extension given by the superalgebra (1.2), with the deformation parameter m = √(4ω² + B²). It is convenient to summarize here at once the basic results of the present paper.
• The energy spectrum of the deformed SU(2|1) supersymmetric system constructed here reveals an interesting feature: SU(2|1) supersymmetry gives rise to a separation of the bosonic and fermionic states in the spectrum. Bosonic states are associated with the even levels, and all fermionic states with the odd ones. So the energy spectrum exhibits an "even-odd" feature: adjacent bosonic and fermionic states carry different energies, shifted by half-integer numbers. The intrinsic reason for such a splitting is that the Hamiltonian in (1.2) hides within itself the fermionic number operator F and so does not commute with the supercharges. The ground state belongs to a non-singlet representation of SU(2|1), so the latter is spontaneously broken. These features are in contrast with the standard supersymmetric mechanics models, where there is a degeneracy between the fermionic and bosonic wave functions.

• The system exhibits superconformal SU(2|1, 1) symmetry with the central charge given by the sum of the generators of kinematical U(1) symmetries. The SU(2|1) supersymmetric Hamiltonian H can be split into a sum of the superconformal Hamiltonian and a central charge generator accounting for the constant external magnetic field. The whole set of wave functions is closed under the action of the superconformal algebra su(2|1, 1), which can be treated as a spectrum-generating algebra of the model considered.

• Due to an additional hidden symmetry generated by the proper superanalog of the Uhlenbeck tensor commuting with the Hamiltonian, the spectrum of the quantum SU(2|1) C^N S.-W. system reveals an extra degeneracy. The superextended Uhlenbeck tensor is shown to admit a nice representation in terms of bilinears of the SU(2|1, 1) generators.

• Furthermore, a generalization of the super Uhlenbeck tensor was found, such that it commutes with the full set of generators of the properly rotated SU(2|1) supergroup. It is also expressed through SU(2|1, 1) generators and gives rise to an additional degeneracy of the eigenvalues of the SU(2|1) Casimirs.

The paper is organized as follows. In Section 2 we formulate the SU(2|1) supersymmetric extension of the superintegrable C^1 S.-W. quantum system and determine its energy spectrum and wave functions. We also analyze the space of quantum states of this supersymmetric system from the standpoint of the SU(2|1) representation theory. In Section 3 we show that the system considered admits an equivalent description in terms of a quantum superconformal SU(2|1, 1) mechanics. In Section 4 we define the SU(2|1) supersymmetric C^N S.-W. quantum system as a sum of N copies of the C^1 systems. We reveal its hidden symmetry given by the supersymmetric counterpart of the Uhlenbeck tensor and show that it is responsible for an additional degeneracy of the spectrum of the SU(2|1) Casimir operators. A new representation for the superextended Uhlenbeck tensor in terms of the superconformal SU(2|1, 1) generators is found. The summary and outlook are the contents of Section 5. In Appendices A and B we present some details related to the non-linear algebra of hidden symmetries. In Appendix C, it is briefly discussed how conformal SL(2, R) symmetry manifests itself in the spectrum of the quantum bosonic C^N S.-W. system.

Supersymmetric C^1 Smorodinsky-Winternitz system

We proceed from the SU(2|1) supersymmetric S.-W. system on the complex Euclidean space C^1.
The quantum Hamiltonian, constructed according to the generic prescription of [4,11] for the specific N = 1 Kähler potential (1.5), is defined by the expression (2.1), with the corresponding (anti)commutators (footnote 5). The operators of bosonic momenta are represented in the standard form, while the fermionic operators ξ_i, ξ̄^i can be represented as 4 × 4 Euclidean gamma matrices acting on four-component wave functions. However, we prefer a more formal, though equivalent, representation; see, e.g., [15]. Namely, we consider one-component wave functions depending on z, z̄, ξ_i (with ξ_i, i = 1, 2, being a doublet of Grassmann variables) and represent ξ̄^j as the differential operator ξ̄^j = ∂/∂ξ_j. Respectively, the wave functions contain two bosonic components, ψ(z, z̄) and ξ_k ξ^k ψ′(z, z̄), and two fermionic ones, Ψ_i = ξ_i ψ′′(z, z̄). In what follows, it will be convenient to equivalently replace the parameters B and ω by λ and m, where m is the mass-dimension contraction parameter defined in (1.6), and λ is a dimensionless angle-type parameter defined by the relations (2.5). In the new notation, the Hamiltonian (2.1) is rewritten as (2.6), where L is the angular momentum (or U(1)) operator. The Hamiltonian (2.6) is manifestly invariant under the U(1) transformation z → e^(iκ) z, ξ_i → e^(iκ) ξ_i generated by this operator. The rest of the SU(2|1) generators is given in (2.8). It is straightforward to check that they satisfy the su(2|1) superalgebra relations (1.2).

Wave functions and spectrum

We define super wave functions through a ξ_i-expansion of the wave functions depending on (z, z̄, ξ_i), with ξ_j being an annihilation operator. This expansion amounts to the fermionic wave function Ψ_i ∼ ξ_i ψ′′(z, z̄) and the two bosonic wave functions ψ(z, z̄) and ξ_k ξ^k ψ′(z, z̄). As distinct from the fermionic function, the bosonic ones are not eigenstates of the Hamiltonian (2.6). The correct eigenstates are represented as proper combinations of them (see eq. (2.24) below). The simplest way to solve the eigenvalue problem is to first consider the action of (2.6) on the fermionic wave functions Ψ_i ∼ ξ_i ψ′′(z, z̄). On the bosonic factor ψ′′(z, z̄) the Hamiltonian (2.6) acts exactly as the bosonic one [cf. (1.3)], and this action reads as (2.10). After solving the eigenvalue problem as in the purely bosonic case [9], the fermionic states are written as (2.11), where L^(l̃)_{n−1} are generalized Laguerre polynomials and n = 1, 2, 3, ... is a positive integer (footnote 6). The integer number l is an eigenvalue of the angular-momentum operator L. The energy spectrum of the fermionic states is directly calculated to be (2.13). The double degeneracy of the fermionic levels is due to the unbroken su(2) ⊂ su(2|1), with respect to which the wave function (2.11) transforms as a doublet. The bosonic wave functions can now be obtained by the action of the supercharges on the fermionic wave functions Ψ_i, eq. (2.14). They are explicitly expressed in (2.15) and satisfy L Ω^±_(n,l) = l Ω^±_(n,l). On the other hand, the action of the supercharges on the bosonic wave functions yields the original fermionic wave functions Ψ_i. Using these expressions, we derive the energy spectrum (2.17) for the bosonic states Ω^±_(n,l) and find that it differs from that of the fermionic states. Thus SU(2|1) supersymmetry creates additional energy levels, with the bosonic and fermionic states being separated. Note that the discrete energy spectrum bounded from below in the model under consideration is ensured by the oscillator term ∼ m² in (2.10). Hence the limit m = 0 cannot be taken in the quantum case (see also Section 3).
[Footnote 6: One could work with the Laguerre polynomials L^(l̃)_{n′}(m z z̄), where n′ = 0, 1, 2, ..., but for further convenience we deal with n = n′ + 1, such that n = 1, 2, 3, ... (see the next subsection).]

SU(2|1) representations

The SU(2|1) representations are specified by the eigenvalues of the Casimir operators (2.18) and (2.19) [16]. On the quantum states Ψ^j_(n,l) and Ω^±_(n,l) these operators take definite eigenvalues. The quantum number n uniquely defines the SU(2|1) representation for a fixed l. Therefore, the ground state for each l belongs to a non-trivial four-fold SU(2|1) multiplet. This means that SU(2|1) supersymmetry is spontaneously broken at any l. The quantum states within a given SU(2|1) multiplet occupy different energy levels according to (2.13) and (2.17). Since the supercharges do not commute with the Hamiltonian (2.6), they can decrease and increase its eigenvalues. One can designate the levels occupied by bosonic states as even levels (including the zeroth level for the ground state); the fermionic states then occupy odd levels. As distinct from the standard d = 1 Poincaré supersymmetry, there is no degeneracy between these two types of levels. The relevant picture is drawn in Figure 1. One can see that the bosonic states Ω^+_(n,l) and Ω^−_(n+1,l) have the same energy, though they belong to different SU(2|1) representations. For example, the energy level E_(1,l) + m/2 in Figure 1 can alternatively be denoted as E_(2,l) − m/2. The hidden symmetry responsible for this degeneracy will be presented in the next section. As was already mentioned, the two-fold degeneracy of the fermionic states is due to the SU(2) generators (2.8) acting on the fermionic variables ξ_i. The wave functions Ω^+_(n,l) and Ω^−_(n+1,l) for n = 1, 2, 3, ... can be represented in a unified form. These functions have just the structure (2.9) and are eigenfunctions of the Hamiltonian (2.6) with the eigenvalues (2.22), but they are not eigenfunctions of the Casimirs (2.18) and (2.19). The only exception is the wave function Ω^1_(n,l), which can be naturally continued to n = 0. It is directly related to the lowest bosonic state Ω^−_(1,l) (footnote 8). In the next section we will show that the state Ω^1_(0,l) can be interpreted as a singlet ground state with respect to some new SU(2|1) supercharges Q̃_i, Q̃^j. Let us summarize the basic peculiarities of the energy spectrum.

Superconformal symmetry

In this section, following [13], we relate the generic supersymmetric C^1 S.-W. system to the superconformal mechanics with the Hamiltonian (3.1). We will show that the Hamiltonians of these two systems differ by a central charge generator and that, as a consequence, the Hamiltonian of the conformal model inherits all the symmetries of the original Hamiltonian (and vice versa). As was proved in [13], the superconformal Hamiltonian of deformed supersymmetric mechanics should be an even function of the deformation mass parameter m: m → −m, H_conf → H_conf. In accord with this proposition, we define the superconformal Hamiltonian (3.1) as in (3.2). Such a change of the Hamiltonian amounts to the effective elimination of the magnetic field (footnote 9).
One can equally choose the basis in which the conformal Hamiltonian is not deformed by the oscillator term, eq. (3.3). Then we introduce the dilatation generator D and the generator of conformal boosts K. These generators, together with the Hamiltonian (3.3), close on the conformal algebra so(2, 1) ∼ sl(2, R) (3.5) [17]. The trigonometric type of (super)conformal mechanics involving the parameter m is defined by the linear combinations (3.6) [13]. The algebra (3.5) is then rewritten as (3.7). With B = 0, i.e. λ = π/4, the deformation parameter coincides with the frequency and the central charge Z_1 vanishes. Therefore, at this special choice of parameters the C^1 S.-W. Hamiltonian (2.10) just coincides with the superconformal Hamiltonian (3.1), H = H_conf. One could come back to the original Hamiltonian H according to (3.2), but in this case the conformal algebra would be deformed by a central charge. So, irrespective of whether we deal with superconformal symmetry or its bosonic limit, it is appropriate to use the Hamiltonian H_conf containing no magnetic field. Note that the first relation in (3.6) implies that just H_conf with m² ≠ 0, as opposed to its m = 0 limit, is the correct quantum Hamiltonian with the spectrum bounded from below, in accordance with the assertion of the pioneering paper [17]. The conformal algebra (3.7) can be extended to the superconformal algebra in the following way. Applying the discrete transformation m → −m to the supercharges Q_i defined in (2.8), we obtain new fermionic generators, which can be identified with the conformal supercharges (3.8). These new generators extend the superalgebra su(2|1) to the centrally extended superalgebra su(2|1, 1) (3.9). The superconformal algebra contains three central charges. Note that the quadratic operator constructed out of the central charges reduces on the states to the square of the quantum number l̃ = √(l² + g²) defined in (2.11); see (3.11). When g = 0, this operator coincides with L². Below we will show that the set of three central charges in (3.9) can be reduced to a single central charge Z̃_1 = L̃. This agrees with the fact that the superalgebra su(2|1, 1) contains 15 generators: eight supercharges, three su(2) generators, three generators of so(2, 1), and one central charge, in accord with the decomposition su(2|1, 1) = psu(2|1, 1) ⊕ Z̃_1, where psu(2|1, 1) is a centerless superalgebra. Before proceeding further, we point out that the SU(2|1, 1) trigonometric superconformal model of the multiplet (2, 4, 2) with a superpotential term resulting in the Hamiltonian (3.1) was constructed within a manifestly SU(2|1) covariant (N = 4 deformed) superfield approach in [13]. In the N = 2, d = 1 superfield formalism, the model amounts to a system of the coupled (2, 2, 0) and (0, 2, 2) multiplets, with only the N = 2 superconformal symmetry SU(1|1, 1) ⊂ SU(2|1, 1) being manifest. In such a formulation, the inverse-square terms with g ≠ 0 in (3.1), (2.1) come out solely from the coupling of these two multiplets and disappear after decoupling of the fermionic multiplet (0, 2, 2). So in this limit (still respecting SU(1|1, 1) invariance) our model is reduced to the two-dimensional N = 2 superconformal oscillator model based on the chiral multiplet (2, 2, 0). In ref. [18] (see also [19]) a different SU(1|1, 1) superconformal model of the multiplet (2, 2, 0) was considered, with the Hamiltonian involving an inverse-square potential induced by some spin coupling. It cannot be obtained as any truncation of our model.
Casimir operators

Let us consider the quadratic Casimir operator (3.12) of D(2, 1; α) [20]. The definition of the superalgebra D(2, 1; α) in terms of these generators was given in [13], with m = −αμ. The limit α = −1 gives rise to the superalgebra D(2, 1; α = −1) → psu(2|1, 1) ⊕ su(2), where psu(2|1, 1) is a centerless superalgebra and su(2) is an external automorphism generated by the generators F, C and C̄. Hence, the Casimir operator of psu(2|1, 1) reads as in (3.13). The automorphism su(2) generators F, C and C̄ commute with this operator. In our case we deal with the centrally extended superalgebra (3.9), so we have to follow an alternative way of reaching the limit α = −1. It implies a preliminary redefinition of the extra su(2) generators. Then, multiplying (3.12) by a factor ∼ (α + 1) and taking the limit α = −1 afterwards, we observe that only the last piece of (3.12) survives this limit. The generators (3.14) commute among themselves and with all other generators in the limit considered, so they form the triplet of central charges. Thus we are left with the superalgebra (3.9), for which the invariant operator (3.11) is the proper limit of the quadratic Casimir operator (3.12); it is the genuine Casimir for the centrally extended su(2|1, 1) superalgebra. Below we will show that SU(2|1, 1) is a spectrum-generating supersymmetry acting on infinite-dimensional irreducible SU(2|1, 1) multiplets labeled by the eigenvalues l̃² of (3.11). However, we must take into account that the angular momentum operator L can take the two eigenvalues l = ±|l|, leading to the same l̃². So the operator L, commuting with all su(2|1, 1) generators, can be treated as the second Casimir operator of su(2|1, 1), and the full space of quantum states of the model is a collection of two copies of SU(2|1, 1) multiplets with the same value of l̃ and the two opposite-sign values of the quantum number l.

Passing to the new basis in SU(2|1, 1)

It is useful to bring the superconformal algebra (3.9) to a form containing only one central charge. To eliminate two of the original central charges, we perform a rotation of the generators (footnote 10), with the mixing angle fixed by cos 2ϕ = g/l̃. The only central charge Z̃_1 we are left with takes the value l̃ = √(l² + g²) on the quantum states (see (3.11)). In the new basis, the superalgebra is rewritten as (3.19). The rotated supercharges Q̃_i and Q̃^j, together with the Hamiltonian H̃ defined in (3.20), form a different su(2|1) subalgebra, with the same (anti)commutators as in (1.2). So one can construct the new space of quantum SU(2|1) states with respect to this transformed su(2|1) superalgebra. It should be pointed out that the su(2|1, 1) generators in the original and new bases, in view of their explicit form, can be realized on the full set of quantum states defined in Section 2, so that this set is closed under the action of these generators. Thereby, the construction using the transformed su(2|1) superalgebra gives rise to the same total set of quantum states, although with the energy spectrum calculated with respect to the Hamiltonian H̃ defined in (3.20). The new SU(2|1) supersymmetry generated by the supercharges Q̃_i and Q̃^j is not spontaneously broken, and the corresponding ground state is given by the SU(2|1) singlet Ω̃^+_(0,l). To prove this, let us consider the relevant Casimir operators. On the states Ω̃^±_(n,l) and Ψ^i_(n,l) they take definite eigenvalues. The states with n = 1 correspond to an atypical representation, since the Casimir operators are also zero on these states. This representation is spanned by the three states Ψ^i_(1,l) and Ω̃^+_(1,l), which form the fundamental SU(2|1) representation.
All excited states with n > 1 correspond to the simplest four-fold typical SU(2|1) representations. Whereas the supersymmetry associated with Q̃_i and Q̃^j is not broken, the second pair of SU(2|1) supercharges S̃_i and S̃^j corresponds to a spontaneously broken supersymmetry (see Figure 2), with the minimal energy l̃ m as the lowest eigenvalue of the relevant shifted Hamiltonian H_conf + m Z̃_1/2. In Figure 2 we demonstrate how the action of the SU(2|1, 1) supercharges mixes all bosonic and fermionic states with a fixed value of the angular momentum l. Thus, all these states at fixed l belong to a single infinite-dimensional SU(2|1, 1) representation labeled by l̃² = l² + g², the square of the central charge. The states with −l belong to the other SU(2|1, 1) representation labeled by the same l̃² = l² + g². These two representations are distinguished only by the eigenvalue of the operator L (see the discussion in Subsection 3.1), and they exhaust the whole space of the quantum states of the model. Thus the conformal supergroup SU(2|1, 1) acts as the spectrum-generating algebra on this space of quantum states (footnote 11).

The generators of an exotic SU(2) symmetry [3], which belong to the universal enveloping algebra of su(2|1, 1), act on the bosonic wave functions only. The ground state Ω̃^+_(0,l) is annihilated by these SU(2) generators. This SU(2) symmetry is also responsible for the two-fold degeneracy of the initial wave functions Ω^±_(n,l), since the latter are related to Ω̃^±_(n,l) via (2.23) and (3.25).

Invariant operators

The Casimir operator (3.13) allows us to guess the form of new invariant operators Ĩ, M̃ of the su(2|1) superalgebra. They commute with Q̃_i, Q̃^j, I^i_j, Z̃_1 and H_conf (footnote 12). On the SU(2|1) representations generated by Q̃_i and Q̃^j they take definite values. The quadratic SU(2|1) Casimir can be written in terms of these operators and the central charge. These expressions will help us to construct generalizations of the Uhlenbeck tensor in the next section. Note that such invariants can be constructed only for the su(2|1) generated by the transformed supercharges Q̃_i and Q̃^j. No analogs of them commuting with Q_i and Q̄^j exist.

Summary of Section 3

The superconformal Hamiltonian H_conf is defined in eq. (3.1). To avoid possible confusion, we point out that the complete quantum consideration of the SU(2|1) supersymmetric C^N S.-W. system, including the energy spectrum, the structure of the Hilbert space of wave functions and their SU(2|1) representation contents, has already been given in Section 2. The basic aim of Section 3 was to demonstrate that the same results can be recovered starting from an equivalent description of this model in terms of complex SU(2|1, 1) superconformal mechanics associated with the supermultiplet (2, 4, 2). Many peculiar features of the original formulation become simpler in the superconformal formulation, including, e.g., the formula for the energy spectrum. The disappearance of the dependence on the parameter λ in the second formulation also deserves attention.

Supersymmetric C^N Smorodinsky-Winternitz system

We define the quantum SU(2|1) supersymmetric C^N S.-W. system as a sum of N copies of the C^1 system, with the Hamiltonians (2.1) involving the same parameters B, ω (equivalently, m, λ), eq. (4.1). The supercharges and R-charges, which together with the Hamiltonian H form the su(2|1) superalgebra, are also defined as sums of the relevant quantities of each particular C^1 system.
Clearly, the generators H_a commute with each other and thus define constants of motion of the supersymmetric C^N S.-W. system. In addition to the N commuting integrals H_a, this system possesses N manifest U(1) symmetries z_a → e^(iκ) z_a, ξ_(a i) → e^(iκ) ξ_(a i), with the generators (4.2). Hence, these generators ensure the integrability of the system. The wave functions of this system are obviously given by products of those of the N one-dimensional copies, and the energy spectrum by the sum of the energies (2.27), eq. (4.4). One observes the same distinction between the spectra of bosonic and fermionic wave functions as in the C^1 case. The SU(2|1) supersymmetric C^N S.-W. system has an additional degeneracy of the spectrum. It is due to the existence of additional constants of motion given by the components of the supersymmetric extension of the Uhlenbeck tensor, which generates a hidden symmetry already in the bosonic case. The classical version of this supersymmetric Uhlenbeck tensor was constructed in [11], while its quantum counterpart can be written in the form (4.5), where no sum over a and b is assumed. These constants of motion, together with (4.2) and H_a, endow the system with the superintegrability property. It turns out that this tensor admits a convenient representation in terms of the generators of the associated superconformal algebra su(2|1, 1).

Superconformal view

Let us define the superconformal Hamiltonian on C^N as the sum (4.6) of N copies of the superconformal Hamiltonians on C^1, where H^(conf)_a is given by (3.1), with a different parameter g_a for each a (a = 1, ..., N) but with the common parameters λ and m. So we deal with a direct sum of su(2|1, 1) algebras labeled by the index a. Then we take sums of all these generators and obtain, once again, the conformal superalgebra (3.19) with the superconformal Hamiltonian (4.6). Here the Z̃_(1a) are defined as Z̃_(1a) = √((L_a)² + g_a²). (4.8) The wave eigenfunctions of (4.6) are obviously the products of N wave functions corresponding to a = 1, ..., N. Taking into account (4.4), the energy spectrum of the SU(2|1) Hamiltonian H̃ = H_conf − m Z̃_1/2 is given by the obvious generalization (4.9) of the formula (3.27), where σ is a sum of σ_a = 0, 1/2, 1 and takes integer and half-integer values ranging from 0 to N. Bosonic and fermionic states still occupy separate levels, with integer and half-integer values of the energy (modulo the overall parameter m), respectively. The Uhlenbeck tensor (4.5) commutes with the superconformal Hamiltonian (4.6). In terms of the generators of the conformal algebra so(2, 1) it can be represented in the very simple form (4.10) or, in the basis (3.6), as (4.11), where M_ab := (1/(2m²)) T_a T̄_b. (4.12) The second form of I_ab makes obvious its commutativity with the superconformal Hamiltonian (4.6), as well as with the Hamiltonian (4.1) of the C^N S.-W. system. The non-linear algebra generated by I_ab is given by (4.13) (no summation over repeated indices), where the function T_cbd has the simple representation (4.14) through the generators of the conformal algebra. Notice that for calculating the commutation relations (4.13) we do not need the explicit expressions for I_ab in terms of the variables (z_a, z̄_a, ξ_(a i), ξ̄_a^i) as in (4.5); it suffices to make use of the standard commutation relations (3.5) or (3.7) of the conformal algebra so(2, 1). Looking at the expressions (4.11) and (4.14), we observe that they involve, apart from the N Hamiltonians H^(conf)_a, also the N² bilinear generators M_ab that commute with (4.6) and (4.1).
Thus, what actually matters is the non-linear closed algebra (4.15), (4.16) generated by M_ab and H^(conf)_b (no summation over indices). One can add to this set the U(1) generators L_a, which commute with everything. Note that the symmetric combination M_ab + M_ba can be directly expressed through H^(conf)_b and I_ab from (4.11), but this is not true for the antisymmetric combination M_ab − M_ba entering T_abc. However, it is possible to express (M_ab − M_ba)² through the rest of the constants of motion. Thus the quantity T_cbd defined in (4.14) is a function of the original hidden symmetry generators H^(conf)_a, I_cd, and so the relations (4.13), (4.14) constitute a closed non-linear algebra equivalent to the algebra (4.15), (4.16). As in the bosonic C^1 model (see (C.10)), the diagonal integrals I_aa are expressed through the other integrals (4.20). The operators I²_a are Casimirs for the N copies of the SU(2) symmetries acting only on the fermionic variables. The additional new integrals of motion A_a can be written in terms of the superconformal SU(2|1, 1) generators. In Appendix A we present some further details on the structure of these extra constants of motion. The degeneracy of the energy spectrum (4.9) can be attributed to any operator commuting with the Hamiltonian. One can construct many examples of such operators, like (4.11), (4.12) or (3.33). Let us illustrate, on the simplest N = 2 example, how the action of the operators (4.12) creates an (n + 1)-fold degeneracy of the bosonic wave functions Ω̃^+_(n1,l1) ⊗ Ω̃^+_(n2,l2), with n = n_1 + n_2. The action of M_12 on them is simple: it just increases the number n_1 as n_1 → n_1 + 1 and decreases n_2 as n_2 → n_2 − 1, so that the total number n = n_1 + n_2 is not altered. The action of M_21 is the opposite: n_1 → n_1 − 1, n_2 → n_2 + 1. A slight modification of the Uhlenbeck tensor (4.11) by other superconformal SU(2|1, 1) generators yields the generalization (4.25) of the operator (3.36), such that it commutes also with the SU(2|1) generators Q̃_i, Q̃^j, I^i_j and Z̃_1 defined by (4.7). In a similar way, the bilinear operator (4.12) is modified as (4.26). Once again, these invariants can be constructed only for the new supercharges Q̃_i and Q̃^j. No analogs of them can be defined for the su(2|1) generated by Q_i and Q̄^j. The algebra of the operators (4.25) and (4.26) is non-linear and its closure lies in the universal enveloping algebra of the superconformal algebra su(2|1, 1) (3.19). The non-zero commutators of the generators (4.25) and (4.26) are presented in Appendix B. It is worth pointing out that the crucial property for revealing the various degeneracies of the su(2|1) multiplets of wave functions is the commutativity of M̃_ab and Ĩ_ab with the SU(2|1) generators Q̃_i, Q̃^j, I^i_j and Z̃_1 and, hence, with the relevant Casimir operators. The precise structure of the closure of the hidden symmetry generators is not too important from this point of view.

Products of SU(2|1) representations

One can consider the degeneracy of the eigenvalues of the Casimir operators (3.29) of the N-dimensional system, though these eigenvalues cannot be presented by a generic formula, so each particular N ≥ 2 model requires a separate analysis. An additional degeneracy, besides the degeneracy with respect to the SU(2|1) generators, arises with respect to the hidden symmetry operators (4.25) and (4.26). Below we present their action as hidden symmetry operators on the SU(2|1) multiplets of wave functions.
We will always deal with the "superconformal" SU(2|1) generated by the supercharges Q̃_i, Q̃^k and the relevant SU(2|1) multiplets. The product of N one-dimensional SU(2|1) representations can be decomposed into a non-trivial sum of irreducible SU(2|1) representations [22]. For simplicity we consider here only the N = 2 case and present the decomposition for the levels n = 0, 1, 2. The level n = 0: the lowest state with n = 0 is a product of the single states with n_1 = 0 and n_2 = 0: |0⟩ ≡ |0, 0, 0, 0⟩ = Ω̃^+_(0,l1) ⊗ Ω̃^+_(0,l2). The product of the fundamental representations with n_1 = 1, n_2 = 1 has the total dimension 3 × 3 = 9. With respect to the generators (4.7) it splits into a sum of a 4-dimensional typical representation and a 5-dimensional atypical representation. The atypical representation is spanned by a triplet of bosonic states and a doublet of fermionic states. The operator Ĩ_12 annihilates this state. The typical representation encompasses the remaining states.

Summary and outlook

In this paper we studied the quantum mechanics of the SU(2|1) supersymmetric extension of the S.-W. system on the complex Euclidean space C^N interacting with an external constant magnetic field [9]. This supersymmetric system can be considered as a unification of N non-interacting C^1 S.-W. systems. Accordingly, we first quantized the model on C^1 and then generalized the consideration to the case of C^N. We constructed the complete space of the wave functions and found the relevant energy spectrum. We studied how all bosonic and fermionic states are distributed over the irreducible representations of the supergroup SU(2|1). We also showed that the bosonic S.-W. model possesses the conformal symmetry SO(2, 1) (see Appendix C). In the supersymmetric case we redefined the Hamiltonian as in (3.2) and showed that it exhibits SU(2|1, 1) superconformal symmetry, which serves as the spectrum-generating symmetry on the full set of quantum states. The wave functions of the supersymmetric quantum S.-W. system on C^N were constructed as products of N wave functions of the C^1 models. Correspondingly, irreducible SU(2|1) representations are realized on these products. For simplicity we considered the case N = 2, which already amounts to a non-trivial sum of irreducible SU(2|1) representations. Also, the generalization to C^N reveals the hidden symmetry generators (4.25), which correspond to a supersymmetrization of the Uhlenbeck tensor [9,11]. It is responsible for the degeneracy of the wave functions belonging to irreducible SU(2|1) representations. General expressions for the hidden symmetry generators in terms of products of the generators of the superconformal algebra su(2|1, 1) were found. It would be interesting to consider, along the same lines, quantum deformed SU(2|1) extensions of other Kähler oscillator models, e.g., of the CP^N one. These models are not superconformal, so their quantum analysis should be similar to what has been performed in Section 2. On the other hand, a non-trivial multi-particle extension of the SU(2|1) C^1 S.-W. model could be a complex N-particle interacting system of the Calogero-Moser type, hopefully preserving the superconformal invariance of the one-particle C^1 model. Then the whole consideration of Sections 3 and 4 based on the superconformal group SU(2|1, 1) could be applicable. Finally, let us note that the quantum C^1 S.-W.
system without a magnetic field (also known as a "circular oscillator with a ring-shaped potential") was used in a more phenomenological setting for the study of particle behavior in a two-dimensional quantum ring [23]. Accordingly, the C^N S.-W. system with coincident parameters g_a can be interpreted as an ensemble of N free particles in a single quantum ring interacting with a constant magnetic field orthogonal to the plane. It would be interesting to reveal possible physical implications of the quantum SU(2|1) supersymmetric version of this system within such an interpretation.

Here the totally antisymmetric tensor T̃_abc is defined accordingly, with L playing the role of a central charge. The conformal algebra takes the standard form (3.7) only in the basis with the conformal Hamiltonian, with L becoming an external generator commuting with all conformal generators. The generators T, T̄ and H_conf act on the wave functions Φ_(n,l) as T Φ_(n,l) = m Φ_(n+1,l), T̄ Φ_(n,l) = n(n + l̃) m Φ_(n−1,l), H_conf Φ_(n,l) = m (n + l̃/2 + 1/2) Φ_(n,l). (C.8) We observe that the explicit dependence on the external magnetic field disappears in the spectrum of the conformal Hamiltonian H_conf. The relevant Casimir operator of so(2, 1) defines an irreducible representation on the tower of states Φ_(n,l) for n = 0, 1, 2, ... and fixed l, with an eigenvalue determined solely by l̃ (C.9). So, at each l the quantum states constitute an infinite-dimensional irreducible representation of so(2, 1) ∼ sl(2, R). Hence the conformal algebra serves as the spectrum-generating algebra of the quantum C^1 S.-W. model. Note that the relation (C.9) immediately follows from an operator identity which holds upon substitution of the explicit expressions for the involved generators. In the general case of the C^N S.-W. quantum system, the degeneracy with respect to (4.12) is given by the binomial coefficient C^n_(n+N) = (n + N)!/(n! N!).
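As an illustration, here is a short sketch (Python with SciPy; the normalization is omitted and the quantum numbers are illustrative, so this is not a statement of the paper's exact conventions) of the radial factors built from the generalized Laguerre polynomials L^(l̃)_{n−1}(m z z̄) that enter the wave functions of Section 2, together with the binomial degeneracy count quoted above for the C^N case:

```python
from math import comb, exp, sqrt
from scipy.special import genlaguerre

def radial_factor(n, l_tilde, m, r):
    """Unnormalized radial profile ~ r^l_tilde * exp(-m r^2 / 2) * L^(l_tilde)_{n-1}(m r^2),
    where z z_bar = r^2 and n = 1, 2, 3, ... as in the text (a sketch, not the
    full wave function)."""
    x = m * r**2
    return r**l_tilde * exp(-x / 2.0) * genlaguerre(n - 1, l_tilde)(x)

def degeneracy(n, N):
    """Degeneracy of level n for the C^N system: C(n+N, n) = (n+N)!/(n! N!)."""
    return comb(n + N, n)

# Illustrative values: l_tilde = sqrt(l^2 + g^2) with l = 1, g = 0.5.
l_t = sqrt(1.0**2 + 0.5**2)
print(radial_factor(n=2, l_tilde=l_t, m=1.0, r=0.7))
print(degeneracy(n=3, N=2))  # -> 10
```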
Monitoring of the framings stress-strain with strain gauges

The article describes methods for studying the stress-strain state of reinforced concrete framings (piles and pylons) using embedded strain gauges. The relations between load and the indirect reactive characteristics displayed by the weighing device, obtained through laboratory tests of framing reference specimens, are given. Summary tables of framing stress-strain monitoring results obtained during Phase II of the construction project (after base-plate concreting for the piles and floor-slab concreting for the pylons) are included. The study of the obtained results on actual framing stress will allow reducing construction material consumption by lowering the reliability safety factors.

Introduction

Structural analysis in the design of non-unique structures is based on experience from the construction of similar objects. Nevertheless, rather large reliability safety factors, which do not take into account the non-uniformity of structural behavior, are set. Measuring the actual stress of piles and pylons opens the way to a further reduction of framing material consumption when constructing similar objects. Monitoring of the framing stress-strain state is being carried out as part of the R&D support during the construction of a multi-purpose residential complex with underground parking. The complex consists of two bays with varying numbers of storeys, joined by a two-level parking space. The constructive scheme of the building is a reinforced concrete cross-wall structure. The foundation is a cast reinforced concrete slab on solid reinforced concrete pre-cast bearing piles of square section measuring 400 × 400 mm (6th segment) and 300 × 300 mm (1st segment). The TZB-100 and TZB-200 embedded strain gauges for concrete are used for taking framing stress readings [Fig. 1]. They work as follows: tensile deformation in the thickness of the monitored object increases the distance between the strain gauge flanges, which stretch the rod [Fig. 2]. This stretching is transformed by the strain gauge bridge into an output signal (operating factor), which is displayed on the screen of the weighing digitizer [Fig. 3] connected to the strain gauge through a power lead. The rod of the strain gauge is covered with a plastic film non-adhesive to concrete; therefore shear stresses are not transmitted from the concrete to the rod, and the strain gauge signal depends only on the displacement of the flanges, which increases the measurement accuracy. The stiffness of the strain gauge can be adjusted to the stiffness of the concrete surrounding it. In this case, the strain gauge does not affect the stress-strain state of the monitored object, which significantly increases the reliability of the measurements [1-7].

Materials and methods

Four framing reference specimens were selected, and strain gauges were installed in them in order to establish a relationship between the indirect reactive characteristics displayed by the device and the stresses in concrete expressed in kN/sq. cm. As a result of laboratory tests of these specimens, the following calibration curves were obtained:
• Geometrical dimensions of specimen No. 1: 400 × 400 × 600 mm. Type of strain gauge: TZB-200. Concrete design rating: B30.
The strain gauges were then installed directly into the framings of the residential complex: after pile sinking, a hole 80 mm in diameter and 400 mm deep was drilled with a boring tool in the surface of each pile.
The strain gauge was then installed in this hole. The holes with the strain gauges were grouted using a concrete repair mortar whose strength at project age equals 100% of the pile concrete design rating (400 × 400 mm: B30; 300 × 300 mm: B40). The pylon strain gauges were installed before the mounting of the cheek boards at the place where they are joined with the base plate. Before being encased, each device was fixed with wires to the reinforcement in the intended orientation [8-15]. Stress monitoring is carried out in 6 stages:
1. After the framing concrete has hardened, the initial values are documented,
2. After concreting of the base plate for the piles and the floor slab for the pylons,
3. After the construction of the underground part of the building,
4. After the construction of 50% of the superstructure concrete components,
5. After the construction of 100% of the superstructure concrete components,
6. After the construction of the interior walls and partitions and the facades (when structural works at the monitored segment are completed).

Results and discussion

The documented indirect reactive characteristic readings of the strain gauges are summarized in the table. This table includes the stresses occurring in the framings as a result of the imposed load, calculated from the obtained calibration curves for the different framing types and concrete grades, as well as the increase of the indirect characteristics relative to the previous load step.

Conclusion

The R&D support of the construction and monitoring of the object, as well as subsequent studies of the obtained data, will allow the project designer to estimate the differences between design and actual stress values. In subsequent projects, design engineers will be able to make changes to the calculation of the building frame. The reduction of the framing cross sections will significantly reduce the cost of construction.
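A minimal sketch (Python with NumPy; the readings below are hypothetical placeholders, not the paper's laboratory data) of fitting a linear calibration curve that maps the gauge's indirect reactive characteristic to concrete stress in kN/sq. cm, as done for the reference specimens:

```python
import numpy as np

# Hypothetical laboratory readings for one reference specimen:
# indirect reactive characteristic (device units) vs. applied stress (kN/sq. cm).
reading = np.array([0.0, 120.0, 245.0, 362.0, 488.0])   # device output
stress = np.array([0.0, 0.35, 0.70, 1.05, 1.40])        # kN/sq. cm

# Least-squares fit of stress = a * reading + b (degree-1 polynomial).
a, b = np.polyfit(reading, stress, deg=1)
print(f"calibration: stress = {a:.6f} * reading + {b:.6f}")

def reading_to_stress(r):
    """Convert a monitored gauge reading to concrete stress (kN/sq. cm)."""
    return a * r + b

# Example: converting a field reading of 300 device units.
print(f"stress at reading 300: {reading_to_stress(300.0):.3f} kN/sq. cm")
```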
Design and implementation of an automatic pressure-control system for a mobile sprayer for greenhouse applications

This article presents the design and development of an embedded automatic pressure-control system for a mobile sprayer working in greenhouses. The pressure system is mounted on a commercial vehicle and is composed of two on/off electrovalves and one proportional electrovalve. The hardware developed is based on an embedded microprocessor and provides a low-cost and robust solution. The resulting embedded system has been tested on a spraying system mounted on a manned vehicle. Furthermore, an easy-tuning non-linear PI (Proportional Integral) controller to achieve the desired pressure profile is designed and implemented in the embedded system. Many physical experiments show the better performance of this controller compared with a typical PI controller. Experiments covering the pressure range from 2 to 14 bar obtained a mean error of less than 0.3 bar. Summing up, a low-cost automatic pressure-control system is developed; it ensures a uniform deposition of the liquid sprayed on plants, and it works properly over a wide variable-pressure range.

Introduction

Spraying constitutes one of the most important tasks in any agricultural production system, especially in greenhouses. Generally, spraying tasks in greenhouses are performed manually, with a human operator moving between crop rows using a hand-held sprayer. The main drawbacks of these manual tasks are that the deposition over the canopy is not uniform and causes large losses to the soil (Sánchez-Hermosilla et al., 2011), and the health hazard for humans and the environment (Martinez et al., 2002; García & Gadea, 2004; Nuyttens et al., 2009). As an alternative to spray guns, vehicles equipped with spraying systems with vertical spray booms that move through the crop rows give a better spray distribution over the plant canopy and reduce the human risks. Nuyttens et al. (2009) note that, for a constant spray volume, a human-driven vehicle reduces 60-fold the potential dermal exposure in comparison to a standard spray gun. Furthermore, the potential dermal exposure varied from 19.7 mL h^-1 for a vehicle to 460 mL h^-1 for a spray lance.

This paper focuses on research into the development of automatic sprayers mounted on vehicles. In the literature, there are two main approaches: mobile robots and human-driven vehicles. An autonomous robot was reported in Mandow et al. (1996), where the objective was to demonstrate the autonomous navigation capabilities of the robot in a greenhouse using a constant-pressure spraying system. In Adams et al. (2003), an inductive guidance system was developed for spraying at constant pressure. Subramanian et al. (2005) developed a mini-robot to perform spraying activities. In Guzmán et al. (2008), a variable-pressure automatic spraying system was developed for a tracked mobile robot working in greenhouses. The main advantage of the previous developments is that mobile robots almost completely eliminate the need for a human presence in greenhouses. However, in general, these projects do not completely solve the problem of autonomous navigation in a full-scale greenhouse, especially with different lane configurations. Therefore, the current commercial solutions require a human operator who drives a vehicle and handles the spraying system. In this sense, the Fumimatic® and Tizona® vehicles developed by IDM-Agrometal (www.idm-agrometal.com) and Carretillas Amate (www.carretillasamate.com), respectively, deserve special mention. In these vehicles the steering tools are in the front part of the vehicle, far from the spraying bars. This configuration reduces human exposure to pesticides.

This work aimed at the design, implementation, and testing of an automatic spraying system with safe and efficient operation, minimizing human exposure to the chemicals sprayed. The main objectives are: (i) developing a low-cost solution in terms of the equipment required for control purposes; (ii) ensuring a uniform deposition of the liquid sprayed on plants; (iii) implementation of a non-linear PI (Proportional Integral) control law that ensures good performance over the pressure range (4-12 bar); (iv) the proposed control approach should be easily adapted to any kind of vehicle working in a greenhouse.

Spraying system

The spraying system considered in this work was mounted on a commercial vehicle called Tizona® (Fig. 1a) developed within the framework of a development project between the University of Almería and the company Carretillas Amate® (Project 400567: "Self-propelled Platform for Spraying and Transportation Tasks"). It is an articulated vehicle with four powered wheels; its dimensions are 0.8 m wide, 2.25 m long and 1.9 m high at the top of the nozzles. The vehicle is driven by hydraulic motors fed by one variable-displacement pump powered by a 19 HP petrol engine, allowing a maximum velocity of 2 m s^-1. A hydraulic cylinder in the joint part permits turning motions with a minimum turning radius of 1 m. The mass with no load is 410 kg, reaching 1,040 kg with the pesticide tank full.

As mentioned above, this platform carries the spraying system, with a 500 L tank used to store the chemical products, two vertical boom sprayers with ten nozzles (Teejet DG 9502 EVS, Spraying Systems, Co., Wheaton, USA), two on/off electrovalves to activate the spraying (481414202, Arag, Rubiera, Italy), a proportional electrovalve (463022S, Arag, Rubiera, Italy) to regulate the output pressure, a double-membrane pump with a pressure accumulator (Inmovilli M50, C-Dax, Turitea, New Zealand) providing a maximum flow of 49 L min^-1 and a maximum pressure of 40 bar, and a pressure sensor (466112500, Arag, Rubiera, Italy) for closed-loop control purposes. Fig. 1b shows the spraying system mounted on the vehicle. A block diagram of this spraying system is displayed in Fig. 1c.

In this work, the pressure is controlled instead of the flow (both are directly related) because better spraying conditions can be achieved, mainly regarding the droplet size. Furthermore, pressure sensors have better accuracy compared with flow sensors of similar cost. The main drawback is that the pressure signal becomes noisy. The noise source is the membrane pump, which produces continuous pulses in the flow and thus in the pressure. As detailed in the following subsection, a low-pass filter attenuates this effect.

Embedded system design

In this section the embedded controller board (Pawlowski et al., 2006) is described. The embedded controller designed interacts with the spraying-system actuators and the sensors to regulate the system pressure. Additionally, the information from the spraying system is completed with a velocity sensor. The data gathered are saved on a memory card to be available to the user for analysis. Furthermore, the system developed incorporates an additional subsystem to detect crop rows in order to open/close the spraying bars automatically. This problem is solved using two ultrasonic sonars located on both sides of the vehicle. For safety reasons, the system is equipped with a manual control of the spraying bars that enables the operators to react in abnormal situations. These sensors detect the plant lanes, and thus chemical products are applied only to the desired area, avoiding the zones without plants. The system should facilitate the configuration of specific parameters to set up all work variables. In this case, an LCD (liquid crystal display) and a small keyboard are used to handle the user interface. Another subsystem that needs to be considered is the communication interface, which is required to share the work reports generated at the end of the spraying tests. Finally, a power-supply subsystem is designed to ensure all voltage and current requirements of the components and subsystems included in the hardware system developed. To meet these requirements, an analysis of each component and its characteristics (i.e. power requirements, output signals, required measurement accuracy, etc.) was carried out. For cost reduction of the final product and fulfilment of the specific features, the hardware is grouped into three separate boards. In this way, the power-supply subsystem is placed together with the elements interfacing the actuators. The second group deals with the ultrasonic sonars and their driving electronics. The remaining subsystems are integrated with the main embedded controller board.

The next design step is dedicated to laying out the electronic schematics, making the required connections between the corresponding elements of each board. Then, the printed circuit boards (PCB) are generated for each system group. The functionalities of each board are described below.
Valve-driver board

The main task of this board is to drive the valve DC motors that move the mechanical parts of the valves based on the low-power control signals from the microcontroller. The on/off valves are controlled using electromechanical relays. The proportional valve motor needs to be controlled in a different way, since it requires two-directional regulation. For this, an "H-bridge" based on MOSFET transistors (Luecke, 2004) was designed. This solution allows bidirectional control of the proportional valve using only two logic signals. The board is connected to the microcontroller through four digital lines; two of these are used for the proportional valve and two for the on/off valves. The same board hosts the power-supply elements used to convert the 24 V from the vehicle battery into the 12 V and 5 V needed by the different components of the board. Additionally, all high-current paths include fuses to protect the active elements against overload or possible short circuits. The valve-driver board should be placed in a ventilated housing due to the thermal characteristics of the elements used. Fig. 2a shows the final board prototype.

Ultrasonic sonar board

The core idea of using ultrasonic sonar is to detect objects in the range of 0-1.5 m on both sides of the vehicle. As mentioned above, this enables chemical products to be applied only in the desired areas, reducing costs and improving the uniformity of the spray distribution. Taking into account the system requirements, we decided to build a cost-effective ultrasonic sonar, composed of a transmitter circuit and a receiver circuit. To transmit sound waves, the active element must be excited by a 40 kHz square signal provided by the microcontroller. The emitted signal is reflected by the objects located in the detection area. The reflected signal is captured by the receiver, which converts the sound vibration into an electrical signal. Then, the received signal is amplified and filtered to separate the useful signal from the noise. The following action consists of comparing the filtered signal with a reference. This action is carried out only within an established time window. The size of the time window determines the maximum sensing length, which for this project was set to 1.5 m. If the received signal has sufficient amplitude (a valid reflected sound wave), a comparator sets a simple state-storage element (a flip-flop in this case) to a logical high state, which it maintains until it is read by the microcontroller. When the microcontroller has read the sonar state, a reset signal is sent to the storage element and the whole procedure is repeated. The ultrasonic sonar board is shown in Fig. 2b. The board developed is placed close to the spray nozzles and therefore requires a special watertight housing to guarantee adequate protection.
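To make the time-window sizing concrete, here is a small sketch (Python; the 343 m/s speed of sound is a standard room-temperature value assumed here, not a figure stated in the article) of the echo listening window implied by the 1.5 m maximum sensing length:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C (assumed value)

def echo_window(max_range_m: float) -> float:
    """Listening window for an ultrasonic sonar: the echo travels to the
    target and back, so the window is twice the range over the sound speed."""
    return 2.0 * max_range_m / SPEED_OF_SOUND

# 1.5 m maximum sensing length, as set in this project.
print(f"time window: {echo_window(1.5) * 1e3:.2f} ms")  # ~8.75 ms
```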
Main embedded controller board

The most important element of the entire spraying system is the controlling hardware, which in our case is based on a microcontroller. Currently, the microcontroller market offers a wide variety of products with different hardware configurations. For our system, the following sensors and actuators must be interfaced with the embedded controller: a pressure sensor, a velocity sensor (encoder), two proximity sensors based on ultrasonic sonar, two on/off electromechanical valves, and one proportional electromechanical valve for pressure control with digital controls. Other important components are: an LCD graphical display and a keyboard forming a user interface that permits on-site spraying-system configuration; a real-time clock (RTC) to create time-stamped measurements as well as periodic reports; an SD (secure digital) memory card used to save all sensor measurements and work parameters; RS-232 and USB interfaces for communication purposes; a JTAG (Joint Test Action Group) interface for programming/debugging the firmware loaded in the microcontroller; and all corresponding connectors for interfacing external system components.

In this case, an 8-bit ATmega64 AVR microcontroller from Atmel Corp. (2009) was selected. This device contains all the peripherals needed to handle the components described previously and covers all signal requirements for sensor/actuator interfacing. The final board for the main embedded controller is shown in Fig. 2c. Notice that all interfaces are embedded into one microcontroller chip, so that a small board is easily incorporated in the vehicle control panel. Another important advantage of the hardware built is the use of the JTAG interface, which allows an easy on-site firmware update of the embedded controller. The software was written in C++ with short assembler routines for critical sections. The developed system prototype was first tested under laboratory conditions in order to detect eventual hardware and software bugs. Afterwards, the developed embedded controller was integrated and wired into the vehicle's electrical installation.

Modelling

For adequate control-system design, the dynamic behaviour of the plant to be controlled must be analysed and the corresponding dynamical model obtained. After analysing the process dynamics, we observed that the system presents a non-linear behaviour (see Fig. 3). Then, different tests were carried out to check whether the process dynamics could be approximated around the desired operating point. Firstly, we carried out several open-loop experiments checking the response of the spraying system to different proportional valve-opening steps (5%, 10%, 20%) covering the full working ranges of 0% to 100% and 100% to 0%. Note that these ranges cover pressures from 0 to 40 bar.

As shown in Fig. 3a, the pressure signal is noisy, with large oscillations at maximum pressure (range 30-40 bar). In order to attenuate this undesired noise, a low-pass filter was designed with a cutoff frequency of 4.4 Hz.
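A minimal sketch of such a filter is given below, assuming a standard first-order discrete (RC-style) realization; the sensor sampling period Ts is an assumption here and must be fast enough for the 4.4 Hz cutoff, i.e., considerably faster than the 0.34 s control period mentioned in the Results.

    // First-order low-pass filter sketch (fc in Hz, Ts in seconds).
    // This is an assumed discretization, not the authors' exact implementation.
    struct LowPass {
        double alpha;    // smoothing factor from the RC discretization
        double y = 0.0;  // filtered output
        LowPass(double fc, double Ts) {
            const double kPi = 3.14159265358979;
            double rc = 1.0 / (2.0 * kPi * fc);
            alpha = Ts / (rc + Ts);
        }
        double update(double x) { y += alpha * (x - y); return y; }
    };

    // Usage: LowPass filt(4.4, 0.01); double p = filt.update(rawPressure);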
From the experiments shown in Fig. 3b, it can be observed that the pressure response to an open-loop step input in the valve aperture around a particular operating point can be approximated by a first-order system. It can be modelled using the following transfer function (Åström & Murray, 2008):

G(s) = \frac{k\, e^{-t_r s}}{\tau s + 1}

where k is the static gain, which is the ratio between the change in the output amplitude in steady state and the input step amplitude; t_r is the delay time, or the time lapse during which the output of the system does not react after the step is applied at the input (in this work the delay time is zero for all operating points); and τ is the time constant, which is the time lapse from the instant at which the output starts to evolve until it reaches 63% of its new steady-state value.

Once the model structure is defined, the next step is to choose the correct values for its parameters, which are usually operating-point dependent. The reaction curve method (Åström & Murray, 2008) was used for identifying these parameters based on the open-loop step responses. As observed in Fig. 3b, the output pressure presents different dynamics at each operating point. In this work, a model is obtained for each specific operating point belonging to the operational range considered (4-16 bar). Therefore, the reaction curve method is applied around the desired operating range in order to obtain the different parameter sets.

Non-linear control system

Most non-linear control problems are solved using straightforward control strategies such as gain scheduling or mean-value controllers. However, these solutions behave properly only at specific operating points and may result in overshoot (Lee et al., 2000). More specifically, mean-value controllers usually give worse results near the domain limits, and gain-scheduling controllers may misbehave if there is a controller change during reference tracking. A non-linear controller is developed to avoid splitting the controller domain into intervals (Rodríguez et al., 2011). In this case, the process model parameters are approximated as output-dependent functions (depending on the spraying pressure):

G(s, p) = \frac{k(p)}{\tau(p)\, s + 1}

where k(p) and τ(p) are the system static gain and time constant, respectively, interpolated with respect to the spraying pressure p. Adjusting these functions correctly is essential in order to get a proper closed-loop response and to avoid unexpected situations such as unstable behaviour. In this work, a PI controller was used to control the system because of its simplicity and good results in industry. Note that a PID (Proportional Integral Derivative) controller was discarded due to the negative influence of the noise on the derivative action. The PI controller is defined as (Åström & Murray, 2008)

C(s) = K_p \left( 1 + \frac{1}{\tau_I\, s} \right)

where K_p and τ_I are the PI controller proportional gain and integral time, respectively. In this case, the pole-zero cancellation method was used for tuning purposes (Åström & Murray, 2008). According to this method, the integral time is set to cancel the plant pole,

\tau_I = \tau(p)

where the resulting closed-loop dynamics are characterized by τ_BC, the time constant of the closed-loop system. Finally, the proportional gain is

K_p = \frac{\tau(p)}{k(p)\, \tau_{BC}}

It is clear that, using the non-linear model described above, the controller tuning parameters also change with the pressure-output operating point, described in this case by the two polynomial functions for the static gain and time constant, as explained in the Results section.
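This tuning law maps directly to firmware. The following C++ sketch is a hedged illustration of the gain-scheduled PI computation, assuming a forward-Euler discretization of the integral term; the polynomial coefficients are placeholders to be identified from the reaction-curve experiments, and the ±0.2 bar dead zone and lower pressure bound anticipate details given in the Results.

    #include <algorithm>
    #include <cmath>

    // Gain-scheduled ("non-linear") PI sketch; coefficients are placeholders.
    struct NonlinearPI {
        double k2, k1, k0;     // k(p)   = k2*p^2 + k1*p + k0   (to be identified)
        double t2, t1, t0;     // tau(p) = t2*p^2 + t1*p + t0   (to be identified)
        double tauBC;          // desired closed-loop time constant
        double Ts;             // sampling period, e.g. 0.34 s
        double integral = 0.0; // integral state

        double control(double setpoint, double p) {
            double e = setpoint - p;
            if (std::fabs(e) < 0.2) e = 0.0;      // dead zone of +/-0.2 bar
            double pc = std::max(p, 4.0);         // polynomials fitted for p >= 4 bar
            double k   = k2*pc*pc + k1*pc + k0;
            double tau = t2*pc*pc + t1*pc + t0;
            double Kp  = tau / (k * tauBC);       // pole-zero cancellation
            double tauI = tau;                    // integral time = plant time constant
            integral += (Ts / tauI) * e;
            double u = Kp * (e + integral);
            return std::clamp(u, 0.0, 100.0);     // valve aperture in percent
        }
    };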
System modelling

Several open-loop experiments were performed to obtain the dynamic model of the spraying system, applying opening steps of different amplitudes (5%, 10% and 20%) around the same operating points. The analysis of the results shows that the output-pressure behaviour changes when the same valve-opening steps are applied at different operating points, as shown in Table 1, confirming the non-linear characteristics of the system.

In a preliminary approach, the system was modelled as a first-order dynamical system with no delay, with fixed parameters set as the mean of the values measured at the several operating points using the reaction curve method.

As previously commented, a non-linear approach was then followed; that is, second-order polynomials adjusting the model parameters as functions of the operating point were calculated. Fig. 4a,b shows the quadratic polynomials adjusting the gain and the time constant, respectively. According to Fig. 4a, the gain is related to the operating point p by a quadratic of the form

k(p) = k_2 p^2 + k_1 p + k_0

with coefficients fitted from the data of Fig. 4a. Likewise, according to Fig. 4b, the time constant is given by

\tau(p) = \tau_2 p^2 + \tau_1 p + \tau_0

Once the system was characterized taking into account the different system behaviours, the two polynomials adjusting the gains and time constants were obtained. Afterwards, the control strategy previously presented was tested through simulations and physical experiments in order to check its performance over the pressure range considered, and hence control performance was validated over the different operating points with different system dynamics.

Control results

As previously discussed, we designed a non-linear PI controller in which the proportional gain and integral time depend on the polynomials addressed in Eqs. [8] and [9]. Hence, the non-linear PI controller is defined as

u(t) = K_p(p) \left( e(t) + \frac{1}{\tau_I(p)} \int_0^t e(\sigma)\, d\sigma \right), \quad K_p(p) = \frac{\tau(p)}{k(p)\, \tau_{BC}}, \quad \tau_I(p) = \tau(p)

At this point we show simulation experiments comparing a typical fixed-parameter PI controller, in which the mean system gain and time constant were considered, with the proposed non-linear PI controller. The non-linear split model was used as the reference for comparing the different control strategies developed. As a result, Figs. 5 and 6 show the performance for all operating points (Fig. 5a deals with pressure, Fig. 5b displays the control inputs, and the evolution of the proportional gain and the integral time are plotted in Figs. 6a and 6b, respectively). Note that the second-order polynomials obtained fit well in the chosen working domain, but values outside this domain can make the system unstable. In order to avoid this, we included a restriction in the non-linear PI controller for when the pressure output falls below a minimum value. After the simulations, physical experiments were conducted comparing the performance of a fixed-parameter PI controller and the proposed non-linear PI controller. Note that these controllers were running on the low-cost embedded hardware system detailed in the M&M section. In this sense, the proposed low-cost embedded control was tested through many physical experiments under real conditions. The sampling period was selected according to the system dynamics (system time constant) as 0.34 s. A dead zone, applied to the error between the pressure set-point and the actual pressure estimate, was used to smooth the control input. In particular, we tested several values, obtaining the best results for ±0.2 bar.
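As an illustration of how such a comparison can be simulated, the sketch below closes the loop around the identified first-order model using explicit Euler integration. It is an assumed setup, not the authors' simulation code: kOf and tauOf stand for the fitted quadratic polynomials (placeholder values shown), and NonlinearPI refers to the controller sketch given earlier.

    // Closed-loop simulation sketch: plant tau(p)*dp/dt + p = k(p)*u,
    // discretized by explicit Euler at the 0.34 s sampling period.
    double kOf(double /*p*/)   { return 0.5; }  // placeholder for k2*p^2 + k1*p + k0
    double tauOf(double /*p*/) { return 3.0; }  // placeholder for t2*p^2 + t1*p + t0

    void simulate(NonlinearPI& ctrl, double setpoint) {
        const double Ts = 0.34;                    // sampling period (s)
        double p = 2.0;                            // starting pressure (bar), as in the experiments
        for (int n = 0; n * Ts < 120.0; ++n) {     // two-minute run
            double u = ctrl.control(setpoint, p);  // valve aperture (%)
            p += Ts * (kOf(p) * u - p) / tauOf(p); // plant update
        }
    }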
It should be noted that, before engaging the autonomous control, and due to the non-linear behaviour of the process, the system is first moved manually to an operating point in the range for which the control strategy was designed. In general, the starting pressure was 2 bar.

The proposed control system implemented on the low-cost embedded hardware is able to reach the desired specification satisfactorily despite variations in the (open-loop) plant dynamics. As mentioned above, in the pressure system presented in this work, such variations appear along the different operating points of the process. Therefore, the system was tested through a group of different steps in order to verify that both control approaches, the fixed PI (using the mean system gain and time constant) and the non-linear PI controller, achieve a small tracking error. Fig. 7 shows one experiment with steps of 5% and a pressure range from 2 to 14 bar. It can be observed that the pressure correctly follows the proposed references under the non-linear PI control approach (Fig. 7a) and that the control signals perform well (Fig. 7b). Note that despite large perturbations due to the use of the membrane pump, the controller achieved good disturbance rejection. It is important to point out the small steady-state error in the steps around 10 bar, this being due to the dead zone included for the control error in order to avoid excessively fluctuating control signals (see Fig. 4). The evolution of the pressure-dependent parameters of the non-linear PI controller is plotted in Fig. 8. Note that when the pressure increases, the value of the proportional gain also increases (Fig. 8a); as confirmed in Table 1, greater pressure means a smaller system gain (in our case the proportional gain is inversely related to the system gain). For the integral time constant, when the pressure increases, the value of the integral time decreases (Fig. 8b). The constant values at the beginning and end of the experiment arise because the polynomials adjusting the system gains and time constants were defined only for pressures higher than 4 bar.

After 15 physical experiments comparing the performance of the fixed PI controller (nine experiments) and the proposed non-linear PI approach (six experiments), the mean error using the fixed PI approach was 0.33 bar with a standard deviation of 0.72 bar, which implies an error of 2.75% and 6%, respectively, within the range considered (from 2 to 14 bar). In the case of the non-linear controller, a mean error of 0.33 bar with a standard deviation of 0.65 bar was found, implying 2.75% and 5.4% in the same range.

Discussion

The main novelty of this work is that it faces the problem of phytosanitary applications within a general control framework: both a low-cost embedded hardware system and a modelling and control approach based on widely accepted control paradigms have been developed and integrated. Regarding the first element, the embedded hardware, we developed a low-cost microprocessor-based system which controls a spraying system. For instance, in Guzmán et al. (2004, 2008), a PC-based workstation with several I/O cards was used to handle a spraying system. This constituted a bulky and expensive solution compared with the hardware developed here. Furthermore, in both papers a Windows-based operating system was employed, and hence real-time specifications were not completely ensured. Here, an embedded program runs directly on the microprocessor, which ensures real-time operation.
On the other hand, regarding the control system, different control strategies were studied in order to provide good tracking performance around the desired pressure. In this case, a non-linear PI approach was selected as the best option. This strategy shows proper performance for the different set-points. In this context, the works of Moltó et al. (2001) and Solanelles et al. (2006) are similar to the one presented here. The main difference is that those works are designed to operate in the open field, so that a set of ultrasonic sensors is used to regulate the pressure applied to the trees based on the actual amount of leaf mass and the canopy width of the tree crops. However, those works do not address the issue of variable-pressure feedback control, because they simply open or close electrovalves depending on the tree width and predefined pressure values.

Regarding the physical experiments conducted in this work, we confirmed that for experiments covering the range from 2 to 14 bar (the desired pressure range) the mean errors were about 0.33 bar. According to our experience, these errors can be considered acceptable. Although a smaller error could be expected from the non-linear PI controller, it is important to remark on the limited range in which the pressure-dependent parameters are employed (recall that the polynomial functions cover the range from 4 to 14 bar); we expect that over a larger range the difference between the two controllers would be higher.

Future work will deal with the use of the vehicle speed within the pressure-control strategy, changing the pressure set-point depending on the vehicle speed.

Figure 1. Automatic spraying system used in this research. (a) Manually driven vehicle and spraying system. (b) Detail and (c) block diagram of the spraying system.

Figure 2. The valve-driver board (a), the ultrasonic sonar board (b) and the embedded controller board (c).

Figure 3. Open-loop test of the spraying system showing the effect of the software low-pass filter attenuating the noise of the real pressure signal. (a) Opening test of 5%: noisy signal without filter and filtered signal using the low-pass filter; (b) zoom of the filtered signal.

Figure 4. Polynomials adjusting the model parameters depending on pressure: quadratic polynomials adjusting (a) gains and (b) time constants in relation to pressure.

Figure 5. Closed-loop simulation comparing the performance of the fixed-parameter PI controller and the non-linear PI controller proposed in this work. (a) Pressure, (b) control inputs.

Figure 6. Evolution of the tuning parameters in simulation: (a) proportional gain, (b) integral time.

Figure 7. Closed-loop real experiments using the developed low-cost embedded non-linear PI control system. (a) Pressure and (b) control inputs.

Figure 8. Tuning parameters of the non-linear PI controller: (a) evolution of the proportional gain and (b) of the integral time.

Table 1. Results of open-loop tests using steps of 5%: output pressure and model parameters (k, τ, t_r) at each operating range.
Identification of metabolic biomarkers in patients with type 2 diabetic coronary heart disease based on a metabolomic approach

Type 2 diabetic coronary heart disease (T2DM-CHD) is a serious and complex disease. Great attention has been paid to exploring its mechanism; however, detailed understanding of T2DM-CHD is still limited. Plasma samples from 15 healthy controls, 13 coronary heart disease (CHD) patients, 15 type 2 diabetes mellitus (T2DM) patients and 28 T2DM-CHD patients were analyzed in this research. The potential biomarkers of CHD and T2DM were detected and screened out by 1H NMR-based plasma metabolic profiling and multivariate data analysis. About 11 and 12 representative metabolites of CHD and T2DM were identified, respectively, mainly including alanine, arginine, proline, glutamine, creatinine and acetate. A diagnostic model was then constructed based on these CHD and T2DM metabolites to detect T2DM-CHD, with a satisfying sensitivity of 92.9%, specificity of 93.3% and accuracy of 93.2%, validating the robustness of 1H NMR-based plasma metabolic profiling as a diagnostic strategy. The results demonstrated that the NMR-based metabolomics approach performs well in identifying diagnostic plasma biomarkers, and that most identified metabolites related to T2DM and CHD can be considered predictors of T2DM-CHD as well as therapeutic targets for prevention, which provides new insight into the diagnosis and forecasting of complex diseases.

Metabonomics, a postgenomic approach used to rapidly identify global metabolic changes in biological systems, has been increasingly applied to diagnose diseases, measure the response to treatment, discover biomarkers and identify perturbed pathways [15-17]. Nuclear magnetic resonance (NMR) spectroscopy is a rapid, non-destructive and high-throughput analytical method and has been widely used in metabonomic research [18-22]. It has been reported that NMR-based metabolomic approaches, constituting a sensitive high-throughput molecular screen, have already demonstrated promising results in diagnosing a variety of diabetes mellitus and cardiovascular system disorders [23-25].

In this study, we made a novel attempt to explore the potential biomarkers related to CHD and T2DM and to validate these potential biomarkers as predictors to diagnose patients with T2DM-CHD based on NMR non-targeted metabolomics. Plasma samples from T2DM and CHD patients were analyzed by NMR metabolic profiling, principal component analysis (PCA) and partial least squares discriminant analysis (PLS-DA) to screen out potential biomarkers. ROC curve analysis for a logistic regression model constructed from the biomarkers of T2DM and CHD patients was used for T2DM-CHD prediction. This process may accelerate the advancement in understanding the mechanism of T2DM-CHD occurrence and progression at the metabolic level and provide information for the prediction of early marker metabolites for T2DM-CHD.

Results

Demographics and Clinical Characteristics. Detailed data about patients and controls are presented in Table 1. There was no significant difference in gender, age, Body Mass Index (BMI), Systolic Blood Pressure (SBP), Diastolic Blood Pressure (DBP), total cholesterol, Blood Urea Nitrogen (BUN) or serum creatinine (SCr) among the four groups based on SPSS analysis (p > 0.05). The levels of triglycerides and HbA1c in T2DM and T2DM-CHD were higher than those of the controls (p < 0.05).
Apart from HDL in T2DM (p > 0.05), HDL in the other two groups was higher than that in the controls (p < 0.05). As expected, FPG, 2 h plasma glucose (2hPG) and fasting insulin (FINS) in T2DM-CHD and T2DM were higher compared to the controls (p < 0.05), particularly in T2DM-CHD (p < 0.0001). The levels of LDL in CHD and T2DM were a little higher than those in the healthy subjects (HC), perhaps due to the influence of medication such as statins and insulin. Therefore, the findings cannot be attributed to demographic factors.

1H-NMR analysis of plasma samples. Plasma contains almost all of the low-molecular-weight species in whole blood and a few high-molecular-weight compounds; thus it can provide valuable bio-information on the organism's metabolism. Figure 1 shows representative 600 MHz 1H NMR CPMG spectra of plasma from the healthy controls, T2DM group, CHD group and T2DM-CHD group. The plasma NMR spectra were dominated by LDL/VLDL (δ 0.86, δ 1.26), leucine (δ 0.95, δ 0.97), valine (δ 1.03) and lactate (δ 1.33), among others. Metabolites were assigned according to the literature [26-28] and an in-house NMR database, and further confirmed by analysis of 2D NMR spectroscopy (the spectra are shown in Fig. S1). Visual inspection of the 1H NMR spectra showed subtle differences in plasma metabolites between groups. In the 1H NMR spectra of plasma samples, dominant changes in the signals of low-molecular-weight metabolites such as leucine, isoleucine, valine, alanine, glutamine, creatine, proline, glucose, etc. were detected. Multivariate data analysis was further performed to obtain a more detailed analysis of the metabolic differences between groups.

Multivariate data analysis and the selection of potential biomarkers. PCA was used for an overview of the metabonomic data set and the spotting of outliers, and then for the detection of any grouping. This type of analysis is designed to highlight systematic variation across series of NMR spectra. It results in the calculation of a series of principal components (PCs) for each sample. The PCA scores plot was used to reveal observations lying outside the 0.95 Hotelling's T2 ellipse. The score plot was obtained with the first two PCs explaining 47.2% and 14.5% of the variance, respectively (Fig. 2A). A PLS-DA model was established to investigate the metabolic differences between the four groups. The PLS-DA score plot displayed a good separation between the HC group and the other disease groups (Fig. 2B). Then, two PLS-DA models with satisfactory discriminating ability were established to assess the metabolic differences between the two disease groups (CHD and T2DM) and the HC group, respectively (Fig. 3). According to the score plots of the PLS-DA models, the CHD patients and HC were discriminated clearly with R2X = 18.5%, R2Y = 95.2% and Q2 = 0.707 (Fig. 3A), and the T2DM patients and HC were discriminated with R2X = 17.7%, R2Y = 96.9% and Q2 = 0.675 (Fig. 3C). The parameters describing the PLS-DA models were significantly elevated (R2Y, Q2 > 0.5), which suggests that the PLS-DA models were robust [29]. The validation plots (Fig. 3B,D) demonstrated that the original PLS-DA models were not random or overfitted, as both permutated Q2 and R2 values were significantly lower than the corresponding original values. In order to eliminate the influence of individual differences and gain insight into the changed metabolites responsible for the separation between two groups, OPLS-DA models were constructed using the first principal component and the first orthogonal component.
Fig. 4 shows the OPLS-DA score plots for pairwise comparisons of the CHD, T2DM and HC group samples, along with the corresponding coefficient plots depicting the major discriminators. In the score plot (Fig. 4A, R2Y = 95.2%, Q2 = 0.462), a significant biochemical distinction between the CHD patients and HC was identified, and there was also a significant biochemical distinction between the T2DM patients and healthy controls in the score plot (Fig. 4C, R2Y = 96.9%, Q2 = 0.622). The metabolic changes in patients were reflected in the colour-coded coefficient plots (Fig. 4B,D). Metabolites exhibiting significant changes (p < 0.05) were identified based on the absolute cutoff value of the correlation coefficients (|r|) and the VIP value, and are listed in Table 2. The resonances assigned to proline and creatine were significantly increased, but the levels of isopropanol, alanine, leucine, arginine, acetate, glutamine, glycine, glucose and 3-methylhistidine were statistically decreased in the CHD group compared to those of the HC group. The T2DM group had lower levels of isoleucine, leucine, valine, isopropanol, alanine, arginine, glutamine, proline, creatinine, threonine and tyrosine, but higher levels of glucose, compared to those of the HC group. The potential biomarkers related to T2DM and CHD screened out above were used to predict the process and mechanism of T2DM-CHD.

Hierarchical cluster analysis (HCA) of biomarkers for T2DM-CHD diagnosis. HCA can readily be used to assess the relatedness and distance of any type of samples characterized by any type of descriptors, with the result displayed as a 'heatmap'. We used the metabolites listed in Table 2 as the variables to conduct the HCA and obtained the heatmap (Fig. 5). From the heatmap, the similarity of different metabolites and different samples can be seen visually. The heatmap showed that the T2DM-CHD patients and healthy controls were almost completely separated from each other. It could be observed that the metabolic state of T2DM-CHD patients resulted in decreased levels of isopropanol, glycine, alanine, arginine, proline, glutamine, acetate, creatine, 3-methylhistidine, creatinine, isoleucine, tyrosine, valine, threonine and leucine, as well as elevated levels of VLDL/LDL and glucose. The result of the HCA further illustrated that these metabolites could distinguish the T2DM-CHD patients from HC, so these endogenous metabolites could be used as potential biomarkers.

Prediction and diagnostic testing for T2DM-CHD. The 17 potential metabolites responsible for discrimination between T2DM-CHD patients and HC were identified. Table 3 shows the variation of the integrals of the normalized spectral regions responsible for these 17 metabolites and lists the results from the Student's t-test (p < 0.05) for the comparison of HC and T2DM-CHD. As shown in Fig. 6A, there was a complete separation of T2DM-CHD patients and HC in the PLS-DA score plots based on the 17 potential metabolites (R2X = 56.7%, R2Y = 84.9%, Q2 = 0.72), suggesting a severe metabolic disturbance of the 17 potential metabolites in T2DM-CHD patients, with a good goodness of fit (displayed in Fig. 6B). Then, ROC curve analysis was performed to validate the clinical utility of these potential biomarkers in diagnosing T2DM-CHD. The area under the ROC curve (AUC) is generally considered the method of choice for evaluating the performance of potential biomarkers: the greater the AUC, the better the prediction of the model.
Fig. 7A shows a set of ROC curves for SVM models created using different subsets of metabolites selected by the filter approach; six models were developed. When the top 2 most important variables (isopropanol and glycine) were used to build classification models, the AUC value was 0.983 and the 95% confidence interval (CI) was 0.933-1. Using a larger number of variables did not achieve greater areas under the ROC curves; the maximum value remained 0.983 (95% CI, 0.933-1), obtained when 2 or 3 metabolites were used as the variables. Meanwhile, the predictive accuracy reached its maximum value of 93.2% when 5 or 7 metabolites were used as the variables (Fig. 7B). The metabolites in Fig. 7C are ranked by their contribution to distinguishing T2DM-CHD from HC. The greater the distance from the Y-axis, the greater the contribution of a particular metabolite in distinguishing cases from controls. This plot also indicates whether the metabolite concentration was increased or decreased in cases relative to controls. The metabolites in Fig. 7C included isopropanol, glycine, alanine, arginine, proline, glutamine, acetate, glucose, creatine, 3-methylhistidine, creatinine, isoleucine, tyrosine, valine, threonine and leucine, with importance decreasing in this order, while VLDL/LDL was rejected as it made little contribution to distinguishing T2DM-CHD from HC. The predicted class probabilities (averages over the cross-validation) for each sample using the best classifier (based on AUC) are illustrated in Fig. 7D. The verification results showed that of the 28 T2DM-CHD samples, 26 were predicted correctly, and of the 15 HC samples, 14 were predicted correctly. Therefore, the OPLS-DA prediction model exhibited a sensitivity of 92.9% and a specificity of 93.3% for T2DM-CHD diagnosis. On the basis of the selected biomarkers, ROC analysis revealed that T2DM-CHD generates signature biomarkers, and in return these biomarkers can be used for its diagnosis.

Metabolic Pathway and Function Analysis. In addition, based on the identified biomarkers, plasma metabolic pathway analysis was performed using MetPA software to reveal the most relevant pathways related to T2DM-CHD. Pathways with an impact value above 0.1 in the pathway topology analysis were screened out as potential target pathways. According to the impact values, there were finally 4 potential target pathways related to 8 of the metabolites identified in this research. These 4 pathways, disturbed when T2DM-CHD occurs (Fig. 8), comprise arginine and proline metabolism; glycine, serine and threonine metabolism; alanine, aspartate and glutamate metabolism; and pyruvate metabolism, each of which includes more than one target. The details of the pathways are displayed in supplementary Table S1 and Figures S2-S5, Supporting Information.

Discussion

The development of CHD and T2DM in patients is a serious problem that compromises the quality of life and survival of patients. Taking into account the tendency towards population aging observed during the last years, the problem of T2DM-CHD has become even more serious. The precise mechanism linking CHD and T2DM is not completely clear and there are still unknown factors. Biomarkers predicting T2DM-CHD are useful to identify individuals at high risk of developing T2DM-CHD. Metabolomics is increasingly being applied towards the identification of biomarkers for disease diagnosis, prognosis and risk prediction.
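Before turning to the pathway-level discussion, a quick arithmetic check of the verification figures quoted above (26/28 T2DM-CHD and 14/15 HC samples predicted correctly) may be helpful. The following self-contained C++ snippet recomputes sensitivity and specificity from these confusion counts; the counts are taken from the text, and the snippet is purely illustrative, not part of the study's analysis pipeline.

    #include <cstdio>

    int main() {
        double tp = 26, fn = 2;   // T2DM-CHD: 26 of 28 predicted correctly
        double tn = 14, fp = 1;   // HC: 14 of 15 predicted correctly
        double sensitivity = tp / (tp + fn);   // 26/28 = 0.929
        double specificity = tn / (tn + fp);   // 14/15 = 0.933
        std::printf("sensitivity = %.1f%%, specificity = %.1f%%\n",
                    100.0 * sensitivity, 100.0 * specificity);
        return 0;
    }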
In the present study, a 1H NMR-based metabonomic approach was used to demonstrate metabolic differences between HC and T2DM-CHD. Subsequent analysis of the metabolite profiles of plasma samples from CHD and T2DM patients could distinguish patients from healthy normal controls and provide a fingerprint of the metabolic changes that characterize the disease, highlighting the potential of metabolomic analysis in the evaluation of a disease condition. About 17 metabolic biomarkers were highly likely to be associated with T2DM-CHD and showed good performance in terms of both specificity and sensitivity. These metabolites included isoleucine, valine, isopropanol, alanine, leucine, acetate, proline, glutamine, arginine, trans-aconitate, creatine, creatinine, glucose, glycine, threonine, tyrosine and 3-methylhistidine. A diagnostic model using ROC curves was further constructed based on the metabolites of CHD and T2DM to predict T2DM-CHD, with a satisfying sensitivity of 92.9%, specificity of 93.3% and accuracy of 93.2%.

In our study, four metabolic pathways (arginine and proline metabolism; glycine, serine and threonine metabolism; alanine, aspartate and glutamate metabolism; and pyruvate metabolism) were identified from the T2DM and CHD patients (Fig. 8). The altered metabolites related to T2DM-CHD are mostly involved in energy metabolism and amino acid metabolism (Fig. 9).

Energy metabolism. Glucose is the major source material for ATP production in cells. Under normoxic conditions, ATP is mainly produced through the metabolism of glucose, which is composed of three pathways: the citric acid cycle (TCA cycle, Krebs cycle), the oxygen-independent conversion of glucose to pyruvate in the cytoplasm, and the oxygen-dependent electron transfer chain [30]. It is expected that the reduced oxygen level in CHD patients will significantly affect the TCA cycle, since it is oxygen dependent. Anaerobic glycolysis begins to play the dominant role in ATP production under hypoxic conditions, leading to the disorder of glucose. Creatine, synthesized in the liver and kidney, is transported through the blood and taken up by tissues with high energy demands. It can reflect the changes of energy metabolism in the muscles. Creatinine is derived from creatine and phosphocreatine. Creatine has the ability to increase muscle stores of phosphocreatine, potentially increasing the muscle's ability to resynthesize ATP from ADP to meet increasing energy demands. Therefore, the levels of creatine and creatinine also reflect the disorder of energy metabolism in T2DM-CHD patients.

Amino acid metabolism. Leucine, isoleucine and valine are essential amino acids whose carbon structures are marked by branch points (branched-chain amino acids, BCAAs). These three amino acids are critical to human life and are particularly involved in stress, energy and muscle metabolism. BCAAs, especially leucine, can be an important source of calories, superior as fuel to the ubiquitous intravenous D-glucose, and can also stimulate insulin release by pancreatic β-cells in vitro [31]. As important insulin secretagogues, BCAAs exert a regulatory effect on proteolysis and participate in building body organs [32]. Altered BCAA metabolism is one of the characteristics of T2DM. As the most abundant amino acid in serum, glutamine is the most important gluconeogenic amino acid precursor for adding new carbon to the glucose pool [33].
Turer et al. [34] used metabolomic profiling to compare cardiac extraction of plasma substrates, and demonstrated that patients with CHD had decreased concentrations of glutamate/glutamine. Alanine is highly concentrated in muscle and is one of the most important amino acids released by muscle, functioning as a major energy source. It is an important participant as well as a regulator in glucose metabolism, and its levels generally parallel blood sugar levels. Reduced concentrations of glutamine and alanine were also observed in the T2DM patients, which illustrates the enhancement of gluconeogenesis in the diabetic state. Some of the amino acid changes are associated with insulinopenia and would thus be seen as a normal response to gluconeogenesis. Our results are consistent with previous studies which indicate that the conversion of glutamine and alanine is high in T2DM patients [35]. Elevated BCAA levels have been associated with diabetes in several studies [37-39]; however, there is growing evidence that elevated BCAA levels may reflect a state of insulin resistance that is not necessarily specific to T2DM [40]. Arginine is one of the most versatile amino acids in animal cells, serving as a precursor for the synthesis not only of proteins but also of nitric oxide, urea, polyamines, proline, glutamate, creatine and agmatine [41]. It may stimulate the oxidation of energy substrates (including fatty acids and glucose) in adipocytes, liver, skeletal muscle, heart and the whole body. Fu et al. have reported that dietary L-arginine supplementation markedly reduced white-fat mass in Zucker diabetic fatty rats [42]. Isopropanol belongs to the family of alcohols and polyols. A previous report indicated that isopropanol is one of the products of propanoate metabolism, and the substrate for synthesizing acetone catalyzed by the enzyme isopropanol dehydrogenase [43]. Alcohol dehydrogenase oxidizes alcohols to either aldehydes or ketones, with concomitant reduction of NAD+ to NADH [44]. Thus, we suggest that isopropanol is associated with acetone metabolism, and may be a significant differential metabolite in T2DM.

To the best of our knowledge, this study presents a holistic view of the metabolic changes related to T2DM-CHD and may contribute to its diagnosis. However, the limitations of our study include a relatively small sample size in each group, which might prevent the differences in some metabolites from being fully apparent, and imperfect diagnostic approaches to the altered metabolites. In addition, our understanding of these altered metabolites and their underlying mechanisms remains at a rudimentary level. Future work will focus on confirming/validating the current metabolite findings in larger independent patient cohorts and elucidating the biological mechanisms.

Table 2. Quantitative comparison of metabolites found in the plasma of CHD patients, T2DM patients and healthy controls, listing the relative integrals in each group (mean ± std, × 10⁻²). The arrows (↑/↓) show metabolite levels increased/decreased compared with healthy controls. (a) The relative integrals of metabolites were determined from 1D 1H NMR analysis of the plasma of each group. (b) The values of the correlation coefficient were extracted from the correlation plots of the OPLS-DA models. (c) The p values were obtained from Student's t-test. The chemical shifts in boldface are those used in calculating integrals and p values.

Conclusion

In the present study, a 1H NMR-based metabolomics method combined with multivariate data analysis was used to distinguish T2DM-CHD patients from healthy controls with high reliability.
About 17 potential biomarkers related to T2DM-CHD were found by the analysis, and 16 of the 17 metabolites used as biomarkers in diagnosing T2DM-CHD exhibited a sensitivity of 92.9%, a specificity of 93.3% and an accuracy of 93.2%. This study has proved useful in improving the diagnosis of T2DM-CHD and may provide new insights for identifying additional novel biomarkers.

Materials and Methods

Ethical approval. All procedures were designed according to the ethical principles of the Declaration of Helsinki. The study protocol was ethically reviewed and approved by the Ethics Review Committee of Beijing University of Chinese Medicine, and the methods were carried out in accordance with the approved guidelines. Patients were aware of their involvement and signed a written informed consent agreeing to the use of the resulting information for medical publications.

Subjects and participants. The study was conducted with the approval of the ethical committee of Beijing University of Chinese Medicine and all study participants gave informed consent for the investigation. A total of 71 participants from the affiliated Dongzhimen Hospital of Beijing University of Chinese Medicine were matched for age and gender and distributed into four study groups: (i) T2DM patients; (ii) T2DM-CHD patients; (iii) CHD patients; (iv) healthy subjects as controls (HC). Detailed data about the four study groups are listed in Table 1. Diagnosis of diabetes was according to the American Diabetes Association criteria (2005), and the diagnostic criteria for CHD followed the WHO standard criteria (1979). From January 2013 to December 2014, we consecutively recruited patients who had been referred to the outpatient clinic of the affiliated Dongzhimen Hospital of Beijing University of Chinese Medicine for treatment of diabetes and coronary heart disease. There were 15 HC volunteers from the medical examination center of Dongzhimen Hospital in the same period of time. General information, past medical history, family history, personal history, and signs were collected within 24 hours after the patients were admitted. Details in view of the traditional Chinese four diagnostic methods were also recorded. The collection of patient histories and information from the traditional four diagnostic methods was performed by the relevant professionals. Specific requirements for the relevant professionals included holding the occupational qualification, being an attending physician or above, and having more than two years of relevant clinical experience.

Sample collection and preparation. Fasting blood samples were collected from the subjects in the morning by venipuncture and stored in EDTA-containing green-top tubes. The samples were then centrifuged at 3,000 × g for 10 min at 4 °C to isolate plasma. The plasma samples were stored at −80 °C until further processing and analysis.

Table 3. [Column headers: HMDB ID; chemical shift; integral in HC group (mean ± std) × 10⁻²; integral in T2DM-CHD group (mean ± std) × 10⁻²; r (T2DM-CHD vs HC, |r| ≥ 0.532); VIP.]

Plasma samples were thawed and prepared by mixing 200 μL of plasma with 400 μL of 1.5 M deuterated phosphate buffer (NaH2PO4 and K2HPO4, including 0.1% TSP, pH 7.47), adding D2O up to 600 μL if the volume of plasma was insufficient. The mixture was left to stand for 5 min at room temperature and then centrifuged at 13,000 rpm at 4 °C for 15 min. The supernatant (550 μL) was then transferred into a 5 mm NMR tube for NMR analysis.
Acquisition of 1H-NMR spectra. All the samples were analyzed at 298 K using a VARIAN VNMRS 600 MHz NMR spectrometer (Varian Inc., Palo Alto, CA) operating at 599.871 MHz, using a 5 mm inverse-proton (HX) triple resonance probe with a z-axis gradient coil. 1H NMR spectra of plasma were recorded using the water-suppressed standard 1D CPMG pulse sequence (RD-90°-(τ-180°-τ)n-ACQ), where a fixed total spin-spin relaxation delay 2nτ of 320 ms was applied to attenuate the broad NMR signals from slowly tumbling molecules (such as proteins) and retain those from low-molecular-weight compounds and some lipid components. The free induction decays (FIDs) were collected into 64 K data points with a spectral width of 12,000 Hz and 128 scans. The FIDs were zero-filled to double size and multiplied by an exponential line-broadening factor of 0.5 Hz before Fourier transformation (FT). Standard COSY, TOCSY, HMBC and J-resolved spectra were also acquired for metabolite identification purposes for selected plasma samples.

Data reduction and multivariate pattern recognition analysis. All of the 1H NMR spectra were manually phased and corrected for baseline distortion with MestReNova 7.1.0 software (Mestrelab Research, Spain). All the spectra were referenced to the methyl group of lactate at δ 1.336. In order to exploit all the metabolic information embedded in the spectra, all NMR spectra (δ 0.5-9.0) were segmented into bins of equal widths of both 0.01 ppm and 0.001 ppm. The spectral regions of δ 4.68-5.10, δ 3.57-3.65, δ 3.06-3.23, δ 2.66-2.72 and δ 2.53-2.60 were excluded to eliminate variations caused by imperfect water suppression, EDTA, and EDTA-metal complexes. The area under the spectrum was then calculated for each segmented region and expressed as an integral value. The integrated data were normalized to the total sum of the spectrum before multivariate statistical analysis, to give the same total integration value for each spectrum. Subsequently, the integral values were imported into SIMCA-P+ 12.0 (Umetrics, Sweden) for multivariate statistical analysis. The data were mean-centered for PCA and PLS-DA [45-47], and, in order to improve the separation due to groups and minimize other biological and analytical variation, sample classes were modeled using the OPLS-DA algorithm with a unit-variance scaled approach. The PCA and PLS-DA score plots were produced with the first and second principal components, while OPLS-DA was visualized with the first principal component and the first orthogonal component. The model coefficients locate the NMR variables associated with a specific intervention, given the class labels as y variables. The model coefficients were then back-calculated from the coefficients incorporating the weights of the variables in order to enhance the interpretability of the model; in the coefficient plot, the intensity corresponds to the mean-centered model (variance) and the colour scale derives from the unit-variance-scaled model (correlation). Thus, the biochemical components responsible for the differences between samples detected in the scores plot can be extracted from the corresponding loadings, with the weight of the variable contributing to the discrimination. The coefficient plots were generated with MATLAB scripts (downloaded from http://www.mathworks.com) with some in-house modifications, colour-coded with the absolute value of the coefficients (r).
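To make the binning and normalization step concrete, here is a minimal C++ sketch under simplifying assumptions: a uniformly sampled real spectrum, a fixed number of points per bin, and no exclusion regions (which in practice would be removed before normalization). It illustrates the constant-sum normalization described above, not the actual MestReNova/SIMCA processing.

    #include <numeric>
    #include <vector>

    // Integrate equal-width bins and normalize to the total spectral area.
    std::vector<double> binAndNormalize(const std::vector<double>& intensity,
                                        int pointsPerBin) {
        std::vector<double> bins;
        for (std::size_t i = 0; i + pointsPerBin <= intensity.size(); i += pointsPerBin)
            bins.push_back(std::accumulate(intensity.begin() + i,
                                           intensity.begin() + i + pointsPerBin, 0.0));
        double total = std::accumulate(bins.begin(), bins.end(), 0.0);
        for (double& b : bins) b /= total;   // each spectrum then sums to 1
        return bins;
    }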
Tiling and optimizing time-iterated computations over periodic domains

This paper deals with optimizing time-iterated computations on periodic data domains. These computations are prevalent in computational sciences, particularly in partial differential equation solvers. We propose a fully automatic technique suitable for implementation in a compiler or in a domain-specific code generator for such computations. Dependence patterns on periodic data domains prevent existing algorithms from finding tiling opportunities. Our approach augments a state-of-the-art parallelization and locality-enhancing algorithm from the polyhedral framework to allow time-tiling of stencil computations on periodic domains. Experimental results on the swim SPEC CPU2000fp benchmark show a speedup of 5× and 4.2× over the highest SPEC performance achieved by native compilers on Intel Xeon and AMD Opteron multicore SMP systems, respectively. On other representative stencil computations, our scheme provides performance similar to that achieved with no periodicity, and a very high speedup is obtained over the native compiler. We also report a mean speedup of about 1.5× over a domain-specific stencil compiler supporting limited cases of periodic boundary conditions. To the best of our knowledge, it has been infeasible to manually reproduce such optimizations on swim or any other periodic stencil, especially on a data grid of two dimensions or higher.

INTRODUCTION

Stencil-style computations are widely used in solving partial differential equations over discretized domains. They have been extensively studied by the parallel and high-performance computing community. Stencil computations involve repeatedly updating points in a data grid of a certain dimensionality. The computation performed at each point in the grid uses values from its immediate or short-distance neighbors. These updates to the grid are repeated a certain number of times or until convergence. Hence, as originally viewed and specified, the computation accesses the entire grid in each iteration before accessing it again in the next iteration. Such an execution order is memory bandwidth-bound when the data grid does not fit in the last-level cache.
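As a concrete picture of this access pattern, consider the minimal sketch of a Jacobi-style 5-point sweep below (the grid size N, the number of time steps T, and the averaging weights are illustrative assumptions): each time step streams the whole grid, so when the grid exceeds the last-level cache, every sweep is served from main memory.

    #include <utility>

    // Naive time-iterated 5-point stencil: one full-grid sweep per time step.
    void jacobi2d(double* A, double* B, int N, int T) {
        for (int t = 0; t < T; ++t) {
            for (int i = 1; i < N - 1; ++i)
                for (int j = 1; j < N - 1; ++j)
                    B[i*N + j] = 0.25 * (A[(i-1)*N + j] + A[(i+1)*N + j] +
                                         A[i*N + j - 1] + A[i*N + j + 1]);
            std::swap(A, B);   // the two grids alternate roles between time steps
        }
    }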
Stencil computations can be performed on discretized domains that are either non-periodic or periodic. Non-periodic domains have boundaries that do not change. However, periodic domains are often used to model a portion of a larger space. Periodicity also arises when one models a hollow object: the object is cut and unrolled flat in a lower-dimensional space. For example, domains like hollow spheres, cylinders, or tori can be cut and flattened out in 2-d space. Similarly, a ring can be cut and flattened into a 1-d array. When this is done, the points on either boundary have to be treated as neighbors with respect to one another, resulting in wrap-around dependences. These wrap-around dependences create cyclic dependences between tiles in an otherwise valid tiling. Although a lot of effort has been put into optimizing stencil computations, there is very little work on optimizing those with periodicity. This is an important domain of numerical simulation, since periodic boundary conditions are prevalent in partial differential equation solvers. The fact that SPEC CPU2000fp included swim (171.swim) as one of its benchmarks is strong evidence of this. The swim benchmark is a weather prediction program that performs finite difference modeling of shallow water equations with periodic boundary conditions on a two-dimensional grid [27]. All of its running time is spent in the stencil computation. Modeling the earth's atmosphere and surface also involves two-dimensional periodic domains [26].

Tiling for locality and parallelism [17,37,41] has been studied intensively for the optimization of stencils with no periodicity. Tiling for locality allows simultaneous reuse in multiple directions: the directions correspond to the loop dimensions being tiled. In particular, the loop that iterates over time in stencils carries temporal reuse, while the space loops carry constant reuse as well as spatial reuse along the innermost space dimension. Tiling for parallelism allows a reduction in the frequency of synchronization. Tiling for locality and parallelism together makes the computation less dependent on memory bandwidth and, in the context of multicore processors, can make it scale better.

We make a key observation about stencils with periodic boundary conditions: tiling can be enabled by first splitting the iteration domain (or index set) in a very specific way. Then, an extension of existing auto-parallelization and tiling techniques enables the necessary program transformations, allowing for dramatic improvement in performance. In particular, our technique improves performance by several factors when compared to code that just tiles and parallelizes the loops that iterate over the data space.

The rest of this paper is organized as follows. Section 2 introduces the technical background. Section 3 discusses challenges and the feasibility of various approaches. Section 4 and Section 5 describe our solution. Experimental results are presented in Section 6 before concluding in Section 8.

BACKGROUND

In this section, we introduce notation and the mathematical background necessary for the sections that follow.

Definition 1 (Hyperplane). A hyperplane is an (n-1)-dimensional affine sub-space of an n-dimensional space.

Since we are interested in integer spaces, by a hyperplane we refer to the set of all vectors x ∈ Zⁿ such that h·x = k, for k ∈ Z.
Two vectors v₁ and v₂ lie in the same hyperplane if h·v₁ = h·v₂. The set of parallel hyperplane instances corresponds to different values of k, with the row vector h normal to the hyperplane.

A hyperplane divides a space into two half-spaces, the positive half-space and the negative half-space. If the coefficients of h are integers, the set of integer points is divided into a non-negative half-space (h·x ≥ k) and a negative half-space (h·x ≤ k − 1).

Index sets and dependences. Let S₁, S₂, ..., Sₙ be the statements of a program. The set of all iterations i_S of S is called the index set of S and is represented by I_S. Let m_S be the dimensionality of statement S. A program parameter is a symbol that is not modified in the portion of the program being represented. Problem sizes appearing in loop bounds are typical examples of program parameters. Let m_p be the number of program parameters, and p be the vector of program parameters. Let E be the set of dependence edges. For an e ∈ E, let D_e be the dependence polyhedron. D_e is a relation between source and target iterations, represented by s and t respectively, that are in dependence. For example, the vertical dependence instances in Figure 1 and Figure 2 correspond to the dependence polyhedron

D_e = \{ \langle s, t \rangle \mid t_1 = s_1 + 1,\ t_2 = s_2 \}

where the first component of an iteration is the time step and the second is the space point.

Tiling

Tiling is considered valid if and only if a total order can be constructed for the execution of all tiles, with each tile executed atomically. This implies that a tiling is valid if and only if there is no dependence cycle between the tiles. This can be very hard to check statically in general. Hence, compiler optimizers work with sufficient conditions such as that of non-negative dependence components: this dates from the pioneering work of Irigoin and Triolet [17], and a large amount of literature on the validity of tiling relates to or derives from it [37,25,20,1,5]. In particular, the condition involves checking whether all components corresponding to (yet unsatisfied) dependences are non-negative for the set of contiguous loop-nest dimensions being tiled. In a more general polyhedral setting, a tiling hyperplane is an affine function of the form:

\phi_S(i_S) = h_S \cdot i_S + h_{S0}    (1)

and a sufficient condition for the φ_Si to be a statement-wise valid tiling is written as:

\phi_{S_j}(t) - \phi_{S_i}(s) \ge 0, \quad \langle s, t \rangle \in D_e    (2)

When the above condition is enforced for all edges e unsatisfied up to that depth, all linearly independent solutions for φ in (1) form a band of valid tiling hyperplanes at that depth. Often, when rectangular tiling is not valid on a given iteration space, the space can in many cases be transformed so that rectangular tiling is valid in the new space, i.e., by finding the right set of φ's. E.g., a short negative dependence component can be dealt with through loop skewing with respect to an outer loop that satisfies that dependence. However, a well-known scenario in which such a transformation is not possible is when there are long dependences in either direction along a dimension. As can be seen in Figure 2, periodic stencil computations have such dependences and cannot be tiled along all dimensions readily.

Stencils

Figures 1 and 2 show the iteration space and dependences for stencil computations without and with periodic boundary conditions, respectively. As can be seen, for non-periodic stencils all dependences are near-neighbor, while for the periodic ones there are edges wrapping around the boundaries.

Figure 2: with periodicity

There are multiple ways one could implement the periodic boundary conditions in program code.
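For concreteness, here is a sketch of two such variants along the lines of Figure 3, assuming a 1-d heat-equation update with (1, 2, 1)/4 weights and modulo-2 indexing of the time dimension; the exact listings in Figure 3 may differ in details.

    // (a) periodic via ghost cells: copy wrap-around values before each sweep.
    // A has two time planes of size N+2; interior points are 1..N.
    for (int t = 0; t < T; ++t) {
        A[t%2][0]   = A[t%2][N];   // left ghost cell  <- right boundary
        A[t%2][N+1] = A[t%2][1];   // right ghost cell <- left boundary
        for (int i = 1; i <= N; ++i)
            A[(t+1)%2][i] = (A[t%2][i-1] + 2.0*A[t%2][i] + A[t%2][i+1]) / 4.0;
    }

    // (b) periodic via boundary handling inside the loop, written here with
    // modulo indexing over a grid of size N (the conditional form instead
    // tests for i == 0 and i == N-1 explicitly).
    for (int t = 0; t < T; ++t)
        for (int i = 0; i < N; ++i)
            A[(t+1)%2][i] = (A[t%2][(i-1+N)%N] + 2.0*A[t%2][i]
                           + A[t%2][(i+1)%N]) / 4.0;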
Figures 3a and 3b show two ways of implementing a simple periodic stencil on a one-dimensional grid. Figure 3b uses a conditional to make the boundary updates access the correct values, while Figure 3a employs copies onto ghost regions to take care of the flow of values across boundaries. When the copy statements are taken into account, the data flow of both codes is equivalent. The conditional can be hoisted out to avoid overhead in the innermost loop. Also, the above code is written with modulo indexing for the time dimension instead of using two copies of the array; the latter is common practice as well. The swim SPEC benchmark in particular uses copies for the boundary conditions, and uses an old and a new copy of each array instead of indexing the time dimension with a modulo operation.¹

Figure 3: Stencil: heat-1d equation. (a) periodic (with ghost-region copies); (b) periodic (with boundary conditionals).

Time loops and space loops. The number of times the space grid is updated is determined by the number of iterations of the outermost loop, which we refer to as the time loop. The loops that update points in the grid are referred to as space loops. In this context, time tiling refers to tiling the time loop, i.e., the outermost loop. Note that the space loops, being inner parallel loops, can be freely tiled. Time tiling allows temporal reuse to be exploited along the time dimension. This is often the source of a dramatic improvement in single-thread performance, as well as excellent scaling, since time-tiled code may be either less memory bandwidth-bound or no longer memory bandwidth-bound.

For non-periodic stencils, two existing techniques for time tiling are shown in Figure 4. The first one is classical parallelogram-shaped tiling (Figure 4a), which can be obtained by skewing the space dimension(s) with respect to the time loop. Exposing parallelism on such tiles induces a pipelined startup and drain delay, since there is no boundary along which tiles can start in parallel. The second one, which we refer to as diamond tiling, was recently proposed [3]. It allows concurrent start of all diamonds along the horizontal line: these tiles have no dependences among each other and can start in parallel. It leads to better load balance and maximizes the number of tiles on the wavefront without any pipeline fill-up and drain delays.

CHALLENGES AND APPROACHES

This section explores different approaches to the problem of tiling iterated stencil computations with periodic boundary conditions. While these approaches are generally not applicable in a compiler, and sometimes even unsuitable for manual transformation, they provide valuable insights into the challenges involved.

As mentioned earlier, for stencils on periodic domains, the wrap-around dependences at the boundaries create dependence cycles in an otherwise valid tiling. For example, there is no cyclic dependence between the tiles in Figure 4a or Figure 4b. However, applying this same tiling to Figure 2 will create a cyclic dependence between the tiles at either boundary.

Merging boundaries

We observe that the cyclic dependence between tiles can be broken if the tiles at either boundary of a dimension can be merged into a single special tile. If the partial tiles at each end match, as is the case in Figure 4b, they could be merged to give a full tile like those in the middle. However, this is not possible in general: depending on the alignment, the heights of the partial tiles at either boundary may not be the same.
¹ Note that using a modulo with respect to two or any power of two does not hurt performance, since it translates directly to a bitwise operation rather than a branch. Alternatively, such modulo indexing can be eliminated through partial unrolling of the loop.

As shown in Figure 5, a proper choice of tile alignment can be found that guarantees matching heights for the partial tiles, by shifting the tile origin by an amount equal to half the remainder when the dimension length is divided by the tile size. Note that even with such an alignment, if the boundary tiles are not exactly half of a full tile, one ends up with either a smaller full tile or a larger non-convex tile, and must then alternate between the two shapes every time-tile step. A roadblock to this approach is that it is not practical for compiler automation: it requires knowledge of a fixed tile shape and size, and it does not explicitly say in which cases an invalid tiling can be fixed to make it valid, nor which of the tiling schemes is to be chosen. It would also miss other, more direct ways of tiling the space that do not require such a post-correction. In addition, a stencil in which dependences arise through multiple statements makes such a trial-and-error approach almost infeasible.

Cut and paste the dependent portion

Figure 6 shows an alternative approach where the cyclic dependences are broken by cutting and pasting the loop iterations that are transitively affected by periodic boundary conditions. This displacement effectively shortens the periodic boundary dependences. It is also equivalent to circular loop skewing [38].

This approach requires determining the set of iterations on which another set depends. In other words, it requires computing a transitive closure of dependences, which has remained a very hard problem. Practical approximation schemes for it remain extremely complex and expensive [35]. Libraries like isl [35,36] that do implement transitive closure often recommend avoiding its usage. In particular, to enable time tiling for a code like swim from the SPEC suite, one needs to compute a transitive closure over tens of dependences across multiple statements. In addition, unlike in the one-dimensional case, the backward slice that a tile depends on is not convex, i.e., it is a union of a large number of convex polyhedra. The number of polyhedra in such a union increases with the dimensionality of the space, i.e., the number of space dimensions in the stencil.

Duplicating computation at boundaries

Redundant computation can be performed at each boundary to eliminate the dependence between boundary tiles resulting from periodic conditions. This is equivalent to replicating computation from the opposite boundary. Doing so would be similar to the approach used in [18], where neighboring tiles were overlapped and redundant computation performed to break a dependence and allow concurrent start. Formalizing and implementing this approach would again require determining the set of dependent iterations, i.e., computing the transitive closure. Hence, it would suffer from the same limitations as the cut-and-paste approach and, in addition, lead to redundant computation. The amount of redundant computation needed increases with the size of the tile along the time dimension and with the dimensionality of the data space.
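Before moving on, a brief operational picture of the cut-and-paste displacement of Figure 6 may help: it is equivalent to circular loop skewing, i.e., visiting the space dimension in an order rotated by an offset that grows with time. The sketch below is our own minimal rendering of that idea for the 1-d stencil; the rotation schedule and names are assumptions, not the transformation the paper ultimately uses.

```c
/* Circular loop skewing (cf. [38]): visit space points in the rotated
 * order i = (ii + t) % n, so the iterations feeding the wrap-around
 * dependences are displaced over time rather than pinned at i = 0 and
 * i = n-1. The Jacobi-style update itself is unchanged, since each
 * step reads only cur and writes only nxt. */
void step_rotated(double *restrict nxt, const double *restrict cur,
                  int n, int t) {
    for (int ii = 0; ii < n; ii++) {
        int i = (ii + t) % n;
        int l = (i == 0) ? n - 1 : i - 1;
        int r = (i == n - 1) ? 0 : i + 1;
        nxt[i] = (cur[l] + 2.0 * cur[i] + cur[r]) / 4.0;
    }
}
```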
Folding The approach of folding [8,42] used in the systolic array literature provides an interesting conceptual basis for addressing this problem.The folding approach folds the data domain along the middle of each dimension to bring the boundaries together, placing them one on top of the other.The technique of smashing as described by Osheim et al. [21] uses this idea of folding to describe time tiling for periodic stencil computations.They view smashing as "a data allocation technique rather than a loop/iteration transformation".This statement is partly inaccurate since dependence distances cannot be shortened by allocating, reordering, or laying out data in a particular way.They can only be changed by reordering iterations since the distances correspond to distances in the iteration space.Hence, allocating or storing data does not change the ability to tile unless the iterations themselves were reordered with respect to the order in which they were performed on the original data domain.It is assumed that the authors meant the execution order is also implicitly changed with the new data layout.Figure 7 illustrates the effect of folding on the 1-d heat stencil: the two horizontal halves (in the data space) are stacked on top of each other, converting the long cyclic dependences into short ones. The folding approach is attractive in that in the folded view of the iteration space, all dependences are short and existing tiling techniques will work without any fixing, replication, or computation of transitive closure.Osheim et al. [21] present the smashing technique as a manual or semi-automatic optimization strategy: there are no heuristics to determine when and how to fold or smash.In addition, visualizing it for higher than two dimensional data grids is not straightforward, and hence, there is a need for a formalism to reason about, express, and compute transformations that achieve the proposed effect.Doing so also automatically solves the code generation problem. Our approach is strongly influenced by folding, but it handles the periodic boundary constraints through iteration reordering transformations only.We require no changes to the data layout: userdefined data spaces remain unaffected. INDEX SET SPLITTING TO CUT LONG DEPENDENCES This section describes the first systematic method to enable tiling of stencil computations with periodic boundary conditions.The first step in our approach is to perform a preprocessing that splits the index sets of statements.The second step is to ensure that the transformation space allows the necessary transformations to be found for the new index sets and other performance enhancing transformations can be applied on it.This section deals with the first step while the next one deals with the second. The method we propose subsumes folding techniques summarized in the previous section.The key reason that a transformation like folding cannot be performed by existing frameworks is that affine transformations typically apply the same affine schedule to the entire index set of a program statement.If the statement's index set is partitioned at compile time into a finite number of partitions, and a possibly different affine transformation may be applied to each one, folding-like transformations fall into the space of valid, piece-wise affine transformations.Thus, an index set splitting heuristic has to be devised that suits the needs of periodic stencils. 
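As a concrete illustration of the piece-wise view just mentioned, the 1-d folding of Figure 7 can be written as a piece-wise affine index map. The two-layer encoding below is our own sketch of the idea, not code from the smashing work, and it assumes an even domain size.

```c
/* Fold a 1-d periodic domain of (even, assumed) size n at its middle:
 * point i maps to layer 0 or 1 at folded position min(i, n-1-i). The
 * two boundaries i = 0 and i = n-1 land at the same folded position on
 * opposite layers, so the long wrap-around dependence becomes a short
 * inter-layer one. Each branch of the map is affine in i, which is
 * exactly the piece-wise affine setting described above. */
typedef struct { int layer, pos; } Folded;

Folded fold(int i, int n) {
    Folded f;
    if (i < n / 2) { f.layer = 0; f.pos = i; }          /* left half  */
    else           { f.layer = 1; f.pos = n - 1 - i; }  /* right half */
    return f;
}
```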
Short and long dependences We first explain the classification of dependences as being short or long along a particular dimension.A single dependence represented by an edge e in the dependence graph can correspond to multiple dependence instances, i.e., multiple source and target iteration pairs, s, t that are in dependence.A dependence instance is short along a dimension if its length, i.e., the difference (or distance) between the source and target iteration along that particular dimension can be bounded by a small constant.This constant is typically a small number and it is important that it not be comparable to loop trip counts.We see that a value of five is sufficient in practice.This value is fixed and it is used for any input program and all its dependences.A larger value such as ten could also be used as long as it is not comparable to the trip counts we expect for the problems of interest here.At the same time it should be larger than the stencil width.In practice, choosing this value to classify dependences as short is never a problem.We find that a value like five works well for the entire domain of interest.For example, for a 3-d stencil used, the grid sizes typically of interest while optimizing for execution time are at least a few hundred along each dimension. A dependence is considered long if it is not short.Intuitively, a long dependence is one whose length is of the order of iteration space extents and any bound on its length has to involve program parameters that are symbols appearing in loop bounds, typically problem sizes.A dependence whose length varies (depending on the particular source/target instances in dependence) from a small value to a large value is also thus a long dependence.For a dependence edge to be labeled short along a dimension, all of its dependence instances should be short along that dimension; while a single dependence instance being long will label the dependence as being long.In addition, if a dependence is referred to being long in a dimension-independent manner, it implies that there was at least one dimension along which the dependence was long.The above notion of short or long dependences is only meaningful in the context of a schedule for the iterations.When referring to it without mentioning a schedule, these are implicitly assumed to be defined for the identity schedule that corresponds to the original execution order.Applying another schedule will change these dependence distances and their property of being long or short along a dimension.Note that dependence distances for inter-statement dependences are only meaningful under a schedule since the source and target statements could have different dimensionalities.Since a a statement-wise schedule maps all statements to the same set of time dimensions, the distance between the mapped points in the transformed space is meaningful. 
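A literal rendering of this classification is sketched below; the bound of five comes from the text, while representing an edge by an explicit list of per-instance distances along one dimension is our simplification (in the tool these questions are answered on the dependence polyhedron, where distances may be parametric).

```c
#include <stdbool.h>
#include <stdlib.h>

#define SHORT_BOUND 5  /* the fixed constant discussed in the text */

/* An edge is short along dimension k only if *every* instance is short
 * along k; a single long instance makes the whole edge long along k. */
bool edge_is_short_along(const int *dist_k, int ninstances) {
    for (int j = 0; j < ninstances; j++)
        if (abs(dist_k[j]) > SHORT_BOUND)
            return false;
    return true;
}
```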
In the examples presented so far, the dependence edge that captures the flow of values across the boundaries is a long dependence while all remaining dependences in the grid are short dependences.The length of the arrows in Figure 2 captures this in an obvious way.As an example, for the code in Figure 3b, the long dependence from the left boundary to the right one between s = (t, i) and t = (t , i ) is given by: The above is long along the inner dimension (i/i ) in the positive direction with length N − 1 while short along the outer dimension (t/t ).The short blue arrows are short dependences with the distances being standard distance vectors (1,0), (1,1), and (1,-1): these are the well-known constant distance vectors used in compiler literature [2,4,38]. Key idea. The approach we describe below attempts to cut all dependences that are long along one dimension roughly at its mid-point, while not affecting how the shorter dependences will be transformed in the resulting space.A hyperplane is used to cut the statement's index set into half spaces.After this cut, a separate affine transformation can be applied to each half space.The goal is to allow transformation frameworks to make all dependences short along at least one more dimension than was previously possible.This is sufficient to enable time tiling for periodic stencils. We first describe our approach for the case when all dependences are intra-statement.In our context, this is a stencil on a single data grid.An affine hyperplane is defined by two characteristics: its orientation given by its normal vector h, and its position given by an affine function of the program parameters, v( p), i.e., v( p) is of the form P• p + r.Finding a suitable hyperplane cut is the same as finding a suitable orientation and position.The cut itself is given by With such a cut, the index set of S, I S , is partitioned into two halves given by I + S and I − S : For example, a possible cut is 2i = N, cutting the i dimension in the middle; this corresponds to h = (0, 2), i S = (t, i), and v( p) = N with P = (1), p = (N), and r = 0. Having two linearly independent hyperplanes ( h) would generate four partitions, and so on. While trying to cut long dependences, we need to make sure the short dependences can continue to remain short, i.e., no short dependence is made long while the long ones are being reduced through a future automatic transformation algorithm.Consider separate affine transformations being applied to each half-space of some cutting hyperplane.If both ends of a short dependence lie on the same side of the hyperplane, the dependence continues to remain short because the same affine transformation is applied on it.If the ends lie on different sides, they both stay at a constant distance from where the dependence crosses the hyperplane.The crossing point thus has to be a fixed point for both affine transformations, and then the dependence will remain short.This provides the intuition that if the long dependences are all cut at their midpoints or at a fixed distance from their mid-points, the source and target iterations of the long dependences can be brought close with the new split index sets while keeping the original short dependences short.Note that it would be valid even if some long dependence instances, potentially belonging to the same dependence edge, are cut at their mid-points while others are cut close to it.We now propose a technique that automatically finds such a cut whenever possible. 
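Before the derivation that follows, here is what the example cut 2i = N means operationally for the 1-d case: the single space loop is split into two half-space loops, to which later passes may assign different affine schedules. The loop rendering and the update stub are our illustration; at this stage no reordering has yet occurred.

```c
static void update(int t, int i) {  /* stencil body stub (assumed) */
    (void)t; (void)i;               /* a real body would update A[t][i] */
}

/* Index set splitting by the hyperplane 2i = N:
 *   I_S+ = { (t,i) : 2i <= N }   and   I_S- = { (t,i) : 2i >= N+1 }.
 * The execution order is unchanged; only the set is partitioned. */
void split_nest(int T, int N) {
    for (int t = 0; t < T; t++) {
        for (int i = 0; 2 * i <= N; i++)       /* I_S+ */
            update(t, i);
        for (int i = (N + 2) / 2; i < N; i++)  /* I_S-: smallest i with 2i >= N+1 */
            update(t, i);
    }
}

int main(void) { split_nest(2, 9); return 0; }
```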
The following approach is taken for each dimension along which there are long dependences in both directions, positive and negativesince this is what prevents tiling.Let s and t be the source and target iterations corresponding to a dependence edge e, characterized by dependence polyhedron D e , that is long along a dimension.In order to cut all dependences within a bounded constant distance from their mid-points, the cutting hyperplane h has to satisfy the following: for some m ∈ Z + that we will minimize later.We thus have Note that h is unknown while s, t are related through the dependence polyhedron.The above can be linearized with the affine form of the Farkas lemma [28,11], i.e., if f 0 , f 1 , . . ., f m are the faces of D e , then there exist The coefficients of iterators in s and t from the LHS and RHS can be equated to obtain constraints linear in h's coefficients, P's coefficients, r, and m.The λ i s, also called Farkas multipliers, can be eliminated locally for each of the long dependences along that dimension, and the constraints aggregated.The constraints are now solved with the objective to minimize m.If a solution is found, a hyperplane orientation and position is obtained that cuts within a non-parametric or constant distance from the mid-points of all dependence instances in question, and we succeed in finding a split that in turn will allow distinct affine transformations on the split sets that shorten all of these dependences.m is that constant distance since it is free of program parameters ( p). m = 0 implies that all mid-points lie on h.i S = v( p).If a solution is not found, there exists no hyperplane that cuts all dependences at a bounded distance from their respective mid-points, and no index set splitting is applied for that dimension.This approach is repeated along all canonical dimensions along which dependences are long in both directions. Figure 8 shows two other synthetic examples where the cut will lead to better transformations.In general, our technique is robust and resilient to variation in the boundary dependences, width and pattern of the stencils.This is because we minimize the upper bound on the distance of the mid-points of the dependences from the splitting hyperplane, m, as opposed to looking for solutions with m = 0.It thus clearly works for the entire domain of interest.Comments on its applicability beyond this domain are made towards the end of the next section.As an example, for the periodic 1-d heat stencil from Figure 2, the split index sets are given by: I S is thus replaced with two statements, with index set I S + and I S − .We will show in Section 5 how profitable transformations can be applied automatically with the new statements. Multi-statement stencils In the case of multiple statements, long self-dependences could be hidden since they could be implied transitively through other inter-statement dependences which cannot themselves be classified as long or short.This is the case for stencils written with copies at the boundaries for every time step.If the approach described in the previous section is applied just for the intra-statement dependences in the case of multiple statements, it will not enable tiling even if it succeeds in finding a cut. 
This problem can be addressed by computing transitive dependences with respect to a set of dependences. A full transitive closure is not needed: one may compute transitive dependences only for paths leading back to the same statement. Once this is done, the approach described in the previous subsection is applied to determine the index set splitting. In the case of periodic stencils, such transitivity is over a path of length two. However, if the code is written not with copies but with conditionals (Figure 3b), the need for computing transitive dependences does not arise even with multiple statements. This is the case we encounter in all experimental evaluations.

POST-ISS SHORTENING AND TRANSFORMATIONS

In the previous section, we showed that index set splitting opens up the possibility of shortening dependences. We now argue that the Pluto framework, which shortens dependences, naturally finds the tiling transformation on the split index sets.

Pluto scheduling algorithm

We first provide some background on the Pluto scheduling algorithm. Consider a one-dimensional affine transformation for S:

where iS is an iteration vector of S, mS is the dimensionality of statement S, mp is the number of program parameters, i.e., symbols appearing in the program (typically representing problem sizes), and p is the vector of those program parameters. Each statement has its own set of ci and di coefficients: the ci correspond to the index set dimensions, while the di correspond to parameters and model parametric shifts. For convenience, the notation we use does not carry a superscript specific to S, i.e., cSi, dSi. The Pluto algorithm [5] finds such one-dimensional affine transformations, iterating from the outermost level inwards while looking for tilable bands, i.e., for φ's satisfying:

The objective function it uses is that of reducing dependence distances using a parametric upper-bounding function, first proposed as a technique by Feautrier [11]. u and w are then minimized, in order of decreasing priority, using the lexicographic minimum lexmin (u, w, . . .).

Dependence shortening

Once an index set splitting is performed, the long dependence is still long, since a new execution order has not yet been specified: the split index sets inherit their schedules from the unsplit index set. For example, the long dependence in Figure 2 (code in Figure 3b) that goes from the left boundary to the right one is given by (8); its dependence distance along the two dimensions is the vector [1, N − 1]^T, which is long along the second dimension under the original execution order. We now show that the objective function (6) is well suited to enable tiling for periodic stencils as well. Note that a solution with u = 0 is preferred over a solution with u ≠ 0, since the former has a better objective function value as per (7). Importantly, u = 0 corresponds to a transformation that shortens all dependence distances to a constant (due to (6)), the constant itself being given by w, which is also minimized as part of (7). Hence, transformations that shorten all dependences to within a fixed constant, that constant being w, have a better objective function value than those that do not.
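For reference, the bounding constraint referred to as (6) and the objective (7) have the following shape. This is our reconstruction from the surrounding description (Feautrier-style parametric bounding, with statement superscripts elided as in the text), not a verbatim copy of the elided equations.

```latex
% (6): every dependence distance is bounded by a parametric affine form
\phi_{t}(\vec{t}\,) - \phi_{s}(\vec{s}\,) \;\le\; \vec{u}\cdot\vec{p} + w,
\qquad \forall\,(\vec{s},\vec{t}\,)\in D_e,\ \ \forall\, e \in E
% (7): u and w minimized with decreasing priority, followed by the
% remaining transformation coefficients
\operatorname{lexmin}\ \left(\vec{u},\; w,\; \ldots,\; c_i,\; d_i\right)
```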
For the dependence in (8), with index set splitting, s = (t, i) and t = (t′, i′) are placed into two different index sets, S+ and S−. Consider the following transformation on S+ and S−:

For S−, the above can be seen as a composition of (t, i) → (t, N − i) with the diamond tiling transformation (t, i) → (t + i, t − i), resulting in the transformation (t − i + N, t + i − N); this is the same as (10) written concisely. With the transformations in (9) and (10), the new distance for dependence (8) becomes constant: for a representative instance s = (t, 0), t = (t + 1, N − 1), we get T_{S+}(t, 0) = (t, t) and T_{S−}(t + 1, N − 1) = (t + 2, t), i.e., a distance of (2, 0). Hence, the dependence is made short along both dimensions by T; this is implied by u = 0 at both levels. The other long dependence is shortened similarly by this transformation. Though other transformations also enable such shortening, this one, in addition, enables concurrent start, leading to tiles of the shape shown in Figure 4b [3]. However, as long as dependences can be shortened, one can exploit temporal locality along the time dimension and reduce the frequency of synchronization with tiling. Thus, the objective is still well suited for tiling periodic stencils. Another time tiling transformation, which leads to parallelogram tiles of the shape in Figure 4a (still with u = 0 at both levels), is:

As can be viewed geometrically, and as a direct fall-out of the index set splitting proposed in Section 4.2, the transformations that allow shortening of dependences once index sets have been split include reversals as well as negative (backward) parametric shifts. In particular, for the 2-d grid in Figure 9, three of the four stacked split sets have to be reversed and shifted backwards along one or two dimensions in order to be aligned as depicted. Consequently, negative coefficients are needed in the statement-wise affine transformation functions. The Pluto algorithm [5] does not allow negative coefficients in its transformations, primarily because of the combinatorial difficulty of avoiding the trivial zero solution for φ's coefficients and of modeling the space of solutions representing linearly independent sub-spaces [7]. This trade-off between expressiveness and computational complexity has worked well in practice for many affine loop nests in which reversals are not a prerequisite for efficient parallelization. For the important class of computations we consider here, the trade-off is a limitation: the required reversals are not part of the space of valid transformations. Other transformation algorithms, like those of Feautrier [11,12], also avoid negative values in their coefficients. Such algorithms are moreover designed to extract the maximal amount of fine-grained parallelism by greedily satisfying dependences as early as possible, a design that is incompatible with time tiling. This limitation in Pluto was recently addressed by Bondhugula and Cohen [6], extending it to include transformations that allow reversals and negative parametric shifts in conjunction with other transformations. Since the objective function itself is untouched, index set splitting only enlarges the space of transformations; transformations that were available in the unsplit space remain included.

Overall impact.
Note that our technique kicks in only when there are long dependences in both directions along a dimension.We make three observations here: (1) this is sufficient to deal with all stencils on periodic domains, (2) there is obviously no loss of good transformations in cases where the index set splitting does not succeed, and (3) all transformations that were valid on the unsplit index set are also naturally valid on the split index sets.We thus conclude that the approach has no detrimental impact on cases that lie outside the domain for which this technique has been developed.If our technique is applied even when there are long dependences in one direction, the index set splitting may still lead to better parallelization even though tiling was already valid.Evaluating this is out of the scope of this work and is left for future. Complementary transformations Vectorization is key to obtain good single thread performance for stencils [15,16].We rely on the native compiler to perform it.To this end, we only make sure that the generated code is preconditioned for good automatic vectorization. Once dependences are shortened and the code tiled, one need not maintain the execution order implied within a tile, i.e., the split sets can be freely reordered within a tile even if it makes the dependences longer again.This is because the dependence would only be longer inside the tile, preserving the validity of the tiling.Such reordering is helpful for vectorization, prefetching, better cache capacity use within a tile, and register tiling. Note that the split index sets all use different data for the most part except at the boundaries.Hence, we make the following changes to the schedule: 1. Reverse the reversed split index sets back so that we always have a positive stride.This helps vectorization as well as prefetching. 2. Separate out the split index sets at the tile level so that the entire cache capacity is used for each split index set independently, without interleaving.This ensures we do not artificially mix working sets, and that we keep tile sizes as large as possible.This also reduces cache conflicts and pollution among the split index sets. Both of the above optimizations significantly improve single thread performance while preserving the benefits of tiling and parallel scaling.Techniques we described have been implemented into Pluto [22].Experimental evaluation was performed on two different multi-way SMP multicore setups: an Intel Xeon SMP system and an Opteron SMP one.Table 1 lists their hardware specification.Intel's C, C++, and Fortran compilers version 12.1.3were used for all experiments, including for compiling codes we automatically generated. EXPERIMENTS The SPEC CPU2000fp swim benchmark (171.swim) is a weather prediction application that performs finite difference modeling of shallow water equations.It involves periodic boundary conditions on a two dimensional grid.Given that swim is part of the SPEC benchmarks, performance of code generated by a production compiler like icc is expected to be highly competitive and a strong reference point.There is no hand-optimized time-tiled code available for swim from prior art.Compiler flags used with ifort were "-O3 -ipo".We experimented with other combinations and found these to be the best.Most scores reported on spec.org for swim also use these flags for both base and peak tuning configurations.The Pochoir domain-specific compiler could not be used to specify such computation as explained in the next section. 
Besides swim, we use three other representative periodic stencil benchmarks, heat-1dp, heat-2dp, and heat-3dp, from the Pochoir suite [33].Problem sizes used are provided in Table 2.For swim, the reference input that the benchmark is required to be reported with was used-it specifies a 2-d grid of size 1335 × 1335 with an outer time loop of 800 iterations.For the heat benchmarks, problem sizes used are from the Pochoir suite and are meaningful for the respective computations.Performance for all is compared with Intel's compiler as well as the Pochoir stencil compiler [33] (version 0.5) that is publicly available. Choice of benchmarks and coverage. We argue that these benchmarks indeed comprehensively cover the domain of interest.Firstly, all realistic grid dimensionalities are covered.Other variations in input in this domain could come from a different width for the stencil, i.e., a different number of neighbors.However, this only affects the skewing factor needed to perform the tiling.For example, in Figure 4b, the skewing factor is one.The structure of the code and all other transformations and their effects remain the same.In addition, we did not find using different problem sizes or a different computation for the actual point update providing any additional insights.All data sets are significantly larger than the L3 cache. icc-par or ifort-par refers to code auto-parallelized with Intel's C or Fortran compiler respectively, using the '-parallel' flag in addition to the flags specified in Table 1, while icc-seq refers to the same without auto-parallelization.poly-diamond refers to code we generate that is time tiled using diamond tiling while poly-pipeline is tiled with parallelogram shaped tiles.poly-pipeline suffers from pipelined startup and drain and thus load imbalance, while diamond tiling allows concurrent startup enabling maximal parallelism; both enable reuse along the time dimension.The tile sizes for polydiamond are set to maximize locality and single thread performance.However, those for poly-pipeline are set to guarantee a sufficient number of tiles on the wavefront in the steady-state of the pipeline to keep all processors busy.Table 3 and Table 4 show the performance of different tiled versions, and compare them with pochoir and the native compiler's auto-parallelization. Table 11 shows scaling for heat-2d periodic on the AMD Opteron.Overall, a very big improvement is seen over icc-par as the latter is not expected to time tile such stencils.Lack of time tiling makes the code memory bandwidth-bound yielding no or limited speedup in spite of parallelization.Due to better locality from time tiling, poly-diamond code incurs less memorybandwidth per core, and the improvement with it increases with the number of cores.In some cases, the scaling with poly-diamond is not close to ideal since all implementations tend to get memorybandwidth-bound for a large number of cores.However, the improvement is still very significant.Improvements are higher for lower dimensional stencils than for higher ones as the spatial reuse is lower for the former. Except in one case, an improvement of 6% to 4× is seen over the Pochoir stencil compiler that is able to tile in the presence of periodicity, though with a different tiling strategy.The mean (geometric) speedup over it on the Intel and Opteron systems is 1.42× and 1.5× respectively.These performance improvements over Pochoir are similar to those observed for non-periodic stencils by Bandishti et al. 
[3].Figures 10a and 10b show improvement on the full swim benchmark.Our approach splits the data domain into four partitions as shown in Figure 9 before applying reversal, shifts, and skewing transformations.Time tiling allows nearly ideal scaling in contrast to ifort-par which scales poorly even when the number of cores is low.This behavior has indeed been expected without techniques to exploit reuse along the time dimension.Table 5 supports the claim that improved locality leads to higher performance and better scaling.With inner space loop tiling and parallelization alone, the computation incurs significantly higher number of cache misses and is memory bandwidth-bound.Both ifort-par and poly-diamond utilize all cores as was reflected from the CPU utilization.polypipeline suffers from load imbalance due to a pipelined startup and drain phase.The running times of our generated diamond-tiled code for swim on the Intel system and the resulting SPEC rate of 761.67 that we achieve are also better than the highest ever publicly reported on spec.org-acrossall machines (as of 2013).A direct comparison with any of those numbers is however not possible since the machines for which the numbers were reported are different from ours. Figure 12 shows that time-tiled code for the periodic case provides roughly the same performance as the non-periodic ones.Note that the amount of computation for both periodic and non-periodic, given a particular grid size and number of time iterations, is the same.In one case, surprisingly, the periodic version performs better than the non-periodic.This clearly shows that the non-periodic one could have been optimized better.In general, the periodic stencil code has a more complex structure and is expected to only perform at most as well as the non-periodic one.More optimizations in the polyhedral code generator Cloog could simplify it for better optimization by the native compiler.Overall, these interactions have not yet been studied fully, and this not being the main focus of this paper, are planned for future. RELATED WORK Recent stencil optimization works that include some domainspecific ones [10,30,31,34,16] and compiler-based ones [29,39,18,7,3] do not optimize those with periodic boundary conditions.The Pochoir [33] stencil compiler is the only one, to the best of our knowledge, that supports periodic conditions while applying the optimizations within the scope of this paper.Results indicate that Pochoir is able to perform time tiling via trapezoids regardless of the presence of periodic conditions, but the generated code is not as efficient as with our technique, as discussed in Section 6.We could not find a way to write multiple inter-related stencil computations with Pochoir, and hence the SPEC benchmark swim could not be expressed with it.However, ours being a general-purpose compiler approach driven by data dependences naturally handles such code.Of course, domain-specific optimization efforts have an opportunity to generate better code due to the greater amount of information they have about the problem, and our framework is suitable for integration into domain-specific stencil compilers. 
Index-set splitting [14] and iteration space slicing [23] are transformations that partitions iteration domains into smaller sub-domains.This in turn allows different scheduling functions for different pieces of the program and results in more freedom.These seminal works focus on minimizing the dimensionality and latency of admissible schedules.In this work we exploit the degrees of freedom offered by index-set splitting as well as the expressiveness of linear transformations to reduce folding to an index-set splitting problem followed by a dependence shortening transformation problem. Multiple tiling strategies have been devised to optimize stencil computations for shared and distributed memories.Originally, spatial decomposition through rectangular tiles is applied to the spatial dimensions.Spatial decomposition has the advantage of being simple to achieve but does not exhibit temporal reuse.The 171.swim SPEC CPU2000fp benchmark implements a well-known shallow water simulation model [32,27]; an earlier version with a smaller data set was already included in the SPEC CPU1995fp suite.It lends itself well to spatial decomposition.However, spatial decomposition alone is not sufficient to reduce the memory-bandwidth consumption of the simulation model.As shown in an earlier work on semi-automatic loop nest optimization, the swim benchmark is amenable to loop fusion across one iteration of the time loop.Such polyhedral-enhanced fusion improves temporal locality and achieved 34% speedup on single-threaded execution [9,13].But despite much progress in production and research compilers since 1995, and despite the promises of a boost in the overall SPEC CPU score, time tiling remained inaccessible for the swim benchmark. Time tiling was proposed to aggregate multiple time iterations and increase temporal reuse compared to tiling only in the data space [39].Time tiling has roots in Lamport's hyperplane method [19] and is the most widely implemented technique within polyhedral transformation tools and compilers.Due to its reliance on loop skewing to extract parallel wavefronts of tiles, traditional time tiling suffered from two problems: (1) pipelined startup and shutdown phases in which some processors do not have work, and (2) loadimbalance due to insufficient number of tiles along each wavefront.For stencils implementing an explicit residual smoothing scheme such as Jacobi iterations, concurrent startup is possible [18] and results in asymptotically more parallelism than available with the traditional form of skewing-enabled time tiling.A successful tiling scheme which systematically exploits available parallelism is based on diamond tiling [3].Our contribution builds on these insights, and extends them to stencils with periodic boundary conditions.This results in asymptotically more parallelism and locality on stencils with boundary conditions than was previously available [40]. Choffrut and Culik [8] perform folding on two-dimensional systolic arrays eliminating long wires for connections between elements that are related by reflections and/or rotations.[24] hints at using reflections to find piece-wise linear schedules as opposed to schedules for tiling: however, we found the approach proposed to determine splits itself to be incomplete and preliminary in its description, and very limited in its applicability.Yaacoby et al. 
[42] presents an algorithm on "uniformizing" dependences in affine recurrence equations in the context of systolic array synthesis through generalized folding.Though the method is unique because of its use of images of dependences and the characterization of affine recurrence equations which can be uniformized, its practical application and subsequent scalability is limited by its reliance on closures of dependence maps, eigenvalues and cycles in the dependence graph.Also, the formalism as described does not capture long dependences across boundaries-this is needed to derive folding for periodic stencils.Overall, our approach is inspired by folding, but, for the problem of tiling and parallelization for the domain of interest here, is more general and made possible by reasoning through index set splitting for dependence shortening.It is also far more robust and resilient to variations in dependence patterns, as argued towards the end of Section 4: it was made possible by minimizing the upper bound on the distance of the splitting hyperplane from the mid-points of long dependences.It thus subsumes reflections.Our approach can also seamlessly deal with any grid dimensionality as opposed to only up to two-dimensional as in the case of [8]. CONCLUSIONS We introduced an automatic method to optimizing time-iterated computations on periodic domains.Our method relies on an original index set splitting scheme.The scheme allowed us to transparently apply tiling transformations with the existing objective function used in Pluto.Experimental results on the swim SPEC CPU2000fp benchmark showed a speedup of nearly 5× over the highest performance achieved by a highly tuned commercial production compiler.We are not aware of any SPEC numbers for swim that come close to this result, obtained through either manual or automatic means.On other representative stencil computations, our scheme provides performance similar to that achieved with no periodicity.In addition, our technique always matches or outperforms-by up to 4×-a domain-specific stencil capable of handling periodicity in simpler cases.Our method is implemented in an open source research compiler and is available [22]. These results are not only interesting for computational sciences, but also excellent news for programming language and compiler designers.We conclude that it is practically infeasible to manually reproduce the optimizations we performed on swim or any other periodic stencil, especially on a two-dimensional or higher data grid.On the other hand, advanced tools can deal with this complexity, opening dimensions of program optimization that have so far been practically out of the reach of domain experts. 
Figure 5: Partial tiles (in yellow) can be merged
Figure 6: Cut and paste over diamond tiling
Figure 9: Index set splitting and piece-wise scheduling: iterations are partitioned into 4 pieces by cutting along the dashed lines (2 time steps shown); interleaving the pieces (shown on the right) results in a space with short dependences only
Figure 10: Performance on the swim benchmark from SPEC CPU2000fp (2-way SMP AMD Opteron, 16 cores)
Figure: Periodic heat-2d scaling on the Opteron system
Table 1: Details of architectures used for experiments
Table 2: Problem sizes for benchmarks (grid × time steps)
Table 3: Running times and speedup with poly-diamond on the Intel Xeon multicore SMP
Table 4: Running times and speedup with poly-diamond on the AMD Opteron multicore SMP
Table 5: Performance counters comparing ifort-par with poly-diamond for swim on 12 cores on the Intel multicore
Improved Correlated Multiple Sampling by Using Interleaved Pixel Source Follower for High-Resolution and High-Framerate CMOS Image Sensor

This article describes an improvement in the noise reduction performance of a column correlated multiple sampling (CMS) readout circuit using an interleaved pixel source follower for high-resolution and high-framerate CMOS image sensors (CISs). In this architecture, the time-interleaved operation of two pixel source followers reduces the restriction imposed by the settling time of the pixel source followers and extends the time available for multiple sampling. The noise analysis indicates that this method enhances noise reduction not only for thermal noise but also for 1/f noise when a high-speed readout operation is required. Measurement of the noise performance of an 8K image sensor using CMS with the interleaved pixel source follower method exhibits a low input-referred noise of 3.2 e− at 8K 120 frames per second, versus 4.6 e− with the conventional single-source-follower readout method. The measurement results match reasonably well with the analysis presented in this article, demonstrating the effectiveness of the interleaved pixel source follower method for high-resolution and high-framerate CISs.

Ultra-high-definition television standards specify spatial resolutions of up to 8K and frame frequencies of up to 120 frames per second (fps). One of the most challenging tasks for such high-resolution and high-framerate image sensors is to achieve both high-speed readout and low-noise performance. For example, the single-row time of an 8K 120 fps progressive-scan image sensor, which corresponds to the readout time for a single row in a column-parallel readout circuit architecture, is 1.85 μs (including the vertical blanking interval), much shorter than that of high-definition television (HDTV) 1080i (29.6 μs) [3]. In addition, a large number of pixels requires a small pixel pitch, which degrades the signal-to-noise ratio. The correlated multiple sampling (CMS) technique has been studied as a promising noise reduction technique for achieving low-noise performance in CMOS image sensors (CISs) [4]-[6]. CMS is advantageous owing to its ability to efficiently reduce both the thermal and 1/f noise of the pixel source follower amplifiers, which are known to be among the major noise components in well-designed low-noise CISs [7]. In the past few years, column-parallel analog-to-digital converters (ADCs) have been implemented with the CMS technique in various ways, including digital implementations with multiple A/D conversions based on single-slope (SS) [8], [9] and successive approximation register (SAR) [10] ADCs, and analog implementations with a passive switched-capacitor (SC) circuit [11], [12] and an SC integrator circuit [13], [14]. However, achieving both the high-speed readout and the low noise required in high-resolution and high-framerate CISs is difficult for a CMS readout circuit; indeed, CMS has not been used in CISs that meet both a pixel count above 8 Mpixel and a framerate above 60 fps. One of the reasons is the restriction imposed by the settling time of the pixel source follower: since multiple sampling must be performed after the pixel source follower output has settled, a part of the readout time is devoted to the settling of the pixel source follower.
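As a quick sanity check on the row-time figures quoted above (our arithmetic; the total line counts, which include vertical blanking, are assumed values chosen to reproduce the quoted numbers):

```c
#include <stdio.h>

/* Single-row readout time = 1 / (frame rate x total lines per frame),
 * with blanking lines included in the total. 4500 lines for 8K/120 and
 * 562.5 line periods per 1080i field are our assumptions, picked to
 * match the 1.85 us and 29.6 us figures in the text. */
int main(void) {
    printf("8K 120 fps : %.2f us per row\n", 1e6 / (120.0 * 4500.0));
    printf("HDTV 1080i : %.2f us per row\n", 1e6 / (60.0 * 562.5));
    return 0;
}
```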
In particular, for high-resolution and high-framerate CISs, the readout time is limited to a short period, and the settling time of the pixel source follower tends to be long because of the parasitic capacitance due to the large number of vertical pixels. These factors make it difficult to secure the time required for multiple sampling. To overcome this difficulty, we previously implemented a circuit topology with interleaved pixel source followers in a column CMS readout circuit [15]. In this architecture, the two pixel source followers work in parallel at different phases: the output of one is multiple-sampled while the output of the other is settling. This time-interleaved operation reduces the restriction imposed by the settling time of the pixel source followers and extends the time available for multiple sampling. We applied this method to a column-parallel readout circuit in an 8K image sensor and achieved a random noise of 3.2 e− at a readout time of 0.93 μs [8K 120 fps operation with digital correlated double sampling (CDS)] [15].

However, the precise contribution of the interleaved pixel source follower method has not been evaluated. This method enhances the noise reduction effect especially for thermal noise, as thermal noise is known to be reduced by a factor equal to the square root of the sampling number [16]. In contrast, the reduction effect for 1/f noise can be degraded even when the sampling number is increased: increasing the sampling number certainly reduces the 1/f noise, but the parallel operation of the pixel source followers simultaneously increases the interval between the reset and signal sampling, which is known to limit the reduction effect for 1/f noise [5]. These conflicting effects are influenced by the readout speed, because both the increase in the sampling number and the interval between the reset and signal sampling depend strongly on the readout speed. Therefore, to clarify the precise contribution of the interleaved pixel source follower method, both the thermal and 1/f noise reduction effects must be discussed further, considering their dependence on the readout speed.

In this article, the impact of the interleaved pixel source follower method on both thermal and 1/f noise is theoretically analyzed, and its dependence on the operation speed is discussed. To verify the theoretical analysis, we measured the noise performance of an 8K image sensor implemented with the interleaved pixel source follower method and compared it with the theoretical calculations. The contribution of the interleaved pixel source follower method to the noise performance of high-resolution and high-framerate CISs is demonstrated.

II. THEORETICAL NOISE ANALYSIS

In this section, the noise reduction effect of the interleaved pixel source follower method on both thermal and 1/f noise is theoretically analyzed and compared with that of the conventional method with regard to the dependence on the operation speed. In the following discussion, a column CMS readout circuit implemented with an SC integrator, a column ADC, and a digital CDS circuit is considered as a typical example of the architecture.

A. Architecture

The schematic and timing diagrams of a conventional CMS readout circuit are shown in Fig. 1. The reset and signal levels of the source follower output (V_R and V_S, respectively) are sampled and integrated M times by the SC integrator.
The column ADC converts the output voltages of the SC integrator into the digital code. The digital CDS circuit takes the difference between the digital code of V R and V S . T H , T 0 , and T sett are the readout time for a single row, the sampling period of the multiple sampling, and the settling time of the pixel source follower, respectively. For simplicity, T sett and T H /2 are assumed to be integral multiples of T 0 . Subsequently, considering the time constraint, the sampling number M and the interval of the reset and signal multiple sampling T g can be expressed as As shown in (1), a part of the readout time is devoted to the settling of the pixel source follower. Therefore, shorter T H and longer T sett , which are required for higher resolution and framerate CISs, result in a smaller M. This relationship makes it difficult to implement the CMS technique for high-resolution and high-framerate CISs. Fig. 2 shows a schematic and a timing diagram of a readout circuit using the interleaved pixel source follower method. Each pixel column has two source followers (SFA and SFB). Column vertical pixels are divided line by line into pixels A and B, and they are alternately connected to the two source followers. The two source followers work in parallel, and the phase of their operation differs by T H /2. The outputs of the source followers are connected to the SC integrator via selector switches ( SFA and SFB ). Subsequently, the SC integrator receives the reset levels of pixels A and B (V RA and V RB ), followed by their signal levels (V SA and V SB ). The digital CDS circuit takes the difference between the digital code of V SA and V RA and between that of V SB and V RB using a set of two registers. In this architecture, the input voltages of the SC integrator have already been settled before connecting to the SC integrator, which reduces the waiting time for settling. We call this topology the "interleaved pixel source follower method." T t is the minimum time interval between the last Mth sampling point and the first sampling point of the next multiple sampling of the SC integrator. This is the time that is required to operate the following operations: integrating the Mth sampled signal, column ADC sampling, resetting the SC integrator output, and charging the input capacitor for the first sampling of the next multiple sampling. Notably, T t < T sett /2 is required to achieve M = 1. When T g = T sett (4) and Note that T t is assumed to be an integral multiple of T 0 for simplicity. B. Thermal Noise Reduction Effect In the CMS readout circuit, the thermal noise is reduced by a factor equal to the square root of the sampling number owing to the averaging effect of multiple sampling [16]. Therefore, the enhancement of the noise reduction effect caused by the interleaved source follower can be calculated by the increase in M from the conventional method. In the conventional method, because a part of the readout time is devoted to the settling of the pixel source follower, M decreases as T sett increases, as shown in (1). In the interleaved pixel source follower method, T t has the same relationship with M as T sett has in the conventional method when T H /2 ≥ T sett − T t , as shown in (5). In high-resolution CISs, T t < T sett is supposed to be because of the large parasitic capacitance as a large number of vertical pixels strongly limits the bandwidth of the pixel source follower, which results in a large T sett . 
However, T t can be optimized by the SC integrator design regardless of the number of vertical pixels. The amount of increase in M from the conventional method to the interleaved pixel source follower method, that is, M, is obtained using (1) and (5) and is expressed as Here, T H /2 ≥ T sett −T t is assumed. Fig. 3 compares the thermal noise reduction effect of the conventional and interleaved pixel source follower methods. The term 1/ √ M is plotted as a function of T H /T 0 to discuss their dependence on the readout time. Here, T 0 , T sett , and T t are treated as constants, and T H is treated as a variable. T sett = 7T 0 and T t = 3T 0 are assumed as examples, which correspond to the actual value designed for the 8K image sensor described in Section III-A. T sett , T t , and T 0 strongly depend on the pixel rate and various design constraints, such as power consumption and circuit area. M is calculated as (4) from (7). The markers in Fig. 3 C. 1/f Noise Reduction Effect T g is constant (=T sett ) regardless of T H in the conventional method, while it increases with T H in the interleaved pixel source follower method. This can lead to the degradation of the noise reduction effect for the 1/ f noise component in the interleaved pixel source follower method. The noise reduction effect of the CMS for 1/ f noise can be expressed by the noise reduction factor F CMS defined by [5] with the definition of x c = ω c T 0 and x = ωT 0 , where ω c is the cutoff angular frequency determining the bandwidth of the noise components, and M g is an integer defined by M g = T g /T 0 . F CMS can be approximated by a noise reduction factor of the differential averager F DA expressed by [17] where R G is the ratio of M g to M (R G = M g /M). This approximation is useful for calculating F CMS as a function of only R G without numerical calculation of (8). For a large M, F CMS can be exactly approximated by F DA . R G of the conventional and interleaved pixel source follower methods (R G,S and R G,D , respectively) can be obtained from (1), (2), (5), and (6) as expressed by Here, T H /2 ≥ T sett − T t is assumed. At the point of T H satisfying R G,S = R G,D , F CMS of the two methods can be approximated as roughly equal using (9). For T H /T 0 1 (M 1), it is supposed that R G,S ∼ = 0 and R G,D ∼ = 1; (8), is plotted as a function of T H /T 0 . T sett = 7T 0 , T t = 3T 0 (the same as in Fig. 3), and x c = 16 are assumed as an example. T H that meets R G,S = R G,D (denoted as T HX ) is plotted as the vertical line of T HX /T 0 ( ∼ =27). The point of intersection of the two lines approaches the line of T HX /T 0 . When T H /T 0 < T HX /T 0 , a higher noise reduction effect can be obtained for the interleaved pixel source follower method. In contrast, when T H /T 0 > T HX /T 0 , the interleaved pixel source follower method shows an inferior reduction effect for the 1/ f noise compared with the conventional method. These are caused by the improvement effect by M that becomes smaller while the degradation caused by T g increases with increasing T H /T 0 . For T H /T 0 1, F CMS of the conventional and interleaved pixel source follower methods saturates to 4ln(2) ∼ = 2.77 and 9ln(3) − 8ln(2) ∼ = 4.34, respectively, which are equal to the saturation values approximated above using (9). D. Applicability An overall noise reduction effect can be obtained by applying the reduction factors 1/ √ M and F CMS for thermal and 1/ f noise components, respectively, and summing them. 
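Since the expressions (1), (5), and (7) do not survive in this copy, the sketch below evaluates the sampling counts and the thermal-noise factor 1/√M under our reconstruction of those relations: one additional sample per T_0 of half-row time left after the settling time T_sett (conventional) or after the integrator turnaround T_t (interleaved), with all times assumed to be integral multiples of T_0, and using the text's example values T_sett = 7·T_0 and T_t = 3·T_0. These reconstructed formulas are our assumption, not the article's equations.

```c
#include <math.h>
#include <stdio.h>

/* All times in units of T0. The assumed forms of the elided (1) and (5):
 * M_conv = (TH/2 - Tsett) + 1  and  M_intl = (TH/2 - Tt) + 1, so the
 * gain dM = Tsett - Tt = 4 samples, matching the text's example. */
enum { TSETT = 7, TT = 3 };

static int M_conv(int th) { return th / 2 - TSETT + 1; }
static int M_intl(int th) { return th / 2 - TT + 1; }

int main(void) {
    for (int th = 16; th <= 64; th *= 2) {
        int ms = M_conv(th), md = M_intl(th);
        printf("TH=%2d*T0: M=%2d vs %2d (dM=%d), 1/sqrt(M): %.3f vs %.3f\n",
               th, ms, md, md - ms, 1.0 / sqrt(ms), 1.0 / sqrt(md));
    }
    return 0;
}
```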
For low-speed CISs, where the thermal noise can be effectively suppressed by an efficient sampling number of the CMS or the bandwidth limitation effect of the low-speed readout circuit, the 1/ f noise could become a major noise component [18]. Under this condition, the interleaved pixel source follower method does not work effectively because the enhancement, owing to M, is small, and the degradation in 1/ f noise reduction that is caused by the increase in T g becomes large. In contrast, for high-speed CISs, only a few sampling numbers can be obtained, and high bandwidth is required for the readout circuit, which leads to large thermal noise components. Under this condition, the interleaved pixel source follower method is effective because the thermal noise reduction effects are effectively enhanced by M. Especially for high-resolution CIS, as shown in (7), M tends to be large because of the large difference between T sett and T t , which results in a large enhancement of the noise reduction effect. In addition, from the discussion on the 1/ f noise reduction effect, if T H is shorter than T HX , the interleaved pixel source follower method enhances the noise reduction effect even for 1/ f noise components. Therefore, M and T HX provide clear indications of the applicability. We conclude that the interleaved pixel source follower method effectively reduces noise at high-resolution and high-framerate CISs. A. Implementation of 8K Image Sensor The schematic and timing diagrams of an 8K image sensor readout circuit with an implemented interleaved pixel source follower method are shown in Figs. 5 and 6, respectively. The image sensor has a pixel array with 2.1-μm pixel pitch (effective area: 4.320 pixels × 7.680 pixels). The vertically aligned two-shared pixels are connected alternately to the two source followers. The outputs of the source followers are connected to an analog column CDS circuit via two switches, which is for level adjustment and amplification of the pixel source follower output. Subsequently, the CDS circuit receives the reset levels of pixels 1 and 3 (V R1 and V R3 ), followed by their signal levels (V S1 and V S3 ). Once V R1 is received, capacitor C f is reset by R , and the output voltage of the CDS (V CDS ) is set to a common voltage V com . Thus, the output where G A is the gain of the column CDS (G A = C i/ C f = 2). These voltages are sampled and integrated M times and converted into digital code by the three-stage pipelined ADC, consisting of folding integration (FI) and cyclic, SAR ADC. The FI ADC has an SC integrator that executes multiple sampling. The digital CDS circuit takes the difference between the first and third outputs and between the second and fourth outputs using two sets of registers. Then, the final CMS output codes of G A (V S1 -V R1 ) and G A (V S3 -V R3 ) are obtained. In this implementation, a margin, corresponding to the maximum difference between V R1 and V R3 , is needed for the lower limit of the column ADC input voltage range. This is required because if V R3 -V R1 is negative, then V com + G A (V R3 -V R1 ) is lower than the reference level of the CDS output (V com ). Output voltage clipping circuits are used in the pixel source followers to avoid abnormal voltage fluctuations in V R1 and V R3 . The difference between the two pixel source followers' gains or linearity error of the column ADC can cause horizontal striped artifacts, which can be reduced by applying signal corrections in the digital domain. Fig. 
Fig. 7 shows the (a) circuit diagram and (b) timing diagram of the FI ADC. The FI ADC consists of an SC integrator, a comparator, a counter, and a negative feedback path with a 1-bit digital-to-analog converter (DAC). This ADC works as a first-order incremental delta-sigma modulator. The counter output is treated as the higher bits, and the residue voltage of the SC integrator is converted into the lower bits by the following ADC. This type of A/D conversion is also known as an extended counting ADC [19], [20]. Duplicated sampling capacitors (C_1A and C_1B) are implemented to obtain high-speed operation. In the first sampling phase, C_1A and C_2 are charged by the input voltage, and the voltage across C_2 is compared with V_RC by the comparator. The sampled charge of C_1A is transferred to C_2 so that the integration gain is G_I + 1, where G_I = C_1A/C_2 = C_1B/C_2 = 1/2. In the second sampling phase, the charge sampled on C_1A is transferred to C_2, and a reference voltage (V_RL or V_RH) for the 1-bit DAC is subtracted; meanwhile, C_1B simultaneously samples the input voltage. In the following phases, the ADC works in the same manner as in the second sampling phase, with the roles of C_1A and C_1B switched. The input voltage is amplified by a gain of G_I·M + 1, and the output voltage is kept within the limited range from V_RL to V_RH.

B. Noise Component Analysis in the Readout Circuit

The noise components of the 8K image sensor are identified in this section. For this purpose, the noise characteristics of the signal readout chain from the pixel to the column ADC were analyzed based on the model presented in [5], with some modifications. The noise power referred to the CMS output can be expressed as the sum of the following components: P_n,rst, the reset noise component of the integrator; P_nT,smpl and P_nF,smpl, the thermal and 1/f noise components in the sampling phase, respectively; P_nT,trns and P_nF,trns, the thermal and 1/f noise components in the signal charge transfer phase, respectively; and P_nT,ADC, the thermal noise component of the column ADC sampling. P_n,rst, P_nT,trns, P_nF,trns, and P_nT,ADC can be calculated using the same model presented in [5]. In this analysis, P_nF,trns and the 1/f noise due to the amplifier of the FI ADC are ignored because the amplifier uses relatively large transistors with low 1/f noise. P_nT,smpl and P_nF,smpl must be modified to include the effect of the first sampling and the noise components of the column CDS and column bias circuit. P_nT,smpl is modified as P*_nT,smpl = P_nT,SF + P_nT,Bias + P_nT,CDS, where P_nT,SF, P_nT,Bias, and P_nT,CDS are the noise components generated by the source follower, the column bias circuit, and the column CDS, respectively. The noise bandwidth limitations of the source follower and column bias circuit are assumed to be independent of the FI ADC sampling capacitor because the cutoff frequencies of the source follower, ω_cSF, and of the column bias circuit, ω_cB, are lower than that of the CDS circuit. Thus, P_nT,SF and P_nT,Bias can be calculated from the corresponding expressions, where G_nSF is the noise gain factor of the source follower [5]; ξ_SF and ξ_B are the excess thermal noise factors of the source follower and the column bias circuit, respectively; g_mSF, g_mCS, and g_mBias are the transconductances of transistors M1, M6, and M7 shown in Fig. 5; and H*_CMS(ω) is the CMS transfer function.
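A small behavioral model can make the extended-counting operation concrete. The sketch below assumes a comparator threshold and DAC step that the paper does not specify; it only illustrates the structure (integrate, compare, subtract, count, keep the residue).

```python
def fi_adc(vin, m, gi=0.5, vrl=0.0, vrh=1.0, vrc=0.5):
    """Behavioral sketch of the FI ADC as a first-order incremental modulator.

    gi is the integration gain C1/C2 = 1/2; the threshold vrc and the exact
    DAC step are assumptions, not values taken from the paper.
    """
    acc = (gi + 1.0) * vin        # first phase: C2 charged directly + C1A transfer
    count = 0
    for _ in range(m - 1):        # later phases (C1A and C1B alternate roles)
        if acc > vrc:             # 1-bit quantizer decides the DAC reference
            acc -= gi * (vrh - vrl)   # subtract reference via the 1-bit DAC
            count += 1
        acc += gi * vin           # transfer the newly sampled input charge
    # 'count' gives the higher bits; 'acc' is the residue for the next stage
    return count, acc

m = 6
for vin in (0.1, 0.3, 0.5):
    c, r = fi_adc(vin, m)
    # reconstruction: counts weighted by the DAC step plus the residue,
    # divided by the overall gain gi*m + 1; recovers vin exactly here
    est = (c * 0.5 * (1.0 - 0.0) + r) / (0.5 * m + 1.0)
    print(vin, c, round(r, 3), round(est, 3))
```

The residue stays bounded while the counter absorbs the coarse information, which is exactly why the following cyclic/SAR stages only need to digitize a limited voltage range.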
From the sampling timing and integration gain of the FI ADC, H*_CMS is obtained with the z-transform and is expressed in the z-domain, where M_g is an integer defined by M_g = T_g/T_0, and T_g and T_0 are the interval between the reset and signal multiple samplings and the sampling period, respectively. With z = exp(jωT_0), the noise power transfer function of the CMS, |H*_CMS(ω)|², is obtained. The same calculation cannot be applied to P_nT,CDS because the CDS cutoff frequency is affected by the sampling capacitance of the FI ADC. However, the sampled noise of the first sampling (P_n,CDS1) and of the second to Mth samplings (P_n,CDS2) can be calculated separately, and P_nT,CDS is obtained by summing them, where ξ_CA, β_A, and C_L are the excess thermal noise factor, feedback factor, and load capacitance of the amplifier used for the CDS circuit in the Mth sampling phase. P_nF,smpl is modified accordingly, where K_fSF is the flicker noise coefficient and ς_SF is the flicker noise factor of the source follower. The total noise power referred to the output of the readout circuit is given by P_n,total = P_n,rst + P*_nT,smpl + P*_nF,smpl + P_nT,trns + P_nT,ADC. The gain from the charge generated in the photodiode to the output is G_cSF·G_A·(G_I·M + 1), where G_cSF is the conversion gain of the pixel source follower; the input-referred noise is obtained by dividing the square root of P_n,total by this gain.

C. Experimental Setup

The sampling timings used for the measurement are shown in Table I. Readout times for a single row, T_H, of 1.85, 3.70, and 7.41 μs, corresponding to framerates of 120, 60, and 30 fps, respectively, are used for the measurement. The conventional readout operation was emulated using the same readout circuit for comparison with the conventional method. Fig. 8 shows the readout timing used for emulating the conventional readout operation; the same operation is conducted for pixels 2 and 4. The two pixel source followers were operated one by one. The select switches (φ_SFA and φ_SFB) first select SFA, and the reset and signal levels of pixel 1 are read out in the period T_H. Subsequently, the select switches select SFB, and the reset and signal levels of pixel 3 are read out in the next period T_H. The column CDS circuit receives the reset and signal levels of the pixels sequentially, and the digital CDS circuit takes the difference between the two consecutive codes. Note that the same readout operation can be performed with a single pixel source follower and a single register in the digital CDS circuit of a conventional readout circuit. T_0, T_t, and T_sett were 0.11, 0.27, and 0.81 μs, respectively. These parameters were determined by the bandwidths of the pixel source follower and the SC integrator in the FI ADC. M obtained with the interleaved pixel source follower method is ΔM (= 4) larger than that of the conventional method for the same T_H. T_g is equal to T_sett (= 0.81 μs) regardless of T_H in the conventional readout method; however, it increases in proportion to T_H in the interleaved pixel source follower method. To measure the dark noise of the 8K image sensor, 100 × 100 extracted pixels were used to analyze the random noise; the root-mean-square noise value was calculated for each pixel, and the median value over the pixels was treated as the random noise.
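The dark-noise figure of merit described here (per-pixel rms over repeated frames, then the median across the pixel crop) is easy to express in code. The sketch below uses synthetic frames; the 100 × 100 crop follows the text, while the frame count and noise levels are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
# illustrative dark frames: 64 readouts of a 100 x 100 pixel crop, in digital
# numbers; per-pixel sigma varies to mimic pixel-to-pixel noise differences
sigma = rng.uniform(0.8, 1.6, size=(100, 100))
frames = rng.normal(0.0, sigma, size=(64, 100, 100))

per_pixel_rms = frames.std(axis=0, ddof=1)   # rms noise of each pixel
random_noise = np.median(per_pixel_rms)      # median over pixels, as in the text
print(round(float(random_noise), 3), "DN rms")
```

Taking the median rather than the mean keeps a few outlier (for example, RTS-affected) pixels from dominating the reported random-noise figure.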
Fig. 9 shows the calculated noise components as a function of M, where N_n,rst, N*_nT,smpl, N*_nF,smpl, N_nT,trns, and N_nT,ADC are the input-referred noise components of P_n,rst, P*_nT,smpl, P*_nF,smpl, P_nT,trns, and P_nT,ADC, respectively; their sum is plotted as N_n,total. Because N_n,rst, N*_nT,smpl, N_nT,trns, and N_nT,ADC show the same behavior for both the conventional and interleaved pixel source follower methods, they are plotted with common lines for the two methods; these components depend on M but are independent of T_g [5]. In contrast, N*_nF,smpl, which originates from the 1/f noise of the source follower amplifier, shows a different tendency for the two methods because the 1/f noise reduction effect is affected by T_g, as discussed in Section II. When the sampling number is small, N*_nT,smpl is the dominant component; however, it is suppressed as M increases. For large M, N*_nF,smpl becomes dominant because the noise reduction effect for 1/f noise saturates at large M. The saturation value of the interleaved pixel source follower method is higher than that of the conventional method, which corresponds to the degradation of the 1/f noise reduction effect caused by the interleaved pixel source follower method, as presented in Section II. The noise components N_n,rst, N_nT,trns, and N_nT,ADC are smaller than N*_nT,smpl and are effectively suppressed as M increases. From these results, N*_nT,smpl and N*_nF,smpl can be regarded as the main noise components that explain the behavior of the overall noise performance of the image sensor. Fig. 10 compares the measured input-referred noise of the conventional and interleaved pixel source follower methods as a function of T_H. The calculated N_n,total is plotted for comparison with the measured results, and N*_nT,smpl and N*_nF,smpl are plotted as its main components. At T_H = 1.85 μs, corresponding to 8K 120-fps operation, a low input-referred noise of 3.2 e− with M = 6 is obtained for the interleaved pixel source follower method, whereas the conventional method yields 4.6 e− with M = 2. The difference in input-referred noise between the two methods decreases as T_H increases. At T_H = 7.41 μs, corresponding to 8K 30-fps operation, 1.5 e− with M = 30 and 1.6 e− with M = 26 are obtained for the interleaved and conventional methods, respectively. These results can be explained by the analysis presented in Section II as follows. For high-speed readout operation (T_H = 1.85 μs), T_H is shorter than T_HX, so ΔM (= 4) strongly enhances the noise reduction for both the thermal and 1/f noise, which results in the low input-referred noise of the interleaved pixel source follower method. For relatively low-speed readout operation (T_H = 7.41 μs), the improvement effect of ΔM becomes small, and the degradation of the 1/f noise reduction effect caused by T_g becomes larger; therefore, the difference in input-referred noise between the two methods becomes small. N*_nT,smpl and N*_nF,smpl, the thermal and 1/f noise components in the sampling phase, respectively, show the behavior described above, and N_n,total, whose main components are N*_nT,smpl and N*_nF,smpl, agrees reasonably well with the measured results.
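Since the components in Fig. 9 are input-referred rms values of independent noise powers, the total is the root of the sum of squares. A one-line check, with illustrative numbers only:

```python
import math

# input-referred noise components in electrons rms (illustrative values of
# the kind plotted in Fig. 9; not the paper's actual numbers)
components = {"rst": 0.5, "T_smpl": 2.8, "F_smpl": 1.3, "trns": 0.4, "ADC": 0.6}

# independent noise sources add in power, so rms values add in quadrature
n_total = math.sqrt(sum(v * v for v in components.values()))
print(round(n_total, 2), "e- rms")
```

This also shows why the smaller components matter little: a term one fifth of the dominant one changes the quadrature sum by only a few percent.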
Thus, these results clearly demonstrate the contribution of the interleaved pixel source follower method to the improvement of noise performance in high-resolution and high-framerate CISs.

IV. CONCLUSION

This article described the noise reduction effect of a column CMS readout circuit with two interleaved pixel source followers and discussed its effectiveness for high-resolution and high-framerate CISs. The noise reduction analysis indicated that the increase in the sampling number owing to the interleaved pixel source followers tends to be large in high-resolution CISs, resulting in a large enhancement of the noise reduction. Furthermore, the interleaved pixel source follower method has the advantage of enhanced noise reduction performance not only for thermal noise but also for 1/f noise when high-speed readout operation is required. Measurement of the noise performance of the 8K image sensor implemented with the interleaved pixel source follower method showed a low input-referred noise of 3.2 e− at 8K 120-fps operation, compared with 4.6 e− for the conventional readout method. Furthermore, the dependence on readout speed and the difference between the two methods agreed reasonably well with the analysis presented in this article. These results demonstrate the effectiveness of the interleaved pixel source follower method and clarify its effect on the noise performance of high-resolution and high-framerate CISs.
7,274.4
2021-05-01T00:00:00.000
[ "Engineering", "Physics" ]
Pigmentation and Sporulation Are Alternative Cell Fates in Bacillus pumilus SF214 Bacillus pumilus SF214 is a spore-forming bacterium, isolated from a marine sample, able to produce a matrix and an orange-red, water-soluble pigment. Pigmentation is strictly regulated, and high pigment production was observed during the late stationary growth phase in a minimal medium and at growth temperatures lower than the optimum. Only a subpopulation of stationary-phase cells produced the pigment, indicating that the stationary culture contains a heterogeneous cell population and that pigment synthesis is a bimodal phenomenon. The fraction of cells producing the pigment varied in the different growth conditions, and production occurred only in cells not devoted to sporulation. Only some of the pigmented cells were also able to produce a matrix. Pigment and matrix production in SF214 thus appear as two developmental fates, both alternative to sporulation. Since the pigment had an essential role in cell resistance to oxidative stress conditions, we propose that within the heterogeneous population different survival strategies can be followed by the different cells.

Introduction

Spore-forming Bacilli are Gram-positive organisms characterized by the ability to differentiate the endospore (spore), a metabolically quiescent and extremely resistant cell type. The soil is generally indicated as the main habitat of Bacilli; however, spores have been found in many diverse environments, including rocks, dust, aquatic environments, and the gut of various insects and animals [1,2]. Such a wide environmental distribution is facilitated by the spore's ability to survive long-term absence of water and nutrients and to withstand extreme habitats that would kill other cell types [3]. Survival is due to the peculiar structure of the spore, which is formed by a dehydrated cytoplasm containing a condensed and inactive chromosome, and by a series of protective layers. The innermost layer is the peptidoglycan-rich cortex, which is itself surrounded by additional layers of proteinaceous material: the coat and, in some species, the exosporium [4,5]. Together these components protect the spore from UV radiation, extremes of heat or pH, and exposure to solvents, hydrogen peroxide, toxic chemicals, and lytic enzymes [3,6]. In the presence of water and appropriate nutrients the spore starts germination, a fast process during which the protective structures are removed and resumption of vegetative cell growth is allowed [3,4]. Spore formation depends upon environmental conditions that do not allow cell growth, such as a block of DNA replication and a decline of available nutrients [7]. In Bacillus subtilis, the model organism for spore formers, growing cells are mainly single and highly motile. When those dispersed cells reach the end of exponential growth they can follow alternative developmental pathways, with some cells forming long chains, producing a polymeric matrix rich in sugars and proteins (matrix), and assembling into multicellular biofilms, and others entering the irreversible program of spore formation [8,9,10]. Therefore, in dispersed cell populations matrix and spore production are mutually exclusive cell fates [8,11] and are both bimodal processes in which cells follow either one or the other pathway [12,13].
Both developmental cell fates are governed by a regulatory protein, Spo0A-P, that directly activates genes of the sporulation pathway [14] and indirectly acts on matrix synthesis, relieving the repression of genes for matrix production (the epsA-O and yqxM-sipW-tasA operons) [15,16,17]. Two mechanisms cooperate to make sporulation and matrix production mutually exclusive: a metabolic control mediated by the intracellular levels of Spo0A-P, and a chromosome copy number mechanism that prevents cells that have entered the sporulation pathway from expressing matrix genes [10]. Low levels of Spo0A-P induce matrix formation, while high levels of the phosphoprotein block matrix formation and activate sporulation. Therefore, in a sporulation-inducing medium, in which Spo0A-P levels rapidly rise, cells enter sporulation instead of forming a biofilm. Conversely, in a medium in which Spo0A-P remains at low levels, biofilm formation is promoted [10]. However, extracellular matrix production and sporulation are linked. KinD, a membrane histidine kinase that is part of the Spo0A phosphotransfer network, has been proposed to act as a checkpoint protein able to regulate the onset of sporulation by inhibiting Spo0A activity. KinD would alter its activity depending on the presence or absence of the extracellular matrix, thus affecting the selective functionality of the master regulator Spo0A in regulating the expression of genes involved in matrix production and sporulation [18]. Within a biofilm, different cell types coexist and display a high degree of spatiotemporal organization, with matrix-producing cells that ultimately differentiate into spores [8]. Another interesting feature of some Bacilli is the production of pigments. Isolates of several Bacillus species produce a wide variety of pigments, from spore-associated melanin-like molecules [19] to different types of carotenoids [20,21]. In some cases, those carotenoids have been characterized and proposed to provide resistance to UV irradiation and reactive oxygen species [20,21,22,23]. A pigmented strain of Bacillus pumilus, SF214, isolated from a marine sample, has been previously described [21]. SF214 is a moderately halophilic bacterium able to form a matrix and to produce an orange-to-red, water-soluble pigment, i.e., a pigment that cannot be partitioned into organic solvents but is retained in the aqueous phase [21]. The inability to partition this pigment into organic solvents, to resolve it by HPLC, and to obtain characteristic carotenoid UV/VIS spectra has precluded its definitive assignment as a carotenoid [21]. However, the spectral peak at 410 nm shown by aqueous extracts of SF214 [21] is likely to represent a protein-associated carotenoid, as previously described for carotenoproteins extracted from crawfish [24]. Here we report that in SF214 pigment production is a highly regulated process that occurs during the stationary growth phase only in cells not devoted to spore formation. Thus SF214 pigment production appears as a bimodal phenomenon alternative to sporulation, parallel to matrix biosynthesis, and essential to grant cell resistance to oxidative stress.

Results

Pigment Production is Dependent on Growth Phase, Temperature, and Medium

Synthesis of the water-soluble pigment produced by SF214 is a strictly regulated process, as it depends on the growth phase, temperature, and medium. Pigment production was strongly induced only 8-10 hours after the cells had entered the stationary growth phase at 37°C in rich (LB) medium (Fig. 1A).
Although SF214 is a mesophilic bacterium and its optimal growth temperature is 37°C, the maximal production of the pigment was observed at 25°C (Fig. 1B). Compared with cells grown at 25°C, a slightly decreased production of pigment was observed at 30°C, whereas more than 2-fold and about 6-fold decreased synthesis was observed at 37°C and at 42°C, respectively (Fig. 1B). The absorbance spectrum of cell extracts of SF214 between 300 and 500 nm [20] showed that cells grown at 25°C produced about 4-fold more pigment in a minimal (S7; black symbols in Fig. 1C) than in a rich (LB; gray symbols in Fig. 1C) medium, while in a sporulation-inducing (DS; white symbols in Fig. 1C) medium the synthesis of pigment was almost abolished.

Heterogeneity of Pigment Production

Previous reports have shown that carotenoids produced by the yeast Phaffia rhodozyma [25] or the halotolerant green alga Dunaliella salina [26] autofluoresce and that this property can be used to follow carotenoid production by fluorescence microscopy. We found that the water-soluble pigment of SF214 is also autofluorescent and that the fluorescence is not localized but rather diffuse in the cell cytoplasm. Interestingly, only some of the cells in the culture are fluorescent. Fig. 2 shows a representative microscopy field observed by phase contrast (left) and fluorescence microscopy, either following the autofluorescence (middle) or after DAPI staining (right). The enlarged panels of Fig. 2 clearly show that only some of the DAPI-stained cells were autofluorescent. The number of autofluorescent cells varied with the growth conditions (see below) but ranged between 20% in exponentially growing cells and 80% in stationary cells. Ghost-like cells, negative to DAPI staining and showing some autofluorescence, were not considered. It is interesting to observe in Fig. 2 a doublet of cells (white and grey arrows in the enlarged sections). Those two cells seem to be still partially attached and to derive from the same mother cell, following the last round of division before stationary phase. Only one of them (grey arrows) has switched to the 'pigment state' and is autofluorescent. Two lines of evidence support our conclusion that the observed autofluorescence was actually due to the water-soluble pigment: i) unpigmented Bacilli (including other isolates of B. pumilus) (not shown) and an unpigmented mutant of SF214 (described below) did not show any fluorescence under identical experimental conditions (Fig. 3); ii) the number of autofluorescent cells varied consistently with the variations in pigment production observed at the various growth phases, temperatures, and media. As shown in Fig. 4, the number of fluorescent cells was higher in stationary than in exponential cultures (left panels), in cells grown at 25°C than in cells grown at 37°C (middle panels), and in cells grown in minimal (S7) than in rich (LB) medium (right panels). For each condition considered in Fig. 4, different microscopy fields were analyzed and over 1,000 cells per condition were counted. This analysis indicated that the increased production of pigment observed depending upon growth phase, temperature, and medium is not due to a higher production of carotenoid by each producing cell but rather to an increased proportion of cells able to produce the pigment. Restriction of pigment synthesis to a subpopulation of cells indicates that late stationary cultures of SF214 contain a heterogeneous population of cells and that pigment formation is a bimodal process.
Pigment Synthesis Only Occurs in Cells not Devoted to Sporulation

Free spores as well as immature spores still contained within the mother cells are known to autofluoresce [27]. We observed that the fluorescence of sporangia containing an almost mature spore was always limited to the prespore. Fig. 5 shows a representative microscopy field with sporulating cells of SF214 observed by phase contrast (left), autofluorescence (middle), and the merged image (right): while only some cells autofluoresced with a fluorescence diffused in the cytoplasm, the fluorescence associated with sporangia containing an almost mature spore was confined to the forming spore, as no fluorescence was visible within the cytoplasm. This observation, together with the experiments reported in Fig. 1C indicating that SF214 cells did not produce the pigment when grown in a sporulation-inducing (DS) medium, suggests that pigment production in B. pumilus SF214 is mutually exclusive with spore formation. To better address this point we analyzed SF214 cells by pigment-driven autofluorescence (green) and by immunofluorescence with an anti-CotE primary antibody and a fluorescent secondary antibody (red). CotE is a spore coat protein [28], produced early during sporulation and known to localize on the spore surface [27]. For our analysis, an antibody raised against CotE of B. subtilis was used [29]. In a preliminary experiment this antibody was shown to react specifically against a protein of B. pumilus SF214 corresponding in size to CotE of B. subtilis (Fig. S1). Fig. 6 reports representative microscopy fields of fluorescence and immunofluorescence microscopy of SF214 cells grown in LB at 37°C up to the early stationary growth phase. In this analysis we observed that, similarly to what was observed in B. subtilis [27], B. pumilus CotE is localized around the forming spore, and that cells recognized by the anti-CotE antibody were never autofluorescent. We never observed yellow cells, which would have been indicative of cells producing both the pigment (green signal) and the spore-specific protein CotE (red signal) (see the merged panels of Fig. 6 for some examples). Therefore, based on the experiments of Figs. 5 and 6, we conclude that pigment synthesis and sporulation are alternative developmental pathways and occur in different cell subpopulations.

Matrix Synthesis Occurs Only in a Subpopulation of Pigmented Cells

In B. subtilis, sporulation and matrix formation are alternative developmental programmes [11]. Since SF214 also forms a matrix [21], we verified whether matrix formation and sporulation are also alternative in this bacterium and whether matrix and pigment synthesis can occur in the same cells. To this aim we analyzed SF214 cells by pigment-driven autofluorescence (green) and by immunofluorescence with an anti-TasA primary antibody and a fluorescent secondary antibody (red). TasA is a major protein component of the B. subtilis biofilm [15], encoded by the third gene of the yqxM-sipW-tasA operon [16,30]. For our analysis we used an antibody raised against TasA of B. subtilis (a gift of A. Driks). Preliminary experiments showed that a protein homologous to TasA of B. subtilis can be extracted from spores of strain SF214 and that this protein is recognized by the anti-TasA antibody (Fig. S2). The homology with the protein of SF214 starts at position 24 of TasA, which corresponds to the first amino acid residue of the mature form of TasA after the proteolytic maturation of pre-TasA [30,31] (Fig. S2).
Fig. 7 reports representative fields of fluorescence and immunofluorescence microscopy of SF214 cells grown in minimal (S7) medium at 25°C up to the early stationary phase. This analysis showed that: i) cells that were not autofluorescent and therefore devoted to sporulation (indicated by white arrows in Fig. 7A) were never recognized by the anti-TasA antibody, and ii) only about 80% of the autofluorescent cells (out of a total of approx. 1,500 cells counted in 6 different microscopy fields) were recognized by the anti-TasA antibody (yellow cells in Fig. 7). Panel B of Fig. 7 shows some examples of autofluorescent cells that are not recognized by the anti-TasA antibody. These results indicate that matrix synthesis occurs only in a subpopulation of the pigmented cells.

The Pigment of SF214 is Essential for Cell Resistance to Hydrogen Peroxide

In non-photosynthetic organisms, pigments have been associated with cell resistance to UV irradiation and reactive oxygen species [21,22,23]. To analyze the role of the SF214 pigment we isolated an unpigmented mutant after nitrosoguanidine (NTG) mutagenesis [32] (Fig. S3). To this aim, mid-exponential phase cells were incubated for different times with 10 mg of NTG and the percentage of survival was assessed by CFU determination (Fig. S3B). To minimize the possibility of obtaining mutants carrying multiple mutations, we only analyzed cells exposed to NTG for the shortest time. NTG-treated cells were then diluted, plated, and checked for pigmentation after 36 hours of incubation at 25°C. One unpigmented mutant, SF214-Mut, was chosen for further analysis. Although we could not identify the mutation responsible for the loss of pigmentation, as several attempts to transform SF214 with either plasmid or chromosomal DNA were unsuccessful (not shown), we were able to show that the unpigmented phenotype reverted spontaneously at a frequency of 1 clone out of 10^9, suggesting that the NTG treatment had not produced multiple mutations. Analysis of the aqueous extracts showed that the mutant does not produce any molecule able to absorb at 410 nm (Fig. S4) and, consistently, a fluorescence microscopy analysis showed that no fluorescent cells were present in a stationary phase culture of the unpigmented mutant (Fig. 3). SF214 and its unpigmented derivative were used to analyze the cell response to hydrogen peroxide. Cells of the two strains were grown at 25°C in minimal (S7) liquid medium and collected 10 hours after the entry into stationary phase. Cells were then incubated with 30 mM hydrogen peroxide and analyzed for viability after various incubation times. While wild type cells were all viable after exposure to hydrogen peroxide for up to 30 minutes and showed a reduced viability only after 45, 60, and 90 minutes of treatment, the unpigmented mutant showed a clear decrease in viability at all incubation times (Fig. 8). In a parallel experiment, spores of both strains were totally resistant to the hydrogen peroxide treatment at all time points tested (Fig. 8). The results of Fig. 8 confirm that the pigment has a role in the response of vegetative cells to oxidative stress. Spores do not contain the pigment but are totally resistant to hydrogen peroxide due to other, pigment-independent mechanisms [6,33].

Discussion

The main result of this report is the observation that pigment production in SF214, a marine isolate of B. pumilus, is a bimodal phenomenon alternative to sporulation. SF214 cells in the stationary growth phase form a heterogeneous population able to follow diverse developmental fates.
Some cells start the sporulation programme while others produce the pigment. Only a subpopulation of pigmented cells also produces a matrix. This is reminiscent of the situation found in B. subtilis. Seminal studies performed using B. subtilis, the model organism for spore formers, have shown that in dispersed cell populations spore formation, matrix production, competence to acquire external DNA, and production of extracellular proteases are all bimodal processes [9,10,12]. Spore and matrix formation appear as alternative developmental pathways, with some cells producing a matrix and others entering the irreversible program of spore formation [8,9,10]. In addition to those two, other cell fates are also alternative in B. subtilis: within a biofilm, only a subpopulation of B. subtilis cells produces surfactin, but while surfactin producers do not respond to their own surfactin, other cells do and become matrix producers. In this case, individual B. subtilis cells simultaneously expressing genes for both surfactin and matrix synthesis have never been observed [11]. These two subpopulations do not include the entire population, and the rest of the cells, which do not differentiate as surfactin or matrix producers, probably originate the other cell types known to be present in B. subtilis populations [13]. In this framework, each differentiation fate sets the stage for a subsequent cell type. For example, within biofilms matrix-producing cells are initially predominant and later differentiate and become spores [8]. By analogy, we propose that B. pumilus SF214 dispersed stationary cells also form a heterogeneous population able to follow diverse developmental fates. Some cells enter the irreversible sporulation cycle, forming the highly resistant but metabolically quiescent spore, while other cells follow a different survival strategy and produce a pigment able to protect the cell from oxidative conditions. Cell diversification and the ability to develop different survival strategies in B. pumilus SF214 can then be viewed as a risk-spreading (or bet-hedging) strategy. Such stochastic switches between phenotypic states have been found in diverse organisms, ranging from bacteria to humans, and are considered among the earliest evolutionary solutions to adapt to and facilitate persistence in fluctuating environments [34]. Only a subpopulation of pigment-producing cells forms an extracellular matrix. It is not clear whether matrix production can also be viewed as a survival strategy in specific environments. However, the existence of more than two developmental cell fates is not surprising but rather expected on the basis of the multiple cell types previously observed in B. subtilis [7,13]. An additional result of this work is the observation that pigment formation is a highly regulated process. Growth conditions affect pigment synthesis, most probably by regulating the number of cells that become able to synthesize the pigment. This conclusion is supported by the numbers of fluorescent vs. non-fluorescent cells in diverse microscopy fields (Fig. 4). Although our analysis does not allow us to assess the amount of pigment synthesized at the single-cell level in the various conditions, it clearly shows that a regulation is exerted when the single stationary cell turns its fate towards either sporulation or pigment synthesis. Strain SF214 of B. pumilus is a field isolate, and our attempts to genetically manipulate it have been so far unsuccessful.
Several attempts to transform SF214 with chromosomal DNA of an antibiotic-resistant strain of B. pumilus or with a non-replicative plasmid have all been unsuccessful. SF214 contains a large natural plasmid. We obtained a cured strain that did not show apparent phenotypic differences from SF214 but was still refractory to transformation. The impossibility of manipulating SF214 has so far hampered a deeper molecular analysis of the various developmental fates of SF214 and of the regulatory proteins involved. A future challenging task will be to verify whether the master regulator Spo0A, known to control matrix formation and sporulation, as well as other cell fate regulators of B. subtilis such as ComX and SinI/R, is also involved in pigment development in B. pumilus.

Pigment Extraction and Detection

For pigment extraction, cultures were centrifuged at 7,000 rpm for 10 minutes. The cell pellet was suspended in a lysis buffer (50 mM Tris-HCl pH 7.5, 1 mM DTT, 0.1 mM PMSF, 10% glycerol) and sonicated at 4°C for 10 min (30 sec ON and 30 sec OFF). The pellet was completely removed by centrifugation at 13,000 rpm for 15 minutes. The protein concentration of the various extracts was determined spectrophotometrically, and aliquots of identical protein concentration were used to determine the absorbance spectrum between 300 and 550 nm, as previously reported [21].

Hydrogen Peroxide Assays

Vegetative cells and spores were diluted to a concentration of approximately 10^8 CFU/ml in PBS, and 1 ml of the cell suspension was placed in a 1.5 ml microcentrifuge tube. H2O2 (Sigma) was added to the cell suspensions to a final concentration of 30 mM. Spore or cell suspensions were incubated at room temperature with continuous gentle mixing. After various incubation times, 100-μl samples were removed, immediately diluted, plated onto LB agar plates, and incubated in order to determine the number of colonies.

Fluorescence and Immunofluorescence Microscopy

For autofluorescence and DAPI staining, 200-μl aliquots of cell culture were centrifuged (2 min, 6,000 g) and the cells resuspended in 20 μl of phosphate-buffered saline (PBS, pH 7.4). For DAPI staining only, the PBS contained 0.1 μg/ml of 4′,6-diamidino-2-phenylindole dihydrochloride (DAPI). Six microliters of each sample were placed on microscope slides and covered with a coverslip previously treated for 30 seconds with poly-L-lysine (Sigma). Samples were observed with an Olympus BX51 fluorescence microscope using fluorescein isothiocyanate (FITC) or DAPI filters to visualize the fluorescence of the cells. Typical acquisition times were 2,000 ms for autofluorescence and 100 ms for DAPI; the images were captured using an Olympus DP70 digital camera and processed. Immunofluorescence was performed essentially as described by Azam et al. (2000) [37], with a few modifications. Bacteria were fixed for 1 hour at room temperature in 80% methanol, washed, briefly treated with lysozyme, and fixed to poly-L-lysine-treated coverslips to improve micrograph resolution. The coverslips were air dried and pretreated with 5% (w/v) dried milk in PBS prior to incubation overnight at 4°C with the primary antibodies. In particular, a 1:400 dilution of anti-CotE (raised in mouse) and a 1:300 dilution of anti-TasA (raised in rabbit) were used. After ten washes, the samples were incubated with a 1,000-fold diluted specific secondary antibody conjugated with tetramethyl rhodamine (TRITC; Santa Cruz Biotechnology, Inc.) for 2 hours at room temperature in the dark.
After ten washes, the coverslips were covered with one drop (30 μl) of Component C (SlowFade; Molecular Probes S-2828) containing 0.1 μg/ml of DAPI. After 5 minutes the liquid was aspirated and the coverslips were mounted onto microscope slides by adding one drop of Component A (SlowFade; Molecular Probes S-2828). The microscope slides were analyzed as described above.

Figure S1. Western blot analysis of spore coat proteins of SF214 with anti-CotE antibody. Purified spores were extracted by SDS-DTT treatment as previously reported (Nicholson and Setlow, 1990), fractionated on 12% SDS-PAGE, and blotted onto a PVDF membrane. The membrane was reacted with an antibody raised against the CotE protein of B. subtilis (Isticato et al., 2010), then reacted with an HRP-conjugated secondary antibody and visualized by the ECL method. Coat proteins of a wild type strain and of an isogenic mutant of B. subtilis lacking CotE were used as positive and negative controls, respectively. (TIF)

Figure S2. Western blot analysis of spore proteins of SF214 with anti-TasA antibody. Proteins were extracted by SDS-DTT treatment as previously reported (Nicholson and Setlow, 1990), fractionated on 12% SDS-PAGE, and blotted onto a PVDF membrane. The membrane was reacted with an antibody raised against the TasA protein of B. subtilis, then reacted with an HRP-conjugated secondary antibody and visualized by the ECL method. Coat proteins of a wild type strain (PY79) of B. subtilis were also used. (TIF)

Figure S3. Isolation of an unpigmented mutant of SF214. (A) SF214 wild type and the unpigmented mutant (Mut) on an LB plate grown at 25°C for 48 hours. (B) Survival of SF214 after treatment with 10 mg of NTG for various times. Mid-exponential cells were treated with NTG, washed twice, diluted, plated on LB plates, and incubated at 37°C for 36 hours. (TIF)

Figure S4. Pigment production in SF214 and its unpigmented mutant. Absorbance spectrum between 300 and 500 nm of 360 μg of cell extracts of SF214 wild type (black symbols) and the unpigmented mutant SF214-Mut (white symbols). Cells of both strains were grown at 25°C for 24 hours. (TIF)
5,639.2
2013-04-25T00:00:00.000
[ "Biology", "Environmental Science" ]
PID Parameters Auto-Tuning on GPS-based Antenna Tracker Control using Fuzzy Logic.

Abstract - Moving vehicles require antennas to communicate, placed on the vehicle and at the ground station (ground control station, GCS). Generally, the GCS uses a directional antenna equipped with a drive system using conventional proportional, proportional-integral, or proportional-integral-derivative (PID) control and step-tracking algorithms based on the received signal strength indicator (RSSI). This research used a PID control method tuned with fuzzy logic, based on the Global Positioning System (GPS), to control a directional antenna at the GCS. The resulting antenna tracker system was capable of tracking objects with a minimum error of 0° in the azimuth and elevation angles and a maximum error of 49° for an object moving at 49 km/h. The system had an average rise time of 0.7 seconds at the azimuth angle and 1.08 seconds at the elevation angle. This system can be used to control the antenna direction for moving vehicles such as unmanned aerial vehicles (UAVs) and rockets.

I. INTRODUCTION

Vehicles that cover wide-ranging areas, such as rockets, satellites, or unmanned aerial vehicles (UAVs), require an antenna to communicate. The antenna is placed on the vehicle and at the ground control station (GCS). Some effort is needed to improve the quality of communication between the vehicle and the GCS. Improvement on the vehicle side is usually avoided because it increases the vehicle's load. An improvement that can be made on the GCS side is to use a directional antenna and point it toward the vehicle [1]. Generally, the vehicle uses an omnidirectional antenna, meaning that the radiation pattern is spread evenly in all directions [2]-[4]. The directional antenna at the GCS has a radiation pattern that forms a beam in a particular direction, while in the other directions the signal is weak. The use of a directional antenna extends the signal range and optimizes power consumption [5], [6]. The common methods used to track the vehicle use real-time Global Positioning System (GPS) data as a reference or the received signal strength indicator (RSSI). The GPS tracking method obtains the coordinates of the GCS and the vehicle and processes them to obtain the angle between the two [7]. The RSSI method works by using the signal strength received by at least two antennas at the GCS as a reference for pointing the antenna [8]. Much research and development of antenna drive systems has been done, such as a receiver of video-link data from a UAV using a five-channel monopulse antenna [8]. These studies used various control methods, such as proportional controllers [7], proportional-integral (PI) controllers [9], proportional-integral-derivative (PID) controllers [10], a step-tracking algorithm with an H-infinity controller for closed-loop tracking design [11], Fuzzy-PD [12], and Fuzzy-PID [13]. All of the studies mentioned used the RSSI tracking method. This method has disadvantages, such as complex antenna construction and susceptibility to interference from other signals. In this research, a PID control system was optimized using fuzzy logic to control the antenna drive system so that the antenna always points toward the vehicle. The tracking method used GPS as a reference to address the deficiencies of the previous research. The expected result of this research is that the antenna can face the vehicle with high accuracy and a fast response.
II. RESEARCH METHODS

This chapter describes the components of the system and the control methods used. Generally, the antenna tracker system consists of a 433 MHz Yagi-Uda antenna, an antenna drive device placed at the GCS, and a payload as the data sender from the vehicle to the GCS. The control method used is the PID method optimized using fuzzy logic.

The 433 MHz Yagi-Uda antenna is used because it has a directional radiation pattern. The antenna must meet several criteria, namely a voltage standing wave ratio (VSWR) ≤ 2, an antenna reflection coefficient ≤ −10 dB, and a link budget value ≥ 15 dBm. To meet these criteria, antennas were designed and simulated using the CST Studio Suite application. New antennas can be produced once the simulation results meet these criteria. The simulation results of the antenna used in this research are shown in Table 1.

The data sender is composed of several components, as shown in Figure 1. This device sends the location data of the vehicle to the antenna drive device in real time. The location information is obtained from readings of a BMP280 barometric sensor and a GPS receiver: the BMP280 reads the altitude of the vehicle, while the GPS receiver reads its latitude and longitude. The vehicle location information is sent to the antenna control device via a 433 MHz telemetry radio. All components are supplied by a power system consisting of a 7.4 V battery and a voltage regulator.

The antenna drive device is composed of several components, as shown in Figure 2. Two servo motors are used as antenna drives, one for the azimuth angle and one for the elevation angle. The actual antenna angle in the azimuth direction is read by an HMC5883L magnetometer sensor and in the elevation direction by an MPU-6050 sensor. A 433 MHz telemetry radio connected to the 433 MHz Yagi-Uda antenna receives the vehicle location information. The location data is processed by an STM32F108C microcontroller into an input reference for the antenna tracking system. The voltage supply system consists of an 11.1 V battery and several step-down voltage regulators.

The input reference of the antenna drive consists of two inputs, the azimuth setpoint and the elevation setpoint. The azimuth setpoint is obtained using the bearing equation, whose output is the azimuth angle; the bearing angle is calculated using Equation 1 to Equation 4. The parameter z denotes the azimuth angle, λ_a the longitude of the GCS, λ_o the longitude of the vehicle, φ_a the latitude of the GCS, and φ_o the latitude of the vehicle. The elevation setpoint is derived from triangular trigonometric calculations using the altitude and the ground distance from the GCS to the projection of the object on the earth's surface (the Haversine distance), calculated using Equation 5 to Equation 9, where R denotes the radius of the earth (6,371 km) and d the Haversine distance. Equation 10 is the trigonometric equation for the elevation angle, where b is the elevation angle and f is the altitude of the vehicle. From the reference value, the error and Δerror can be found as in Equation 11 and Equation 12, where e(k) denotes the actual error and e(k−1) the previous error: e(k) = Setpoint − Actual angle (11), with Δerror being the difference between the two (12). The error and Δerror values are used as inputs for the fuzzy self-tuning PID control method, as shown in the block diagram of Figure 3.
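Equations 1-10 are not reproduced in the text above, but they correspond to the standard bearing and Haversine formulas, so the setpoint computation can be sketched directly. Variable and function names below are chosen for clarity, not taken from the paper.

```python
import math

R_EARTH_KM = 6371.0  # earth radius used in the paper

def azimuth_setpoint(lat_gcs, lon_gcs, lat_veh, lon_veh):
    """Bearing z from the GCS to the vehicle, 0-360 deg (Equations 1-4)."""
    p1, p2 = math.radians(lat_gcs), math.radians(lat_veh)
    dlon = math.radians(lon_veh - lon_gcs)
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

def haversine_km(lat_gcs, lon_gcs, lat_veh, lon_veh):
    """Ground distance d to the object's surface projection (Equations 5-9)."""
    p1, p2 = math.radians(lat_gcs), math.radians(lat_veh)
    dp = p2 - p1
    dl = math.radians(lon_veh - lon_gcs)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2.0 * R_EARTH_KM * math.asin(math.sqrt(a))

def elevation_setpoint(altitude_m, distance_km):
    """Elevation angle b from altitude f and ground distance d (Equation 10)."""
    return math.degrees(math.atan2(altitude_m, distance_km * 1000.0))
```

With these two setpoints and the actual angles from the magnetometer and MPU-6050, the error and Δerror of Equations 11 and 12 follow by simple subtraction at each control step.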
Based on the mathematical equations of the PID, the discrete version embedded in the digital system has the form of Equation 13 to Equation 16, where Up(k) is the proportional control, Ui(k) the integral control, Ud(k) the derivative control, and U(k) the PID control output. Kp, Ki, and Kd are the PID parameters tuned using fuzzy logic. The use of fuzzy logic aims to optimize the output of the PID; the expected result is a system capable of reaching the setpoint in the shortest possible time without overshoot or oscillation.

Fuzzy logic processing has several stages, as shown in Figure 4. The first stage is fuzzification, the determination of the fuzzy sets of the membership function graph. The membership function graph designed for this system consists of three triangular membership functions and two trapezoidal membership functions, labeled NB, N, Z, P, and PB. The membership functions are divided into those for the error input and those for the Δerror input. The limits of the error membership function on the x-axis are based on the maximum error that may occur in the system, while the limits of the Δerror membership function are based on observation of the maximum Δerror that can occur in the system. The y-axis of the membership function graph is the degree of membership, here ranging from 0 to 1, obtained through the mathematical equations of the trapezoidal or triangular membership functions. The membership functions for the elevation angle input are shown in Figures 5 and 6, and those for the azimuth angle in Figures 7 and 8.

The knowledge base consists of a database and a rule base. The fuzzy rule base is obtained using a heuristic approach, in which the rules are derived from analyzing the effect of the PID parameters on the transient response of the system to obtain the desired response. The rules connect the input variables to the output variables; the rule bases in Tables 2 to 4 contain the output variables KS, K, S, B, and BS, called singletons, whose values are derived from tuning the PID parameters by trial and error. The next stage of fuzzy logic processing is fuzzy inference, or decision making, for which the Sugeno method is used. A crisp value is obtained from the fuzzy set in the defuzzification process using the weighted average method, as in Equation 17: the output is the sum of each rule's firing strength multiplied by its singleton value, divided by the sum of the firing strengths. The number of firing strengths equals the number of singletons, that is, the number of fuzzy sets designed for the output membership function.

III. RESULTS AND DISCUSSION

System testing was done through several stages. The first stage tested each sensor and the GPS used. The next stage tested the system response with manual input. The last stage tested the whole system by placing the object on certain vehicles.
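A compact sketch of the fuzzy self-tuning loop is given below. The membership breakpoints, rule table, and singleton values here are illustrative stand-ins (the paper's actual shapes and values come from Figures 5-8 and Tables 2-4 and were found by trial and error), and only Kp is tuned for brevity, whereas the paper also tunes Ki and Kd.

```python
import itertools

LABELS = ["NB", "N", "Z", "P", "PB"]
ORDER = {"NB": 2, "N": 1, "Z": 0, "P": 1, "PB": 2}
OUT = ["KS", "K", "S", "B", "BS"]                 # output labels from the tables
KP_SINGLETON = {"KS": 0.5, "K": 1.0, "S": 2.0, "B": 4.0, "BS": 6.0}  # illustrative

def tri(x, a, b, c):
    # triangular membership function with guarded denominators
    return max(min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

def memberships(x, span):
    """Five membership functions over [-span, span]; the outer two are
    shoulders (trapezoids), matching the three-triangle/two-trapezoid layout,
    though the breakpoints here are assumptions."""
    h = span / 2.0
    return {
        "NB": 1.0 if x <= -span else tri(x, -3 * span, -span, -h),
        "N": tri(x, -span, -h, 0.0),
        "Z": tri(x, -h, 0.0, h),
        "P": tri(x, 0.0, h, span),
        "PB": 1.0 if x >= span else tri(x, h, span, 3 * span),
    }

def rule_kp(e_lbl, de_lbl):
    # heuristic rule: the further the state is from zero, the larger Kp
    return OUT[min(ORDER[e_lbl] + ORDER[de_lbl], len(OUT) - 1)]

def fuzzy_kp(e, de, e_span, de_span):
    """Sugeno inference with weighted-average defuzzification (Equation 17)."""
    mu_e, mu_de = memberships(e, e_span), memberships(de, de_span)
    num = den = 0.0
    for el, dl in itertools.product(LABELS, LABELS):
        w = min(mu_e[el], mu_de[dl])              # rule firing strength
        num += w * KP_SINGLETON[rule_kp(el, dl)]  # weight times singleton
        den += w
    return num / den if den > 0.0 else KP_SINGLETON["S"]

class FuzzyPID:
    """Discrete PID (Equations 13-16) with Kp retuned every step."""
    def __init__(self, e_span, de_span, ki=0.1, kd=0.05, dt=0.02):
        self.e_span, self.de_span = e_span, de_span
        self.ki, self.kd, self.dt = ki, kd, dt
        self.prev_e, self.integ = 0.0, 0.0

    def update(self, setpoint, actual):
        e = setpoint - actual                     # Equation 11
        de = e - self.prev_e                      # Equation 12
        kp = fuzzy_kp(e, de, self.e_span, self.de_span)
        self.integ += e * self.dt
        u = kp * e + self.ki * self.integ + self.kd * de / self.dt
        self.prev_e = e
        return u
```

The weighted average in fuzzy_kp is exactly Equation 17: each of the 25 rules contributes its firing strength times a singleton, and the sum is normalized by the total firing strength.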
The sensor testing parameter is the level of accuracy. The magnetometer and MPU sensor tests were performed using a protractor as a reference; the barometric sensor and GPS tests used benchmark data as a reference. The test results showed average errors of 8.4° for the magnetometer sensor, 0.3° for the MPU, and 5 meters for the GPS. From these results, it was decided that the error rates were still within reasonable limits, so testing could continue with the system response. The system response test measured the performance of the fuzzy logic applied to the control system. The test was performed on the elevation servo with setpoints of 30° and 70°, as shown in Figures 9 and 10, and on the azimuth servo with various setpoints, as shown in Figures 11 to 14. The test results are summarized in Tables 5 and 6. The fuzzy logic was able to optimize the PID control, as evidenced by test data showing that the fuzzy logic speeds up the system's approach to the setpoint. The system response results showed that the fuzzy logic can optimize PID control with a better rise time to steady state than the conventional controllers used in [7], [9], and [10].

The last stage of testing placed the object on certain vehicles. The first test placed the object on a quadcopter, so the object's velocity and distance could not be measured. The test route is shown in Figure 15, and the data of the three trials are summarized in Tables 7 and 8. They show that the system can follow the object over the variations of the tested route, as evidenced by the minimum error value of 0° in all experiments. The system can also follow objects without data loss despite a maximum error of up to 91° at the azimuth angle and 38° at the elevation angle. These maximum errors occur because the quadcopter changes direction drastically before the GPS updates its data. This research achieved a better routing response for both elevation and azimuth angle changes than [7], which reported an average antenna error of 8.3° using the RSSI method.

The second test placed the object on a motorcycle, so the speed and distance of the object could be measured. In this test, the motorcycle was driven on the track shown in Figure 16 with three speed variations. The test focused on the azimuth servo response with test angles between 320° and 100°. Graphs of the test results are shown in Figures 17 and 18, and the results with measured objects are summarized in Table 9. The three tests show that the system can follow the object even as the speed is increased, as evidenced by the shorter tracking time at higher speeds.
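The rise-time figures quoted in the abstract can be extracted from logged step responses like those in Figures 9-14. The sketch below uses the common 10-90% definition, which is an assumption since the paper does not state its definition, and a synthetic first-order response in place of real logs.

```python
import numpy as np

def rise_time(t, y, setpoint, lo=0.1, hi=0.9):
    """10-90% rise time from a logged step response.
    Assumes a monotonic rise; t and y are time stamps and measured angles."""
    y0 = y[0]
    span = setpoint - y0
    t_lo = t[np.argmax(y >= y0 + lo * span)]   # first crossing of the 10% level
    t_hi = t[np.argmax(y >= y0 + hi * span)]   # first crossing of the 90% level
    return t_hi - t_lo

# example with a synthetic first-order response (0 deg -> 70 deg setpoint)
t = np.linspace(0.0, 5.0, 501)
y = 70.0 * (1.0 - np.exp(-t / 0.5))
print(round(rise_time(t, y, 70.0), 3), "s")    # ~1.1 s for this time constant
```

Applying the same function to the fuzzy-tuned and conventional responses gives the rise-time comparison reported against [7], [9], and [10].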
The implication for communication, that is, the penalty at the maximum error, was found by calculating the signal-to-noise ratio (SNR) value of the antenna. The SNR calculation uses the antenna radiation pattern data in Figure 19, where the relative field strength of the antenna at the actual position is reduced by the relative field strength in the direction of the maximum error. Table 9 shows that the penalty at a speed of 40 km/h was negative. This happened because the antenna radiation pattern was not ideal: the strongest relative field strength was not at an angle of 0° but at ±30°. From the test results it can be concluded that an increase in speed leads to an increase in the average and maximum errors. This happens because the maximum GPS update rate is 5 Hz, which means the position data of the object is updated at most every 0.2 seconds. This antenna drive system, controlled by a PID tuned with fuzzy logic, can track a moving vehicle with high accuracy and better responses than previous research. It had better responses for following a moving vehicle in both the elevation and azimuth directions than the conventional PID in [7], [9], [10]. The system can track and control the antenna direction without using an omnidirectional antenna as in [2]-[4], using GPS as the reference instead of the RSSI method of [7], [8].

IV. CONCLUSIONS

The antenna tracker system is able to follow the movement of objects with better responses. The fuzzy logic tuning was capable of optimizing the PID control, improving the system's average rise time at the azimuth and elevation angles. The system can follow objects on a variety of routes without losing data and can follow objects moving at speeds of up to 60 km/h.

Figure and table captions: Figure 2, block diagram of the antenna drive device; Figure 3, control system block diagram; Figure 8, input membership function for azimuth angle Δerror; Figure 9, system response at the elevation angle with a setpoint of 30°; Figure 10, system response at the elevation angle with a setpoint of 70°; Table 1, CST Studio Suite antenna simulation results; Table 2, rule base for Kp at the elevation and azimuth angles; Table 3, rule base for Ki at the elevation and azimuth angles; Table 4, rule base for Kd at the elevation and azimuth angles; Table 5, system response at various elevation angles; Table 6, system response at various azimuth angles; Table 7, routing response at elevation angle changes; Table 9, azimuth servo response for various speeds.
3,570
2018-07-31T00:00:00.000
[ "Computer Science" ]
Design of Fertigation System Control in a Green House Based on the Internet of Things (IoT) - A greenhouse must be able to control the environment, with temperature and humidity parameters suitable for plant growth. However, manual watering must be done at all times, which is time-consuming for farmers. Greenhouses with modern technology provide automatic controls such as plant sprinklers; thus, the time spent watering plants is less than with the manual system. In addition, farmers can save water that has so far been wasted because the water requirements of the plants were not known. An automatic plant watering system with a DHT22 sensor is used to control the greenhouse environment. The development of the internet almost all over the world has changed daily human activities. Internet of Things (IoT) technology allows objects to connect and communicate with each other. In this fertigation control device, IoT connects the sensor devices and solenoid valves so that they can be monitored via the internet. The IoT link is built with the ESP8266 module, which allows access via the internet. The hardware design uses a microcontroller as the control method. The data is sent online to an open-source site that acts as a web server, which is used for controlling and monitoring data accessed via the internet. The conclusion for this device is that the system can water automatically: watering is triggered when the temperature is greater than 30°C and the humidity is less than 90%, so that the condition of the plants can be maintained properly. The system can be controlled over a Wi-Fi network through the Blynk application, can display the temperature and humidity status on the LCD and in the Blynk application, and can be controlled from anywhere at any time.

I. INTRODUCTION

Agriculture in Indonesia is one of the main producers of raw materials consumed at home and abroad. As a result, more and more agricultural methods are being developed. A widely used method is the green house, or greenhouse, commonly referred to in Indonesia as the kumbun. It can be understood as a building designed to avoid and manipulate environmental conditions so as to create the conditions desired for subsequent plant maintenance. Compared to plants outside the greenhouse, the plants are more controlled and their growth is maximized, but greenhouse construction is not always fully adapted to the climate in which the greenhouse is built. Greenhouse management also uses many manual methods to meet expectations of quantity, quality, and continuity. Based on this, we want to create a smart greenhouse system that can be monitored automatically and remotely. However, this system focuses only on controlling the smart usage center, which is already equipped with sensors and controllers [1]. The design of this device uses the DHT22 sensor as an input, processed by the NodeMCU ESP8266 microcontroller; after processing, a relay is driven to switch the solenoid valve as the output. With the advancement of information and communication technology, all sectors are positively affected, in this case the agricultural sector. The integration between technology and agriculture in Indonesia must be treated optimally. The design of this device will be implemented in Sokaan village, Pakuniran District, Probolinggo Regency. Greenhouse designs take different shapes depending on climatic conditions.
Plants have certain conditions that help them thrive and be productive, so the climate in the greenhouse should be adjusted to match the climate required for the growth of the plants concerned. Several classes of greenhouse systems, distinguished by the technology used in their construction, are described below [2]. A low-tech greenhouse is very simple, made of wood and bamboo. There is no specific control to regulate the environmental parameters: simple techniques are used to raise and lower the temperature and humidity, light intensity can be reduced with covering or curtain material, and the temperature can be lowered by leaving gaps in the walls [3]. A medium-tech greenhouse is built from galvanized iron (G.I.); the canopy cover is attached to the structure with screws for convenience, and the whole structure is sturdy and resistant to wind. Heaters, coolers, and humidity regulators are used to control the climate. Because this system is semi-automatic, it requires a lot of attention and care, and considerable manpower is needed to maintain the ideal environment; this type is suitable for dry and composite climate zones [4]. In a high-tech greenhouse, many environmental factors are controlled simultaneously. The control system has sensors, comparators, actuators, and signal receivers. Sensor placement is very important because the whole control system acts on the state read by the sensor; the sensor collects the variables, which are computed and compared against standard reference values. Controllable subsystems include the temperature control system, the humidity control system, and the timing system [5]. Temperature strongly influences plant growth: transpiration, photosynthesis, and respiration are all affected by temperature, and growth is maximized when the temperature is maintained properly; when evaluating, the focus should be on the growth-limiting factors rather than on temperature alone [6]. The DHT22 is a digitally calibrated sensor that provides temperature and humidity information; it is built from high-quality components, can measure a wide range of temperatures and humidities, and can send its output signal over a cable up to 20 meters long [6]. A solenoid valve is a valve controlled by an electric current, either AC or DC, passed through a coil (solenoid); it is the most frequently used control element in fluid systems, such as pneumatic systems, hydraulic systems, and machine control systems that require automatic actuation [8]. The NodeMCU ESP8266 is a development board derived from the ESP-12 module of the ESP8266 family of IoT (Internet of Things) platform modules; the ESP8266 module itself has been described in a previous article. Functionally, this board is very similar to the Arduino platform, but it is dedicated to connecting to the internet [9]. Blynk is a platform for the Android and iOS operating systems that acts as a control dashboard for Arduino, Raspberry Pi, ESP32, and other boards via internet access. The Blynk application is used to control IoT (Internet of Things) devices; communication between the Blynk application and the microcontroller board requires an authentication code called a token.
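To make the interaction between these components concrete, a minimal control-loop sketch is given below. The paper does not publish its firmware, so this is only an illustration: it assumes MicroPython on the NodeMCU ESP8266, the GPIO numbers are placeholders, and the 30 °C / 90% thresholds are taken from the system description; the Blynk link is omitted.

```python
# Minimal MicroPython sketch (assumed firmware) for the DHT22 -> NodeMCU -> relay -> solenoid chain.
# GPIO numbers are placeholders; the thresholds follow the paper (water when T > 30 C and RH < 90%).
import time
import dht
import machine

sensor = dht.DHT22(machine.Pin(4))        # DHT22 data pin (placeholder GPIO)
relay = machine.Pin(5, machine.Pin.OUT)   # 5 VDC relay that switches the 12 VDC solenoid valve

TEMP_ON_C = 30.0   # open the valve above this temperature
HUMID_MAX = 90.0   # ...but only while relative humidity is below this bound

while True:
    sensor.measure()                 # trigger one DHT22 reading
    t = sensor.temperature()         # degrees Celsius
    h = sensor.humidity()            # percent relative humidity
    if t > TEMP_ON_C and h < HUMID_MAX:
        relay.on()                   # energize relay -> solenoid opens, watering starts
    else:
        relay.off()                  # de-energize relay -> solenoid closes
    time.sleep(2)                    # the DHT22 needs >= 2 s between readings
```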
In this final project, the author uses the Blynk platform to monitor, in real time, the temperature and humidity read by the DHT22 sensor and the soil moisture sensor using the Internet of Things method, in a shallot seed storage room project, so that the system can be controlled remotely [10]. From the description above, this research focuses on how to design a fertigation control system in an IoT-based greenhouse, and how to regulate and monitor the temperature and humidity conditions in the greenhouse and display them online. Research Stage This research was carried out by following the steps shown in the flowchart. The overall electrical design comprises a microcontroller with a DHT22 sensor, an RTC, a relay, an LCD, and a solenoid valve; the flow of Figure 3 is as follows. DHT22 Electrical Design The DHT22 sensor detects the temperature at a predetermined place; to read and process its data, a microcontroller is needed as the processing center, together with its power source. For these two components to be integrated and work together, an electrical connection is needed between them, so in this step the electrical design between the microcontroller and the DHT22 sensor is carried out. LCD Electrical Design This design provides the information interface between the user and the research tool: information about the running system is displayed on the LCD. Electrical Design of the Solenoid Valve After the sensor wiring, the next step is to design the output stage, namely the solenoid valve. Because the solenoid valve used operates at 12 VDC, its power is taken directly from the main power supply; to control its open/close state, an automatic breaker is used, namely a 5 VDC relay controlled by the microcontroller. Hardware Testing and Results In this step, a series of tests is carried out on the hardware design to determine the level of success of the design. The tests are: testing the 5-volt adapter, testing the sensor readings, and testing the solenoid valve. DHT22 Sensor Testing The purpose of this test is to determine the accuracy of the DHT22 sensor reading by comparing it with a reference instrument for measuring temperature and humidity, the HTC-1. The test is carried out once a day over a span of 30 days, after which the comparison results are obtained. From the per-reading error measurements, the average error of the DHT22 against the HTC-1 is then computed (the mean of the relative errors across the readings), giving an average measurement error of 3.99%. Relay Test In this design, the relay functions as a breaker for the 12 VDC input voltage so that the load can be switched ON/OFF (Table 2: Relay Test). Solenoid Valve Testing The solenoid valve functions as an automatic opening of the water tap used for watering (Table 3: Solenoid Valve Test). IoT Testing The IoT design test covers the ESP8266 configuration with the Blynk application (Table 4: IoT Test). Test Results and System Discussion At this stage, the DHT22 sensor successfully reads the air temperature and humidity.
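The error-averaging step above can be reproduced in a few lines. The paper reports only the final 3.99% average over 30 daily comparisons, so the readings below are invented placeholders, and the formula assumed is the mean absolute percentage difference between the DHT22 and the HTC-1 reference:

```python
# Mean relative error of the DHT22 vs. the HTC-1 reference.
# Assumed formula: mean of |DHT22 - HTC-1| / HTC-1 * 100 over all paired readings.
dht22_readings = [32.1, 29.8, 30.5]   # placeholder values; the paper logged 30 daily readings
htc1_readings = [31.0, 28.5, 29.6]    # matching HTC-1 reference readings

errors = [abs(d - r) / r * 100.0 for d, r in zip(dht22_readings, htc1_readings)]
mean_error = sum(errors) / len(errors)
print(f"average error = {mean_error:.2f} %")   # the paper reports 3.99 % over its 30-day run
```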
If the temperature is below 30 °C, the solenoid is not activated; if the temperature is above 30 °C, the solenoid turns on. The test results are given in Table 5. Conclusion Based on the analysis, design, and testing of this tool, the following conclusions can be drawn: 1. The system can water the plants automatically, doing so when the temperature is greater than 30 °C and the humidity is less than 90%, so that the condition of the plants can be maintained properly; for example, at the 09.00 watering slot the solenoid activated when the DHT22 read 32.3 °C, while at the next 09.00 watering slot it did not activate when the DHT22 read 26.1 °C. 2. The system can be controlled over a Wi-Fi network through the Blynk application. 3. The temperature and humidity status can be displayed on the LCD and in the Blynk application. 4. The system can be controlled from anywhere and at any time. Suggestions The completion of this final project could not avoid various shortcomings, which can be addressed in future development. One input from the researchers: a blower could be added so that the temperature can be conditioned when the greenhouse becomes too hot.
Transcriptional alterations of protein-coding and noncoding RNAs in triple negative breast cancer in response to DNA methyltransferase inhibition DNA methylation plays a crucial role in multiple cellular processes such as gene regulation, chromatin stability, and genetic imprinting. In mammals, DNA methylation is achieved by DNA methyltransferases (DNMTs). A number of studies have associated alterations in DNMT activity with tumorigenesis; however, the exact role of DNMTs in shaping the genome in triple negative breast cancer (TNBC) is still being unraveled. In the current study, we employed two DNMT inhibitors (Decitabine and 5-Azacytidine), two TNBC models (MDA-MB-231 and BT-549), and whole-transcriptome RNA-Seq, and characterized the transcriptional alterations associated with DNMT inhibition. Colony forming unit (CFU) assays, flow cytometry, and fluorescence microscopy were used to assess cell proliferation, cell cycle distribution, and cell death, respectively. Ingenuity pathway analysis (IPA) was used for network and pathway analyses. Remarkably, DNMT inhibition induced the expression of genes involved in the endoplasmic reticulum stress response, the unfolded protein response, and cobalamin metabolic processes. In contrast, suppression of cellular processes related to cell cycle and mitosis was a hallmark of DNMT inhibition. Concordantly, DNMT inhibition led to significant inhibition of TNBC cell proliferation, G2-M cell cycle arrest, and induction of cell death. Mechanistically, DNMT inhibition activated the TP53, NUPR1, and NFkB (complex) networks, while the RARA, RABL6, ESR1, FOXM1, and ERBB2 networks were suppressed. Our data also identified the long noncoding RNA (lncRNA) transcriptional portrait associated with DNMT inhibition, with 25 commonly upregulated and 60 commonly downregulated lncRNAs in response to Decitabine and 5-Azacytidine treatment in both TNBC models. TPT1-AS1 was the most highly induced (6.3 FC), while MALAT1 was the most highly suppressed (−7.0 FC) lncRNA in response to DNMT inhibition. Taken together, our data provides a comprehensive view of transcriptome alterations in the coding and noncoding transcriptome in TNBC in response to DNMT inhibition. TNBC tumors tend to be larger, with higher metastasis, chemo-resistance, and relapse frequencies, worse prognosis, and relatively poor outcomes in patients [2,3]. Intra-tumor heterogeneity (ITH) is highly associated with tumorigenesis, and untreated tumors are driven to drug resistance by genetic and epigenetic modifications [4,5]. The emergence of resistance in TNBC is an imperative challenge to address for the development of better and more effective treatment modalities for TNBC. Epigenetic mechanisms, which include DNA methylation, post-translational modification of histone proteins, and gene repression through noncoding RNA (ncRNA), play vital roles under normal physiological and pathological conditions. DNA modification plays an important role in malignant cellular transformation, genomic imprinting, X-chromosome inactivation, gene expression, genetic instability, and mutation, and has been associated with several diseases including cancer [6,7]. Beyond the genetic background, DNA methylation typically occurs at cytosines in CpG dinucleotides, which are distributed throughout the genome.
Localized CpG-rich regions, known as CpG islands (CGIs), determine whether genomic regions are transcriptionally active or silent: highly methylated DNA is associated with transcriptionally inactive genomic regions, whereas GC-rich regions such as CpG islands are usually unmethylated, serving as a means of gene expression control [8,9]. The rest of the genome maintains areas of sparse DNA methylation, excluding the active transcription sites of genes. Certain CpGs are involved in silencing, genomic imprinting, and transcription from repetitive elements, including retroviral genes [10]. Abnormal epigenetic alterations arise in many cancers, regulating the expression patterns of specific genes. Epigenetic dysregulation frequently leads to inappropriate activation or inhibition of multiple signaling pathways and to silencing of non-mutated tumor suppressor genes, leading to loss of gene function [9,11,12]. Recent studies focus on pioneering approaches for treating numerous cancers by either inhibiting DNA hypermethylation and/or re-expressing silenced tumor-suppressor genes (TSGs). TSGs usually suppress or negatively regulate cellular proliferation, resulting in inhibition of tumorigenesis [13]. DNA hypermethylation is mediated through DNA methyltransferases (DNMTs), which can directly silence TSG expression [14]. CGI hypermethylation in TSG promoters is a hallmark of cancer. Transcriptional gene silencing and inhibition of transcription factors including AP-2, c-Myc/Myn, E2F, and NF-κB, in addition to the recruitment of methyl-CpG binding proteins, have been reported in human cancers including breast cancer [15,16]. Several genes are found to be hypermethylated in various cancers; susceptible genes are reported to be involved in cell cycle regulation (Rb), apoptosis, DNA repair (BRCA1), transcriptional regulation (hMLH1, Plk2) [17,18], and drug resistance and metastasis [19]. Preventing hypermethylation with DNA methyltransferase inhibitors (DNMTi's) therefore represents a potential therapeutic intervention for TNBC [20]. Despite the growing advances in epigenetic medicine, there are still numerous challenges in the clinical management of TNBC. The clinically relevant and well-characterized DNMT inhibitors Decitabine and 5-Azacytidine are nucleoside-analogue, mechanism-based inhibitors approved by the US Food and Drug Administration (FDA) to treat myelodysplastic syndrome and leukemia [21,22]. These two DNMTi's may potentially reverse epigenetic alterations, inhibiting cellular proliferation and reactivating the expression of hypermethylation-silenced cancer genes, as shown in preclinical studies of various solid tumors [23,24]. Data from a randomized phase II clinical study suggested that lower doses of Decitabine were better tolerated in ovarian cancer [25]; however, clinical trials in solid tumors have so far not been fruitful [26]. In the present study, we employed RNA-seq data geared towards the discovery of the coding and lncRNA transcriptional landscape of TNBC cells treated with DNMTi's, revealing a number of altered biological processes and the activation of a number of mechanistic networks including TP53, NUPR1, and NFkB, while the RARA, RABL6, ESR1, FOXM1, and ERBB2 networks were mostly suppressed. We further identified TPT1-AS1 as the most highly induced lncRNA, while MALAT1 was the most suppressed, in response to DNMT inhibition in TNBC models.
Our data provides the first transcriptome and network analyses of TNBC cells in response to DNMTi's, for a better understanding of the consequences of DNMT inhibition and of their potential utilization in the clinical management of TNBC patients. Drug preparation Decitabine and 5-Azacytidine small-molecule inhibitors were purchased from Selleckchem (Houston, TX, USA). Inhibitors were dissolved in dimethyl sulfoxide (DMSO) (Sigma Aldrich, St. Louis, MO, USA) at a stock concentration of 10 mM and stored in aliquots at −20 °C. Further dilutions were made in DMEM at the time of the experiment to achieve a final concentration of 2.0 μM. RNA isolation and quantification Forty-eight hours post inhibitor treatment (2.0 μM), total RNA was isolated from treated and control TNBC cells using a total RNA purification kit (Norgen Biotek Corp, ON, Canada) as per the manufacturer's instructions. The concentration and purity of the extracted RNA were measured using a NanoDrop 2000 (Thermo Scientific, DE, USA), and the RNA was stored at −80 °C. Quality assessment of RNA The quality and quantity of the extracted RNA were measured using on-chip electrophoresis with the Agilent RNA 6000 Nano Kit (Agilent Technologies, CA, USA) and an Agilent 2100 Bioanalyzer (Agilent Technologies) as per the manufacturer's instructions. Samples exhibiting an RNA Integrity Number (RIN) > 9 were used for library preparation. Total RNA library preparation and RNA sequencing Total RNA samples with a RIN higher than 9 were used as input for library preparation using the TruSeq Stranded Total RNA Library Prep Gold kit (Cat #: 20020598) from Illumina, following the manufacturer's protocol. Briefly, 500 ng of total RNA was subjected to rRNA depletion and then to fragmentation. First-strand cDNA synthesis was performed with random hexamers and SuperScript II Reverse Transcriptase (Cat #: 18064014) from ThermoFisher Scientific. Second-strand cDNA synthesis was performed with substitution of dTTP by dUTP. The double-stranded cDNA was then end-repaired and adenylated. Barcoded DNA adapters were ligated to both ends of the double-stranded cDNA, which was then amplified. Library quality was checked on an Agilent 2100 Bioanalyzer system and quantified on a Qubit system. The libraries were pooled, clustered on a cBot platform, and sequenced on an Illumina HiSeq 4000 at a minimum of 50 million paired-end reads (2 × 75 bp) per sample. RNA-Seq and bioinformatics analysis Paired-end reads were pseudoaligned to the Gencode release 33 index and counted using KALLISTO 0.42.1 [27], as we described before [28,29]. TPM (Transcripts Per Million) expression values were subsequently subjected to differential analysis, hierarchical clustering, and principal component analysis as described before [30]. Transcripts exhibiting FC ≤ −2.0 or FC ≥ 2.0 and p < 0.05 were considered significant and were used for IPA analysis. Ingenuity pathways analysis (IPA) Differentially expressed genes from the RNA-seq analysis (2.0 FC, p < 0.05) were imported into the IPA software (Ingenuity Systems Inc., USA) as we previously described [31]. Functional regulatory networks and canonical pathways were determined using the upstream regulator analysis (URA), downstream effects analysis (DEA), mechanistic networks (MN), and causal network analysis (CNA) prediction algorithms. Disease and function analysis was used to identify the disease and functional categories affected by DNMTi, based on alterations in the transcriptome data.
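The significance filter described above (FC ≤ −2.0 or FC ≥ 2.0, p < 0.05, on KALLISTO TPM values) can be sketched as follows. The file name, sample names, pseudocount, and use of Welch's t-test are assumptions for illustration, not the authors' exact pipeline:

```python
# Sketch of the differential-expression filter applied to a TPM matrix (transcripts x samples).
# File and column names are hypothetical; the thresholds follow the paper (|FC| >= 2, p < 0.05).
import pandas as pd
from scipy import stats

tpm = pd.read_csv("tpm_matrix.csv", index_col=0)   # hypothetical KALLISTO TPM export
control_cols = ["ctrl_1", "ctrl_2", "ctrl_3"]      # placeholder sample names
treated_cols = ["decitabine_1", "decitabine_2", "decitabine_3"]

pseudo = 1.0  # pseudocount to stabilize ratios for low-TPM transcripts
fc = (tpm[treated_cols].mean(axis=1) + pseudo) / (tpm[control_cols].mean(axis=1) + pseudo)
pvals = stats.ttest_ind(tpm[treated_cols], tpm[control_cols], axis=1, equal_var=False).pvalue

# FC <= -2 in the paper's sign convention corresponds to a ratio <= 0.5 here.
significant = tpm[((fc >= 2.0) | (fc <= 0.5)) & (pvals < 0.05)]
print(f"{len(significant)} transcripts pass |FC| >= 2 and p < 0.05")
```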
IPA uses a curated knowledge base to construct functional regulatory networks from a list of individual genes and determines a statistical score, the Z-score, for each network according to the fit of the network to the set of focus genes. The biological functions assigned to each network are ranked according to the significance of that biological function to the network [32]. Cell cycle analysis using flow cytometry Cell cycle analysis was performed with and without DNMT inhibitor (Decitabine (S1200) and 5-Azacytidine (S1782); Selleckchem) treatment, as described before [33]. Briefly, MDA-MB-231 and BT-549 cells were treated with DNMT inhibitors at a 2.0 μM final concentration in 6-well flat-bottom tissue culture plates. On day 3, cells were collected, fixed with 70% ice-cold ethanol, and stored at 4 °C overnight. Before staining, cells were washed twice with PBS, incubated in RNase A (100 µg/ml) and propidium iodide (PI; 50 µg/ml) staining solution, and then subjected to cell cycle analysis using a BD LSRFortessa X-20 flow cytometer (BD Biosciences, CA, USA) at the FL3 channel. Detection of apoptosis using fluorescence microscopy The acridine orange and ethidium bromide (AO/EB) fluorescence staining method was used to assess apoptosis in MDA-MB-231 and BT-549 cells after treatment with 2.0 μM DNMT inhibitors. On day 5, cells were washed and stained with a dual fluorescent staining solution containing 100 μg/ml AO and 100 μg/ml EB (AO/EB, Sigma Aldrich, St. Louis, MO, USA) for two minutes, and subsequently imaged under an Olympus IX73 fluorescence microscope (Olympus, Tokyo, Japan). The differential uptake of AO/EB allows the identification of viable and non-viable cells. Principally, AO was used to visualize the cells that had undergone apoptosis, while EB-positive cells indicated necrotic cells, as we defined before [34]. Dot blot analysis of DNA cytosine methylation DNA was isolated using the RNA/DNA/Protein Purification Plus Kit (Norgen Biotek Corp., ON, Canada) according to the manufacturer's instructions. Briefly, DNA samples were denatured with 0.1 M NaOH at 99 °C for 5 min and neutralized with ammonium acetate (NH4OAc, 6 M) as previously described [35]. Different concentrations of DNA (20 to 100 ng) in 5 µL were blotted onto nitrocellulose membranes (GE Healthcare, Life Science, Germany). The DNA spots were dried at 60 °C for 10 min and UV-cross-linked (60 s, 1200 J/cm2), followed by blocking with 5% non-fat dry milk in Tris-buffered saline (TBS) at room temperature for one hour. Subsequently, the level of DNA methylation was analyzed using an anti-5-methylcytosine (5-mC) mouse monoclonal antibody at 1:1000 dilution (Catalogue No: 39649, Active Motif, CA, USA) overnight at 4 °C. A horseradish peroxidase (HRP)-conjugated rabbit anti-mouse antibody was used as the secondary antibody at 1:2000 dilution. Chemiluminescent detection was performed using WesternSure Chemiluminescent Substrate (LI-COR, Lincoln, NE, USA). Intensities were quantified using the quantification tool in Image Lab 5.0 software (Bio-Rad Laboratories). TPX2-siRNA transfection and colony formation in TNBC cells To investigate the functional consequences of TPX2 knockdown on MDA-MB-231 and BT-549 viability, 0.084 × 10^6 cells/mL were transfected with TPX2-siRNAs or a scrambled negative control purchased from Ambion. Transfection was performed using a reverse transfection protocol as previously described [33]. In brief, siRNAs at a final concentration of 30 nM were diluted in 50 µL of Opti-MEM (cat. no.
11058-021; Gibco, Carlsbad, CA, USA), and 1.5 µL of Lipofectamine 2000 (cat. no. 52758; Invitrogen) was diluted in 50 µL of Opti-MEM. The diluted siRNAs and Lipofectamine 2000 were mixed and incubated at ambient temperature for 20 min. One hundred microliters of the transfection mixture were added to a 12-well tissue culture plate, and subsequently 300 µL of MDA-MB-231 or BT-549 cells (0.084 × 10^6 cells/mL) in transfection medium (Opti-MEM) were added to each well. The colony-forming ability of TNBC cells transfected with TPX2-siRNAs or the siRNA negative control was determined using a clonogenic assay as described before [31]. In brief, on day 7 the plates were washed and stained with crystal violet, then scanned; the colonies were observed under an inverted microscope and quantified using 10% SDS. Similarly, CFU analysis was performed after DNMT inhibitor treatment on day 5. Quantitative reverse transcription PCR (qRT-PCR) 500 ng of RNA was used for reverse transcription using the High Capacity cDNA Reverse Transcription kit (Applied Biosystems, Foster City, CA, USA). Real-time PCR was carried out using PowerUp SYBR Green Master Mix (Applied Biosystems) on a QuantStudio 7/6 Flex qPCR system (Applied Biosystems) using the primer pairs listed in Table 1. Relative transcript levels were determined using the 2^(−ΔΔCT) method relative to the GAPDH reference gene. Statistical and survival analysis Statistical analyses and graphing were performed using GraphPad Prism 8.0 software (GraphPad, San Diego, CA, USA). DNMT inhibitors predominantly target pathways regulating cell cycle and apoptosis To characterize the transcriptional landscape alterations in TNBC in response to DNMT inhibition, the MDA-MB-231 and BT-549 models were treated with Decitabine and 5-Azacytidine and compared to vehicle-treated controls. RNA was subsequently extracted and subjected to whole-transcriptome RNA-Seq analysis. Decitabine and 5-Azacytidine treatment led to significant inhibition of DNA methylation, as shown in Additional file 1. Hierarchical clustering based on differentially expressed mRNA transcripts revealed a clear separation between the DNMTi-treated and control groups (Fig. 1a). A total of 185 and 227 transcripts were upregulated, while 208 and 149 transcripts were downregulated, in response to DNMT inhibition using 5-Azacytidine and Decitabine, respectively (2.0 FC, P < 0.05; Additional file 2: Table S1 and Additional file 3: Table S2). Notably, DNMT inhibition induced the expression of genes involved in the response to endoplasmic reticulum stress, the unfolded protein response, and cobalamin metabolic processes, as well as apoptosis induced by p53 class mediators. On the contrary, suppression of cellular processes related to cell cycle and mitosis was a hallmark of DNMT inhibition. A similar distinction was observed using principal component analysis (PCA; Fig. 1b). Downstream effector analysis of differentially expressed genes in TNBC cells treated with DNMT inhibitors We subsequently performed gene set enrichment analysis using the Ingenuity Pathway Analysis (IPA) tool on the differentially expressed transcripts in TNBC cells in response to DNMT inhibition. IPA analysis revealed several altered disease and functional categories in response to DNMT inhibition (Fig. 2a, Additional file 4: Table S3). Notably, cell death functional categories such as apoptosis and necrosis were the most activated in DNMTi-treated TNBC cells (Fig. 2b).
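The relative quantification step above (2^(−ΔΔCT) against GAPDH) reduces to a one-line calculation; the Ct values below are invented for illustration only:

```python
# 2^(-ddCT) relative expression against the GAPDH reference gene; Ct values are illustrative.
def relative_expression(ct_gene_treated, ct_gapdh_treated, ct_gene_control, ct_gapdh_control):
    delta_treated = ct_gene_treated - ct_gapdh_treated   # dCT in treated cells
    delta_control = ct_gene_control - ct_gapdh_control   # dCT in control cells
    return 2.0 ** (-(delta_treated - delta_control))     # 2^(-ddCT)

# Example: a gene whose Ct drops by ~1 cycle after treatment appears ~2-fold induced.
print(relative_expression(24.0, 18.0, 25.0, 18.0))       # -> 2.0
```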
Upstream regulator analysis revealed three transcriptional regulatory networks (TP53, NUPR1, and NFkB complex) to be significantly activated (activation z-scores 4.7, 4.2, and 2.2, respectively) in response to DNMTi treatment of TNBC cells (Fig. 2c, Additional file 5: Table S5). The expression of a selected number of differentially expressed genes (FTH1, BBC3, KLF4, CEBPB, PTP4A3, PPARA, BIRC5, KIF23, MAD2L1, NCAPG, PBK, and PLK4) was validated using qRT-PCR in MDA-MB-231 and BT-549 (Fig. 2d, e). Taken together, our data highlighted the activation of several tumor suppressor transcriptional regulatory networks in response to DNMT inhibition, leading to tumor cell death. Repressed mechanistic networks lead to apoptosis in response to DNMT inhibition Upstream regulator analysis of the differentially expressed genes revealed dramatic effects of DNMT inhibitors on numerous vital networks in TNBC cells. Notably, the most inhibited networks were those driven by the CREB1, E2F3, KDM1A, MITF, ERBB2, FOXM1, ESR1, RABL6, and RARA upstream regulators (Additional file 6: Table S5). In particular, the FOXM1 and RARA networks were highly suppressed in response to DNMT inhibition, including self-inhibition by FOXM1. The regulator effects of those networks are represented in Fig. 3a, b, with their downstream effects on cellular apoptosis. On the other hand, activated networks include TP53 (Fig. 3c), which also triggers apoptosis in DNMTi-treated cells. Taken together, our data highlighted a number of upstream regulator networks affected by DNMT inhibitors, which collectively promote cell death. Functional consequences of DNMT inhibition on TNBC cell viability To validate the functional effects of DNMT inhibitors in the TNBC models, both MDA-MB-231 and BT-549 cells were treated with Decitabine and 5-Azacytidine and subjected to AO/EB staining to assess cell death, CFU assays to assess cell proliferation, and cell cycle analysis to assess changes in cell cycle distribution. AO/EB staining revealed significant inhibition of cellular proliferation and induction of apoptosis and necrosis in both models (Fig. 4a, c). These data are concordant with the CFU assay, which also revealed substantial inhibition of the colony-forming capabilities of TNBC cells in response to DNMT inhibition. Cell cycle analysis revealed significant G2-M cell cycle arrest, a reduction in the G0-G1 phase, and an increase in sub-G0 (apoptotic) cells, which is collectively concordant with the IPA analysis (Fig. 4b, d). IPA analysis revealed suppression of TPX2 by TP53 in response to DNMT inhibition (Fig. 2c and Additional file 5). To corroborate those findings, we used siRNA to assess the consequences of TPX2 depletion on TNBC CFU potential. The data presented in Fig. 4e revealed a significant reduction in TPX2 expression in siTPX2-transfected TNBC cells, which led to substantial inhibition of CFU in MDA-MB-231 and BT-549 cells, suggesting the TP53-TPX2 axis as a potential mechanism by which DNMTi exert their biological function. Prognostic value of the gene signatures in TNBC cells treated with DNMT inhibitors The prognostic value of the upregulated and downregulated gene signatures in response to DNMT inhibition in TNBC for overall survival (OS) and disease-free survival (DFS) was evaluated using the hazard ratio (HR) in the GEPIA2 database (Fig. 5). Squares outlined with darker edges have the highest prognostic values. Interestingly, five upregulated genes (TBRG1, PIK3CB, ESSRA, ZFHX3, and NUPR1) were associated with worse OS, while CLIP2 and LAMB3 were associated with better OS.
ESSRA, TPGS1, ASNS, and RAB3L1 exhibited worse DFS, while WIZ, CFAP70, and FTH1 were associated with better DFS. Among the genes downregulated in response to DNMT treatment, SUV39H2, NCAPD3, PAPSS2, CENPN, SOD1, SCO2, and PSMB1 were associated with worse OS, while ANLN and RAD54L were associated with worse DFS (Fig. 5). Expression profiling of lncRNAs in TNBC cells in response to DNMT inhibitors To gain more insight into the changes in differentially expressed lncRNA transcripts in TNBC cells treated with Decitabine and 5-Azacytidine compared to vehicle-treated control cells, we utilized the RNA-Seq data and computational analysis. Transcriptome data were mapped to Gencode release 33, followed by differential expression analysis to determine the lncRNA transcripts affected by DNMTi treatment. As depicted in Fig. 6a, hierarchical clustering revealed two major clusters, where control samples clustered to the left, followed by MDA-MB-231 cells treated with Decitabine and 5-Azacytidine and BT-549 cells treated with Decitabine and 5-Azacytidine. A total of 70 common lncRNA transcripts were upregulated and 190 common lncRNA transcripts were downregulated in response to 5-Azacytidine, while 97 common lncRNA transcripts were upregulated and 266 were downregulated in response to Decitabine in the two TNBC models (Additional file 6: Table S5, Additional file 7: Table S6). Principal component analysis (PCA) also revealed a clear separation between TNBC cells treated with Decitabine or 5-Azacytidine and the vehicle control based on the lncRNA transcriptome (Fig. 6b). We subsequently crossed the differentially expressed lncRNA lists from 5-Azacytidine and Decitabine and identified 25 commonly upregulated and 60 commonly downregulated lncRNA transcripts in response to the two DNMT inhibitors (Fig. 7a, b; Additional file 8: Table S7). The expression of the commonly upregulated and downregulated lncRNAs in response to 5-Azacytidine and Decitabine is presented as a heatmap in Fig. 7c. Interestingly, TPT1-AS1 was the most highly induced (6.3 FC), while MALAT1 was the most highly suppressed (−7.0 FC) lncRNA in response to DNMT inhibition (Additional files 6 and 7). Taken together, our data highlighted the transcriptional alterations in the coding and noncoding transcriptome of TNBC in response to DNMT inhibition. Discussion Our comprehension of epigenetic regulation has increased dramatically over the past decades, alongside the tools used to evaluate DNA methylation patterns, facilitating the identification of actionable target pathways in the cancer epigenome [8,9]. In the current study, we analyzed the transcriptional alterations of two TNBC models in response to Decitabine and 5-Azacytidine treatment. While several of the identified transcripts could be directly regulated by DNA methylation, we do not exclude the possibility that a number of the differentially expressed genes in our experiments are an indirect consequence of DNMT inhibition. Our data support multi-pronged effects of DNMT inhibition: on the one hand, induction of genes involved in the response to endoplasmic reticulum stress, and on the other, suppression of cellular processes related to cell cycle and mitosis. This corroborates earlier findings that genes associated with CpGs and P53 pathways regulate DNA repair and apoptosis in several types of human cancers, including lung [36] and colorectal cancers [37].
Mechanistic network analyses revealed activation of manifold networks, with NUPR1, TP53, and NFkB signaling at the top of the list, while the RARA, RABL6, ESR1, FOXM1, and ERBB2 networks were suppressed. Our data are concordant with other studies highlighting an important role for NUPR1 [38], TP53 [39], and NFkB [40] in several cancers, supporting the activated pathways as having promising therapeutic potential for patients with TNBC. Among the mechanistic networks identified in the current study, CREB, E2F3, and KDM1A were previously shown to activate transcriptional programs and to promote cellular growth, migration, cell cycle progression, and the DNA damage response [44-46]. Another group reported that CREB plays an important role in cellular migration and contributes to the epithelial-to-mesenchymal transition (EMT) of human breast cancer [47]. FOXM1, a transcription factor upregulated in several cancer types, plays a key role in cell cycle progression, stemness, and tumorigenesis [48]. Our previous studies and other groups have also highlighted the role of FOXM1 activation in colorectal cancer and TNBC [29,49]. Overexpression of FOXM1 and ERBB2 leads to genomic instability, uncontrolled cell division, and malignancy, which are associated with poor prognosis in various cancerous lesions, including breast cancers [50,51]. Our previous study in colorectal cancer reported FOXM1 to be a novel target for epigenetic regulation by the miR-320 family [52]. Other research groups revealed that a small-molecule inhibitor (naphthol AS-E) blocked CREB-mediated gene transcription, inhibiting cell proliferation, migration, and survival in breast cancer cells [53]. Our studies revealed that the transcription networks most inhibited by DNMT inhibitors were CREB1, E2F3, KDM1A, MITF, ERBB2, FOXM1, ESR1, RABL6, and RARA. These findings from the current study are concordant with our previously published work on transcription factors such as ERBB2, RABL6, FOXM1, and MITF, which were most affected by palbociclib treatment in MDA-MB-231 breast cancer cells, reducing colony formation, cell migration, and viability [33]. Our current data corroborated a role for TPX2 in regulating TNBC proliferation and colony-formation potential, and could explain in part the inhibitory effects of DNMTi through downregulation of TPX2 via TP53 activation. LncRNAs have recently been identified as key epigenetic regulators of multiple biological functions [60,61]. In addition to the regulation of protein-coding mRNA transcripts, our data revealed regulation of several lncRNAs by DNMTi. Our data identified TPT1-AS1 as the most highly induced lncRNA (6.3 FC) in response to DNMT inhibition in TNBC cells. A number of studies reported an oncogenic role for TPT1-AS1 in ovarian and colorectal cancer. However, in breast cancer, low expression of TPT1-AS1 was associated with high tumor, nodes, and metastases (TNM) stage and lymph node metastasis, and predicted shorter overall survival [72]. Our data revealed epigenetic regulation of TPT1-AS1 by DNA methylation as a potential mechanism leading to its suppressed expression in breast cancer. MALAT1 was the most downregulated lncRNA in response to DNMTi in TNBC. Our recent data implicated MALAT1 in TNBC resistance to neoadjuvant chemotherapy, where we showed that CRISPR-Cas9-mediated MALAT1 promoter deletion reduced CFU and enhanced the sensitivity of TNBC cells to chemotherapy, corroborating the data from the current study [73].
Conclusions In conclusion, our data provide a comprehensive view of the transcriptomic alterations in the coding and noncoding transcriptome of TNBC cells in response to DNMT inhibition. Our data contribute to our understanding of the mechanism by which DNMT inhibition induces TNBC cell death through widespread regulation of the genome and suggest its therapeutic potential for treating patients with TNBC.
An 18.7 TOPS/W Mixed-Signal Spiking Neural Network Processor with 8-bit Synaptic Weight On-chip Learning that Operates in the Continuous-Time Domain We present a mixed-signal spiking neural network processor with 8-bit synaptic weight on-chip learning in 40 nm CMOS that consists of a 10k mixed-signal synapse circuit and 100 analog leaky integrate-and-fire (LIF) neuron circuits. The processor has no clock signal except in peripheral circuits for I/O, and the neuron and synapse circuits operate asynchronously in the continuous-time domain, just like biological neurons. We demonstrate an energy efficiency of 6.24-18.7 TOPS/W in a multi-target spike learning task. I. INTRODUCTION Transistor shrinking is approaching its physical limits, so three-dimensional (3D) integration technologies are being studied for next-generation semiconductor devices. With 3D integration technologies, it is expected that new applications can be realized by stacking dies fabricated using different technologies, such as complementary metal-oxide-semiconductor (CMOS), micro-electro-mechanical systems (MEMS), and dynamic random-access memory (DRAM) processes. Such technologies will also reduce the delay, power consumption, and system area needed for communication with other chips. However, a concern is that thermal problems will be more serious than those encountered with single-die integrated circuits. Excessive heat must be considered because heat is a more serious problem in 3D integration than in a single thin die and thus limits the number of stacked layers per volume [1]. To realize highly stacked systems, it is important to develop a highly efficient arithmetic scheme that can avoid thermal problems. In hardware research for machine learning (ML), mixed-signal hardware based on compute-in-memory (CIM) architectures has been proposed to realize high-efficiency application-specific integrated circuits (ASICs) [2]-[11]. CIM architectures are used to reduce the power consumption of multiply-accumulate (MAC) operations. In CIM, MAC operations are carried out using analog currents and voltages, and processors employing CIM architectures have demonstrated high energy efficiency [2]-[8]. Moreover, CIM processors based on resistive random-access memory (ReRAM) have been proposed to achieve even higher energy efficiency [9]-[11]. The CIM approach has been shown to work effectively in the ultra-deep submicron regime [8]. CIM architectures can potentially allow AI processors to process sensor data directly, without analog-to-digital conversion, thereby realizing extremely high-efficiency 3D-integrated intelligent processors for data output from MEMS sensors. However, CIM circuits are more sensitive to fabrication mismatches than conventional digital circuits. On-chip learning can potentially reduce the influence of mismatches [12], but CIM-based hardware is mostly inference hardware with synaptic weights of 1-4 bits, chosen to reduce the footprint and energy consumption of the digital-to-analog converter (DAC); implementing the CIM approach with on-chip learning of synaptic weights exceeding 4 bits remains a challenge. Besides thermal problems, 3D integration has a global clock distribution problem. Because it is difficult to synchronize global operations among several chips using a common clock signal, it is important to select a configuration that does not require synchronization between chips in a 3D stacked circuit. Spiking neural networks (SNNs) have been proposed as an asynchronous operation model.
SNN hardware has already been implemented as digital circuits with a clock signal [13]-[17] and as analog or mixed-signal circuits without any clock signal [18]-[23]. Analog SNN hardware operates in the continuous-time domain without a clock signal, eliminating the power consumption that would be needed for clock-signal distribution. Furthermore, system scaling by chip stacking is easy. With the aim of realizing a component for scalable ML systems using 3D stacking technology, we propose an SNN that satisfies three important criteria: high-efficiency computing with a CIM architecture, asynchronous operation without clock signals, and on-chip learning with synaptic weights exceeding 4 bits. We designed a prototype using a TSMC 40 nm CMOS process that operates in the continuous-time domain, the same as biological neurons. We employed the remote supervised method (ReSuMe) [24] as the supervised algorithm. The remainder of this paper is organized as follows: Section 2 describes the learning algorithm implemented in our circuit. Section 3 describes the proposed circuit and the implementation of the synapse and neuron circuits. Section 4 presents experimental results for the proposed circuit, and Section 5 concludes. II. LEARNING ALGORITHM The remote supervised method (ReSuMe) [24] shown in Fig. 1 is a supervised-learning algorithm for SNNs in which weight updates are based on the ith presynaptic spike train S_pre,i(t), the postsynaptic spike train S_post,j(t) output from the jth neuron, and the target spike train S_tgt,j(t) for the jth neuron. This algorithm can learn multi-target spikes, and can also be applied to various spiking neuron models, including the leaky integrate-and-fire (LIF) [25], Hodgkin-Huxley [26], and Izhikevich [27] neuron models. The algorithm is expressed as dw_ij(t)/dt = [S_tgt,j(t) − S_post,j(t)] [a_d + f_ij(s_ij)], where t is the continuous time, a_d is a non-Hebbian term, and s_ij is the delay between the S_pre,i(t) and S_tgt,j(t) firings (s_ij = t_pre,i − t_tgt,j). The exponential kernel f_ij(s_ij) is f_ij(s_ij) = A_R exp(s_ij/τ_R) for s_ij ≤ 0 (and 0 otherwise), where A_R is the amplitude of long-term potentiation and τ_R is the time constant of the exponential decay. In our circuit, we set A_R = A_+ = A_−. III. PROPOSED CIRCUIT A. CHIP ARCHITECTURE The proposed circuit is implemented based on the compute-in-memory architecture shown in Fig. 2 to achieve high-efficiency MAC operations. This architecture consists of a mixed-signal synapse circuit and an analog leaky integrate-and-fire (LIF) neuron circuit. The synapse and neuron circuits have no clock signal and operate asynchronously in the continuous-time domain, the same as actual neural systems. The neuron-synapse array macro performs a MAC operation when a pre-spike arrives; processor power consumption therefore depends on the frequency of the pre-spike input and the values of the voltage sources. Synaptic weights are stored in localized flip-flops in the synapse circuit, and each synapse outputs an analog current weighted by its synaptic weight. The macro in the fabricated chip consists of the column circuit shown in Fig. 2. Figure 3 shows the architecture of the SNN processor, which consists of a 100×100 mixed-signal synapse array and a 100×1 analog LIF circuit array. Input spikes (pre-spikes) and output spikes (post-spikes) are input and output in parallel using a 7-bit decoder and encoder, respectively. Each decoder has 100 output nodes for pre-spike inputs. Target spikes for supervised learning using ReSuMe are input through a serial-to-parallel converter (S2P).
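Before moving to the circuit details, the Section 2 learning rule can be simulated in software. The sketch below assumes the standard ReSuMe formulation (potentiation at target spikes, depression at post spikes, exponential kernel over the pre-spike history) with the paper's A_R = A+ = A− convention; the constants and spike times are made up:

```python
# Schematic ReSuMe update: at each target spike the weight is potentiated by
# a_d + sum over earlier pre-spikes of A_R * exp((t_pre - t) / tau_R); at each
# post spike it is depressed by the analogous amount.
import math

A_R = 1.0     # kernel amplitude (paper: A_R = A+ = A-)
TAU_R = 5.0   # kernel time constant (arbitrary units)
A_D = 0.05    # non-Hebbian term a_d (not implemented on the chip)

def kernel_sum(pre_spikes, t):
    """Sum of the exponential kernel over pre-spikes that precede time t."""
    return sum(A_R * math.exp((tp - t) / TAU_R) for tp in pre_spikes if tp <= t)

def resume_update(w, pre_spikes, post_spikes, tgt_spikes):
    for t in tgt_spikes:               # target spikes -> positive update
        w += A_D + kernel_sum(pre_spikes, t)
    for t in post_spikes:              # post spikes -> negative update
        w -= A_D + kernel_sum(pre_spikes, t)
    return w

# Made-up spike trains: the post spike fires later than the target, so the net update is positive.
print(resume_update(0.0, pre_spikes=[1.0, 3.0], post_spikes=[9.0], tgt_spikes=[4.0]))
```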
By restricting the operating neuron circuits, the processor can select between a 100-input mode and a 1,000-input mode; the mode is changed by a 1-bit selection signal SL. Subsection III-B describes the method for restricting the neuron circuits. B. NEURON CIRCUIT Figure 4 shows details of the neuron circuit, which consists of a pulse generator (PG), M_lk for leakage, a reset switch, M_ip, M_in, and a membrane capacitor, where voltage V_xrst is the reset voltage of the membrane potential. The PG realizes the threshold processing for generating a spike pulse using an inverter. Bias voltages V_bp and V_bn adjust the threshold voltages of the inverters for threshold processing by restricting the current for charging/discharging the gate capacitance of the next stage. Transistors M_ip and M_in supply a bias current to reduce the time variation of V_x,j(t) induced by leak current from the synapse array. The membrane capacitor C_x,j consists of a MOM capacitor and the parasitic capacitance of the synapse array; its design value is 32.7 fF. Note that the bias voltages V_ip, V_in, V_lk, V_bn, and V_bp and the reset voltage V_xrst are common to all neuron circuits. Three registers in the neuron circuit change the number of synapse circuits per neuron circuit. The first register sets the neuron circuit to active or inactive. Membrane potential V_x,j(t) connects to the next membrane potential V_x,j+1(t) and to the output node of the synapse circuits via switches SW_2 and SW_1. The ON/OFF states of SW_1 and SW_2 are controlled by the second and third registers, respectively. For example, in the case of 100 synapses per neuron, the values of the first, second, and third registers are 1, 1, and 0, respectively. To increase the number of synapse circuits per neuron, the registers of an inactive neuron are set to 0, 0, and 1, and its metal-line-connected membrane capacitor C_x,j is shared with the next neuron. C. SYNAPSE CIRCUIT Figure 5 shows a block diagram of the synapse circuit. The synapse consists of a delay-line array and update signal generator (DLA&USG), eight toggle flip-flops (T-FFs), and a DAC. The DLA&USG generates the update signals for the synaptic weights held in the flip-flops. The DAC outputs an analog current according to the synaptic weight when pre-spike S_pre,i is input. Voltages V_bDAC, V_bUSGA, V_lkn, V_lkp, V_rsn, and V_rsp are analog bias voltages, the roles of which are described below. Because synaptic weights are held in flip-flops in our circuit, if the kernel function were expressed as an analog continuous waveform, an analog-to-digital converter (ADC) would be required because f_ij(s_ij) has an analog value. To avoid using an ADC, we discretize the kernel function into five digital time windows consisting of five digital pulses S_D1(t)-S_D5(t), as shown in Fig. 6(a). With this modification, f_ij(s_ij) and τ_R are respectively expressed as f^D_ij(s_ij) and the sum Σ(q=1..5) T_wq. Pulse widths T_w1-T_w5 are adjusted by V_b1-V_b5, respectively. The non-Hebbian term a_d was not implemented in our circuit. Synaptic weights are varied when a spike pulse of S_tgt,j(t) or S_post,j(t) arrives, and the values of dw_ij/dt and f^D_ij(s_ij) depend on the time-window index q. In the case of a positive update, one is added to the qth flip-flop when S_tgt,j(t) falls within the qth time window. In the case of a negative update, one is added to all flip-flops except the qth, and then one is added to the LSB, when S_post,j(t) falls within the qth time window. Note that negative updates are realized as two's complements.
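The window-indexed update can be mimicked in software as below. The window widths and the pre-spike age are invented, and the mapping of window index q to bit position (and hence to step size 2^q) is one reading of the description above; the modulo-256 wrap mirrors the two's-complement behavior of the T-FF chain:

```python
# Software mimic of the discretized update: the kernel value for time-window q is applied by
# adding 2^q (positive update at a target spike) or its two's complement (negative update at a
# post spike) to the 8-bit weight register, wrapping modulo 256 as the T-FF chain does.
T_W = [1.0, 1.0, 1.0, 1.0, 1.0]   # widths T_w1..T_w5 of the five digital time windows (placeholders)

def window_index(delay):
    """Return zero-based q for a pre-spike 'delay' units in the past, or None outside all windows."""
    edge = 0.0
    for q, width in enumerate(T_W):
        edge += width
        if delay < edge:
            return q
    return None

def update_weight(w, delay, positive):
    q = window_index(delay)
    if q is None:
        return w                  # pre-spike too old: no update
    step = 1 << q                 # adding 1 to the q-th flip-flop == adding 2^q (with carries)
    return (w + step) % 256 if positive else (w - step) % 256   # two's-complement wrap

w = 255                           # weights were initialized to 255 in the learning task
w = update_weight(w, delay=2.4, positive=False)   # post spike lands in window q=2 -> subtract 4
print(w)                          # -> 251
```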
The DLA&USG (Fig. 6) consists of five DL circuits and one USG. The digital time window S_D,q(t), with pulse width T_wq, is output from each DL. The DL includes a transistor biased at V_bias (see Fig. 7(c)). This transistor suppresses the rising slope of V_A generated at the trailing edge of S_in (see Fig. 7(d)); the suppression comes from limiting the current that charges the parasitic capacitance at the drain node of the biased transistor, with V_bias adjusting the slope. Varying the slope changes the time needed to reach the threshold voltage V_invth of an inverter, and as a result T_wq is varied. 3) Flip-flops in Detail Figure 8 shows the details of the T-FF, which is toggled at the trailing edge of an update signal. To achieve asynchronous addition, subtraction, and carry, we employed a circuit comprising a T-FF and an XOR gate. By connecting the XOR to the output stage of a T-FF, the T-FF is toggled when an adjacent lower bit switches from High to Low (carry) or when S_UD,n switches from High to Low, where n is the index of the T-FF. The calculated synaptic weight can be unstable during subtraction if the S_UD,n signals arrive at almost the same time; to avoid this problem, we shifted the timing of S_UD,n so that the signals arrive in order from the MSB to the LSB. The DB consists of seven AND gates, seven OR gates, and one NOT gate, and generates the switching signals for the current sources in the AB. S_1p-S_7p and S_1n-S_7n are connected to PMOS/NMOS switched-current sources (SCSs). The AB consists of a current mirror block (shaded area) and NMOS/PMOS transistors acting as the SCSs. The current mirror block generates the gate voltages for the SCSs. The generated voltages depend on V_DAC, which sets the source-drain current values of M_7p and M_7pb. We can obtain a gate voltage for which the source-drain current of M_6nb is half that of M_7nb by setting the aspect ratio W/L of M_7nb to twice that of M_6nb; the gate voltages corresponding to the current values of the lower bits are generated by the same procedure. Tables 1 and 2 show the W/L ratios of the transistors comprising the AB when the W/L of M_7pb and M_7nb, respectively, are defined as unity. The current ratios in these tables are design values when the source-drain current of M_7pb is defined as unity. D. SUPERVISED LEARNING OPERATION Figure 10 shows the waveforms of each node during ReSuMe learning. The process of supervised learning in the designed circuit is summarized as follows: 1) pre-spike S_pre,i(t) is input and the membrane potential is updated by the weighted synaptic currents, with a stored weight treated as positive if greater than or equal to 127 and negative otherwise. E. FABRICATED CHIP We designed and fabricated the proposed circuit using TSMC 40-nm (1-poly, 8-metal) CMOS technology. Figures 11(a) and (b) show a whole-chip microphotograph and the single-synapse circuit layout, respectively. IV. RESULTS OF CIRCUIT EXPERIMENTS The prototype chip has nodes for observation and experiments, through which we can measure the membrane potential. Figure 12 shows measurement results for synaptic weight versus change in membrane potential when the synapse receiving simultaneous input was changed from #1 to #4. Note that this membrane potential waveform is that of the 100th neuron, obtained through a source follower. The characteristics shown in Fig. 12 are equivalent to DA conversion characteristics and thus should ideally be linear. However, as the figure shows, the characteristics were sigmoidal, nonlinear, and very noisy. Table 3 shows the slopes and intercepts obtained by linear fitting.
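The ideally linear weight-to-current mapping that Fig. 12 probes follows from the binary-weighted mirror sizing described above (halving W/L per bit halves the branch current). A quick numerical check, with the MSB branch current taken as unity:

```python
# Binary-weighted DAC model implied by the current-mirror sizing: each lower bit's branch
# carries half the current of the bit above it, so the output current is proportional
# to the 8-bit weight.
I_MSB = 1.0   # source-drain current of the MSB branch, defined as unity

def dac_current(weight):
    """Output current for an 8-bit weight; bit 7 is the MSB branch."""
    return sum(((weight >> n) & 1) * I_MSB / (1 << (7 - n)) for n in range(8))

print(dac_current(0b10000000))  # -> 1.0    (MSB only)
print(dac_current(0b11000000))  # -> 1.5    (MSB + half-weight branch)
print(dac_current(255))         # -> ~1.992 (all branches on)
```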
The slope for #2 is about twice as large as that for #1, but the slopes for #3 and #4 are not three and four times the slope for #1. This is attributed to the transistor characteristic that the current value decreases as the drain-source voltage decreases. B. MULTI-TARGET SPIKE LEARNING TASK To demonstrate the functionality of high-efficiency on-chip learning, we conducted a multi-target spike learning task. In 100-input mode, one learning period was set to 80 µs (pre-spike train input: 60 µs; wait: 20 µs), and all synaptic weights were initialized to 255 (= (11111111)_2). Pre-spikes were input in order from S_pre,1 to S_pre,100 every 0.6 µs. Triple spikes were used as target spikes, with firing times set to 1,960 ns, 2,930 ns, and 4,420 ns; the target spike train was the same for all neurons. Figure 13 shows the voltage waveforms of S_tgt,100(t), S_post,100(t), and V_x,100(t) during the task. The neuron fires at high frequency when the number of iterations p is unity, as shown in panels (a) and (b). The number of spikes decreases as learning progresses (see panel (b)), but there is nearly no decrease in the case of no learning (see panel (a)). The firing times of S_post,100 almost converged to the firing times of S_tgt,100 after thirty learning cycles. Figure 14 shows the power consumption in the standby, learning, and inference states. Power consumption in the neuron array is low in the standby state because the neuron circuits do not fire. The power consumption of the neuron array and I/O during learning and inference was nearly the same, but that of the synapse array during learning was not; this difference is likely due to DLA&USG activity and T-FF updating. Table 4 shows performance results and a comparison of the proposed processor with conventional SNN hardware and artificial neural network (ANN) hardware. We calculated energy efficiency with 1 MAC defined as 2 OP. Energy efficiency was higher in mixed-signal processors with a CIM architecture than in digital processors. Among these, our prototype processor showed the best energy efficiency for learning operations with 8-bit synaptic weights. These results show that even when using conventional CMOS technology, SNN hardware with on-chip learning can achieve very high energy efficiency when combined with CIM and an asynchronous architecture without a clock signal. V. DISCUSSION CMOS ANN hardware for inference has already achieved a high energy efficiency of more than 600 TOPS/W with MAC operations based on a CIM architecture [29] by limiting the bit-width of the synaptic weights, as shown in Table 4. This energy efficiency is higher than that of ANN hardware using ReRAM [10], [28]. Thus, if we focus only on the efficiency of the MAC operation, there would seem to be no advantage in adopting non-CMOS memory such as ReRAM or phase-change memory (PCM) for synapses in exchange for the risk of higher manufacturing cost and lower yield. However, the physical characteristics of non-CMOS memory, which allow analog information to be input and output as an analog signal, can be a basic element of information processing without an ADC. Such an element is suitable for realizing information-processing devices in which a sensor is directly connected to the information-processing unit. In such a system, the power required for the ADC can be reduced, and highly efficient information processing can be expected when viewing the system as a whole.
As Table 4 shows, the energy efficiency of ASICs with learning is only a few TOPS/W, which is lower than that of ASICs without learning. This is presumably due to the von Neumann-type architecture, in which different blocks are used for memory and for weight updates. In this study, we sacrificed integration density and achieved a high energy efficiency of up to 18.7 TOPS/W during learning by distributing the memory and weight-update circuits in each synapse circuit. This efficiency during learning was higher than the 15.4 TOPS/W during inference because the number of operations was larger than the 2 OP of the MAC operation (see Note c in Table 4), and this operation is executed efficiently by using the time domain. The general flow of learning is to calculate the difference between the target value and the output of the neuron, and then to apply a function to the difference to determine the weight update amount. By expanding the information in the time domain, the difference and the function can be calculated simultaneously using a time-window function. SNNs can naturally handle temporal information and thus are suitable for implementing efficient on-chip learning hardware. Network configurations and learning algorithms that can take advantage of the characteristics of SNNs are still in the exploratory stage. Loihi [15] and SpiNNaker 2 [34] are designed to allow flexibility in the learning algorithm and network configuration. In this study, we limited the learning algorithm to ReSuMe and restricted this flexibility, which resulted in high energy efficiency during learning, as shown in Table 4. This result is one example of highly energy-efficient on-chip learning hardware that can be realized even with a manufacturing process as old as 40-nm CMOS by using a circuit configuration that is specific to a particular application. Notes to Table 4: a) This value was calculated from "Energy per synaptic spike op (min) = 23.6 pJ," shown in Ref. [15], when 1 MAC is 2 OP. b) These values were obtained from the macro, which is the synapse circuit array. c) A learning operation consists of pre-spike × weight and its summation (2 OP), post-spike × time window and its subtraction from the weight (2 OP), and target spike × time window and its addition to the weight (2 OP). One learning operation is thus generally 6 OP, but not always, because the neuron circuit may not fire when there is pre-spike input. d) To increase the range of conductance available in a synapse, multiple PCMs were used for a given magnitude of conductance update. e) Executing 8-bit matrix multiplications from local SRAM in the 16×4 MAC accelerator. VI. CONCLUSION We fabricated prototype SNN hardware with 8-bit synaptic weight on-chip learning based on CIM in TSMC 40-nm CMOS and demonstrated high-efficiency on-chip learning operations using the fabricated chip. The prototype operates in the continuous-time domain, the same as biological neurons, because the neuron and synapse circuits have no clock signal. The architecture, based on CIM and asynchronous operation without a clock signal, showed energy efficiency higher than that of conventional CIM-based SNN hardware. Furthermore, even when the input-output characteristics of the synapses were noisy and nonlinear, the output of the fabricated chip converged to the target signal.
This architecture can contribute to the implementation of highly energy-efficient learning in an SNN processor using conventional CMOS technologies, but the integration density of the neuron and synapse circuits remains low because the processing units are physically replicated rather than reused in a time-division manner. In future studies, we will consider system scaling using stacked dies; because heat dissipation will limit the system size, scaling depends on the energy efficiency of the stacked chips comprising the system. Highly efficient circuit architectures that sacrifice integration on a chip-by-chip basis may therefore be an option for realizing large-scale systems using 3D integration technologies.

APPENDIX A: CALCULATIONAL PROCEDURE FOR ENERGY EFFICIENCY

We explain the calculational procedure for the energy efficiency of our fabricated chip during inference and learning. These energy-efficiency values were calculated from the power consumption of the synapse array excluding the standby power consumption (27.02 µW). The energy consumption was calculated from the power consumption when running an 80 µs operation sequence in which spikes were input into 10,000 synaptic circuits.

A. INFERENCE

Power consumption over inference, including standby, was 43.25 µW.

B. LEARNING

A learning operation consists of pre-spike × weight and its summation (2 OP), post-spike × time window and its subtraction from the weight (2 OP), and target spike × time window and its addition to the weight (2 OP). One learning operation is thus generally 6 OP.
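For concreteness, here is a minimal sketch of the Appendix A arithmetic in Python. The 27.02 µW standby and 43.25 µW inference figures come from the text above; P_LEARN is a hypothetical placeholder chosen only to reproduce the reported 18.7 TOPS/W, since the excerpt does not give the measured learning power:

```python
# Sketch of the Appendix A energy-efficiency calculation.
N_SYNAPSES = 10_000
T_SEQ = 80e-6            # s, one operation sequence
P_STANDBY = 27.02e-6     # W, synapse-array standby power (from the text)

def tops_per_watt(p_total_w, ops_per_synapse):
    """Operations delivered per joule of dynamic energy, in TOPS/W."""
    p_dyn = p_total_w - P_STANDBY
    total_ops = N_SYNAPSES * ops_per_synapse
    return total_ops / (p_dyn * T_SEQ) / 1e12

# Inference: 1 MAC per synapse, with 1 MAC defined as 2 OP.
print(tops_per_watt(43.25e-6, ops_per_synapse=2))   # ~15.4 TOPS/W

# Learning: generally 6 OP per synapse (Note c) in Table 4).
# P_LEARN is hypothetical, chosen to reproduce the reported figure.
P_LEARN = 67.1e-6
print(tops_per_watt(P_LEARN, ops_per_synapse=6))    # ~18.7 TOPS/W
```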
Second language anxiety: Construct, effects, and sources

Abstract: Second language (L2) anxiety is the most studied affective factor in the field of second language acquisition. Numerous studies have been conducted on this emotion from different perspectives over the last few decades. These studies can be classified into three groups. The first group has tried to conceptualize and operationalize L2 anxiety and identify the different components or dimensions of the construct (e.g., Cheng, 2004; Horwitz et al., 1986). The second group has explored the impact of L2 anxiety on various motivational, behavioral, learning, and performance aspects of L2 learning (e.g., Gkonou et al., 2017). Finally, the third group has investigated different sources of L2 anxiety (Papi & Khajavy, 2021). In this manuscript, we will draw on studies from the three strands to present an overview of the state of research on this construct and conclude by discussing major issues with the conceptualization, measurement, and design of studies on L2 anxiety.

Emotions exist to "prepare us with an automatic, very quick, and historically successful response to life's fundamental tasks" (Reeve, 2015, p. 354). These adaptive responses are derived from human cognition about the life situations we experience. A person's perception of achievement can lead to the emotional response of enjoyment, whereas their failures can arouse the emotional response of disappointment. Similarly, a person's perception of safety and security can lead to the emotional response of calmness, whereas the perception of risk can generate fear or anxiety. Whereas feelings such as joy and fear have roots in existing reality, emotions such as hope and anxiety are responses to the anticipation of possible but currently nonexistent situations. In the specific case of anxiety, the cognitions that generate this unpleasant emotion represent the individual's anticipation of negative consequences (e.g., negative judgment, poor evaluation, failure) that may or may not happen immediately or in the near or distant future. The emotional response of anxiety to such anticipations can function as an adaptive mechanism that helps the individual prepare for the anticipated negative situation. When it comes to anxiety in goal pursuits such as language learning, anticipation of certain costs can lead to the arousal of this emotional response and motivate action to minimize this feeling by approaching the goal. At the same time, the anxiety aroused during L2 use or learning could harm the student's quality of experience and performance due to its inhibitory effects on learners' L2 comprehension and use (e.g., Horwitz, 1986; Teimouri et al., 2019).
The anxiety associated with L2 learning, performance, and use situations is commonly known as foreign or second language (L2) anxiety. L2 teachers and practitioners generally see this emotion as an obstacle to language learning. Anxious L2 learners commonly report experiencing tenseness, freezing, trembling, sweating, and palpitations in their L2 classes; underperforming; overstudying; avoiding the L2; forgetting what they mean to say; being distracted and confused in class; and having trouble speaking in the new language (Horwitz, 1986). Some L2 teaching methods, such as Suggestopedia and Community Language Learning, have explicitly focused on reducing anxiety as a central principle of L2 teaching. Krashen (1982) argued that anxiety creates an affective filter that blocks second language acquisition (SLA). This emotion has also been the topic of scholarly research for almost four decades in the field of second language acquisition. Studies on L2 anxiety can generally be classified into three groups. The first group of studies is conceptual and has tried to introduce the notion to the field, examine its different dimensions, and provide methods for its measurement. The second group includes studies that have investigated the effects of anxiety on different L2 outcomes. Finally, the third group contains studies that have explored the potential sources of L2 anxiety. The following sections provide overviews of the three groups of studies and conclude with suggestions for future research on this topic.

Group 1: Conceptualization and Operationalization

Early debates on the concept of anxiety focused on whether this emotion has facilitative or debilitative effects on L2 learning. "Facilitating anxiety motivates the learner to 'fight' the new learning task; it gears the learner emotionally for approach behavior. Debilitating anxiety, in contrast, motivates the learner to flee the new learning task…" (Scovel, 1978, p.
139). In other words, facilitative anxiety is a moderate level of anxiety that motivates the individual to temporarily or permanently remove or ease the source of anxiety, whereas debilitative anxiety is so overwhelming that it can inhibit any adaptive action. Scovel (1978) also made a distinction between trait anxiety, considered a relatively stable personal characteristic, and state anxiety, considered an emotional reaction to specific situations. Gardner (1985) did not specifically deal with what anxiety is, but he considered anxiety to be largely debilitative, made a distinction between French (L2) classroom anxiety and general classroom anxiety, and drew on his previous research to argue that the former is a better predictor of L2 French achievement. His conception of anxiety included measures such as English classroom anxiety, English use anxiety, English test anxiety, and generalized interpersonal anxiety. Horwitz (1986) defined L2 (foreign language) anxiety as "a distinct complex of self-perceptions, beliefs, feelings, and behaviors related to classroom language learning arising from the uniqueness of the language learning process" (p. 128). She attributed the arousal of L2 anxiety to the risk inherent in the individual's uncertainty about the linguistic and sociocultural standards of the new language, the challenge to the individual's self-concept as a competent communicator, and the threat to the perceived authenticity of one's communication due to the individual's relatively immature command of the new language. Horwitz (1986) developed the Foreign Language Classroom Anxiety Scale (FLCAS) with thirty-three items reflective of the common anxiety-related thoughts, feelings, symptoms, and behaviors that students experience in their foreign language class. In a follow-up study, Horwitz et al. (1986) showed that foreign language anxiety is distinct from other forms of anxiety such as communication apprehension, fear of negative evaluation, and trait anxiety. To explore the factors underlying the FLCAS, Aida (1994) submitted data collected using the questionnaire to a factor analysis that yielded four factors reflecting speech anxiety, fear of failing, comfort speaking with native speakers, and negative attitudes toward the foreign language class.

The FLCAS helped streamline research on L2 anxiety by providing a useful tool for researchers to conduct studies and compare results across different contexts and populations. However, due to its bias toward the oral dimension of L2 communication, its broad scope, and the lack of a meaningful theory representing a thorough understanding of the experience of L2 anxiety, researchers have developed new scales with narrower and more theoretically meaningful scopes. These scales either focused on anxiety related to specific L2 skills and dimensions or classified different cognitions, attitudes, feelings, symptoms, and reactions related to anxiety. MacIntyre and Gardner (1994) developed a questionnaire with items that specifically focused on anxiety reactions related to the input (e.g., "I get flustered unless French is spoken very slowly"), processing (e.g., "I feel anxious if the French class seems disorganized"), and output (e.g., "I may know the proper French expression but when I am nervous it just won't come out") stages of language learning. Saito et al. (1999) introduced and developed a scale for measuring foreign language reading anxiety. Not unlike Cheng et al.
(1999), the scale included a mixture of items that addressed anxiety symptoms (e.g., confusion, nervousness, feeling intimidated) and other thoughts, emotions, and preferences that might only be indirectly related to L2 reading anxiety (e.g., translating while reading, enjoying reading, reading difficulty). The researchers did not report a factor analysis that would uncover specific factors underlying these items, leaving the construct validity of the scale open to question. Kim (2000, 2005) developed the Foreign Language Listening Anxiety Scale (FLLAS), which included thirty-three items that fell under two constructs related to the experience of L2 anxiety: lack of confidence in listening (e.g., "I feel confident in my listening skills") and tension and worry in English listening (e.g., nervousness, tenseness, discomfort, confusion), with the latter being more directly related to L2 listening anxiety. Similar to Cheng et al. (1999) and Saito et al. (1999), Kim's scale also included items that seemed only indirectly related to L2 listening anxiety (e.g., "I have difficulty when the environment around me is noisy"). Kimura (2008) reported the results of a factor analysis that yielded three factors underlying Kim's (2000) FLLAS items: emotionality, representing the affective dimension of the anxiety experience (e.g., "My thoughts become jumbled and confused in listening for important information"); worry, representing thoughts that create anxiety for the individual (e.g., "I often get so confused that I cannot remember what I have heard"); and anticipatory fear, representing the experience of anxiety while listening or in anticipation of listening in a foreign language (e.g., "I feel tense when listening to, or imagining myself listening to, a lecture"). The distinctions between these three components were not clear, though; in addition, seven items from the original scale did not load on any factor, suggesting that the items did not create a theoretically meaningful model of L2 listening anxiety. Woodrow (2006) developed a scale for measuring L2 speaking anxiety by focusing on the various situations that cause anxiety inside or outside of the class context. For instance, giving an oral presentation and communicating with native speakers were considered situations that would cause in-class and out-of-class L2 speaking anxiety, respectively. So far, the scales reviewed above do not seem to have a clear focus when it comes to operationalizing L2 anxiety, with scales including items that measure a wide and atheoretical mixture of cognitions, attitudes, reactions, experiences, and situations that are in many cases only indirectly related to L2 anxiety. To avoid the conceptual confusion of the previous L2 anxiety scales, Cheng (2004, 2017) developed theoretically meaningful scales that focused only on the experiential dimensions of L2 anxiety based on Lang's (1971) tripartite framework. These scales also focused on anxiety related to specific L2 skills. Cheng (2004) developed the Second Language Writing Anxiety Inventory (SLWAI), which included twenty-seven items that specifically measured the symptoms associated with L2 writing anxiety, including somatic/physiological (e.g., pounding heart, sweating, trembling, tenseness), cognitive (e.g., mind going blank, worrying, confusion, jumbled thoughts), and behavioral/avoidance symptoms (e.g., avoiding writing in the L2, avoiding L2 writing situations). In a more recent attempt, Cheng (2017) developed brief scales for
measuring anxiety specific to L2 skills, namely L2 speaking anxiety, L2 listening anxiety, L2 writing anxiety, and L2 reading anxiety. The researcher developed a pool of items based on previous studies, the results of a focus-group interview, and piloting of the initial questionnaire, which was administered to 523 learners of English in Taiwan in the main study. The results of an exploratory factor analysis led to the emergence of four skill-specific anxiety scales with items representing the somatic (e.g., "When listening to English, I often feel my heart pounding"), cognitive (e.g., "When listening to English, I often worry that I will miss information"), and behavioral (e.g., "When listening to English, I often give it up easily") dimensions of anxiety. The scales were confirmed in confirmatory factor analyses and showed acceptable psychometric properties such as reliability and discriminant and convergent validity. Overall, different researchers have focused on different dimensions of L2 anxiety. Scovel (1978) explored its debilitative versus facilitative and trait versus state dimensions. Gardner and associates (e.g., Gardner, 1985) focused on the specific situation in which anxiety is experienced, such as English classroom, English use, English test, and generalized interpersonal anxiety. Horwitz (1986) put the focus of her work on developing the Foreign Language Classroom Anxiety Scale, which included a mixture of loosely related thoughts, feelings, symptoms, and behaviors. MacIntyre and Gardner (1994) examined anxiety related to the input, processing, and output stages of L2 learning. Finally, skill-specific scales for measuring anxiety were developed by Cheng (2004, 2017), Saito et al. (1999), Kim (2000, 2005), and Woodrow (2006), among others. This lack of a shared focus of measurement has led to confusion among researchers and practitioners alike (Sudina, 2023). One notable exception is the work produced by Cheng (2004, 2017), who used Lang's (1971) framework of anxiety and rigorous methodological procedures to develop psychometrically valid scales for measuring skill-specific anxiety. These scales not only provide a clear focus on the experience of anxiety, but they also represent the experience in a theoretically meaningful way that distinguishes its somatic, cognitive, and behavioral aspects. Confusing the actual experience of anxiety with the cognitions (e.g., fear of negative evaluation, judgment, and embarrassment; perceived task difficulty) or situations (e.g., taking a test, giving an oral presentation) that can precede its arousal, or with the related emotions (e.g., shame, embarrassment) and cognitions (e.g., "I'm not good at English") that may follow the experience, has caused plenty of confusion in the field and should be avoided. Distinguishing the experiential dimensions of anxiety, on the other hand, can help us understand what that experience feels like for the learner, what contributes to it, what its consequences are, and finally how and where we can intervene to make a positive impact on the learner's experience.

Group 2: The Effects of Anxiety

Anxiety likely affects L2 outcomes through its impact on learners' motivation and learning experience. Studies on the effects of anxiety in L2 learning can therefore be classified into two major groups: the first focuses on the effects of anxiety on learner motivation and learning processes and behavior, and the second examines the effects of anxiety on L2 outcomes.
The first group of studies has led to interesting findings related to the effects of L2 anxiety on students' learning motivation, processes, and behavior. Steinberg and Horwitz (1986) showed that people who were made anxious tended to avoid using their L2 in novel and creative ways. MacIntyre and Gardner (1994) exposed L2 learners to a video camera while they were completing a vocabulary learning task. They found that the induced anxiety adversely affected task performance at the input, processing, and output stages of vocabulary learning. This effect dissipated when the students got used to the camera and were able to partially make up for their performance deficit. Gregersen and Horwitz (2002) found that more anxious learners believed the goal of using the target language to be avoiding mistakes, whereas students with lower levels of anxiety were eager to talk without any concern about making mistakes. In a more recent study, Papi and Khajavy (2021) found that L2 anxiety led to students' vigilant use of the target language, suggesting that anxious students tended to use the target language only if they had to. The debate over the effects of anxiety on L2 outcomes has been an interesting one since the introduction of the concept to the field (Li et al., 2022). The debilitative versus facilitative dilemma was considered especially important until the mid-1980s, when more specialized instruments for measuring L2 anxiety (e.g., Horwitz et al., 1986) were developed (see Gardner, 1985; Scovel, 1978). Earlier studies had shown mixed results, with some showing a negative association between L2 French class anxiety and L2 achievement (Gardner et al., 1976) and others showing positive relationships (Chastain, 1975). The confusion caused by the inconsistent results led MacIntyre (2017) to call this period "the confounding stage" in research on L2 anxiety. Nonetheless, since the introduction of the FLCAS (Horwitz et al., 1986), numerous studies have been conducted to explore the relationship between the new measure of foreign language classroom anxiety and achievement. Three meta-analyses have been conducted to synthesize the results of these studies. In the first published meta-analysis, Teimouri et al. (2019) analyzed ninety-seven published studies conducted between 1985 and 2017 and found a moderate correlation of −.36. The size of this correlation varied as a function of different moderators such as language educational level, target language, achievement measure, and anxiety type. More notably, listening anxiety (r = −.46) and writing anxiety (r = −.41) showed stronger correlations with achievement than reading anxiety (r = −.38) and speaking anxiety (r = −.39) did. In a second meta-analysis involving forty-six studies, Zhang (2019) reported a medium-sized negative correlation (r = −.34) between L2 anxiety and language performance (i.e., course grades and language performance tests), which did not change much across proficiency groups. In addition, listening anxiety showed larger correlations with performance (r = −.53) than reading anxiety (r = −.23) and testing anxiety (r = −.27). Botes et al.'s (2020) meta-analysis only included fifty-nine classroom studies that employed the Horwitz et al.
(1986) FLCAS as the measure of L2 anxiety. The results of the study showed another medium-sized negative correlation (r = −.39) between FLCA and general academic achievement, a value that was stronger for listening (r = −.53) and writing achievement (r = −.44), followed by reading (r = −.34) and speaking achievement (r = −.26). Having become more streamlined, research on the notion of anxiety seems to have led to the general conclusion that anxiety is bad for language learning (e.g., Horwitz, 2017; MacIntyre, 2017). MacIntyre (2017) went so far as to consider the issue one "that can be put to bed" (p. 27), and Horwitz (2017) called the search for facilitative anxiety "a huge step backwards" (p. 39). This claim has its basis in the large number of studies that have provided evidence for the negative relationship between anxiety and L2 outcomes. However, it is based on a narrow definition of anxiety as an emotion that is generated only by difficulties in the process of L2 learning and use. For example, if a student anticipates that, in an oral presentation in class, their peers may laugh at them if they make any mistakes, this anticipation may make them anxious during the presentation, which can keep them from trying novel structures (Steinberg & Horwitz, 1986), negatively affect the input, processing, and output stages of learning (e.g., MacIntyre & Gardner, 1994), or make them avoid using the L2 eagerly (e.g., Gregersen & Horwitz, 2002; Papi & Khajavy, 2021). However, L2 learners do not only have L2-related feelings. In the real world, goal-pursuit anxiety functions as a strong motivational force. The motivational capacity of anxiety is the reason behind creating laws, rules, and regulations in almost every institution. Individuals may often feel anxious about meeting duties and obligations in order to avoid possible negative outcomes, which motivates them to take action and remove the source and experience of anxiety (e.g., Papi et al., 2019). For instance, students might feel anxious about completing an assignment within a certain timeframe even if they are not enthusiastic about the assignment. Employees may feel anxious when they run late, and the anxiety can push them to hurry and make it in time for work. Drivers may feel anxious when seeing the police and avoid speeding. Anxiety is a reality and has a strong motivational force. In line with this argument, Papi and Khajavy (2021) found that for learners who are motivated by oughts and obligations, L2 anxiety can motivate them to remain vigilant in class and use the L2 when they have to, even though this vigilance negatively affected achievement. Papi (2010), Papi and Teimouri (2014), and Tahmouresi and Papi (2021) also found positive associations between L2 anxiety and motivation for students motivated by their ought-to L2 self (representing obligations). Tahmouresi and Papi (2021) found L2 anxiety to positively predict L2 writing motivation, but it negatively predicted L2 writing achievement. Anxiety, therefore, can be an alternative motivational force in the absence of more internal and self-determined sources of motivation. However, due to the inherent risk-taking involved in L2 learning and use, the quality of the behavior motivated by anxiety does not seem to contribute positively to L2 learning and use. According to Papi and Khajavy (2021), "[l]earning a new language, at least to higher levels, might require leaving one's comfort zone, embracing another culture and language, taking risks to use the language and make mistakes, and
developing a new identity" (p.565).In other words, "in the very short term, an anxiety response motivates selfprotective behaviors that deal with an uncomfortable situation even if such actions limit learning and practice opportunities in the longer run" (MacIntyre & Wang, 2022, p. 177).The quantitative effect of anxiety on motivated behavior thus seems to be outweighed by the quality of the behavior that may not be a good fit for learning a new language (Papi, 2018). Group 3: Sources of L2 Anxiety Several empirical and theoretical studies have investigated the sources of L2 anxiety.Before examining these sources, it should be noted that by "source," we do not necessarily mean a causal effect, and readers should be aware that in most cases such a relationship implies a reciprocal relationship between L2 anxiety and other constructs (MacIntyre, 2017).By reviewing the literature, we have divided the sources of L2 anxiety into three categories of linguistic, learner-internal, and learner-external factors. With regard to linguistic sources of anxiety, Sparks and Ganschow (1991) argue that L2 anxiety is mainly the result of difficulties that people experience in their first language (L1) skills (i.e., language aptitude).However, this view has been criticized by other L2 researchers (see MacIntyre, 1995) who believe that many other factors are involved in producing L2 anxiety besides L1 skills.One of these factors is L2 learners' self-perceived language proficiency.Research has consistently found that L2 learners with higher self-perceived language proficiency experience less L2 anxiety (e.g., Botes et al., 2020;Jiang & Dewaele, 2020).Furthermore, it has been found that more anxious L2 learners tend to underestimate their L2 proficiency while less anxious L2 learners tend to overestimate their L2 proficiency (MacIntyre et al., 1997).Actual L2 proficiency has also been found as a predictor of L2 anxiety with people who have higher L2 proficiency/achievement experiencing less L2 anxiety because they have the necessary skills to do the relevant language tasks and activities (e.g., Jiang & Dewaele, 2019, 2020;Jin et al., 2015;Liu, 2006).Among other linguistic factors, multilingualism has been linked to less L2 anxiety levels (Botes et al., 2020;Dewaele, 2007;Thompson & Lee, 2013).Such a link might be related to the fact that multilinguals are more confident about learning new languages, can communicate more effectively due to their prior experience of L2 learning (see Dewaele, 2007), or have higher metalinguistic knowledge, which could help them to decrease L2 anxiety (see Botes et al., 2020;Thompson & Lee, 2013).Furthermore, research has found that multilingualism does not necessarily reduce anxiety unless the multilingual has at least an intermediate proficiency level in the additional language (Thompson & Lee, 2013).Finally, among linguistic factors, frequent use of the L2 has been associated with experiencing less L2 anxiety (Dewaele, 2013;Dewaele & Al-Saraj, 2015;Jiang & Dewaele, 2020).It has been argued that the individuals who use the L2 more frequently have higher selfperceived communicative competence and are more willing to use the L2 in different situations, which in turn reduces L2 anxiety (Jiang & Dewaele, 2020), even though the reverse can also be true.That is, less anxious students might feel more confident about their communicative competence and be willing to use the L2 more frequently (Gregersen & Horwitz, 2002;Papi & Khajavy, 2021).In sum, learners' perceived L2 learning 
competence, whether it comes from their L1 skills, multilingual skills, or actual L2 proficiency, seems to be associated with lower levels of L2 anxiety. This can also be related to the finding of a recent meta-analysis (Zhou et al., 2022) in which a strong meta-analytic correlation (r = −.70) was found between L2 anxiety and self-efficacy. Learner-internal factors have also been reported as predictors of L2 anxiety. Some of these factors are sociobiographical (e.g., gender and age), while others are psychological (e.g., motivation and personality). Sociobiographical factors have not shown very conclusive findings. For example, with regard to the role of gender in L2 anxiety, research has produced mixed findings (see Piniel & Zólyomi, 2022). Some studies have found that females reported higher levels of L2 anxiety (e.g., Khajavy et al., 2018), while other studies have found that males reported being more anxious (Dewaele et al., 2022). Still other studies did not find any significant difference between males' and females' L2 anxiety (Jiang & Dewaele, 2020; Matsuda & Gobel, 2004). A recent meta-analysis by Piniel and Zólyomi (2022) found no statistically significant difference between females and males in terms of their L2 anxiety. In addition, this result was not moderated by factors such as age, geographical area of residence, L2, or major of study. Another sociobiographical factor examined in relation to L2 anxiety is age. As with gender, mixed findings have been reported for age. While some studies have found that older L2 learners experience more L2 anxiety (Dewaele & Al-Saraj, 2015; Onwuegbuzie et al., 1999), other studies have found the opposite (e.g., Arnaiz & Guillen, 2012). It seems that future research is required to systematically examine the role that age plays in L2 anxiety and the possible moderators that can affect this link. Psychological factors can also be sources of L2 anxiety. For example, several studies have found that self-esteem, as a personality trait, can be negatively related to L2 anxiety (Jin et al., 2015; Onwuegbuzie et al., 1999; Young, 1991). Low self-esteem makes L2 learners worry about others' judgments and makes them want to please other people, which can in turn increase their anxiety (see Young, 1991). Competitiveness, another personality characteristic, which refers to L2 learners' tendency to compare themselves to other students, has been reported as a cause of L2 anxiety. Findings about the role of competitiveness as a source of L2 anxiety have been mixed. For example, Bailey (1983) found that competitiveness is a source of L2 anxiety, while Onwuegbuzie et al. (1999) did not find a significant relationship between them. Interestingly, Jin et al. (2015) found that competitiveness was a negative predictor of L2 anxiety, in contrast with previous studies (e.g., Bailey, 1983). Jin et al.
(2015) explained that such a contrast might be related to factors such as the use of different competitiveness scales, study designs, or other intervening variables. Previous research has found that L2 anxiety can be related to learners' L2 motivation (Jiang & Papi, 2022; Papi, 2010; Papi & Khajavy, 2021; Tahmouresi & Papi, 2021; Teimouri, 2017). Individuals motivated by an ought-to L2 self (representing the learner's L2-related duties and obligations) tend to experience more L2 anxiety than individuals motivated by an ideal L2 self (representing one's L2-related hopes and aspirations), because the former group is more prevention-focused and sensitive to the presence or absence of negative outcomes, which naturally provokes anxiety. On the other hand, individuals with an ideal L2 self are more promotion-focused and more concerned with growth, advancement, and positive outcomes, which can even decrease anxiety. L2 learners' mindsets (i.e., individuals' perceptions of their L2 learning ability) can also be sources of L2 anxiety. Consistent with Mindset Theory in general (Dweck & Leggett, 1988), research in the field of applied linguistics has identified two types of mindsets: an L2 growth mindset, which refers to the perception that L2 learning ability can be improved by effort and hard work, and an L2 fixed mindset, which refers to the perception that L2 learning ability is innate and cannot be improved (Mercer & Ryan, 2010). Research has shown that a fixed L2 mindset can be a source of L2 anxiety, while a growth L2 mindset can be a source of positive emotions such as enjoyment (Khajavy et al., 2022; Lou & Noels, 2020; Ozdemir & Papi, 2022). The reason for these findings is that learners with a fixed L2 mindset are more concerned about how they are judged by other people, especially in challenging situations. These perceptions in turn increase their L2 anxiety. On the other hand, learners with a growth mindset see such challenges as opportunities for learning and are less concerned about others' judgments. These perceptions protect them from experiencing L2 anxiety (Lou et al., 2022). Among other factors that have been reported as sources of L2 anxiety, we can point to personality factors. For example, several studies have found that extraversion is a negative predictor of L2 anxiety (e.g., Dewaele, 2013), as extroverts are more willing to take risks and are generally more optimistic than introverts (Dewaele & Al-Saraj, 2015). Another personality predictor of L2 anxiety is neuroticism (versus emotional stability), with people scoring higher on neuroticism experiencing more L2 anxiety (Dewaele, 2013; Dewaele & Al-Saraj, 2015). In other words, L2 learners who are naturally more emotionally stable experience less L2 anxiety. Moreover, among lower-order personality factors, trait emotional intelligence can be a negative predictor of L2 anxiety (Shao et al., 2013). The reason is that learners with higher trait emotional intelligence "are better able to control their own emotions and to gauge the emotional reactions of other people, allowing smoother interpersonal relationships, resulting in lower anxiety levels" (Jin & Dewaele, 2018, p.
151). Another lower-order personality factor related to L2 anxiety is perfectionism, for which research has found that more perfectionistic L2 learners suffer more from L2 anxiety (e.g., Gregersen & Horwitz, 2002). One point that should be taken into account is that perfectionism can be both adaptive and maladaptive. For example, the personal-standards aspect of perfectionism (i.e., following high standards that are motivating) has been a negative predictor of L2 anxiety, while the concern-over-mistakes aspect has been a positive predictor (Barabadi & Khajavy, 2020). Learners' regulatory focus and regulatory mode have also been found to predict L2 anxiety. Jiang and Papi (2022) found that learners' regulatory focus on growth and accomplishments strongly and negatively predicted their L2 anxiety. Teimouri et al. (2022) found that learners' regulatory mode of assessment, representing preoccupation with the accuracy and suitability of L2 output, positively predicted their L2 anxiety, whereas their locomotion mode, representing preoccupation with the act of communication, negatively predicted it. Learner-external factors constitute the third source of L2 anxiety. For example, a supportive classroom environment, in which teachers help students and classmates support each other, can reduce L2 anxiety (Khajavy et al., 2018). A harsh manner of error correction by L2 teachers (Mak, 2011; Young, 1991), as well as strictness, younger age, and limited use of the L2 in class (Dewaele et al., 2019), have been reported as sources of L2 anxiety. In addition to teachers' characteristics and instruction, positive attitudes toward L2 teachers can be linked to less L2 anxiety (Jiang & Dewaele, 2019). L2 anxiety increases when students have to speak in the L2 in front of the class or when they have to do tasks they are not familiar with (Young, 1991). Finally, students who perceived themselves as having a higher relative standing than their classmates experienced less L2 anxiety (Dewaele & Dewaele, 2017; Jiang & Dewaele, 2019).

Conclusions

Even though MacIntyre (2017) calls the early period of research on L2 anxiety "the confounded approach," we argue that we still have not made our way entirely out of that period. Definitions and measurements of anxiety are still divergent and all over the place (see Cheng, 2004, 2017). This was evident in the meta-analytic studies conducted on the topic. Teimouri et al.'s (2019) meta-analysis included studies that employed twenty-five different questionnaires for measuring anxiety. The sheer number of questionnaires used makes any conclusions drawn from such an analysis questionable. Half of the studies included in Teimouri et al. (2019) used the FLCAS, and Botes et al.
(2020) only included studies that used the FLCAS. As discussed above, Horwitz's (1986) FLCAS itself included thirty-three items representing a mixture of cognitions, attitudes, feelings, reactions, and behaviors that may tap into constructs other than anxiety. As Sparks and Patton (2013) argued, the FLCAS might be a better measure of students' perceived L2 competence than of their language learning anxiety (see also Teimouri et al., 2019). The FLCAS includes items ranging from worry and nervousness to self-confidence, word-by-word translation, tests, and being distracted in class. Aida (1994) factor-analyzed the FLCAS items and found four conceptually distinct factors: speech anxiety, fear of failing, comfort in speaking with native speakers, and negative attitudes toward the foreign language class. In another study, Ozdemir and Papi (2021) used twenty-two items in a factor analysis that led to the emergence of two factors representing oral English communication anxiety and English-speaking self-confidence. In South Korea, Park (2014) also found two factors underlying the FLCAS, which he labeled communication apprehension and understanding, and communication apprehension and confidence. Given the lack of a valid theoretical basis for L2 anxiety in the FLCAS and the other scales used in L2 anxiety studies (e.g., Saito et al., 1999; see Cheng, 2004, 2017), the data used in these meta-analyses, and thereby the conclusions drawn, may not be considered valid. This divergent and unprincipled representation reflects Horwitz et al.'s (1986) broad definition of L2 anxiety as "a distinct complex of self-perceptions, beliefs, feelings, and behaviors related to classroom language learning" (p. 128).

Although it has been helpful to develop L2-specific measures of anxiety such as the FLCAS, this should not prevent researchers from exploring other types of anxiety that might affect L2 learning processes and outcomes. For instance, L2 classroom anxiety can be aroused in certain classrooms and negatively affect the learner's experience in that context; L2 task anxiety can be related to specific L2 tasks (e.g., an oral class presentation); and goal-pursuit anxiety can be generated in response to the costs associated with not meeting certain duties and obligations and can lead to motivated action (e.g., Tahmouresi & Papi, 2021). Certain learners are in fact motivated only through the anxiety that such duties and obligations produce, even though the anxiety may not harm the quality of their L2 learning and performance. More recently, a complex dynamic perspective on exploring L2 anxiety has become popular (e.g., Gregersen et al., 2014). The trend is motivated by research on complex dynamic systems theory (CDST) and has shown that anxiety is dynamic and complex. Whereas the approach is interesting from a methodological standpoint, the dynamic and complex nature of anxiety is common sense, and trying to prove the obvious may not help push the field forward. This research approach can be more informative, though, if researchers try not only to simplify the complexity of L2 anxiety but also to identify the sources of its dynamicity, based on which appropriate interventions can be designed for the effective management of student anxiety.
Spatial Interfaces and Interactive 3D Environments for Immersive Musical Performances

The power of interactive 3D graphics, immersive displays, and spatial interfaces is still under-explored in domains where the main target is to enhance creativity and emotional experiences. This article presents a set of works that attempts to extend the frontiers of music creation as well as the experience of audiences attending digital performances. The goal is to connect sounds to interactive 3D graphics that musicians can interact with and the audience can observe.

Introduction

Interactive 3D graphics, immersive displays, and spatial interfaces have shown undeniable benefits in a number of fields. In particular, these technologies have been extensively used in industry, where real-time modification of virtual mockups and immersive visualization allow for the optimization of design cycles. On the other hand, the power of such technologies and interfaces is still under-explored in domains where the main target is to enhance creativity and emotional experiences. This article presents a set of works we conducted with the final goal of extending the frontiers of music creation, as well as the experience of audiences attending digital performances.

Our general approach is to connect sounds to interactive 3D graphics with which musicians can interact, and which the audience can observe. For example, imagine that a musician faces a huge stereoscopic screen where large composite 3D objects represent musical structures composed of multiple sounds. She can navigate around and interact with these structures. By looking at the appearance of the visual objects, she can easily infer which sound is associated with which visual object. She can select an object and move it to specific places to modify the related sound. For example, by bringing the object closer to her, she increases the amplitude of the sound. By sliding it through a dedicated tool, she modifies its pitch. For any modification, the visual appearance of the object is modified accordingly. Now, she wants to play with other musicians, co-located in the same space or at a distance. She follows what the others are doing just by looking at their audio-visual 3D objects and the virtual tools they use. She can prepare musical structures for them to play, too. During all of this, the audience benefits from rich visual feedback that allows them to follow what the musicians are doing.
We are convinced that such interactive 3D environments and spatial interfaces may serve the purposes of expressiveness, creativity, and rich user experiences. Compared to more traditional digital music performances, they open new opportunities for music creation. On the other hand, the use of these technologies also opens new research questions that need to be tackled with great care. Indeed, the final user experience depends on the efficiency of every level of the musical application, from purely technological considerations such as controller latency, through the design of the best-suited interaction techniques, to human-factor aspects such as the choice of audio-visual mappings. To address these questions, we defined three circles for interactive 3D musical performances. The first circle is dedicated to the musician, who needs to interact with the sounds in a very precise and expressive way. In the second circle, we take into account the band; the challenge there is to foster relevant collaborations between the musicians. Finally, the audience is part of a third circle; its members should experience rich immersion in the performance. We target advanced interactions between each of the circles on the one side and the audio-visual content on the other, both at the input and output levels. Of course, strong interactions also exist between the circles themselves. For example, depending on the actions of a musician, the band reacts, which in turn has an impact on the audience.

First circle: The musician

The first circle connects the musician with his or her instrument. With acoustic instruments, musicians perform gestures which generate (excite) or modulate sounds. The energy of gestures is mechanically converted into vibrations and modulations. With Digital Musical Instruments (DMIs), this physical link is broken and needs to be rebuilt in order to restore energy transfer to and from the instrument. This also means that the mappings between gestures and changes in the sound can be defined freely. The amount of change in the music is not bound to the energy of the gesture; thus a single gesture may have any impact on any aspect of the sound, and even on multiple sounds at once. To create and modulate sounds with DMIs, the standard approach is to control each of the sound parameters by way of dedicated devices (e.g., mixers and MIDI controllers). Virtual approaches have also been proposed where sounds are controlled by way of virtual sliders and knobs operated by a standard pointing or touch device. Contrary to hardware sensors, these virtual components can be dynamically created and adapted in order to facilitate the control of multiple sound processes.

To extend the possibilities of standard approaches, we have explored the control of sounds by way of 3D graphical representations we call 3D reactive widgets. These audio-visual objects provide some of the feedback on the sound that is lost with digital instruments and allow sounds to be manipulated through adapted 3D interaction techniques.
They provide visual feedback on the values of sound parameters. To determine the relevant mappings between the sounds and their visual representations, we conducted a set of psychometric experiments [1]. For example, these experiments showed that users' preferences followed physical principles, such as having the volume mapped to the size of the widget, and semantic ones, such as mapping the pitch to the color brightness of the widget. They also showed that multiple audio-visual mappings can be combined on a single 3D reactive widget without degrading perception, for example to allow users to visually perceive the values of both the volume and pitch parameters of an audio loop simultaneously.

Manipulating a sound then amounts to manipulating the appearance of the associated widget, making the musical interaction more transparent to the user. We designed graphical modulation tools called Tunnels [3] for changing the appearance of the 3D reactive widgets, and consequently the sound they embed. Each Tunnel displays a scale of values, discrete or continuous, for one or several graphical parameters. Figure 3 (left), for example, shows three Tunnels. The one on top changes the color along a continuous scale, which is mapped to the pitch of the sound. The one in the middle changes the size along a discrete scale, which is mapped to the volume. The one on the bottom changes both the pitch along a discrete scale and the volume along a continuous scale. Tunnels behave like virtual sliders. When a musician drags a 3D reactive widget inside a Tunnel with a color scale ranging from dark red to light blue, the color of the widget is set accordingly, consequently modifying the pitch of the associated sound. Research on musical interfaces has demonstrated the importance of one-to-many mappings [7], where one gesture controls several sound parameters, which leads to more expressive instruments. The Tunnels allow several sound parameters to be controlled at once with complex scales, while at the same time providing visual feedback on the value of each parameter separately.

More than 2D graphical interfaces, 3D user interfaces are well suited to representing, navigating, and interacting in scenes with complex 3D structures. We take advantage of this to extend the musical technique of live-looping with the "Drile" instrument [2]. The hierarchical live-looping technique used in "Drile" allows musicians to build and manipulate complex tree structures of musical loops. Figure 1 shows a musician interacting with two musical trees.

However, 3D selection, manipulation, and navigation techniques and devices need to be adapted for expressive musical interaction. To that end, we developed Piivert [?], a 3D interaction device and a set of techniques that take these specific requirements into account. Piivert is depicted in Figure 3.
First, it divides interaction according to well-known categories of instrumental gestures [6] and their temporal and physical constraints. Excitation gestures, which generate sound, require low latency and are therefore performed on Piivert using pressure sensors located below each finger. These also provide the passive haptic feedback needed for precise instantaneous gestures (taps, rolls, and so on). In contrast, modulation (changing the sound) and selection (choosing part of the instrument) gestures are done through the 3D interface, respectively with the Tunnels and the virtual ray technique. To provide additional feedback to the musician, we extended the standard virtual ray metaphor by modifying the appearance of the ray according to the amount of energy sent to excite a reactive widget when hitting or pressing the pressure sensors.

The results of a study we conducted suggest that Piivert, by separating excitation and selection gestures, increases temporal accuracy and reduces the error rate in sound-playing tasks in an immersive environment.

In order to perform high-level commands on the instrument while respecting the temporal constraints of musical gestures, Piivert provides a vocabulary of percussion gestures. For example, flams (two taps in a fast sequence) with different fingers and in different orders can be used to start/stop the recording of a loop, delete it, or activate different playing modes for the other fingers. Using both hands and gestures such as flams (two fingers) and rolls (three or more fingers), a large set of commands can be triggered while preserving normal playing of notes and chords with individual or simultaneous finger hits.

Second circle: The band

Figure 4: Left: Drile played by two musicians on each side of a semi-transparent 3D screen. The musician on the other side of the screen has selected a reactive widget using virtual rays from both hands. Right: Two musicians interact through the Reflets system. One musician uses a 3D interface to grab and process the sound of the guitar played by the musician reflected in the mirror.

The second circle connects the instrument and musician to the other musicians of the band. In acoustic orchestras, non-verbal communication between musicians allows them to synchronize their actions, follow a musical structure, exchange, and improvise together. With Digital Orchestras (DOs), there is a loss of awareness of what other musicians are playing, making synchronization and exchanges more difficult.

For instance, Figure 4 depicts the Drile instrument being played by two musicians, using a two-sided semi-transparent screen. An optical combiner is placed at a 45-degree angle, with projector screens below and above it, forming a Z. Each side of the combiner only reflects one projection, therefore displaying the instrument only from the corresponding musician's point of view. With this setup, musicians can both perceive the virtual components of the instruments and see each other directly.
3D user interfaces also open new possibilities for musical collaboration in DOs. For example, musicians may cooperate on the same musical processes and parameters by interacting with the same 3D reactive widget placed in a shared virtual space. In Drile, for instance, an expert musician may prepare loops that they then pass on to other musicians. Furthermore, different interaction techniques and/or access to the musical structure can be given to the musicians depending on their expertise. In Drile again, expert musicians may access the whole tree of loops, while beginners may only access the higher-level nodes, which require less complex gestures in order to produce musically satisfying results.

With the Reflets project [5], we push the collaboration further. As depicted in Figure 4, a large vertical optical combiner is placed between the musicians of the band, combining the spaces on each side of it. Musicians therefore perceive their reflections next to or overlapping the musicians on the other side. Reflets enables collaboration with both physical and virtual instruments. Figure 4 shows a scenario with a guitarist and another musician playing a gestural controller. Short loops from the guitar can be grabbed by the other musician simply by reaching through the reflection of the guitar, and manipulated through gestures within a control box. With Reflets, the 3D interface provides both visual feedback and novel collaboration opportunities, while preserving non-verbal communication between the musicians. Various other scenarios for collaboration were explored during workshops with musicians, dancers, and circus artists from the Bristol artistic scene, leading to public performances.

Third circle: The audience

The third circle adds the spectators to the digital performance equation. In performances with acoustic instruments, the visual feedback, such as musicians' instrumental gestures but also their general body movements, has been shown to have a strong impact on the emotion perceived and felt by the audience. With DMIs, this visual component is greatly impaired. Due to the variety of physical interfaces and sound synthesis/processing techniques, it is very hard for the audience to perceive the relation between musicians' gestures and the musical result. Many DMIs also feature automated sound processes, so that the musical result does not depend only on the gestures performed. Finally, the mappings between sensor values and sound parameter values can be very complex, with changes in scale and cardinality. The familiarity that spectators have with acoustic instruments they have played or seen played before, and with the physical principles they experience in everyday life, no longer exists with digital ones. This leads to the well-known issue of liveness: not perceiving the engagement of musicians with DMIs, i.e., how much they are in control of the music being played, may degrade the experience spectators have during performances.

Our approach is to augment the instrument from the audience's point of view, while preserving the interface designed for the musician's expression. With the Rouages project [4], depicted in Figure 5, we propose to reveal the mechanisms of DMIs using an augmented reality display by: (i) amplifying gestures with virtual extensions of the sensors, (ii) representing the sound processes and the amount of control they require, and (iii) revealing the links between the sensors and the sound processes.
The feedback from audience members at demonstrations and public performances was generally positive, with spectators commenting that they could more easily understand what was happening in the instrument and what the actual impact of the musicians' gestures was. In addition, the results of a study suggest a positive effect on audience perception. We specifically designed DMIs that exhibited commonly found issues. We showed videos of performances with these DMIs, with and without visual augmentations, to participants. We then asked them to rate the perceived control and their confidence in their rating. Augmentations had a significant positive impact on the rating of perceived control when they represented changes in scale or in nature between gestures and the resulting changes in the sound, for example when a hit gesture triggers a continuous change of pitch. Audience members were also more confident in their ratings when the changes were partly automated and partly made by the musicians, meaning that they better perceived the exact impact of the musicians.

In order to be used in actual performances, these 3D augmentations need to be perceived consistently by all members of the audience, no matter their position in front of the stage. With Reflets, we propose a novel mixed-reality display that exploits the specific configuration of performances. It relies on spectators revealing the augmentations on the stage side of the optical combiner by intersecting them with their bodies or props from the other side. During a performance, they may therefore explore the inner mechanisms of the instruments being played. Because of the flat optical combiner, 3D content revealed by one spectator is visible and appears consistently for all spectators. Figure 5 shows a spectator using large white panels to reveal the augmentations of a DMI: extensions of the sensors and representations of a loop and a sound sample.

In contrast to many DMIs, 3D virtual instruments such as Drile already provide visual feedback useful to the audience on the links between gestures and sound parameters, for example through graphical tools such as Piivert and the Tunnels. However, the scenography of performances with these instruments needs to fulfill a number of requirements, such as musician immersion, audience immersion, musician visibility, audience visibility, and continuity between physical gestures and virtual tools. For example, the same screen cannot be used for both the musician and the spectators, as the rendered perspective is adjusted for the musician as he moves and therefore does not match the average audience viewing position. To cope with this issue, we can set up two separate screens which mark out the virtual space. The musician's screen renders the 3D environment with stereoscopy and head-tracking, providing correct depth perception of the instrument. The audience's screen renders the scene from the side, from a point of view at the center of the spectators. The audience thus perceives the physical musician, his gestures, and the virtual rays that he manipulates to interact with the 3D musical structures.
Conclusion

Interactive 3D environments and spatial interfaces open up very interesting opportunities for musical performances. They offer novel playgrounds to musicians, who can explore new dimensions in music creation. They also favor the emergence of interactive installations where audiences can experience new forms of performances. Exploring new directions in immersive musical performances is also fruitful for research in spatial interfaces: it feeds challenging research questions that can find interesting applications outside the scope of music.

Figure 2: Interactions occur between the circles, and between the circles and the instrument.

Figure 3: Left: Audio/visual Tunnels dedicated to sound modulations: color/pitch (top), size/volume (middle) and combined pitch/volume (bottom), with a reactive widget being manipulated. Right: The Piivert input device with markers for 3D interaction and pressure sensors for musical gestures.

Figure 5: Left: A Digital Musical Instrument is augmented by the Rouages system, revealing its mechanisms. Right: Reflets allows spectators to explore these mechanisms through a large-scale semi-transparent mirror.
A GRADIENT-REGION CONSTRAINED LEVEL SET METHOD FOR AUTONOMOUS ROCK DETECTION FROM MARS ROVER IMAGES: Rocks are one of the major Martian surface features and yield significant information about the relevant geological processes and the exploration for life. However, autonomous Martian rock detection is still a challenging task due to appearance similar to the background and to view and illumination changes. Therefore, this paper presents a gradient-region constrained level set method for automatic Martian rock extraction from Mars rover images. In our method, the evolution function of the level set consists of an internal energy term, which penalizes the deviation of the level set function from a signed distance function, and an external energy term, in which gradient-based information is integrated with locally adaptive region-based information to robustly drive the motion of the zero-level set toward object boundaries, even in images with non-uniform grey scale. The resulting evolution of the level set function is based on the minimisation of the overall energy functional using the standard gradient descent method. As a result, those detected Martian surface regions that are most likely to yield valuable scientific discoveries can be further analysed based on two-dimensional shape characterisation. To evaluate the performance of the proposed method, experiments were performed on Mars rover images under various terrain and illumination conditions. Results demonstrate that the proposed method is robust and efficient for automatically detecting both small-scale and large-scale rocks on Martian surfaces.

INTRODUCTION

Currently, a new round of deep space exploration is booming, and the leading countries and organizations have initiated several deep space exploration missions in recent years, such as the National Aeronautics and Space Administration (NASA)'s Lunar Reconnaissance Orbiter (LRO), Lunar Crater Observation and Sensing Satellite (LCROSS), Mars Global Surveyor, and Mars Odyssey, the European Space Agency (ESA)'s Mars Express, China's Lunar Exploration Project, and Japan's SELenological and Engineering Explorer (SELENE), which have provided a considerable amount of reliable data and contributed to widespread research interest. For example, the acquired high-resolution satellite imagery from these orbiters was used for high-resolution imaging and mineralogical mapping of the surface (Bibring et al., 2006), for radar sounding of the subsurface structure down to the permafrost (Picardi et al., 2005), to generate the gravity model (Smith et al., 1993), to invert the heat flow activity over the Martian surface (Abramov and Kring, 2005), etc. The weather and climate conditions on Mars can be interpreted and analysed based on the observed atmospheric circulation and composition (McCleese et al., 2007). Spectrometers and thermal imagers can be used to detect evidence of past or present water and ice (Michalski et al., 2013), as well as to study the planet's geology and radiation environment.
With the development of deep space exploration technology, the requirement that both landers and rovers be capable of independently and autonomously analysing information and selecting valuable scientific data is increasing. To date, Beagle 2, Phoenix, Spirit, Opportunity, Curiosity, Jade Rabbit and others have been designed and landed on terrestrial bodies, such as the Moon or Mars, to perform exobiology and geochemistry research, and even sample return missions. In the future, deep space exploration missions will be able to collect more sensor data than can be transmitted to Earth. Under these circumstances, autonomous in-situ scientific data analysis enables a major increase in scientifically valuable data return without heavy downlink loads or significant time delays.

Mars, one of the planets in the solar system, exhibits similar characteristics to Earth and has long been a hotspot in deep space exploration. For Mars exploration, it is highly desirable to analyse the imagery data to distinguish objects (e.g., rocks, gravel or organisms) from the background in images acquired by the Mars exploration rover (MER). Rocks are one of the major Martian surface features, and their distribution and physical properties can provide crucial information about the planetary surface for many applications, ranging from hazard avoidance and robotic route planning to further geological analysis. First, rocks are ideal cross-site tie points for vision-based rover localization and navigation. Second, for many sites of scientific interest on Mars, the rock distribution is dense enough to create a non-negligible landing or rover failure probability; knowledge of the position and distribution of rocks can help guarantee safe navigation and increase the accessible surface area. Third, according to the type and distribution of the rocks, such as sedimentary or igneous, the topographic and physical characteristics (e.g., interior conditions, surface conditions or atmospheric conditions) of the environment around the landing site can be investigated. What the regions were like when the rocks were formed and deposited, and the effects of climate or weathering, can then be deduced. Consequently, independent and autonomous rock detection and analysis can help to guarantee safe rover navigation and achieve scientific and engineering goals.

The rock segmentation procedure plays a crucial role in the success of the Mars exploration rover mission and its scientific studies. In recent years, numerous studies on rock segmentation and detection from Mars images have been carried out. Gor et al. (2001) proposed an unsupervised hierarchical framework, where intensity information was used to detect small rocks and range information to detect large rocks, for autonomous rock detection on Martian terrain. Fox et al. (2002) segmented the rocks from the background based on intensity and height and classified the shape and other geologic characteristics of rocks from two-dimensional photographic images and three-dimensional stereographically produced data. Castano et al. (2004) constructed an image pyramid model for extracting rocks at different scales from different levels, where at every level an edge detector and an edge walker were used to find closed shapes as rocks. Li et al.
(2007) extracted and modelled rocks from three-dimensional ground points generated by stereo image matching as cross-site tie points for long-range autonomous Mars rover localization. Thompson and Castano (2007) compared the performance of seven existing rock detection algorithms on Mars Exploration Rover imagery, terrestrial images from analog environments, and synthetic images from a Mars terrain simulator. Dunlop et al. (2007) incorporated local-scale, object-scale and scene-scale attributes into a learned rock appearance-based model for Martian rock detection and segmentation. Matthies et al. (2008) conducted stereo-based rock detection building on the surface plane fit approach for landing hazard detection. Song (2010) used texture-based image segmentation and an edge-flow driven active contour for automated rock segmentation from Mars exploration rover imagery. Di et al. (2013) adopted a mean-shift segmentation algorithm to generate a set of homogeneous objects and combined 3D point clouds derived from a pair of intensity images to extract both small and large rock candidates. Wang et al. (2015) investigated the imagery characteristics of the Martian surface and modelled the interaction between two pixels of an image for distinguishing foreground rocks from background information to keep the rover navigating safely. Xiao et al. (2017) presented a new autonomous rock detection approach based on homogeneous region-level intensity information and spatial layout. Xiao et al. (2018) reconstructed background information using sparse representation and implemented a threshold segmentation on an enhanced contrast map to precisely detect rocks for a Mars rover.

Rocks in the Martian scene exhibit significant differences in morphology, and the image intensity varies remarkably with the illumination, which poses great challenges for automatically detecting these rocks. To address these challenges, in this paper we develop a gradient-region constrained level set image segmentation method based on Mars rover images. Moreover, the shape characterisation of the extracted rocks is further analysed for studying their geological origins and history. The rest of this paper is organized as follows. Section 2 describes the proposed method in detail. Section 3 presents the experimental results and analysis for evaluating the proposed method. This paper concludes with a discussion of future research considerations in Section 4.

Principles of level set method

The level set method proposed by Osher and Sethian (1988) is an effective implicit representation for evolving the motion of curves or surfaces in two-dimensional (2D) or three-dimensional (3D) space, and it has been successfully applied to image segmentation problems since it allows for automatic changes of topology, such as merging and breaking. For image segmentation, active contours implemented via level set methods can evolve from an initial position to the desired features, such as the object boundaries, in the direction normal to the active contours, subject to constraints derived from the images. Let Ω denote the image domain and R the set of real numbers. The goal of level set-based image segmentation is to separate the whole image domain Ω by an active contour C, where C is represented implicitly as the zero level set of a function φ(x, y, t), so that the evolving contour satisfies

C(t) = \{(x, y) \in \Omega : \phi(x, y, t) = 0\}.    (1)

Fig. 1 shows a curve given by the zero-level set of a level set function φ. By defining a different energy term to represent information within the image domain, the evolving contour can change flexibly according to varying purposes. The existing level set methods can be generally divided into two categories: the edge-based models (Caselles et al., 1997) and the region-based models (Chan and Vese, 2001).

The geodesic active contour (GAC) model proposed by Caselles et al. (1997) is a typical edge-based model and is solved by minimizing an energy functional of the form

E_{GAC}(C) = \int_0^{L(C)} g(|\nabla I(C(s))|) \, ds,    (4)

where C is a curve parameterized by the arc-length s, ds denotes the arc-length element, and g is an edge indicator function that decreases with the image gradient magnitude (Wang et al., 2014). Consequently, the gradient descent flow can be derived from the Euler-Lagrange equation of (4), as shown in Eq. (5):

\frac{\partial \phi}{\partial t} = g |\nabla\phi| \left( \mathrm{div}\!\left(\frac{\nabla\phi}{|\nabla\phi|}\right) + v \right) + \nabla g \cdot \nabla\phi,    (5)

where div denotes the divergence operator and v denotes a constant coefficient; the edge indicator g is mainly used to stop the curve at object boundaries with high gradient values. For more details, please refer to (Caselles et al., 1997).

The region-based models take the region information into account. For example, the CV model (Chan and Vese, 2001) establishes the energy function in the frame of the Mumford-Shah functional (Mumford and Shah, 1985) for segmentation:

E_{CV}(c_1, c_2, C) = \lambda_1 \int_{\mathrm{inside}(C)} |I(x) - c_1|^2 \, dx + \lambda_2 \int_{\mathrm{outside}(C)} |I(x) - c_2|^2 \, dx + \mu \cdot \mathrm{Length}(C),

where c_1 and c_2 are the mean intensities inside and outside C, and λ_1, λ_2 and µ are positive constants. For more details, please refer to (Chan and Vese, 2001).
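To make the GAC flow in Eq. (5) concrete, here is a minimal finite-difference sketch in Python. It is illustrative only, not the authors' implementation: the explicit time stepping, the Gaussian pre-smoothing scale, and the balloon-force value v are all assumptions.

```python
import numpy as np
from scipy import ndimage

def edge_indicator(image, sigma=1.5):
    # g = 1 / (1 + |grad(G_sigma * I)|^2): small near strong edges
    smoothed = ndimage.gaussian_filter(image.astype(float), sigma)
    gy, gx = np.gradient(smoothed)
    return 1.0 / (1.0 + gx**2 + gy**2)

def gac_step(phi, g, v=1.0, dt=0.1, eps=1e-8):
    # One explicit step of: phi_t = g*|grad phi|*(curvature + v) + grad g . grad phi
    gy_phi, gx_phi = np.gradient(phi)
    norm = np.sqrt(gx_phi**2 + gy_phi**2) + eps
    # curvature = div(grad phi / |grad phi|)
    curvature = (np.gradient(gx_phi / norm, axis=1) +
                 np.gradient(gy_phi / norm, axis=0))
    gy_g, gx_g = np.gradient(g)
    dphi = g * norm * (curvature + v) + gx_g * gx_phi + gy_g * gy_phi
    return phi + dt * dphi
```

After repeated gac_step calls, the zero level set of phi traces the object boundary; the gradient-region model described next augments this edge-driven flow with regularization and region terms.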
Gradient-region constrained level set model

As mentioned earlier, the conventional CV model only introduces global region information into the evolution function, without using the edge information, which results in inaccurate object boundary detection, especially for images with non-uniform gray scales, while the typical GAC model often fails at ambiguous rims. Therefore, we adopt a gradient-region constrained level set image segmentation method, integrating gradient information with region information, for automatically extracting Martian surface rocks.

To introduce both the region information and the gradient information into the evolution function, the energy function is constructed as

E(\phi) = E_{int}(\phi) + E_{edge}(\phi) + E_{region}(\phi).    (8)

In order to avoid re-initialization during the evolution, the internal energy term E_{int}(\phi) is used to penalize the deviation of the level set function from a signed distance function (Li et al., 2005), as defined in Eq. (9):

E_{int}(\phi) = \mu \int_\Omega \frac{1}{2} \left( |\nabla\phi| - 1 \right)^2 dx \, dy,    (9)

where µ denotes a constant. Then, the external energy term consists of two parts: the gradient-based energy term E_{edge}(\phi) (Li et al., 2005) and the region-based energy term E_{region}(\phi) (Li et al., 2007), as defined in Eq. (10) and Eq. (11):

E_{edge}(\phi) = \lambda \int_\Omega g \, \delta(\phi) |\nabla\phi| \, dx \, dy + \nu \int_\Omega g \, H(-\phi) \, dx \, dy,    (10)

E_{region}(\phi) = \sum_{i=1}^{2} \lambda_i \int_\Omega \left[ \int_\Omega K_\sigma(x - y) \, |I(y) - f_i(x)|^2 \, M_i(\phi(y)) \, dy \right] dx,    (11)

where H is the Heaviside function, δ is the Dirac delta, M_1(\phi) = H(\phi) and M_2(\phi) = 1 - H(\phi), and g = 1/(1 + |\nabla(G_\sigma * I)|^2) is the edge indicator function, in which G_\sigma denotes the Gaussian kernel with standard deviation σ and I denotes the image. Here λ, ν, λ_1 and λ_2 denote constants, and the local fitting functions f_1(x) and f_2(x) only fit the intensity near x, alleviating the effect of non-uniform gray. For more details about E_{region}(\phi), please refer to (Li et al., 2007). Finally, the standard gradient descent method is used to minimize the energy function Eq. (8).

Rock shape analysis

After the rocks over the Martian surface are detected, their inherent shape characterization is valuable information for studying geologic origins and history (Blatt et al., 1980). Indeed, the shape of a rock is a complex property and a tough one to describe precisely. Referring to the basic concepts used for classifying and categorizing the general appearance of microscopic particle grains in geological work (Dudek and Tsotsos, 1997; Kwan et al., 1999), in this section the ellipse fitting error and the eccentricity of the fitting ellipse, derived by modeling the rock with an ellipse in image space through a direct least-squares fitting method (Maini, 2008), are used as indicative measures of a rock's shape (a small worked sketch is given at the end of this section). First, the ellipse fitting error between the fitting ellipse and the rock boundary is calculated as a measure of its relative roughness, indicating the sharpness of a rock's corners and the angularity of its edges. Its value ranges from 0 to positive infinity: the greater a rock's ellipse fitting error, the more angular its edge is. Then, the eccentricity of the fitting ellipse is an important characterization of the rock and provides information with respect to its composition and history. In our method, let a and b denote the semi-major and semi-minor axes of the fitting ellipse, respectively; the eccentricity is then e = \sqrt{1 - b^2/a^2}. This value ranges from 0 to 1. The smaller the value, the more circular the rock is.

EXPERIMENTATION AND ANALYSIS

To evaluate the performance of the proposed method, experiments were performed on images captured by the Spirit Mars Rover Panoramic and Navigation Cameras along its traverse path under various terrain and illumination conditions. Fig. 3 shows experimental results derived from the proposed method, suggesting that most rocks can be successfully extracted for further analysis. However, in some challenging regions, where rocks are stuck together, two or more rocks might be detected as one rock (as shown in Fig. 3(b)). In addition, regions with large slopes also reduce the detection performance (as shown in Fig. 3(d)).
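Before turning to the fitting-ellipse results, here is the sketch of the two shape measures defined above. It uses OpenCV's direct least-squares ellipse fit; since the paper does not spell out its exact fitting-error formula, the mean radial deviation used below is only a stand-in for it.

```python
import cv2
import numpy as np

def rock_shape_measures(contour):
    # contour: an (N, 1, 2) array as returned by cv2.findContours, N >= 5
    (cx, cy), (d1, d2), angle = cv2.fitEllipse(contour)  # axes are diameters
    a1, a2 = d1 / 2.0, d2 / 2.0          # semi-axes (a1 lies along `angle`)
    a, b = max(a1, a2), min(a1, a2)
    eccentricity = np.sqrt(1.0 - (b / a) ** 2)           # 0 means circular

    # Stand-in fitting error: mean deviation of the normalized radius from 1,
    # computed after rotating the boundary points into the ellipse frame.
    pts = contour.reshape(-1, 2).astype(float)
    t = np.deg2rad(angle)
    x = (pts[:, 0] - cx) * np.cos(t) + (pts[:, 1] - cy) * np.sin(t)
    y = -(pts[:, 0] - cx) * np.sin(t) + (pts[:, 1] - cy) * np.cos(t)
    fit_error = np.abs(np.sqrt((x / a1) ** 2 + (y / a2) ** 2) - 1.0).mean()
    return fit_error, eccentricity
```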
Fig. 4 shows extracted rocks and their associated fitting ellipses, where the fitting ellipse is represented by red dashed lines and the rock boundary by yellow polygons. As shown in Fig. 5, the rocks are ranked for each measure. These measures provide an intuitive shape characterization of each rock. For instance, rocks #1, #7, #9 and #15 have larger ellipse fitting errors, which indicates that they are more irregular than the others. Furthermore, the eccentricity of rock #8 is minimal; we can conclude that, compared with the others, its appearance is more circular.

CONCLUSIONS

To independently and autonomously detect rocks on Martian surfaces, we developed a gradient-region constrained level set image segmentation method based on Mars rover images. In our method, the gradient-based information is integrated with the locally adaptive region-based information for robustly driving the motion of the zero-level set toward the object boundaries, even in images with non-uniform grey scale, which effectively alleviates the weak-boundary problem and increases the attraction of the true rock edges to active contours. Meanwhile, the inherent geometric characterization of the extracted rocks is further analysed, giving valuable information with regard to both geological analysis and scientific missions. Experiments were performed on Mars rover images under various terrain and illumination conditions. Results suggest that the proposed method is robust and efficient for automatically detecting both small-scale and large-scale rocks on Martian surfaces. Nevertheless, in some challenging areas, where rocks are stuck together or covered with sand, the proposed method might not produce satisfactory extraction results. Enhancing the rock detection performance in these challenging areas will be one focus of our future work. Additionally, we plan to construct a framework for the rock classification task. As a result, those detected Martian surface regions that are most likely to yield valuable scientific discoveries will be further explored using more scientific measurements.

Fig. 2 shows the schematic diagram of the level set method during evolution; at t = 0, the initial level set function is given by the signed distance φ(x, y, 0) = d(x, y) to the initial contour.

Fig. 3: Experimental results derived from the proposed method; the detected rocks are marked with yellow polygons. Panels: (a)-(d) correspond to Scenes I-IV.
Fig. 4: Extracted rocks and their associated fitting ellipses. Fitting ellipse: red dashed lines; rock boundary: yellow polygons.

Fig. 5: Two-dimensional measures for the extracted rocks. Results show the ranking of each rock for the two measures.
The Mars Regional Atmospheric Modeling System (MRAMS): Current Status and Future Directions : The Mars Regional Atmospheric Modeling System (MRAMS) is closing in on two decades of use as a tool to investigate mesoscale and microscale circulations and dynamics in the atmosphere of Mars. Over this period of time, there have been numerous improvements and additions to the model dynamical core, physical parameterizations, and framework. At the same time, the application of the model to Mars (and related code for other planets) has taught many lessons about limitations and cautions that should be exercised. The current state of MRAMS is described along with a review of prior studies and findings utilizing the model. Where appropriate, lessons learned are provided to help guide future users and aid in the design and interpretation of numerical experiments. The paper concludes with a discussion of future MRAMS development plans. Introduction The Mars Regional Atmospheric Modeling System (MRAMS) is a mesoscale and large-eddy simulation model designed for the simulation of the Mars atmosphere at horizontal scales from O(10 m) to O(1000 km). The development of MRAMS began in the late 1990s. The Mars Pathfinder mission marked the re-ignition of martian exploration after the long, twenty-year hiatus following the Viking missions [1,2]. While the tiny Sojourner rover was blazing the path (and thus, the mission name Pathfinder) for larger and more capable future rovers, the base station lander manifested a compact meteorological station that measured temperature, pressure, and wind [3]. These new data motivated one of us (Rafkin), having initially very little knowledge of the Mars atmospheric literature or Mars in general, to investigate whether anyone had ever attempted to simulate the Mars atmosphere with a mesoscale model. After consulting with colleagues and searching the literature, the answer was clearly "no", although colleagues thought it would be a worthwhile endeavor. Thus was born the MRAMS project, which was initially supported in the form of a small seed grant by Dr. Robert Haberle at the NASA Ames Research Center, which provided initial, partial funding for MRAMS co-developer Tim Michaels, who was a graduate student at the time. The result of that effort was described in the initial MRAMS paper [4]. The MRAMS model is based on the Regional Atmospheric Modeling System (RAMS) that was in wide use in the terrestrial community in the late 1990s [5]. RAMS version 3b was chosen as the starting point for no other reasons than its familiarity to one of us (Rafkin) resulting from many years of prior use and development for Earth applications, and that was the current stable release at that time. Little to no consideration was given to whether the assumptions that went into RAMS were appropriate for Mars. This was not willful neglect of due diligence, but a reflection of the ignorance of just how differently the Mars atmosphere behaved and was forced, and it would later lead to modifications to everything from model initialization, boundary conditions, the dynamical core, and to the physics that are either unnecessary or inappropriate for Earth applications but are important for Mars. At the same time, however, there was a highly beneficial infusion of terrestrial mesoscale modeling experience and mesoscale dynamics into the nascent Mars mesoscale modeling enterprise. 
This knowledge would motivate future modeling investigations, influence the interpretation of the results in a broader context of terrestrial analogs, and diffuse to some degree into the more mature global circulation modeling community. Since the introduction of MRAMS, numerous other mesoscale models have been introduced e.g., [6][7][8]. While all of these models are similar in concept, each has slightly different representations of dynamics (i.e., different dynamical cores) that are solved on different computational grids. The physical representation of Mars processes (i.e., the physics) is generally different. So, while all of these models can generally be applied to similar problems, the answers will generally differ. Further, not all models have the same capabilities within the physics. For example, MRAMS has a bin representation of microphysics that provides the capability for representing complex aerosol size distributions while most other models do not. The current version of MRAMS is v2.9_r39 and is based on RAMS version 4.4 (c. 2001), with major (nearly all-encompassing) modifications to code structure, the model core, and many physical routines. Although RAMS v4.4 had parallel computing capabilities, that capability was broken as a result of modifications made to apply the model to Mars. Parallel computing capabilities have since been restored, along with greater portability between operating systems and reliability. Below, the details of additions, improvements, and changes are described.

Full Compressibility and Buoyancy

RAMS is a nonhydrostatic, but not fully compressible, model. As discussed in [4], sound waves are not filtered but are solved using a time-splitting scheme. Diabatic heating is not fully and properly coupled to the pressure and velocity field. Additionally, to avoid the computationally-expensive need to solve a 3D elliptic equation to obtain the pressure perturbation, the mass continuity equation in RAMS is formulated using the Exner function (π) instead of pressure, and linearization of the equations results in the substitution of the time-invariant base-state Exner function (π₀, where π = π₀ + π′ and π′ is the perturbation from the base-state value) for the full Exner function wherever π appears. This enables a relatively fast tridiagonal solver to be used. In the absence of full compressional heating, the modeled pressure signal using the baseline RAMS dynamical core is unable to faithfully reproduce the strong diurnal pressure cycles of Mars. In the baseline core, heating hydrostatically lowers the surface pressure, but a partially offsetting compressional increase in pressure is necessarily absent or incorrectly diagnosed. The first attempt at rectifying this issue was [9], who found the baseline RAMS dynamical core to be lacking in the case of strong diabatic heating from thunderstorms. For that work, they were able to largely correct this by adding the compressional term (i.e., a term proportional to the time rate of change in potential temperature) to the dynamical core, retaining the same linearization strategy of using π₀ in place of π. The same strategy was adopted for MRAMS, which enabled substantially improved simulations of Mars' diurnal pressure cycles. In the vertical momentum equation of RAMS, the buoyancy term is approximated as being proportional to the ratio of the potential temperature perturbation to the time-invariant base-state potential temperature (θ′/θ₀). This is essentially the anelastic approximation.
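Written out explicitly, the baseline term and the fully linearized form that the next paragraph introduces look as follows. This is a sketch derived from the ideal gas law in (π, θ) variables, not a transcription of the model equations:

```latex
% Baseline (anelastic-like) RAMS buoyancy versus the fully linearized form.
% From p = rho*R*T, T = theta*pi/c_p, and pi proportional to p^{R/c_p},
% one obtains ln(rho) = (c_v/R) ln(pi) - ln(theta) + const, and hence:
\begin{align*}
  b_{\mathrm{baseline}} &= g\,\frac{\theta'}{\theta_0},\\[4pt]
  b_{\mathrm{full}} &= -g\,\frac{\rho'}{\rho_0}
      = g\left(\frac{\theta'}{\theta_0}
        - \frac{c_v}{R}\,\frac{\pi'}{\pi_0}\right).
\end{align*}
```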
However, linearized buoyancy is properly defined as a mass density ratio (e.g., ρ′/ρ₀), which has an additional term proportional to the linearized Exner function ratio (π′/π₀). MRAMS now includes the Exner function term, which necessitated the re-derivation of matrix coefficients for the implicit tridiagonal solver. The inclusion of the additional compression and buoyancy terms can generally cause some energy non-conservation and inconsistencies in the equations, but these were found to be acceptable in practice, particularly since the lack of inclusion made the realistic simulation of atmospheric tides and other circulations on Mars impossible. Since MRAMS is a limited area model run over relatively small timescales (e.g., less than 10 sols), long term non-conservation is not as much a concern as it would be in a global climate model.

The Pressure Cooker Effect

MRAMS uses a terrain-influenced, geometric (not pressure) height coordinate. The lowest model levels follow the topographic relief and gradually relax to a purely horizontal surface at the model top. Typically, the lowest model level thickness is set to 10 m to 30 m and is gradually stretched to a spacing of ~2 km up to a model top of >50 km. The upper boundary condition is w = 0 (vertical velocity), as described in [4]. The inclusion of full compression in a rigid lid, height-based model is not without issues. Consider a global model with domain boundaries at the surface and a model top at a specified geometric height. Assuming w = 0 boundary conditions at both the top and bottom of the model and no other sources of mass, the domain mass in the model is fixed. Now suppose the global domain is heated such that the mean temperature increases. From the Ideal Gas Law, the heating in the fixed volume will result in a compressional increase in pressure, including at the surface. Normally, however, the surface pressure is considered to be a diagnostic of the total column mass (to within hydrostatic accuracy). The model domain mass has not changed during the heating (the mass is fixed) while the surface pressure has. The domain mass and the surface pressure have decoupled so that the model surface pressure cannot be used as a diagnostic for atmosphere mass. The decoupling of the surface pressure from the domain mass is because of the upper rigid lid boundary condition in a model atmosphere of finite depth. When heated, the atmosphere should be able to expand upward to alleviate some or all of the compressional pressure increase. It is unable to do so, and thus the pressure field must absorb all the heating via compression. Further, the model has no knowledge of the atmosphere above the rigid lid. The implicit assumption is that the atmosphere above the model has no impact or influence on the atmosphere below. Effectively, the atmosphere above may be considered static such that the pressure (i.e., the mass above) is invariant. This sets up a discontinuity between the pressure at the top of the model inside the model domain and the pressure just on the other side of the domain. Notably, models with vertical pressure coordinates and fixed pressure upper boundary conditions can expand vertically and can properly deal with the compressional heating. The compressional decoupling of mass and surface pressure in rigid lid z-models has been known in the Earth modeling community for some time e.g., [10]. It is generally ignored because the heating of the Earth's atmosphere is generally small enough such that the discrepancy is small.
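A toy calculation shows the size of the effect at stake. This is an idealization (uniform heating, no lateral divergence), not a statement about the actual partitioning in any model:

```latex
% Rigid lid with fixed volume and mass means density is fixed, so the
% ideal gas law p = \rho R T gives, for a mean warming \Delta T with
% no vertical expansion allowed,
\[
  \frac{\Delta p}{p_0} = \frac{\Delta T}{T_0},
\]
% e.g., warming a 200 K mean state by 10 K raises the pressure by ~5%,
% with no change whatsoever in domain mass.
```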
Further, Earth models are usually started from close to a thermal equilibrium state such that secular changes in global temperature are small. This is not the case for Mars global models where the atmosphere is usually started from an isothermal state far from the final thermal equilibrium state. In the case of Mars, the global surface pressure can deviate substantially from the initial value due solely to changes in the mean thermal state with absolutely no change in domain mass. The noticeable compressional pressure effect on Mars has been termed the "pressure cooker" effect (possibly first coined by R. Haberle). Although MRAMS is not a global model, it can still suffer from the pressure cooker effect. A locally heated column cannot expand vertically beyond the model top. The column heating drives a horizontally divergent (expansive) flow with the balance of the heating producing some amount of compressional pressure increase. The partitioning will depend on the specific formulation of the model core and its handling of acoustic waves in particular. Regardless, it is reasonable to assume that there is some amount of spurious lateral flow and spurious compressional pressure signal. These spurious flows are superimposed on realistic lateral flows (e.g., the thermal tide) and compressional tendencies. Some lateral flow is correct. This divergence is, after all, what produces a thermal low pressure in response to heating. The pressure cooker effect in MRAMS is noticeable in spin up when the model is initialized with a General Circulation Model (GCM) field that is not in close thermal equilibrium with the end state. A secular trend in pressure is common for one or two sols after initialization. The signal is domain-wide and has not been found to have a noticeable impact on winds or temperature. Winds are driven by pressure gradients, so the minimal impact must indicate the pressure cooker forcing is nearly uniform over the domain. Temperature is largely driven by radiative forcing, and while there is some pressure dependence on this forcing, it is very small over the range of typical spin-up scenarios (<2% domain change in pressure). The diurnal (tidal and local circulation) pressure changes are much larger and are superimposed on the spin-up signal. The ability of MRAMS to reproduce observed variations in pressure [11] suggests that spurious lateral flows and column compressional pressure changes must be small enough in combination such that the real dominant tidal signatures plus local, topographically-driven circulations shine through [12]. This has never been properly confirmed. MRAMS can have a difficult time reproducing the mean pressure without either adjusting the initial state with a posteriori information or adjusting the mean pressure in the post-simulation analysis. In the first case, the quasi-equilibrium solution is used to determine a domain-wide, constant scaling factor by which the initial input pressure field (and boundary condition data) from the large-scale model is adjusted so as to equal the desired, observed mean pressure. Then MRAMS is re-run with the corrected initial and boundary conditions so that the modeled mean pressure relaxes close to the desired, observed value. In the second case, the output pressure field from the initial MRAMS simulation is corrected post facto in the analysis by the scaling factor, and no additional simulations are conducted. Tests between these two methods have shown that the difference in solutions is inconsequential. 
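The second (post-facto) correction method reduces to a one-line scaling. The sketch below is a hedged paraphrase of the described procedure, with all variable names invented for illustration:

```python
import numpy as np

def correct_mean_pressure(p_mrams, p_gcm):
    """Scale raw MRAMS pressure so its diurnal mean matches an independent
    (e.g., data-assimilation GCM) record over the same several-sol period.
    Inputs are 1D surface-pressure time series in Pa."""
    factor = np.mean(p_gcm) / np.mean(p_mrams)   # e.g., 1.053 for Mars 2020
    return factor * np.asarray(p_mrams), factor
```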
Thus, for most cases, the second method is used in order to minimize computation. Figure 1 displays an example of the pressure correction method that was applied to simulations conducted in support of entry, descent, and landing (EDL) engineering activities for the NASA Mars 2020 rover. MRAMS is capable of characterizing the variations in density, winds, and temperature, but the mean density and pressure are not constrained by MRAMS alone. An independent study using MGS Radio Science as input into a data assimilation simulation with the UK version of the Laboratoire de Météorologie Dynamique, or LMD, Mars GCM was used to compute the mean diurnal pressure [13].

Figure 1 (caption, beginning lost): ... a GCM [13] ingesting MGS Radio Science data via data assimilation. Note that neither MRAMS nor the pressure data from the GCM are perfectly repeatable. The data assimilation simulation indicates that the mean pressure is higher than that predicted by the raw MRAMS output. The correction factor is obtained by computing the diurnal means from both MRAMS and the GCM over the several-sol period. After multiplying the raw MRAMS data (red) by a correction factor (1.053), the mean diurnal pressure exactly matches the data assimilation value and the variations are also in good agreement (green). GCM data (blue) provided courtesy of S. Lewis.

Deep atmospheric domains should, in principle, help to offset the pressure cooker effect, because a large volume produces a relatively small pressure response for a given temperature change. Further, a model top at low pressures should mean that the assumptions about the atmosphere above the model become less important. Fortunately, the deep circulations on Mars require deep model domains, but the response of the pressure cooker effect to changes in the model top has not been documented in the literature. Next-generation model development efforts that use a rigid z-lid should be aware of and will have to deal with the pressure-cooker effect.

Nested Grid Feedback

The default, two-way nesting scheme in RAMS is described by [14] and is based on a reversible and conservative methodology of the primary prognostic variables, namely ice-liquid potential temperature (θ_il) and the Exner function (π). The ice-liquid potential temperature in MRAMS is set equivalent to potential temperature since the amount of water is thermodynamically small. The average of these two atmospheric properties taken over the child grid cells contained within a single parent grid cell determines the up-scale value of the parent grid cell. Thus, θ̄ = (1/N) Σ_ij θ_ij and π̄ = (1/N) Σ_ij π_ij, where the sums are performed over the N child grid cells ij contained within the parent cell. Parent grid temperature is diagnosed from the prognostic thermodynamic variables via C_p T = πθ. The diagnosed temperature in the parent cell is then T̄ = (Σ_ij π_ij)(Σ_ij θ_ij)/(N² C_p), while the average temperature of the child cells is Σ_ij π_ij θ_ij/(N C_p). The baseline feedback process can therefore produce a diagnosed temperature in the parent grid cell that differs from the average temperature over the child cells. When topography is added, the discrepancy can be amplified, because even if potential temperature is conserved, the resulting temperature depends on altitude. On Mars, this effect can and often does produce non-physical values of temperature in the parent grid cell in regions of extreme topography.
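A two-cell toy example (made-up numbers) shows how the baseline feedback decouples the diagnosed parent temperature from the true child-cell average when π and θ vary strongly within a parent cell:

```python
import numpy as np

cp = 770.0                          # J kg-1 K-1, illustrative CO2 value
theta = np.array([220.0, 260.0])    # child potential temperatures (K)
pi = np.array([600.0, 420.0])       # child Exner functions (J kg-1 K-1)

T_parent = theta.mean() * pi.mean() / cp   # baseline feedback: ~159.0 K
T_children = np.mean(pi * theta) / cp      # true child average: ~156.6 K
# Even this mild contrast gives a ~2.3 K error; with several km of relief
# inside one parent cell the discrepancy can become non-physical.
```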
For example, over the Tharsis Montes, the variation of topography within child grid cells can be several kilometers, with pressure values varying accordingly. We often found that the diagnosed parent grid air temperatures in these steep topography scenarios could exceed 340 K or drop well below the CO2 condensation temperature after feedback from a child grid with reasonable temperatures. The potential temperatures of the parent and child grid values were compatible, but the very different altitudes of the parent and child grids produced incompatible temperatures. These temperatures then triggered incorrect dynamical and physical responses such as cloud condensation or bizarre thermal slope flows. On occasion, they could produce numerical instabilities that would bring the model down. While this effect is necessarily present in RAMS, the far tamer Earth topography and thermal structure mute the signal to apparently acceptable levels. In principle, the nonphysical parent grid solutions can be ignored since they are overwritten by the solutions from the child grid cells at the next time step. However, the nonphysical parent grid solutions can produce gravity waves that may propagate beyond the edge of the nested grid, as can cloud condensate. Also, the child grid utilizes parent values at the grid boundary and vice versa, which can result in substantial discontinuities. Thus, the overall solution may become contaminated. If nothing else, analyzing the parent grids' solutions in regions of nested grids becomes impossible due to the spurious values resulting from linear averages. One solution to this problem is to feed back density-weighted prognostic variables and then normalize those variables by the parent grid density. In the current version of MRAMS, the parent grid values are now defined as θ̄ = Σ_ij ρ_ij θ_ij/(N ρ₀) and π̄ = Σ_ij ρ_ij π_ij/(N ρ₀). This formulation normalizes the average child properties to a value consistent with the parent grid density. The prognostic momentum variables are also fed back by weighting with the child base-state density (i.e., momentum rather than velocity), and restored to velocity components on the parent grid using the parent grid base-state density. This ensures that momentum is conserved. [15] was the first result published using the new scheme, and that study was only possible with the modified feedback scheme.

Dust and Dust Lifting

The radiative impact of dust is a crucial element of Mars atmospheric simulation. Two dust fields are now carried in the model: foreground dust and background dust, as described below. Dust lifting, sedimentation, and cloud condensation (i.e., microphysics; see Section 4.2) operate only on the foreground dust field.

Foreground and Background Dust

MRAMS 2001 utilized a simple background dust field that interfaced directly with the two-stream radiation code originating from the NASA Ames GCM available at the time [16]. The dust distribution was described via total column opacity at the 610 Pa level and followed a Conrath-ν vertical profile [17]. Below the 610 Pa level, the dust was assumed to be well-mixed and constant in mixing ratio. The column opacity was user-specified (as a constant), as were the Conrath-ν parameters. Optical parameters for the dust (e.g., single scattering albedo and the visible-to-infrared albedo ratio) were fixed. By the early 2000s, MGS-TES retrievals of total column dust opacity provided the means to initialize the background dust field with more complex and realistic distributions [18].
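For reference, here is a sketch of a Conrath-ν vertical profile in one form often quoted in the Mars literature; conventions and parameter values vary between papers, this is not the MRAMS source, and the well-mixed behavior below the reference level follows the description above:

```python
import numpy as np

def conrath_dust(p, q0=1.0, nu=0.007, p_ref=610.0):
    """Dust mass mixing ratio vs. pressure p (Pa): Conrath-nu decay above
    the reference level, well mixed (constant) below it."""
    p = np.asarray(p, dtype=float)
    q = q0 * np.exp(nu * (1.0 - p_ref / p))
    return np.where(p >= p_ref, q0, q)

print(conrath_dust([700.0, 610.0, 100.0, 10.0, 1.0]))
# -> [1.0, 1.0, ~0.965, ~0.657, ~0.014]: dust thins rapidly at high altitude
```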
A zonally-averaged opacity map that mimicked the bulk properties of the TES data, which varied with season, was implemented around 2004. The opacity fields were combined with Conrath-ν parameters that also varied with latitude and season; the deepest dust was prescribed near the subsolar point in a manner consistent with prior studies [19][20][21], with the traditional Conrath-ν parameter of [17] specified as a function of the season of interest L_S and the latitude θ. A similar zonally-averaged map derived directly from TES Mars Year 24 was also added around that time, and this map was combined with the vertical distribution profiles suggested by the ESA Mars Express SPICAM dust retrievals [19]. The possibility for the user to create a custom opacity map with minor changes to the portion of the code that fills the 2D opacity array was also implemented. All of these options are still available in the current model. The desire to simulate the dust cycle with dust lifting, sedimentation, advection, diffusion, and volatile condensation on dust as nuclei drove the development and inclusion of a secondary dust field, the foreground dust, within the model. The foreground dust field is represented via dust bins discretized by particle mass. Besides allowing for a more realistic representation of dust cycle processes, the number concentration of dust as a function of the discretized size bins also provides information to more advanced radiation transfer parameterizations that explicitly make use of the information. The number of bins and their discretization are user-defined at the start of the simulation. The foreground dust field can be initialized in the same way as the background dust, or it can be initialized to zero. Unlike the background dust field, however, the foreground dust field evolves based on physical dust processes, with the individual size bins acting independently. For example, the dust in the larger size bins falls more rapidly than in the smaller size bins, leading to a depletion of dust in the larger size bins over time. The background and foreground dust fields are used individually or in combination to simulate a wide variety of scenarios. For example, one common implementation is to use the background dust field to represent the climatological dust distribution and then use the foreground dust to simulate positive perturbations on that climatological background. This scenario is useful for simulating dust devils or larger dust storms that result from active dust lifting on a pre-existing (background) atmospheric dust distribution. When interfacing with the radiative transfer physics, the total dust (foreground plus background) can be sent as input. In this case, the user must specify the size distribution and vertical distribution of the background field. This configuration scenario was used in [15] to simulate the spiral dust cloud above Arsia Mons. The spiral cloud was represented entirely in the foreground dust field. The radiative transfer activity of the foreground dust can be toggled on and off. This permits the investigation of the forcing associated with the positive dust perturbations. For small positive dust perturbations (less than ~0.1 additional column opacity), the impact of the foreground dust on the model solution tends to be small, but there are exceptions. Positive dust perturbations result in heating perturbations that hydrostatically lower surface pressures, accelerating winds, which can lead to further dust lifting and reinforcement of the heating.
The magnitude of this wind-enhanced interaction of radiation and dust (WEIRD) varies [22]. Also, it matters how the foreground dust is distributed vertically. The thermal impact of a small amount of dust spread deeply through the atmosphere is inconsequential, whereas that same amount of dust concentrated in a shallow layer can result in a notable thermal influence in the dusty layer. Generally, for dust perturbations greater than~0.1 in column opacity, the radiative impact of the additional dust is almost always found to be important. Dust Lifting Physics Dust lifting comes in two flavors and only contributes to the foreground dust field. The fairly standard threshold lifting scheme e.g., [23] was the first to be included in the model. In this formulation, only the resolved wind acts on the dust with a lifting threshold and lifting efficiency factor that are specified by the user. These values are set domain-wide, although fairly simple modifications within the code permit the user to provide spatially-variable values, if desired. There is no sub-grid scale dust devil lifting, which is also implemented in some models e.g., [24]. A second parameterization assumes a sub-grid Weibull wind distribution centered on the model-predicted wind speed in the grid cell [25]. The width of the distribution depends on stability (i.e., convective/unstable, neutral, stable) with the convective distribution assumed to be the most turbulent and, therefore, the widest. In contrast, stable conditions are more likely to produce quasi-laminar, narrow distributions. Given the distribution, the winds above a specified lifting threshold may be calculated, and this information is used to diagnose when lifting occurs and in what amount. Thus, it is possible for the modeled winds to remain below the lifting threshold while dust is still lifted due to the stronger sub-grid scale winds in the tail of the Weibull distribution. In this way, dust devil lifting and lifting by other sub-grid scale processes that are represented in the high wind speed tail of the distribution may be captured without the need for a separate, stand-alone parameterization. Both lifting parameterization options require the specification of poorly constrained parameters. In practice, the values are set through an iterative process, with the user examining the output from an initial simulation, adjusting the parameters up or down to drive the resulting dust field toward the desired result, and re-running the simulation. For example, [26] simulated a dust storm in Isidis Planitia, but in order to achieve dust opacities that were representative of typical storms, over a dozen simulations had to be conducted in order to appropriately tune the unknown lifting parameters. Similarly, by tuning dust lifting parameters, it was possible to generate a dust storm that was morphologically similar to the storms found in the region of Northeast Amazonis/Southwest Arcadia (Figure 2), as described by [27]. Cloud Microphysics To enable detailed Mars cloud simulations with MRAMS e.g., [28], a suitable cloud microphysics scheme was needed. The eventual choice was the Community Aerosol and Radiation Model for Atmospheres (CARMA) model [29] adapted for Mars H2O ice and CO2 ice clouds as in [30,31]. A monodispersed or bulk (moment) microphysical model e.g., [32,33] was not chosen, because of the inherent limits of an analytical distribution to represent complex size distributions that might be important for Mars. 
CARMA discretizes the mass (size) distributions of airborne dust, H2O ice, and/or CO2 ice particles into a user-specified number of bins with a user-specified width (e.g., eight dust bins from 0.05-5 µm; 18 H2O ice bins from 0.07-102 µm; a minimal sketch of such a bin grid appears at the end of this subsection). All modeled particles are subject to the full range of atmospheric transport, including sedimentation/precipitation in a non-zero vertical velocity environment and turbulent mixing. Sedimentation/precipitation is calculated by the CARMA-based routines, while particle advection and turbulent diffusion time rates of change are calculated by the MRAMS dynamical core and applied by the microphysical driver routine. H2O ice is permitted to heterogeneously nucleate on or sublimate from foreground dust particles. CO2 ice is permitted to heterogeneously nucleate on or sublimate from either foreground dust particles or H2O ice particles. MRAMS also tracks the number/amount of airborne particles that fall to the ground, and the fallen dust may be optionally lifted again by turbulent wind gusts. Using the CARMA microphysics in MRAMS is computationally intensive compared to a moment-based method. However, it has enabled MRAMS to simulate complex phenomena such as multimodal cloud particle size distributions in the lee clouds of the Tharsis Montes [28] and dust devil track generation [25].

An example of the modeled microphysical details of the afternoon clouds associated with Olympus Mons is given in Figure 3. These clouds are due to thermally-induced upslope flow transporting water vapor and dust along the flanks of the volcano from lower levels. The adiabatically cooled air produces a primary cloud particle growth region in the lee of the mountain. Rapid growth results in narrow distributions of water ice particles with relatively large effective radii (r_eff ~8 µm) in the lee of Olympus Mons. As they form, these particles are transported downstream (to the west) and out of the updraft core by the large-scale horizontal winds. A substantial portion of the cloud mass quickly falls to lower elevations above the flank of the volcano (the larger particles have fall velocities near 1 m s⁻¹), but the smaller particles have settling times long enough to create a cloud "plume" oriented in the downstream direction, which elongates with time. A wide variety of cloud particle populations is present in the model results, as shown in Figure 3. Preferential sedimentation of large particles results in size sorting within the "plume" such that the effective radii of the condensate particles generally decrease with both height and lateral distance from the volcano. The simulated cloud particle radii and relatively broad size distributions are consistent with telescopic observations of the afternoon clouds over the Elysium Mons volcano [34]. Rapid, often violently turbulent transport of diverse water-ice particle distributions (especially in the lee, just above and below the most massive portions of the cloud) creates multiple regions of bimodal particle distributions. The simulated particle size and spatial distributions are clearly in conflict with simple microphysical representations used in some models, and also with spectrometer retrievals of the aerosol effective radius that assume particle sizes are constant with height e.g., [35]. Therefore, the magnitude of the effect of these possible significant spatial and temporal variations of aerosol size distributions should be considered when interpreting radiance-derived and model-predicted fields that are based on simpler assumptions about the aerosol size and spatial distributions.

Figure 3 (caption, beginning lost): (a) ... Red denotes the maximum value (~5 × 10⁻⁵) and blue the minimum value (1 × 10⁻⁵), with color-coded bullets showing the location of representative cloud particle size distributions (lower two panels). The horizontal axis has units of grid points (grid spacing is ~40 km). (b) Color-coded sample cloud particle size distributions chiefly along the lee cloud "plume" axis. (c) Color-coded sample cloud particle size distributions above and below the "plume".
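Returning to the bin discretization described at the start of this subsection, a minimal sketch of a geometric mass grid follows; the particle density value and the use of numpy instead of CARMA's internal setup are assumptions:

```python
import numpy as np

def particle_bins(r_min, r_max, nbins, rho=2500.0):
    """Geometrically spaced radius/mass bins (radii in m, rho in kg m-3).
    A geometric radius grid implies a constant mass ratio between bins."""
    radii = np.geomspace(r_min, r_max, nbins)
    masses = (4.0 / 3.0) * np.pi * rho * radii**3
    return radii, masses, masses[1] / masses[0]

# e.g., eight dust bins spanning 0.05-5 micrometres:
radii, masses, mass_ratio = particle_bins(0.05e-6, 5.0e-6, 8)
```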
Radiative Transfer

There are now three radiative transfer options currently available, while only one was available in 2001. The new options use a two-stream correlated-k method [36] or an implementation of the DISORT discrete ordinates method [37]. One of the last two options must be used when the foreground dust is active or if microphysical species are selected to be radiatively active. Radiative transfer calculations are computationally demanding, often requiring one or even two orders of magnitude more time than the dynamical core calculations. Taking advantage of the fact that over short time intervals the radiative transfer solution (e.g., heating rates, radiative fluxes) does not change much, the radiative transfer is generally not calculated every dynamical core timestep in an MRAMS run. Instead, the radiative transfer solution is computed only every radiative transfer timestep (usually set at ~300 s for mesoscale runs and ~30 s for microscale experiments), and this fixed forcing is applied repeatedly during the often much shorter dynamical timesteps. During the implementation of the DISORT option, a coding bug affecting the correlated-k scheme was discovered and fixed. The bug had the effect of producing slightly erroneous heating rates over a narrow range of column opacities between ~1 and 3. The bug does not affect any prior published results in any significant way to the best of our knowledge.

Pollack Two-Stream

The original radiation scheme in MRAMS was adapted directly from the scheme present in the stable version of the NASA Ames Mars general circulation model available at the time [38]. As described in [4], that code uses look-up tables to determine the net solar flux as a function of pressure and opacity. Infrared forcing is split into two bands: the 15 µm CO2 emission region and wavelengths outside that region. Column opacity is fixed, and a Conrath-ν profile is assumed [17]. That original radiation parameterization is still available in the model, although it is now rarely used.

Correlated-k

The correlated-k radiative transfer option is based on the CARMA radiative transfer code [36] and is a two-stream plane-parallel scheme. Gas opacities are calculated using the correlated-k approach with coefficients for CO2+H2O mixtures, using the same spectral intervals as the most current version of the NASA Ames Research Center GCM radiative transfer code [39]. Separate mass/size distributions of foreground and background airborne dust may be used, with the restriction that they must use the same dust mass/size bin discretization as the CARMA microphysical parameterization (see Section 4.2). Additionally, the radiative transfer effects of airborne water ice and CO2 ice particles can be calculated with this scheme. As described for dust in Section 4.1.1, the radiative transfer contribution of each foreground airborne particle type (dust, H2O ice, and/or CO2 ice) can be individually toggled on or off. Optical properties, as a function of particle radius and radiation wavelength, of H2O ice particles are derived with a Mie code [40] and standard refractive indices [41], assuming sphericity. Optical properties of CO2 ice particles are derived with the same Mie code and refractive indices from [42,43], assuming sphericity. Optical properties of dust particles are also derived using that Mie code, assuming sphericity, but one of three sets of published refractive indices may be selected.
Two of these dust refractive index options are based on the optical properties of palagonite: (1) values from the ultraviolet to a wavelength of 4.23 µm [44] combined with values from 4.23 µm to 24.88 µm [45], and (2) values from ~0.2 µm to 24.88 µm [45]. Both use a power-law dependence (λ^-0.25, as in [46]) at longer wavelengths. The third dust refractive index option uses values derived from spacecraft observations of Mars dust from the ultraviolet to ~98.5 µm [47], with the same power-law dependence (λ^-0.25) at longer wavelengths.

DISORT

The ability to use the DISORT radiative transfer parameterization was recently contributed by Dr. Hao Chen-Chen of the University of the Basque Country. DISORT assumes a plane-parallel atmosphere and solves the radiative transfer equation with multiple scattering over a specified number of streams using the discrete ordinates method [48]. The default in MRAMS is eight streams, and this can be increased up to a maximum value of 32. A key difference between DISORT and the two-stream codes is the use of dust phase function moments expressed as Legendre polynomials. In each wavelength band, there are 16 quadrature points, each of which is represented by a 32-degree polynomial; the values are derived from [47]. Some additional helpful features were added during the implementation of DISORT. Firstly, the ability to reference the opacity to the surface pressure rather than a hardwired 610 Pa reference level was added. The code for the 610 Pa reference is still active and is the default setting, but it can be easily overridden by a toggle in the source code. This toggle is likely to become a namelist option in future versions. Secondly, the user can now more easily prescribe an arbitrary vertical dust distribution profile in the source code. As a result, prior calculations that assumed a Conrath-ν shape have been generalized.

Topographic Effects

The use of topography in MRAMS brings up a need to consider its effects on the amount of radiation incident upon (or emitted from) the sloped surface of each horizontal grid cell. This is particularly important for downwelling shortwave radiation due to its primarily direct-beam (as opposed to omnidirectional) nature. MRAMS can be configured to take slope angle and orientation and/or horizon obstruction/extension into account, or simply to assume all grid cells are flat for the purposes of radiative transfer only. Slope angle is the vertical angle between the surface and a horizontal plane, and slope orientation (or aspect) is the horizontal circular angle between north and the direction the slope faces. The more directly a sloped surface faces a beam of radiation, the more radiative flux it will receive (and vice versa). For example, if the sun is directly to the south at an elevation angle of 40°, an unobstructed south-facing 40° slope would receive significantly more insolation than either a flat surface or a NW-facing sloped surface (or indeed, any other sloped surface). If their use is desired, during initialization MRAMS calculates (and saves for runtime use) the slope angle and orientation values for each horizontal grid cell from the finalized topography field for that grid. At runtime, a standard trigonometric expression (dependent on orbital parameters, time, and geographic location) is used to calculate the factor which, when multiplied by the incoming radiative flux, specifies the flux incident upon the sloped surface.
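The trigonometric expression is not spelled out in the text; geometrically it reduces to the cosine of the angle between the solar beam and the slope's unit normal. A minimal sketch under that assumption follows (function and variable names are illustrative, not MRAMS's):

```python
import numpy as np

def slope_irradiance_factor(sun_zenith, sun_azimuth, slope_angle, slope_aspect):
    """Ratio of direct-beam flux on a tilted surface to that on a surface
    normal to the beam; angles in radians, azimuths clockwise from north.
    Returns 0 for back-facing (self-shadowed) slopes."""
    # Unit vector pointing toward the sun (east, north, up components).
    sun = np.array([np.sin(sun_zenith) * np.sin(sun_azimuth),
                    np.sin(sun_zenith) * np.cos(sun_azimuth),
                    np.cos(sun_zenith)])
    # Unit normal of the sloped surface.
    normal = np.array([np.sin(slope_angle) * np.sin(slope_aspect),
                       np.sin(slope_angle) * np.cos(slope_aspect),
                       np.cos(slope_angle)])
    return max(0.0, float(sun @ normal))

# Example from the text: sun due south at 40 deg elevation (50 deg zenith).
z, az = np.radians(50.0), np.radians(180.0)
print(slope_irradiance_factor(z, az, np.radians(40.0), np.radians(180.0)))  # ~0.98
print(slope_irradiance_factor(z, az, 0.0, 0.0))                             # ~0.64 (flat)
```

The numbers reproduce the qualitative point in the text: the south-facing 40° slope intercepts nearly the full beam, while the flat surface receives only about two-thirds of it.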
In the case of zero topography, the horizon would be 90° from vertically straight overhead (Z_h; the horizon zenith angle) in all directions. This is not the case with realistic (generally non-zero) topography, where the horizon may be obstructed (i.e., Z_h < 90°, resulting in the surface being shadowed part of the time) or extended (i.e., Z_h > 90°, resulting in the surface receiving enhanced illumination before sunrise or after sunset). Note that horizon obstruction/extension is not a parameter that can be calculated using a single grid cell's information; it instead requires taking the planet's curvature and relatively distant topography into account. If this option is desired, during model initialization a process akin to ray-tracing is used to determine the circular "viewshed" (the Z_h profile discretized into 48 regular angular bins) around each grid cell, and these are saved for use during the run. For each viewshed bin, a modified 400 km long radial topographic profile (chosen to account for the greatest topographic relief on Mars) is constructed, subtracting the planet's curvature from the usual topography values. Note that all topographic information used in this process is prepared just as the regular MRAMS topography is. This modified topographic profile is then used to determine the Z_h (as well as the lateral distance to, and apparent height of, an obstruction) for that viewshed bin. The process is repeated for the other viewshed bins and for all grid cells. For each grid point and radiative transfer timestep during a model run, Z_h and any obstruction distance/height are interpolated to the current solar azimuth angle. If the solar disk is presently behind an obstructed horizon, the current local top height of the shadow is calculated from the obstruction distance and height, all direct shortwave radiative flux is removed from that height down to the ground and surface, and solar heating rates are zeroed within the shadowed portion of the column. Alternatively, if the horizon is extended, a small shortwave flux equal to that at a solar zenith angle of 87.7° (cosine of the solar zenith angle = 0.04) is used until the solar disk is higher in the sky.

Subgrid-Scale Diffusion and Steep Topography

The parameterized calculation of sub-grid turbulent diffusion requires true horizontal (i.e., not simply along a coordinate surface, which may be sloped) gradients of momentum and/or temperature. RAMS version 4.4 had an option to use either simple decomposed sigma-z coordinate gradients or a significantly more involved true horizontal (local Cartesian surface) gradient calculation. The decomposed sigma-z gradient method both better conserves mass/energy numerically and requires less computational effort. However, the scheme produces poor or non-physical results when steep topography is combined with vertically-stratified momentum and/or temperature fields, a situation frequently encountered in the mountainous regions of Earth (e.g., [49]) and for much of Mars. In particular, for deep craters or canyons with steep walls, spurious countergradient diffusion often caused the thermal fields to run away, eventually bringing down the model simulation. The existing true horizontal gradient calculation was modified to improve its ability to handle the often complex and high-relief topography of Mars.
The current method involves more careful interpolation/extrapolation of fields across multiple coordinate surfaces, especially at local narrow ridgetops and narrow valley floors, where a simpler scheme may run into issues. With the inclusion of the additional terms in the model core, the new scheme can be slightly non-conservative, but it does allow the model to proceed in cases where the more conservative diffusion scheme fails. The user has the option of using the original or updated diffusion scheme. Typically, MRAMS is configured to use the updated horizontal gradient method for all simulations where the topography is not extremely flat.

Initialization and Boundary Conditions

The general process by which MRAMS prepares its initial state and boundary conditions has changed little from that described in [4]: dataset-based surface characteristics (e.g., topography, albedo) are prepared first, then boundary conditions (if not an idealized run) based on GCM output. As before, the prepared surface characteristics are saved in files ("surface files"; one per grid), as are the boundary conditions ("varfiles"; one per grid and boundary condition time interval), for later reference. However, the surface characteristic datasets and GCMs have changed over the years. Also, a simple Python-based visualization tool is now included with the model to assist with grid configuration and placement (particularly with respect to topography). MRAMS currently bases its terrain and surface characteristics on up to 1/128° gridded MGS Mars Orbiter Laser Altimeter (MOLA) topography [50], 1/20° gridded MGS TES albedo [51], and 1/20° gridded MGS TES-based nighttime thermal inertia [52]. During initialization, these are all low-pass filtered at an appropriate spatial resolution for each grid to avoid variations that cannot be properly handled by the model numerics (e.g., those with a wavelength less than or equal to two times the horizontal grid spacing). Higher-resolution grids can also be configured to use a coarser-resolution grid's surface characteristic field, which can be useful if the underlying dataset is judged to be too noisy at smaller scales (e.g., thermal inertia) for a given application. Individual users/projects have also used custom surface characteristic datasets (e.g., as in [53], or HRSC-derived small-scale topography). Unlike the horizontally-homogeneous (idealized) initialization mode, which uses purely numerical boundary conditions, the "variable initialization" mode of MRAMS requires a spatially-variable 3D initial atmospheric state and time-dependent 3D boundary conditions. To produce these, relevant input data must first be carefully processed onto the typically dissimilar MRAMS computational domain. In [4], MRAMS used a package based on RAMS called ISAN (Isentropic Analysis, modified from its terrestrial version) to do this, but for many years a similar custom set of routines within MRAMS, known internally as IPP (Ingestion and Preprocessing Package), has been solely used for this task. NASA Ames Research Center Mars GCM output is typically used as the input data source, but other GCM output or similar 3D atmospheric state information can be used with a minimal amount of coding effort. Input data are first horizontally interpolated (using geographic latitude and longitude) to the MRAMS oblique polar stereographic projection via bilinear interpolation.
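The horizontal step is ordinary bilinear interpolation from the GCM's latitude-longitude grid to each MRAMS grid point's geographic coordinates; a sketch under that assumption (grid layout and names are illustrative, not the actual IPP code):

```python
import numpy as np

def bilinear(field, lats, lons, lat_q, lon_q):
    """Bilinearly interpolate field[lat, lon] (regular, ascending grids)
    to a query point given in the same coordinate convention."""
    i = np.searchsorted(lats, lat_q) - 1
    j = np.searchsorted(lons, lon_q) - 1
    # Fractional position within the enclosing grid box.
    t = (lat_q - lats[i]) / (lats[i + 1] - lats[i])
    u = (lon_q - lons[j]) / (lons[j + 1] - lons[j])
    return ((1 - t) * (1 - u) * field[i, j] + (1 - t) * u * field[i, j + 1]
            + t * (1 - u) * field[i + 1, j] + t * u * field[i + 1, j + 1])

# Each MRAMS grid point knows its geographic (lat, lon); interpolate there.
lats = np.linspace(-90, 90, 37)          # toy 5-degree GCM grid
lons = np.linspace(0, 355, 72)
gcm_temp = 200 + 30 * np.cos(np.radians(lats))[:, None] * np.ones((1, 72))
print(bilinear(gcm_temp, lats, lons, 12.3, 137.9))   # ~229.3 K
```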
However, the topography for MRAMS is locally higher or lower than that of the input because the input data's map projection and spatial resolution differ from those of the MRAMS grids. The vertical mapping of the input data to the MRAMS computational domain is, therefore, more involved than the horizontal interpolation. The general aims of the IPP algorithm are that the atmospheric state aloft (usually MRAMS is run with a model top of 50 km altitude or more) is preserved as much as possible and that the vertical profile shapes in the lowest ~5 km are retained as much as possible (since temperature and even momentum isopleths near the surface of Mars are typically more parallel to the topography than not). In the "seam" between those two zones, a portion of the atmospheric state (proportional to the disparity between the MRAMS and input data topography) must be either "deleted" (input topography < MRAMS) or extrapolated (input topography > MRAMS). The absolute values of the near-surface zone fields are then modified to ensure a monotonic transition across the "seam". Input data with a higher spatial resolution are preferred, as higher resolution generally reduces the amount and impact of this vertical processing. After all other vertical operations have been performed, the pressure on the MRAMS grid is obtained by a hydrostatic integration downward from the model top. Preprocessors are currently available for several versions of the NASA Ames Research Center Mars GCM and the LMD Mars GCM. Additional preprocessor routines can be added by the user as needed. Issues related to mother domain size and configuration were raised by [6], who argued that a super-hemispheric domain centered at the pole and draping over the equator was preferable to alternative configurations. Such a configuration allowed the thermal tide to propagate seamlessly around the tropics without encountering grid boundaries. There are several disadvantages, however, that were not fully considered. Locations in the tropics lie in a region of highly distorted grid geometry when the pole point of the map projection is located at the geographic pole, and the grid spacing is considerably larger (by many factors) in the opposing hemisphere compared to the spacing at the pole point. Although the bulk of the thermal tide does not pass through grid boundaries, it does propagate through rapidly changing and distorted grid cells. These changes in grid geometry can introduce nonphysical computational modes. Also, if the location of interest is strongly influenced by meridional flows from the opposing hemisphere, the solution becomes highly dependent on the treatment of the boundary, and can also be influenced by the very coarse and distorted solution of the model near the boundary. Relatedly, a great deal of computational time can be wasted simulating high-latitude dynamics that may have little bearing on a tropical location. Experience with MRAMS suggests that domain configurations should be determined on a case-by-case basis. When simulating the tropical location of the Mars Science Laboratory in Gale Crater, a domain centered near the landing site was found to provide good agreement with observations, with little to no evidence of issues in representing the tidal signature [11]. Since different models handle lateral boundary conditions differently, the domain configuration may be a model-dependent consideration; what is true for one model may not be universally true for all models.
Frequent updates at the boundaries and large mother domains should also help to mitigate potential problems with the tide (or other waves and phenomena) entering or leaving the lateral boundaries. Typically, MRAMS uses boundary conditions that update at 1.5 h intervals. This nominal interval was driven by the typical output interval of the GCM, which is 1.5 h. Given the phasing and structure of tidal modes, 1.5 h could result in aliasing of the tidal forcing at the boundaries. Update intervals of 1 h or shorter are now recommended, but this requires the source GCM to be configured to provide output at that interval. Regardless of the boundary condition update frequency, some aliasing of the pressure field will occur. Pressure errors across the domain result in a pressure gradient that will then drive spurious accelerations. For a fixed error, the spurious pressure gradient will scale inversely with the domain dimension. Thus, all things being equal, larger domains will minimize the spurious circulations associated with the aliasing of pressure waves at the lateral boundaries.

Deprecation of Global Simulations

The original release of MRAMS included the capability for global simulations in addition to the standard limited-area modeling. This was achieved by "zipping" together two super-hemispheric grids. In the simplest configuration, each grid was located at a geographic pole with an overlap in the tropics. Prognostic model properties were averaged and interpolated in the overlap region between the grids in order to provide global coverage. Experiments with the global zipping technique showed the process to be inadequate for Mars. There were two major, related problems. The first is that the interpolation and averaging scheme was not conservative in mass or energy. The properties in each of the grids within the overlapping regions were not identical, and the averaging technique produced values whose mass and energy distributions necessarily differed from the already differing inputs. The second problem was a global manifestation of the local non-conservation problem: neither mass nor energy was globally conserved. In particular, the model tended to produce a near-secular trend in mass gain or loss as a function of time, which was traced to the small mass leak associated with the zipping. At any given time step, the change in global mass was small, but when integrated over time, that mass loss became substantial over seasonal to annual time scales. A global correction scheme was implemented that computed the net global mass error and globally multiplied the pressure field by an appropriate factor to force hydrostatic mass conservation at each time step. This technique was successful in solving the mass problem without any major noticeable impacts on other fields. The correction process was, however, extremely computationally inefficient. It also proved to be moot, because the model solutions, particularly in the hemispheric zipping regions, were distorted and noisy. The averaging technique triggered extensive gravity wave noise, the strong tropical thermal tide signal displayed poor phasing of the semi-diurnal and higher frequency modes, and the tropical jets had odd structures.
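The mass-correction scheme described above can be sketched as a per-step rescaling of the surface pressure field; this is a schematic reconstruction (the since-removed MRAMS code is not reproduced here), relying only on the fact that column mass is proportional to surface pressure:

```python
import numpy as np

def enforce_global_mass(ps, cell_area, mass_ref, g=3.71):
    """Rescale the surface pressure field so the total atmospheric mass
    equals mass_ref. Column mass per unit area is ps/g, so the global
    mass is sum(ps * area) / g."""
    mass_now = np.sum(ps * cell_area) / g
    return ps * (mass_ref / mass_now)

# Example: a small per-step leak from the "zipping", corrected each step.
area = np.full((90, 180), 1.0e9)           # toy equal-area cells [m^2]
ps = np.full((90, 180), 610.0)             # surface pressure [Pa]
mass_ref = np.sum(ps * area) / 3.71
ps_leaky = ps * 0.9999                     # 0.01% mass loss in one step
ps_fixed = enforce_global_mass(ps_leaky, area, mass_ref)
print(np.allclose(ps_fixed, ps))           # True
```

The per-step factor is tiny, which is consistent with the observation in the text that the leak only became significant when integrated over seasonal to annual time scales.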
Most of these poor results were assumed to be a result of the crude hemispheric zipping operator, but it is also possible, if not likely, that some were due to numerical artifacts forced by the highly distorted and irregular grid geometries that result from a polar stereographic grid projection extending past the equator. Ultimately, the global experiments were abandoned, and the capability and corresponding code were completely removed from the model. Better, workable solutions, such as the construction of true global grids like those in PlanetWRF [7], were on the horizon. More recently, multiscale global models designed explicitly to produce global simulations with higher-resolution structured or unstructured grids now provide a general solution (e.g., [53-55]).

NetCDF and File Input/Output

In 2001, MRAMS and its early postprocessor application could only read and write files in an MRAMS-specific binary format. In order to improve portability between computer systems and data utility among users, output is now in NetCDF format. Most of the input data sets, such as topography, are now also formatted in NetCDF. The use of this data standard also permits the use of widely available tools to browse and analyze the model data. The current MRAMS postprocessor application (postp) enables derived fields to be calculated from the native model output variables and output to a variety of formats. The most flexible option is for postp to write NetCDF-format files (along with small corresponding descriptor files for quickly viewing the postprocessed fields with the GrADS visualization software) that can be readily used by a variety of modern visualization and analysis tools (e.g., IDL, Python). Other postp output options include the legacy GrADS binary format and a few obscure specialized formats that are used mostly for increasingly obsolete 3D rendering applications.

Code Structure and Organization

Like its forerunner RAMS, MRAMS strongly prefers runtime configuration over compile-time configuration (which is used by some existing models). This enables a user to compile once and then run many types of numerical experiments using the same executable. A single (substantial) namelist specifies the model configuration and contains inline comments with helpful short descriptions and reminders of each setting. The MRAMS postprocessor application similarly uses a single human-readable namelist for its configuration. The primary codebase is now maintained as a modern git-format repository to enable version control and other related niceties. Build and run scripts written in Python and a bit of shell script (sh) now enable the code to be more easily ported to other systems (Linux, MacOS; high-performance computing systems), simplify/automate the process of building the modeling system, and enable repetitious code to be autogenerated (reducing the chance for human coding errors). MRAMS is now primarily written in Fortran 95+ (i.e., Fortran 95 with a few Fortran 2003 items). Very little C-language code is now used, and even then, only for a few filesystem and operating system interface tasks. Almost all routines are contained within Fortran MODULEs, and their arguments are specified with INTENT attributes, allowing for comprehensive checks by modern compilers and greatly reducing memory leaks and other undesirable model behavior.
Common and/or universal constants have been put into a few Fortran modules that are used repeatedly throughout the code (instead of the error-prone practice of locally defining the values in multiple places). General modular code practice has been followed, encapsulating repeated or easily compartmentalized code as functions and subroutines called by driver routines. MRAMS now has parallel computing capability (using an MPI-based distributed-machine paradigm), which is important for making large or more complex runs practical.

Future Directions

RAMS_2.9_r39 is the current stable release; however, the code forked at version r37, with one branch maintained at SwRI and one at SETI. Very little further development has happened on the SwRI side, except for bug fixes, but communication between the SwRI and SETI groups continues, and there is the possibility that some or all of the two branches will be merged again in the future. SwRI is now working on a next-generation model that will eventually replace MRAMS, which is the reason that further development at SwRI has largely ceased. However, much of the physics in MRAMS is expected to transfer over to the new model. On the SETI side, the most recent development has centered on streamlining and modernizing model output options, including adding runtime postprocessing. Other developments include the ability to use DEMs based on spacecraft imagery as part of the model topography (useful at small spatial scales) and recent observation-based dust climatologies (e.g., [56]) for the atmospheric dust loading.

Conclusions

The Mars Regional Atmospheric Modeling System has been in use for nearly two decades and has been utilized for both fundamental and applied research on the Mars atmosphere. Since the initial paper introducing the model [4], numerous updates, changes, and capabilities have been added. In addition, substantial experience specific to the uniqueness of Mars has been gained. MRAMS has been fully updated to modern Fortran standards with extensive use of modules and explicit variable-type and I/O declarations. All of the original C code used primarily for dynamic memory allocation has been removed and replaced by Fortran equivalents. Supporting input data files and model output data files are almost all in the NetCDF standard. Installation of the code is now managed by modern configuration and build tools. The code base is maintained in a version control repository. All of these changes have made for a more user-friendly interface and model compared to the original release. One of the most important changes to the model was the inclusion of full compressibility in the model core, with additional modifications for the complete linearized buoyancy term. This was found to be absolutely necessary in order to properly capture the pressure signal associated with the diurnal tide. Although winds have never been comprehensively observed on Mars, it is likely that compressibility is necessary to properly represent the strong irrotational flows associated with the atmospheric tide. Fully compressible physics should be considered an absolute necessity for any future or next-generation model development. Bin microphysics for dust, water, and CO2 are now part of the standard release, and mixed-phase aerosols consisting of combinations of these three groups are possible.
Dust may be lifted from the surface, and sources/sinks are tracked throughout its lifecycle, including lifting, serving as condensation nuclei, sublimation of condensates, and sedimentation/precipitation onto the surface. All the microphysical species can be radiatively active. Dust is represented with both a background and a foreground field, which behave independently and are useful for simulating phenomena that positively perturb the nominal atmospheric dust loading (e.g., dust storms). Numerous other updates, including more sophisticated horizontal diffusion, subgrid-scale dust lifting schemes, and bug fixes, have been incorporated into the current version of MRAMS. Whereas the original model was only capable of simulating the very basic mesoscale and microscale circulations of Mars, the current version can be applied over any geographical area, from pole to equator, and has the ability to represent many of the key physical processes. The MRAMS model is still being used by many groups, and even with the development of the next-generation models, MRAMS is likely to continue as a useful tool for the investigation of the atmosphere of Mars. Researchers should contact the authors if they have an interest in using the code.

Funding: The research summarized in this manuscript was funded by dozens of grants from internal and external sources, primarily various NASA programs, over the years. The writing of this manuscript involved no external funding.
Life on the Edge: Latching Dynamics in a Potts Neural Network

Chol Jun Kang 1,2, Michelangelo Naim 1,3, Vezha Boboeva 1 and Alessandro Treves 1,4,*
1 Cognitive Neuroscience, SISSA - International School for Advanced Studies, Via Bonomea 265, 34136 Trieste, Italy
2 The Abdus Salam International Centre for Theoretical Physics, Strada Costiera 11, 34151 Trieste, Italy
3 Department of Physics, La Sapienza Università di Roma, Piazzale Aldo Moro 5, 00185 Roma, Italy
4 Centre for Neural Computation, Norwegian University of Science and Technology, 7491 Trondheim, Norway
* Correspondence: Tel.: +39-040-3787-623

Introduction

How can the human brain produce creative behaviour? Systems neuroscience has mainly focused on the states induced, in particular in the cortex, by external inputs, be these states simple distributions of neuronal activity or more complex dynamical trajectories. It has largely eschewed the question of how such states can be combined into novel sequences that express spontaneous cortical dynamics rather than the reaction to an external drive. However, the generation of novel sequences of states drawn from even a finite set has been characterized as the infinitely recursive process deemed to underlie language productivity, as well as other forms of creative cognition [1]. If the individual states, whether fixed points or stereotyped trajectories, are conceptualized as dynamical attractors [2], the cortex can be thought of as engaging in a kind of chaotic saltatory dynamics between such attractors [3]. Attractor dynamics has indeed fascinated theorists, and a major body of work has shown how to make the concepts and analytical tools developed within statistical physics relevant for neuroscience, but the focus has been on compact, homogeneous neural networks [4-7]. These have been regarded as simplified models of local cortical networks (as well as, e.g., of the CA3 hippocampal field) and have not been analysed in their potential saltatory dynamics, given that it would make no sense to consider local cortical networks as isolated systems. Even in the case of a ground-breaking investigation of putative spatial trajectory planning [8], the hippocampal activity that expressed it was thought not to be entirely endogenous, but rather guided by external inputs, including those representing goals and path integration. Therefore, formal analyses of model networks endowed with attractor dynamics have been largely confined to the simple paradigm of cued retrieval from memory. Attempts have been made to explore methodologies to study mechanisms beyond simple cued retrieval [9,10], for example those involved in drawing, confabulation, thought processes in general, and language, which are all considered to be largely independent of external stimuli, at their core, and to combine generativity with recursion [11-16].
Potts neural networks, on the other hand, originally studied merely as a variant of mathematical or potentially applied interest [17-21], offer one approach to model spontaneous dynamics in extended cortical systems, in particular if simple mechanisms of temporal adaptation are taken into account [22]. They can be subject to rigorous analyses of, e.g., their storage capacity [23] and of the mechanics of saltatory transitions between states [24], and they are amenable to a description in terms of distinct "thermodynamic" phases [25,26]. The dynamic modification of thresholds with timescales separate from that of retrieval, i.e., temporal adaptation, together with the correlation between cortical states, are key features characterizing cortical operations, and Potts network models may contribute to elucidating their roles. Adaptation and its role in semantic priming [27] have been linked to the instability manifested in schizophrenia [28]. The Potts description is admittedly an oversimplified effective model for an underlying two-level auto-associative memory network [29]. The even more drastically simplified model of latching dynamics considered by the Tsodyks group [30,31], however, has afforded spectacular success in explaining the scaling laws obtained for free recall in experiments performed 50 years ago. The Potts model may be relevant to a wide set of behaviours and to related experimental measures, once the correspondence between model parameters and the quantities characterizing the underlying two-level network is elucidated. On this correspondence, we elaborate in a separate study [32]. Here, we ask: when does the Potts network latch?

The Model

We consider an attractor neural network model composed of Potts units, as depicted in Figure 1. The rationale for the model is that each unit represents a local network of many neurons with its own attractor dynamics [4,6], but in a simplified/integrated manner, regardless of detailed local dynamics. Local attractor states are represented by S + 1 Potts states: S active ones and one quiescent state (intended to describe a situation of no retrieval in the local network). We call this autoassociative network of Potts units a Potts network, and refer to our earlier studies of some of its properties [22-25,33]. The "synaptic" connection between two Potts units is in fact a tensor summarizing the effect of very many actual connections between neurons in the two local networks; still following the Hebbian learning rule [34], we write the connection weight between unit i in state k and unit j in state l as [23]

J_{ij}^{kl} = \frac{c_{ij}}{C a (1 - a/S)} \sum_{\mu=1}^{p} \left( \delta_{\xi_i^\mu, k} - \frac{a}{S} \right) \left( \delta_{\xi_j^\mu, l} - \frac{a}{S} \right) (1 - \delta_{k,0}) (1 - \delta_{l,0}),   (1)

where c_ij is 1 if two units i and j have a connection and 0 otherwise, C is the average number of connections per unit, a is the sparsity parameter, i.e., the fraction of active units in every stored global activity pattern {ξ_i^µ}, and p is the number of stored patterns. The last two delta functions imply that the learned connection matrix does not affect the quiescent states. We will use the indices i, j for units, k, l for states and µ, ν for patterns. Units are updated in the following way:

\sigma_i^k = \frac{e^{\beta r_i^k}}{\sum_{l=1}^{S} e^{\beta r_i^l} + e^{\beta (\theta_i^0 + U)}},   (2)

\sigma_i^0 = \frac{e^{\beta (\theta_i^0 + U)}}{\sum_{l=1}^{S} e^{\beta r_i^l} + e^{\beta (\theta_i^0 + U)}},   (3)

where r_i^k is the input to (active) state k of unit i integrated over a time scale τ_1, while U and θ_i^0 are, respectively, the constant and time-varying components of the effective overall threshold for unit i, which in practice act as inverse thresholds on its quiescent state. θ_i^0 varies with time constant τ_3, to describe local network adaptation and inhibitory effects. The stiffness of the local dynamics is parametrized by the inverse
"temperature" β (or T −1 ), which is then distinct from the standard notion of thermodynamic noise.The input-output relations (2) and (3) ensure that In addition to the overall threshold, θ k i is the threshold for unit i specific to state k, and it varies with time constant τ 2 , representing adaptation of the individual neurons active in that state, i.e., their neural or even synaptic fatigue.The time evolution of the network is then governed by equations that include three distinct time constants: where the field that the unit i in state k experiences reads The "local feedback term" w is a parameter, first introduced in [25], that modulates the inherent stability of Potts states, i.e., that of local attractors in the underlying network model.It helps the network converge to an attractor faster by giving positive feedback to the most active states and so it effectively deepens their basins of attraction.Note that, in this formulation, feedback is effectively spread over (at least) three time scales: w is positive feedback mediated by collective attractor effects at the neural activity time scale τ 1 , θ k i is negative feedback mediated by fatigue at the slower time scale τ 2 , while θ 0 i is also negative, and it can be used to model both fast and slow inhibition; for analytical clarity, we consider the two options separately, as the "slowly adapting regime", with τ 3 > τ 2 , and the "fast adapting regime", with τ 3 < τ 1 .It would be easy, of course, to introduce additional time scales, for example by distinguishing a component of θ 0 i that varies rapidly from one that varies slowly, but it would greatly complicate the observations presented in the following. The overlap or correlation of the activity state of the network with the global memory pattern µ can be measured as Randomly correlated memory patterns are generated according to the following probability distribution P(ξ a, while correlated patterns are generated by the multi-parent algorithm sketched in [22], which will be discussed in a separate study [35]. Results When does robust latching, as a model of spontaneous sequence generation, occur?We address this question with extensive computer simulations, mostly focused on latching between randomly correlated patterns.We consider first the slowly adapting regime (τ 1 τ 2 τ 3 ) in which active states (τ 2 ) adapt slower than activity propagation to other units (τ 1 ), while inhibitory feedback is restricted to an even slower timescale, τ 3 .Next, we contrast with it the fast adapting regime (τ 3 τ 1 τ 2 ) in which, instead, inhibitory feedback is immediate, relative to the other two time scales. The critical parameters at play are the number of patterns, p, the number of active states, S, and the number of connections per unit, C, and we also look at the effect of the feedback term w.The other parameters, including T, τ 1 , τ 2 , and τ 3 , are kept fixed during simulations, after having chosen a priori values that can lead to robust latching dynamics in the two regimes. 
Slowly Adapting Regime

In the slowly adapting regime, over a (short) time of order τ_1 the network, if suitably cued, may reach one of the global attractors and stay there for a while; whereupon, after an adaptation time of order τ_2, it may latch to another attractor, or else activity may die [25]. However, how distinct is the convergence to the new attractor? One may assess this as the difference between the two highest overlaps the network activity has, at time t, with any of the memory patterns, m_1(t) − m_2(t): ideally, m_1 ≈ 1 and m_2 is small, so their difference approaches unity. A summary measure of memory pattern discrimination can be defined as the average d_12 = ⟨m_1(t) − m_2(t)⟩ over the sequence, where, of course, the identity of patterns 1 and 2 changes over the sequence. As discussed in [25], by looking at the latching length, i.e., how long a simulation runs before, if ever, the network falls into the global quiescent state, one can distinguish several "phases". Depending on the parameters, the dynamics exhibit finite or infinite latching behaviour, or no latching at all. Typically, when increasing the storage load p, the latching sequence is prolonged and eventually extends indefinitely, but, at the same time, its distinctiveness decreases, since memory patterns cannot be individually retrieved beyond the storage capacity; and, even before, each acquires neighbouring patterns, in the finite and more crowded pattern space, with which it is too correlated to be well discriminated. In Figure 2, we see that, for each S = (2, 3, 4), as p is increased beyond a certain value, latching dynamics rapidly picks up and eventually extends through the whole simulation, but, in parallel, its discriminative ability decreases and almost vanishes; the p-range where d_12 is large is in fact where there is no latching, and d_12 only measures the quality of the initial cued retrieval. For S = 1 no significant latching sequence is seen, whereas for higher values, at fixed p, its distinctiveness increases with S, but its length decreases from the peak value at S = 2. Since the latching length l is not itself sufficient to characterize latching and has to be complemented by discriminative ability, we find it convenient to quantify the overall quality of latching with a new quantity Q, defined as

Q = η · l · d_12,   (10)

where η is introduced to exclude cases in which the network gets stuck in the initial cued pattern, so that no latching occurs however high d_12 and l are: η = 1 if at least one transition to a second memory pattern occurs, and η = 0 otherwise.
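Under the reconstruction above (Q = η l d_12, with l normalized by the run time, as in the Figure 2 caption), the three quantities can be extracted from an overlap time series as sketched here (function and threshold names are illustrative):

```python
import numpy as np

def latching_quality(m, n_update, active_thr=0.1):
    """m: array (T, p) of overlaps m_mu(t) along one simulation.

    Returns (l, d12, Q): normalized latching length, mean discrimination
    d12 = <m1 - m2>, and Q = eta * l * d12.
    """
    top2 = np.sort(m, axis=1)[:, -2:]        # two highest overlaps per step
    m1, m2 = top2[:, 1], top2[:, 0]
    alive = m1 > active_thr                  # network not yet quiescent
    n_alive = int(alive.sum())
    l = n_alive / n_update                   # length normalized by run time
    d12 = float((m1[alive] - m2[alive]).mean()) if n_alive else 0.0
    # eta = 1 iff the best-matching pattern changes at least once.
    best = np.argmax(m, axis=1)[alive]
    eta = 1.0 if np.unique(best).size > 1 else 0.0
    return l, d12, eta * l * d12
```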
Q is therefore a positive real number between 0 and 1, and we report its color-coded value to delineate the relevant phases in phase space. Thus, low quality latching with small Q may result from either small d_12 or short l, or both. The parameters that determine Q on which we focus are S, C and p, after having suitably chosen all the other parameters, which are kept fixed. Their default values in the slowly adapting regime are N = 1000, a = 0.25, U = 0.1, T = 0.09, w = 0.8, τ_1 = 3.3, τ_2 = 100.0, τ_3 = 10⁶, unless explicitly noted otherwise. If activity does not die out before, simulations are terminated after N_update = 6 × 10⁵ steps, the total number of updates of the entire Potts network, and are repeated with different cued patterns. Regarding the values of S, C and p, for simplicity we denote points in the S-p (or C-p) plane by number pairs such as (6, 200). Figure 3 shows that there are narrow regions in the S-p and C-p planes, which we call bands, where relatively high quality latching occurs. The values of p with the "best" latching scale almost quadratically in S, and sublinearly in C. Moreover, one notices that, below certain values of S and C, no latching is seen, i.e., the band effectively ends at S ∼ 2, p ∼ 90 in Figure 3a and at C ∼ 50, p ∼ 70 in Figure 3b. Importantly, the band in Figure 3a is confined in the area delimited by the cyan solid and dashed curves above and below it. The dashed curve marks the onset of latching, i.e., the phase transition to finite latching [25], while the solid curve above is the storage capacity curve in a diluted network, given by an approximate relation (Equation (13)) beyond which retrieval fails [25]. It should also be noted that overall Q values are not large, in fact well below 0.5 throughout both the S-p and C-p planes. The reason lies, again, in the conflicting requirements of persistent latching, favoured by dense storage, high p, and good retrieval, allowed instead only at low storage loads (in practice, relatively low p/S² and p/C values). In Figure 4, we show representative latching dynamics at three selected points in the (S, p) plane, in terms of the time evolution of the overlap of the states with the stored activity patterns (see Equation (8)). The three points, marked in red, span across the band in Figure 3a, and we see that latching is indefinite but noisy in the example at (5, 250), which is apparently too close to storage capacity, while memory retrieval is good at (7, 150), but the sequence of states ends abruptly, as the network is in the phase of finite latching [25]. The two trends are representative of the two sides of the band, while in the middle, at (6, 200), one finds a reasonable trade-off, with relatively good retrieval combined with protracted latching.
We use two statistical measures, the asymmetry of the transition probability matrix and Shannon's information entropy [33,36,37], to characterize the essential features of the dynamics in different parameter regions. For that, we take all five red points from Figure 3a, such that they cut across the latching band in the S-p plane, and extend further upwards. We first compile a transition probability (or rather, frequency) matrix M from all distinct transitions observed along many latching sequences generated with the same set of stored patterns, as in [33]. The dimension of the matrix M is (p + 1) × (p + 1), as it includes all possible transitions between the p patterns plus the global quiescent state. M is constructed from the transitions between states having both overlaps above a given threshold value, e.g., 0.5, in a data set of 1000 latching sequences, by accumulating their frequency between any two patterns into each element of the matrix and then normalizing to 1 row by row, so that M_µν reflects the probability of a transition from pattern µ to ν. A, the degree of asymmetry of M, is defined as

A = \frac{||M - M^T||}{||M||},   (11)

where M^T is the transpose matrix of M and ||M|| = \sum_{\mu,\nu} |M_{\mu,\nu}|. Note that A is small for unconstrained bi-directional dynamics and large for simpler stereotyped flows among global patterns, attaining its maximum value A = 2 for strictly uni-directional transitions. Note also that if the average had been taken over different realizations of the memory patterns, given sufficient statistics A would obviously vanish.
Figure 4. Representative latching dynamics at (a) (5, 250); (b) (6, 200); and (c) (7, 150) in Figure 3a.
Another measure we apply to the transition matrix M is Shannon's information entropy, defined as

I_\mu = -\frac{1}{\log_2 (p + 1)} \sum_{\nu} M_{\mu,\nu} \log_2 M_{\mu,\nu}.   (12)

I_µ takes positive real values from 0 (deterministic, all transitions from one state are to a single other state) to 1 (completely random), since it is normalized by log_2(p + 1), which corresponds to the completely random case. We use these two measures, A and I_µ, on the points marked red in Figure 3a.
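Both measures are elementary to compute from the row-normalized transition matrix M; a sketch under the definitions just given:

```python
import numpy as np

def asymmetry(M):
    """A = ||M - M^T|| / ||M||, with ||.|| the sum of absolute entries.
    A = 0 for a symmetric matrix; A = 2 for strictly one-way transitions."""
    return np.abs(M - M.T).sum() / np.abs(M).sum()

def mean_entropy(M):
    """Shannon entropy of each row, normalized by log2(p+1), averaged
    over rows with at least one observed transition."""
    n = M.shape[0]
    vals = []
    for row in M:
        q = row[row > 0]
        if q.size:
            vals.append(-(q * np.log2(q)).sum() / np.log2(n))
    return float(np.mean(vals))

# Strictly uni-directional ring of transitions: A = 2, entropy = 0.
n_states = 11
ring = np.eye(n_states, k=1); ring[-1, 0] = 1.0
print(asymmetry(ring), mean_entropy(ring))       # 2.0, 0.0
# Uniform random transitions: A small, entropy close to 1.
rng = np.random.default_rng(1)
R = rng.random((n_states, n_states)); R /= R.sum(axis=1, keepdims=True)
print(asymmetry(R), mean_entropy(R))
```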
These are the points (3, 350)-(4, 300)-(5, 250)-(6, 200)-(7, 150), which lie on a segment going through the latching band observed in the slowly adapting regime. If we focus on transitions between states reaching at least a threshold overlap of 0.5, Figure 5 appears to show two complementary, almost opposite U-shaped curves as the two measures, asymmetry and entropy, are applied to the five points along the segment. One branch of each U shape extends over the range that includes the high-Q latching band: these are the right branches of the two curves, in which asymmetry decreases from a large value A ≈ 1.6 at (7, 150) to a smaller one, A ≈ 0.6, at (5, 250), while concurrently the entropy increases from I_µ < 0.5 at (7, 150) to I_µ > 0.8 at (5, 250). As Figure 4 indicates, at (7, 150), latching sequences are distinct but very short, and few entries are filled in the transition matrix: generally either M_µν = 0 or M_νµ = 0, so that asymmetry is high and entropy relatively low. This holds irrespective of the number of sequences that are averaged over. The opposite happens at (5, 250), where many transitions are observed, and in filling the transition matrix they approach the random limit. The point with the highest Q-value, (6, 200), is characterized by intermediate values of asymmetry and entropy, which, we have previously observed, may be seen as a signature of complex dynamics [33]. Extending the range upwards, it seems as if the asymmetry, with threshold 0.5, were eventually to increase again, reaching its maximum A = 2 at (3, 350), with a decreasing entropy, vanishing at the same point (3, 350). These left branches are, however, dependent on the threshold values used, as Figure 5 shows, and do not imply that transitions become more deterministic, because in this region there are simply fewer and fewer distinct transitions discernible above the noise (Figure 4). The left branches merely reflect the increasing arbitrariness with which one can identify significant correlations with memory states in the rambling dynamics observed at higher storage loads. In Figure 6, we see that the effect of the local feedback term, w, is first to enable latching sequences of reasonable quality, and then also to shift the latching band to higher values of S, effectively pushing this behaviour away from the storage capacity curve representing the retrieval capability of the Potts associative network. Hence, if one were to regard S as a structural parameter of the network, and w as a parameter that can be tuned, there is an optimal range of w values that allows good quality latching for higher storage. This argument has to be revised, however, by considering also the threshold U, since increasing w can be shown to be functionally equivalent, in terms of storage capacity, to decreasing U [32]. Also for U, in fact, one can find an optimal range for associative retrieval to occur, in the simple Potts network with no adaptation and with w = 0 [23]. This near equivalence between U and −w no longer holds in the fast adapting regime, to which we turn next.
Fast Adapting Regime

We characterize the fast adapting regime by the alternative ordering of time scales τ_3 < τ_1 ≪ τ_2, such that the mean activity in each Potts unit is rapidly regulated by fast inhibition, at the time scale τ_3. Equation (6) stipulates that \sum_{k=1}^{S} \sigma_i^k(t), the total activity of each unit, is followed almost immediately, or more precisely at speed τ_3⁻¹, by the generic threshold θ_i^0(t). Extensive simulations, with the same parameters as for the slowly adapting regime, except for w = 1.37, τ_1 = 20, τ_2 = 200 and τ_3 = 10, show that, similarly to the slowly adapting regime, there are latching bands in the Q(S, p) and Q(C, p) planes (see Figure 7). With these parameters, in particular the larger value chosen for the feedback term w, the bands occupy a position similar to that in the slowly adapting regime. Again, they appear to vanish below certain values of S and C, more precisely around S ∼ 3, p ∼ 120 in Figure 7a and around C ∼ 50, p ∼ 90 in Figure 7b, and to scale subquadratically in S and sublinearly in C. The band in the S-p plane is again confined by the storage capacity (solid cyan curve) and by the onset of (finite) latching (dashed curve). The storage capacity curve, which is independent of threshold adaptation, follows the same Equation (13). Examples of latching behaviour outside and inside the band are presented in Figure 8, at the same values for S but shifted by ∆p = 100, i.e., at the "red" points (5, 350), (6, 300), and (7, 250) in the S-p plane. Again, we see from Figure 7a that (5, 350) lies just above the band, while (6, 300) is right at the centre. To the right of the band, e.g., at (7, 250), the transitions are distinct but latching dies out very soon, while on the left, e.g., at (5, 350), the progressively reduced overlaps are a manifestation of increasingly noisy retrieval dynamics. In all three examples, we observe that latching steps proceed slowly, even more slowly than the doubled time scale τ_2 = 200 would have led one to predict. This appears to be because often a significant time elapses between the decay of the overlap of the network with one pattern and the emergence of a new one.
Figure 9 shows the asymmetry and entropy measures, A and I_µ, along the points (4, 400)-(5, 350)-(6, 300)-(7, 250)-(8, 200) in Figure 7a, where, again, we have chosen a series shifted by ∆p = 100 upwards in order to centre it better on the high quality latching band. Only an overlap threshold of 0.5 is considered. What one can see, in contrast with the slowly adapting regime, is that now the two measures are not quite complementary. The point (6, 300), which lies inside the band, very much at its quality peak, shows again an intermediate value for the asymmetry, but the highest value, given the overlap threshold, for the entropy. The discrepancy may be ascribed to the different prevailing type of latching transition observed in the fast adapting regime (Figure 8). As discussed in [24], in a Potts network latching transitions with a high cross-over, which can only occur between memory patterns with a certain degree of correlation, can be distinguished from those with a vanishing cross-over, which are much more random. In the fast adapting regime, as indicated by the examples in Figure 8, all transitions tend to be of the latter type. A more careful analysis indicates, in fact, that they are quasi-random, in that they avoid any memory pattern in which largely the same Potts units are active as in the preceding pattern. In fact, the value of the entropy at (6, 300) implies that on average from each of the 300 memory patterns there are transitions to at least 190 other patterns (190 if they were equiprobable; in practice many more); therefore, only the few patterns that happen to be more (spatially) correlated are avoided. Towards the left, the curves do not vary much depending on the threshold chosen for the overlaps, but the asymmetry eventually becomes maximal and the entropy vanishes simply because sequences of robustly retrieved patterns do not last long, so, in this particular case, it would take more than 1000 sequences to accumulate sufficient statistics. The effects of increasing the w term in the fast adapting regime are shown in Figure 10, where one notices two main features. First, there is heightened sensitivity to the exact value of w, so that relatively close data points at w = 1.33, 1.37, 1.41, and 1.45 yield rather different pictures. Second, although again increasing w shifts the latching band rightward, by far the main effect is a widening of the band itself. This is because, in the presence of rapid feedback inhibition, a larger w term ceases to be functionally similar to a lower threshold, which in the slowly adapting regime was leading in turn to noisier dynamics and eventually indiscernible transitions. In the fast adapting regime, the increased positive feedback can be rapidly compensated by inhibitory feedback, so that in the high-storage region overlaps remain large until they are suppressed by storage capacity constraints (the cyan curve, which remains at approximately the same distance from the larger and larger latching band). We now turn to a more explicit comparison of the transition dynamics in the two regimes.
Comparison of Two Regimes

To look more closely at latching dynamics in the slowly and fast adapting regimes, we take the following points from Figures 3a and 7a, which allow us to cut through the bands at two different storage levels:

p = 200, S = (4, 5, 6, 7); p = 400, S = (6, 7, 8, 9).   (16)

Figure 11 shows in different colors the overlaps of the state of the network with the global patterns, for sample sequences along the points in (16), in the slowly adapting regime. For both p = 200 and 400, latching length is observed to decrease with S, unlike the discrimination between patterns, as measured by d_12, in agreement with Figure 2. Note that the two rows in the figure are similar, indicating that the shift ∆p = 200 is approximately compensated by the rightward shift ∆S = 2. The fast adapting regime shows the same trends; again one sees in Figure 12 the approximate compensation between the two shifts ∆p = 200 and ∆S = 2, but latching appears in general less noisy. The main difference between the two regimes, however, is in the distribution of crossover values, those at which the network has equal overlap with the preceding and the following pattern: their distributions (PDF, or probability density function) are shown in Figures 13 and 14. We see that, in the fast adapting regime, most transitions occur at very low crossover, i.e., the correlation with the preceding memory has to decay almost to zero before the next memory pattern can be activated. Only in regions of the (S, p) plane where latching sequences are very short, a few transitions only, do we begin to see a small fraction of them with crossover values above 0.2. In most cases, the inhibitory feedback conveyed by the variable θ_i^0 is so fast as not to allow transitions to be carried through by positive correlations, i.e., by the subset of Potts units which are in the same active state in the preceding and successive pattern. The choice of the next pattern is not completely random, as indicated by the relative entropy values still below unity, but is determined essentially by negative selection, as mentioned above: the next pattern tends to have few active Potts units that coincide with those active in the preceding pattern. In the slowly adapting regime, instead, due to the slow variation of the non-specific threshold, active Potts units can remain active, but they are encouraged by the variables θ_i^k to switch between active states if they have remained in the same one for too long. This can produce, particularly in the center of the latching band, sequences of patterns succeeding each other at high crossover, as shown by the distribution in Figure 13c. Even when latching is very noisy and approaches randomness, as in Figure 13a,e, crossover values are consistently above 0.2, indicating a preference for patterns insisting on the same set of active Potts units, unlike the fast adapting regime. Finally, when the number of states S is too large or, equivalently, that of patterns p too low, we observe some transitions with minimal crossover and a majority with very large crossover, as if occurring only with those patterns that were already partially retrieved while the network still had the largest overlap with the preceding pattern; but the main observation is that there are very few transitions at all, so that to plot a probability density distribution we need to use wide bins, in Figure 13d,h (and in Figure 14d).
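The crossover values underlying these distributions can be read off the overlap traces. The sketch below takes the crossover as the approximate common overlap value at the step where the best-matching pattern changes; this is one simple operationalization, not necessarily the paper's exact one:

```python
import numpy as np

def crossovers(m, thr=0.1):
    """m: array (T, p) of overlap traces. Returns the overlap values at
    which the traces of the outgoing and incoming patterns cross, i.e.,
    where the best-matching pattern changes identity."""
    best = np.argmax(m, axis=1)
    values = []
    for t in np.flatnonzero(best[1:] != best[:-1]) + 1:
        mu, nu = best[t - 1], best[t]
        if m[t, nu] > thr:               # ignore decays into quiescence
            # Average the two overlaps bracketing the switch as the
            # crossover estimate (they are equal at the true crossing).
            values.append(0.5 * (m[t - 1, mu] + m[t, nu]))
    return np.array(values)
```

Histogramming these values over many sequences yields the PDFs of Figures 13 and 14: peaked near zero in the fast adapting regime, and consistently above 0.2 in the slowly adapting one.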
This difference between the two regimes is confirmed by an analysis of the correlations between successive patterns in latching sequences. In the Potts network, at least two types of spatial correlation between patterns are relevant: how many active Potts units the two patterns share, and how many of these units are active and in the same state. We quantify them with C_1, the fraction of the units active in one pattern that are active also in the other, and in the same state; and with C_2, the fraction that are also active, but in a different state. In a large set of randomly determined patterns, the mean values are C_1 = a/S and C_2 = a(S − 1)/S. The full distribution, among all pairs, is scattered around these mean values. However, do transitions occur between any pair of patterns? Figure 15 shows that, relative to the full distribution, in blue, transitions in the slowly adapting regime (on the left) tend to occur only between patterns with C_1 above, and C_2 below (or at most around), their average values. Thus, when the network has retrieved a memory representation, it looks for correlated ones, as it were, to which to jump. In the fast adapting regime, this is not the case: transitions are almost random, except that there appears to be a slight tendency to avoid those with C_1 well above its mean value. Note that the values of p and w are different in the two panels, and are chosen so as to be in roughly equivalent positions within the respective latching bands. The analysis of the crossover points, therefore, affords insight into the rather different transition dynamics prevailing in the fast and slowly adapting regimes, in particular in the center of their latching bands, suggesting that in a more realistic cortical model, which combines both types of activity regulation, there should still be a significant component of "slow adaptation" for interesting sequences of correlated patterns to emerge. The preceding simulations, however, were all carried out with randomly correlated patterns, in which the occasional high or low correlation of a pair is merely the result of a statistical fluctuation. Does the insight carry over to a more structured model of the correlations among memory patterns? This is what we ask next.

Analysis with Correlated Patterns

Correlated patterns were generated according to the algorithm mentioned in [22] and discussed in detail in [35]. The multi-parent pattern generation algorithm works in three stages. In the first step, a total set of Π random patterns is generated to act as parents. In the second step, each parent in the total set is assigned to p_par randomly chosen children. Then, a "child" pattern is generated: each pattern, receiving the influence of its parents with probability a_p, aligns itself, unit by unit, in the direction of the largest field. In the third and final step, the fraction a of the units with the highest fields is set to become active. In this way, child patterns with sparsity a are generated. In addition, another parameter ζ can be defined, according to which the field received by a child pattern is weighted by a factor exp(−ζk), where the index k runs through all parents. This is meant to express non-homogeneous input from the parents.
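The three-stage recipe can be rendered as the following sketch (a loose reconstruction of the algorithm of [22,35]; details such as how the fields accumulate and how ties are broken are guesses, and the parameter values match those used in the simulations below):

```python
import numpy as np

rng = np.random.default_rng(2)

def make_children(N=500, S=5, a=0.25, Pi=100, p=200, a_p=0.4, zeta=0.1,
                  p_par_frac=0.277):
    # Stage 1: Pi random parent patterns (0 = quiescent, 1..S active).
    parents = np.where(rng.random((Pi, N)) < a,
                       rng.integers(1, S + 1, (Pi, N)), 0)
    children = np.zeros((p, N), dtype=int)
    n_active = int(a * N)
    for c in range(p):
        # Stage 2: each child is influenced by a random subset of parents.
        mine = np.flatnonzero(rng.random(Pi) < p_par_frac)
        field = np.zeros((N, S + 1))
        for k, par in enumerate(mine):
            mask = rng.random(N) < a_p          # parent input, prob a_p per unit
            weight = np.exp(-zeta * k)          # non-homogeneous parent weights
            field[np.arange(N)[mask], parents[par][mask]] += weight
        # Each unit aligns with the active state receiving the largest field.
        state = field[:, 1:].argmax(axis=1) + 1
        strength = field[:, 1:].max(axis=1)
        # Stage 3: only the fraction a of units with the highest fields activate.
        top = np.argsort(strength)[-n_active:]
        children[c, top] = state[top]
    return children
```

Children sharing many common parents receive congruent fields and hence end up more similar, which is the sense in which these patterns are correlated.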
It is clear that such patterns, however, cannot be considered as independent and identically distributed, as in Equation (9), because their activity is drawn from a common pool of parents. In fact, they are correlated, in the sense that those children receiving congruent input from a larger number of common parents will tend to be more similar. All of these observations are studied in more detail in [35]; here we only focus on how correlations affect the phase diagrams. In the following simulations, the parameters pertaining to the patterns are a_p = 0.4, Π = 100, and ζ = 0.1, while p_par/p, the probability that a pattern is influenced by a given parent, is kept constant at 0.277.

Simulations with correlated patterns were carried out across the same S-p and C-p planes in phase space, in the slowly adapting regime, as shown in Figure 16. We focused on the slowly adapting regime based on the results of the crossover analysis. All other simulation parameters were kept at the values used with randomly correlated patterns. We see from the figure that the presence of non-random correlations among the memory patterns, albeit weak, shifts the bands to the left and upward in phase space, approximately preserving the dependence of the viable storage load p on S and C, but at somewhat higher values. It is as if more memories could "fit", if correlated, into the same latching dynamics.

Figure 17 shows the S-p plane cut along p = 200, to better compare the cases with correlated (blue) and random (red) patterns. It is apparent that there is a leftward shift, in the case of correlated patterns, from the red curve applying to the random case, but the dependence on S remains very similar.

Conclusions

In this paper, we have found the region in the Potts network phase space, spanned by the number of Potts states S, the number of connections per unit C, and the storage load p, where latching dynamics occur, and we have described their character, comparing and contrasting the slowly and fast adapting regimes. In relation to our earlier paper [22], where the possibility of such a latching region was pointed out on the basis of limited simulations, we now have a firmer basis to extrapolate to regions of parameter space of relevance to the human cortex, possibly a step toward quantitatively studying human-specific capacities, including creative behaviour. A common hallmark in both regimes is that good quality latching occupies a band which scales almost quadratically in the p-S plane, while it is sublinear in the p-C plane. These bands are bounded by the storage capacity line, above, and by the boundary between no latching and finite latching, below. If, as discussed elsewhere [32], we were to take C ≈ 10^2 and S ≈ 10^2 as the orders of magnitude of interest for the human brain, we would conclude that the relevant storage load, or semantic depth, is in the region p ≈ 10^5, in both regimes. At the center of the band in the slowly adapting regime, asymmetry and entropy take intermediate values, pointing at maximally complex and potentially useful dynamics, intermediate between the deterministic and the random extremes. High crossover values indicate that many transitions occur between highly correlated patterns. Using correlated patterns shifts the position of the band in phase space, but preserves the features observed with random patterns, still in the slowly adapting regime.
In the fast adapting regime, instead, in the center of the band, which can be made wider and more robust, the entropy is higher, and correspondingly only low-crossover transitions are observed, indicating that the network latches most of the time from one pattern to any other among the many with which it is weakly or anti-correlated, avoiding only those few with which it is highly correlated.

Therefore, we can conclude that the fast adapting regime, modelling rapid inhibitory feedback, offers a robust framework for latching dynamics, but of an essentially random, not very useful nature; whereas in the slowly adapting regime, modelling slow inhibition or local fatigue, correlations can drive latching transitions, potentially enabling semantic content in a stream of thoughts or linguistic productions, but with fragile dynamics, living at the very edge between memory overload and sequence termination due to the inability of the network to jump forward. This suggests the opportunity of considering models that integrate both fast and slowly adapting dynamics in their non-specific thresholds, so as to combine the useful features of both regimes. This will be the object of future work.

We would like to note, in the end, the inherent limitation of considering a simple homogeneous Potts network, with no differentiation among its units and no internal structure. In order to make contact with cognitive processes, of any kind, this limitation has to be overcome, as perhaps attempted, with one first step among many possible ones, by arranging Potts units on a ring [38]. Nevertheless, even in its crudest form, the Potts network with its latching dynamics can be used to explore, e.g., novel theories as to the evolutionary origin of complex cognition [39]. It establishes a quantitative framework to understand phase transitions [25], complementary to the perspective offered by other modelling approaches to sequence generation in cortical networks [40]. At the most abstract level, it can be considered an implementation of a fuzzy logic system [41,42], but with the critical advantage that its parameters can eventually be related to cortical parameters, as we begin to describe in a related study [32].

Figure 1. Global cortical model as a Potts neural network. Reprinted with permission from [25].

Figure 2. Trade-off between latching sequence length (solid lines) and retrieval discrimination (dashed lines). Different colors indicate different S values, while C = 400 throughout. The latching length l is in time steps (not in the number of transitions), normalized by the time of the simulation, N_update = 6 × 10^5.

Figure 3. Phase space for Q(S, p) in (a) and Q(C, p) in (b) with randomly correlated patterns in the slowly adapting regime. The parameters are C = 150 and S = 5, if kept fixed, and w = 0.8. The red spots in (a) mark the parameter values used in the following analyses.

Figure 5. (a) Asymmetry A of the transition matrix and (b) Shannon's information entropy I_µ, along the (3, 350)-(4, 300)-(5, 250)-(6, 200)-(7, 150) parameter series from Figure 3. Different curves correspond to different thresholds for the overlap of the two states between which the network is defined to have a transition. The error bars report the standard deviation of either quantity for each of 1000 sequences.
Figure 6. Latching quality Q(S, p) with increasing local feedback, w = 0.37, 0.55, 0.8, and 1.0, in the slowly adapting regime. Randomly correlated patterns are used, with C = 150 as in Figure 3a.

Figure 7. Phase space for Q(S, p) in (a) and Q(C, p) in (b) with randomly correlated patterns in the fast adapting regime. The parameters are identical to those in the slowly adapting regime, with the exception of w = 1.37, τ_1 = 20, τ_2 = 200, τ_3 = 10. The red spots in (a) mark, again, the parameter values used in the Figures below.

Figure 10. Latching quality Q(S, p) with increasing local feedback, w = 1.33, 1.37, 1.41, and 1.45, in the fast adapting regime. Randomly correlated patterns are used, with C = 150 as in Figure 7a.

Figure 15. Scatterplots of the fractions C_1 and C_2 of Potts units active in one pattern that are active also in another, and in the same state or, respectively, in another active state. The panels show the full distribution between any pattern pair, in the slowly (a) and fast adapting (b) regimes, in blue; and the distribution between successive patterns in latching transitions, in red. The blue distribution for the fast adapting regime (for which a = 0.25, S = 6, p = 300 and w = 1.32) is similar to the one for the slowly adapting regime (for which again a = 0.25, S = 6, but p = 200 and w = 0.65), except that it is slightly wider, because of the higher storage load, while the red distributions are markedly different. Vertical lines indicate mean values: (a) slowly adapting regime; (b) fast adapting regime.

Figure 16. Phase space, cut across the Q(S, p) plane in (a) and Q(C, p) in (b), with correlated patterns in the slowly adapting regime. Red dots represent the quality peaks in the same planes with randomly correlated patterns. The parameters are C = 150 and S = 5, if kept fixed, and w = 0.8.

Figure 17. Comparison of S-p phase spaces along p = 200 with random (red dotted line) and correlated (blue dotted) patterns in the slowly adapting regime.
Antibodies as programmable, bipedal walkers

Stochastic modeling of antibody binding dynamics on patterned antigen substrates suggests the separation distance between adjacent antigens could be a control mechanism for the directed bipedal migration of bound antibodies.

The question

Regularly patterned arrays of antigens (molecules or structures that bind to molecular elements of the host immune system, such as antibodies) commonly occur on the surfaces of natural pathogens such as viruses and bacteria. This spatial repetitiveness is recognized by the immune system as a marker of foreignness, and patterned antigens therefore elicit a stronger response than unorganized antigens [1]. The spacing between patterned antigens occurs on a scale similar to the typical distance between the binding domains of bivalent antibodies. Previous work from our lab, investigating antibody-antigen binding using DNA nanostructures to immobilize pairs of antigens at different separation distances, revealed that binding stability depends on antigen separation distance [2]. In light of these data, we asked if stochastic modeling of binding dynamics between antibodies and complex antigen patterning scenarios could elucidate the interactions between bivalent antibodies and patterned antigen arrays.

The discovery

We created a stochastic model of antibody binding to monodispersed antigens, based on our previous data on antibody binding dynamics, to predict emergent antibody binding behavior in situations of complex antigen pattern geometry. The model treats antibody binding dynamics as a continuous-time Markov chain, with states based on empty antigens, monovalent antibody-antigen complexes, and bivalent antibody-antigen complexes, where transitions between these states are governed by elementary rates determined by fitting the model to experimental dynamic binding data (Fig. 1). Using a Markov chain Monte Carlo implementation of the model allowed the prediction of antibody binding trajectories across complex patterns with many adjacent antigens and over long timescales. By simulating regularly spaced arrays of antigens, it was possible to influence relative rates of binding and unbinding by tuning the antigen separation distance. Further, by creating gradients of separation distances between 10 and 22 nm, we found that antibodies migrate down this gradient towards antigens with smaller separation distances. We predict that this migration occurs owing to a biased random walk, in which a bivalently bound antibody dislodges at one antigen and then shows a statistical preference for reassociating with an adjacent antigen at a more favorable binding separation distance. These findings support previous data from Preiner et al. that describe the bipedal movement of antibodies across reconstituted bacterial and viral surfaces with repeating antigen patterns [3].

The implications

Viral and bacterial pathogens and their host immune systems are in a constant arms race to exploit mechanisms that could lead to a fitness advantage. This work further implicates a relatively overlooked property of antibodies, the spatial reach of their binding arms, as an important aspect influencing antibody binding and host-pathogen interactions.
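As a rough illustration of the kind of model described above, the sketch below simulates a continuous-time Markov chain for a single antigen pair using the Gillespie algorithm. It is not the authors' implementation: the state names and the numerical rate values are placeholders, whereas in the paper the elementary rates are fitted to experimental data and the interconversion rate depends on the antigen separation distance x.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative rates in arbitrary units (assumed values, not fitted ones)
k1, k_1 = 1.0, 0.5   # solution binding / unbinding
k2, k_2 = 2.0, 0.1   # monovalent <-> bivalent interconversion

# Transitions out of each state of a single antigen pair
RATES = {
    "empty":      [("monovalent", k1)],
    "monovalent": [("empty", k_1), ("bivalent", k2)],
    "bivalent":   [("monovalent", k_2)],
}

def gillespie(t_max, state="empty"):
    """Sample one continuous-time Markov chain trajectory."""
    t, trajectory = 0.0, [(0.0, state)]
    while t < t_max:
        targets, rates = zip(*RATES[state])
        total = sum(rates)
        t += rng.exponential(1.0 / total)                        # waiting time
        state = rng.choice(targets, p=np.array(rates) / total)   # next state
        trajectory.append((t, state))
    return trajectory

print(gillespie(10.0)[:5])
```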
The possibility of programmed antibody migration through geometric tuning of the energy landscape hints at a complex picture of pathogenic surface molecules, in which antigen patterns may have evolved in order to manipulate the spatial distribution of bound antibodies. This could be an important consideration in immunobiology, such as in the design of vaccines or antibody therapies meant to elicit a specific immune response, although further work is needed to demonstrate this in a biological setting. The next steps will be to validate the predictions made by these stochastic models in natural biological systems and to incorporate additional structural aspects of antibodies (for example, the dependence of binding dynamics on antigen angular orientation) in order to improve the model's realism. Further work should aim to understand what natural constraints have evolved on both the pathogen and immune sides to influence this phenomenon. Finally, we hope that this information can be used to design better vaccines by informing the choice of antigen density and patterning geometry so as to promote a more specific immune response.

"Hoffecker et al. provide a simple but powerful modeling approach to study antibody binding and movement on defined antigen patterns. The importance of specific antibody binding to repeating patterns of defined spacing has become a hot topic and is certainly relevant not only to basic research, but also clinical biomedical applications, especially with the view of designing new antibody-based therapeutics and vaccine design." Amelie Heuer-Jungemann, Max Planck Institute of Biochemistry, Germany

Behind the paper

A provocative experimental work by Preiner et al. [3] inspired us to think about antibodies from a new perspective: as walkers rather than simply binders. In their study, high-speed atomic force microscopy imaging indicated that antibodies exhibit bipedal walking on patterned substrates. We found this prospect fascinating and, as we had access to precise measurements and models of specific antibody-antigen interactions from our earlier work [2], were well positioned, having constructed our model pipeline, to explore this phenomenon. This in silico exploration likely would not have happened if not for the extended period of work-from-home caused by the COVID-19 pandemic. Forced away from the bench, we were able to make major advancements in the model that enabled the exploratory investigation into antibody migration. So it seems that, as with our findings, constraints can sometimes set things in motion. I.T.H. & B.H.

From the editor

"This work stood out to me as the impact of antigen spacing and geometry on antibody movement, modeled as a discrete Markov process, indicates that antigen organization could be under selective pressure during host-pathogen co-evolution. This molecular programmability has extensive applications for designing new vaccines, which is currently a huge challenge." Ananya Rastogi, Associate Editor, Nature Computational Science

Fig. 1 | Modeling antibody binding on antigen patterns. a, Models are based on experimental data measuring the multivalent binding of antibodies to antigens (red spheres) immobilized in precise locations on DNA nanostructures (blue cylinders). b, Model of antibody binding reduced to binding/unbinding and bivalent interconversion. c, An extension of the model to more complex pattern geometries by connecting states with elementary transitions.
d, Visualization of the modulation of antigen separation distance to influence antibody binding strength. k_1 and k_−1 are the on- and off-binding rates, respectively; k_2 and k_−2 are the monovalent-to-bivalent and bivalent-to-monovalent interconversion rates, respectively; x is the separation distance between adjacent antigens. © 2022, Hoffecker, I. T. et al., CC BY 4.0.
Baxter posets

We define a family of combinatorial objects, which we call Baxter posets. We prove that Baxter posets are counted by the Baxter numbers by showing that they are the adjacency posets of diagonal rectangulations. Given a diagonal rectangulation, we describe the cover relations in the associated Baxter poset. Given a Baxter poset, we describe a method for obtaining the associated Baxter permutation and the associated twisted Baxter permutation.

Introduction

The Baxter numbers
$$B(n) = \binom{n+1}{1}^{-1}\binom{n+1}{2}^{-1}\sum_{k=1}^{n}\binom{n+1}{k-1}\binom{n+1}{k}\binom{n+1}{k+1}$$
count Baxter permutations [5], twisted Baxter permutations [10], certain triples of non-intersecting lattice paths [6], noncrossing arc diagrams consisting of only left and right arcs [13], certain Young tableaux [7], twin binary trees [7], diagonal rectangulations [1,8,10], and other families of combinatorial objects. In this paper, we define Baxter posets and prove that they are also counted by the Baxter numbers. (The author was partially supported by NSF grant DMS-1500949.)

Baxter posets are closely related to Catalan combinatorics. Specifically, Baxter posets (and the closely related diagonal rectangulations) can be realized through "twin" Catalan objects. Additionally, the relationship between Baxter posets and diagonal rectangulations is analogous to the relationship between two Catalan objects, specifically sub-binary trees and triangulations of convex polygons. As a prelude to our discussion of Baxter posets, we describe a few Catalan objects and bijections between them.

Let S_n denote the set of permutations of [n] = {1, . . . , n}. We say that σ = σ_1 · · · σ_n ∈ S_n avoids the pattern 2-31 if there does not exist i < j such that σ_{j+1} < σ_i < σ_j. The Catalan number $C(n) = \frac{1}{n+1}\binom{2n}{n}$ counts the elements of S_n that avoid the pattern 2-31. The map τ_b, described below and illustrated in Figure 1, assigns a triangulation of a convex (n + 2)-gon to each element of S_n, and restricts to a bijection between permutations that avoid 2-31 and triangulations of polygons.

Let σ = σ_1 · · · σ_n ∈ S_n and let P be a convex (n + 2)-gon. For convenience, deform P so that P is inscribed in the upper half of a circle, and label each vertex of P, in numerical order from left to right, with an element of the sequence 0, 1, . . . , n + 1. For each i ∈ {0, . . . , n}, construct a path P_i from the vertex labeled 0 to the vertex labeled n + 1 that visits the vertices labeled by elements of {σ_1, . . . , σ_i} in numerical order. The union of these paths defines τ_b(σ), a triangulation of P.

Given a triangulation ∆ of a convex (n + 2)-gon P, deform P (and ∆) as above. Construct a graph with an edge crossing each edge of ∆ except the horizontal diameter, as shown in red in the left diagram of Figure 1. (This is essentially the dual graph of ∆.) In what follows, we will call this the dual graph construction. Terminology for the resulting family of trees is mixed in the literature, with adjectives such as complete, planar, rooted, and binary appearing inconsistently. We will call the resulting tree a binary tree and provide a careful definition. For us, a binary tree is a rooted tree such that every non-leaf has exactly two children, with one child identified as the left child and the other as the right child. The dual graph construction gives a bijection between triangulations of a convex (n + 2)-gon and binary trees with 2n + 1 vertices.
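As a quick sanity check on the two formulas above, the following sketch evaluates B(n) and C(n) directly; the function names are ours, and the divisions are exact for these formulas.

```python
from math import comb

def baxter(n):
    """Baxter number B(n) from the summation formula above."""
    return sum(comb(n + 1, k - 1) * comb(n + 1, k) * comb(n + 1, k + 1)
               for k in range(1, n + 1)) // (comb(n + 1, 1) * comb(n + 1, 2))

def catalan(n):
    return comb(2 * n, n) // (n + 1)

print([baxter(n) for n in range(1, 7)])   # [1, 2, 6, 22, 92, 422]
print([catalan(n) for n in range(1, 7)])  # [1, 2, 5, 14, 42, 132]
```

Note that B(4) = 22 matches the count of Baxter posets on [4] mentioned in Remark 1.3 below.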
The root of the binary tree corresponds to the bottom triangle of ∆, and children are identified as left or right according to the embedding of ∆ in the plane. For a reason that will become apparent later, we deform each binary tree resulting from this bijection, as shown in the right diagram of Figure 1, so that the root is the lowermost vertex. Removing the leaves of a binary tree and retaining the left-right labeling of each child, we obtain a sub-binary tree: a rooted tree in which every vertex has 0, 1, or 2 children, and each child is labeled left or right, with at most one child of each vertex receiving each label. The leaf-removal map is a bijection between binary trees with 2n + 1 vertices and sub-binary trees with n vertices. In the example shown in Figure 1, the edges removed by this map are shown as dashed segments.

We will make use of a second similar map from permutations to triangulations. We say that a permutation avoids the pattern 31-2 if there does not exist i < i + 1 < j such that σ_{i+1} < σ_j < σ_i. The map τ_t described below restricts to a bijection between elements of S_n that avoid 31-2 and triangulations of a convex (n + 2)-gon. Let σ ∈ S_n and P a convex (n + 2)-gon. Deform P and label its vertices as shown in the example in Figure 2. For each i ∈ {0, 1, . . . , n}, construct the path P_i that begins at the vertex labeled 0 and visits, in numerical order, each vertex labeled by an element of {σ_1, . . . , σ_i}. The union of these paths is τ_t(σ). Performing the dual graph construction and then the leaf-removal map, we obtain corresponding binary and sub-binary trees. This time, we choose to deform the binary and sub-binary trees so that the root is the uppermost vertex, as illustrated in the right diagram of Figure 2.

Although a sub-binary tree is an unlabeled graph, for each sub-binary tree with n vertices there exists a unique labeling of its vertices by the elements of [n] such that every parent vertex has a label numerically larger than the labels of its left descendants and numerically smaller than those of its right descendants. An example of a sub-binary tree with such a labeling is shown in Figure 3. Let T be a labeled sub-binary tree embedded in the plane as shown in Figure 3 and ∆_T the associated triangulation. View T as the Hasse diagram of a poset. We say that a total order L of the elements of T is a linear extension of T if x <_T y implies that x <_L y. The linear extensions of T, viewed as permutations in one-line notation, are exactly the permutations that map to ∆_T under τ_b. To see why, label each triangle of ∆_T according to the label of its middle (from left to right) vertex, as illustrated in Figure 3. The linear extensions of T are exactly the permutations that map to ∆_T because x <_T y if and only if the triangle labeled y is "above" the triangle labeled x. Similarly, given a sub-binary tree T′, embedded in the plane as illustrated in Figure 2, and associated triangulation ∆_{T′}, we obtain a labeling of T′ such that the linear extensions of T′ are exactly the permutations that map to ∆_{T′} under τ_t.

We now relate the Catalan objects described above to Baxter objects. Specifically, we will see that diagonal rectangulations are made by gluing together binary trees, and we will construct Baxter posets so that they play the same role for diagonal rectangulations that sub-binary trees play for triangulations. Twisted Baxter permutations are related to certain decompositions of a square into rectangles.
Given σ ∈ S_n, glue the binary trees corresponding to τ_b(σ) and τ_t(σ), called twin binary trees, along their leaves to obtain a decomposition of a square into n rectangles, and then rotate the resulting figure π/4 radians clockwise. The result of applying this binary tree gluing map to the permutation 52147863 is shown in the left diagram of Figure 4. The binary trees which are glued together in this example are shown in Figures 1 and 2. We call each decomposition resulting from this binary tree gluing map a diagonal rectangulation (defined precisely in Section 2) because the top-left to bottom-right diagonal of the square contains an interior point of each rectangle of the decomposition. The map restricts to a bijection between twisted Baxter permutations and diagonal rectangulations.

Given a diagonal rectangulation, label the rectangles of the decomposition according to the order in which they appear along the diagonal, labeling the upper-leftmost rectangle with 1 and the lower-rightmost rectangle with n. We refer to the rectangle with label i as "rectangle i." Construct a poset P on [n] by declaring x <_P y if the interior of the bottom or left side of rectangle y intersects the interior of the top or right side of rectangle x, and then taking the reflexive and transitive closure of these relations. Remark 6.7 in [10] explains that, before taking the reflexive and transitive closure, these relations are acyclic. Thus P is a partial order on [n]. This poset, which we call the adjacency poset of the diagonal rectangulation, is defined in [8,10]. (A more general set of posets, corresponding to elements of the Baxter monoid, is defined in [9].) Each adjacency poset captures the "right of" and "above" relations of the diagonal rectangulation just as each sub-binary tree captures the "above" relations of the corresponding triangulation. Additionally, given an adjacency poset P and the corresponding diagonal rectangulation D, the set of linear extensions of P is the set of permutations that map to D under the binary tree gluing map [10, Remark 6.7]. We note that two permutations σ and ψ map to the same diagonal rectangulation if and only if τ_b(σ) = τ_b(ψ) and τ_t(σ) = τ_t(ψ). Thus, the set of linear extensions of the adjacency poset of a diagonal rectangulation is the intersection of the sets of linear extensions of the labeled sub-binary trees obtained from τ_b and τ_t.

As a diagonal rectangulation can be constructed from twin binary trees, the adjacency poset of a diagonal rectangulation can be constructed using the corresponding labeled sub-binary trees. Let D be a diagonal rectangulation, P the associated adjacency poset, and T_b and T_t respectively the corresponding labeled sub-binary trees obtained from τ_b and τ_t. By declaring x <_P y if x <_{T_b} y or x <_{T_t} y and then taking the transitive closure, we obtain all of the relations of P. Although it is simple to use the relations of T_b and T_t to list the relations of P, it is not so straightforward to obtain a description of the Hasse diagram of P or to characterize the set of adjacency posets of diagonal rectangulations.

In any poset P, we say that y covers x, denoted x ⋖_P y, if x <_P y and there exists no z such that x <_P z <_P y. In Theorem 3.2, the first main result of this paper, we show that x ⋖_P y in the adjacency poset P if and only if, in the associated diagonal rectangulation, rectangles x and y form one of the configurations shown in Figure 7.
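The defining relations of the adjacency poset translate directly into code. The sketch below assumes a rectangulation is stored as coordinate tuples and returns the generating relations before taking the closure; the representation and the helper names are our own illustrative choices, not constructions from the paper.

```python
from itertools import combinations

def adjacency_relations(rects):
    """Generating relations of the adjacency poset defined above.

    rects maps each rectangle label to (left, bottom, right, top).
    We record u < v when the interior of the top or right side of
    rectangle u intersects the interior of the bottom or left side of
    rectangle v; the adjacency poset is the reflexive and transitive
    closure of these relations.
    """
    def overlap(a1, a2, b1, b2):            # open intervals (a1,a2), (b1,b2)
        return min(a2, b2) > max(a1, b1)

    rel = set()
    for x, y in combinations(rects, 2):
        for u, v in ((x, y), (y, x)):
            ul, ub, ur, ut = rects[u]
            vl, vb, vr, vt = rects[v]
            if (ut == vb and overlap(ul, ur, vl, vr)) or \
               (ur == vl and overlap(ub, ut, vb, vt)):
                rel.add((u, v))
    return rel

# Example: the vertical split of a 2x2 square into two rectangles,
# labeled along the diagonal; yields {(1, 2)}
print(adjacency_relations({1: (0, 0, 1, 2), 2: (1, 0, 2, 2)}))
```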
This theorem allows us to obtain a Hasse diagram for the adjacency poset from a diagonal rectangulation just as we easily obtain a sub-binary tree from a triangulation. For our second result, which characterizes adjacency posets, we require the following definitions. A poset P is bounded if it has an element that is greater than all other elements and an element that is less than all other elements. Given a poset P on [n], a 2-14-3 chain is a chain b <_P a ⋖_P d <_P c of P such that a < b < c < d in numerical order. We similarly define a 3-14-2 chain, a 2-41-3 chain, and a 3-41-2 chain. Given a partially ordered set P, construct a graph G such that the vertices of G are labeled by the elements of P and there is an edge joining vertex x to vertex y if and only if x ⋖_P y or y ⋖_P x. An embedding of G in R^2 is a Hasse diagram for P if and only if, for all x ⋖_P y, vertex y is above vertex x in the plane and each edge of the embedding is a line segment. A planar embedding of a poset P is a Hasse diagram for P in which no two edges intersect.

Definition 1.1. A Baxter poset is a poset P on [n] satisfying the following conditions:
(1) P is bounded.
(2) If x ∈ P, then x is covered by at most two elements and covers at most two elements.
(3) P contains no 2-14-3, no 3-14-2, no 2-41-3, and no 3-41-2 chains.
(4) If [x, y] is an interval of P such that the open interval (x, y) is disconnected, then |x − y| = 1.
(5) There exists a planar embedding of P such that for every interval [x, y] of P with (x, y) disconnected, if w, z ∈ (x, y) and w is left of z, then w < x < z in numerical order.
We call a planar embedding as in Condition 5 a natural embedding of P.

We can now state our main result.

Theorem 1.2. A poset is a Baxter poset if and only if it is the adjacency poset of a diagonal rectangulation.

Remark 1.3. One might hope for an unlabeled version of the Baxter poset from which the labeled poset can be obtained, just as sub-binary trees have a canonical labeling. However, without "decorating" the poset with additional combinatorial information, this is not possible. This is quickly apparent since, when n = 4, of the 22 Baxter posets, 20 are chains. Decorating each poset to indicate the numerical order of each pair x <_P y with (x, y) disconnected is insufficient. Additionally, decorating every edge of the Hasse diagram to indicate the numerical order of the elements of the cover relation does not allow us to determine a unique Baxter poset.

The original Baxter objects, Baxter permutations, have a pattern avoidance definition similar to that of the twisted Baxter permutations. A Baxter permutation σ = σ_1 · · · σ_n is a permutation that avoids the patterns 2-41-3 and 3-14-2, i.e., there does not exist i < j < j + 1 < k such that σ_{j+1} < σ_i < σ_k < σ_j or σ_j < σ_k < σ_i < σ_{j+1}. Given a diagonal rectangulation D, the set of permutations that map to D under the binary tree gluing map contains a unique twisted Baxter permutation and a unique Baxter permutation (see Theorem 2.1). Other authors (see [10, Proof of Lemma 8.4], [8, Proof of Lemma 6.6]) have described algorithms for obtaining these permutations from a diagonal rectangulation. Our final results describe how to obtain these pattern-avoiding permutations directly from a Baxter poset.

Here, we describe a method for obtaining the Baxter permutation. Let P be the natural embedding of a Baxter poset. The edges of the embedding separate the plane into maximal connected components. We call the closure of a bounded connected component a region of the embedding.
Assign an arrow to each region of the embedding as follows: if the maximal element of a region is greater (in numerical order) than the minimal element of that region, then that region is assigned a right-pointing arrow; otherwise the region is assigned a left-pointing arrow. An example is shown in Figure 5. If a region R_i contains a right-pointing arrow and σ is a linear extension of P in which all labels of elements contained in the left side of R_i precede all labels of elements contained in the right side of R_i, then we say that σ respects the arrow of R_i. Similarly, we say that σ respects the arrow of a region R_i containing a left-pointing arrow if all labels of elements contained in the right side of R_i precede all labels of elements contained in the left side of R_i. If σ respects the arrows of every region of P, then we say that σ respects the arrows of P. The existence of a linear extension of P that respects the arrows of P should not be immediately obvious to the reader.

Theorem 1.4. Given a Baxter poset P with its natural embedding, the unique Baxter permutation that is a linear extension of P is the unique linear extension that respects the arrows of the embedding.

By adding a single relation for each region of the natural embedding of P, we obtain an alternate description of the map from an adjacency poset to its Baxter permutation. Specifically, for each region R with minimal element x and maximal element x + 1, we declare that the maximal element (with respect to the partial order P) of the left component of (x, x + 1) is less than the minimal element of the right component. Similarly, for each region R with maximal element x and minimal element x + 1, we declare that the maximal element of the right component of (x + 1, x) is less than the minimal element of the left component.

In Section 2, we describe the map ρ from permutations to diagonal rectangulations that coincides with the binary tree gluing map already described, and we provide some background related to diagonal rectangulations. We prove Theorem 3.2 (the characterization of the cover relations of the adjacency poset) in Section 3. Our main result, Theorem 1.2, is proved in Section 4. Finally, in Section 5, we describe how to obtain a twisted Baxter permutation from a Baxter poset and then prove Theorem 1.4.

Diagonal Rectangulations

A rectangulation of size n is an equivalence class of decompositions of a square S into n rectangles. Two decompositions R_1 and R_2 are members of the same equivalence class if and only if there exists a homeomorphism of the square, fixing its vertices, that takes R_1 to R_2. We say that a rectangulation is a diagonal rectangulation if, for some representative of the equivalence class, the top-left to bottom-right diagonal of S contains an interior point of each rectangle of the decomposition. In our discussion of diagonal rectangulations, we often blur the distinction between an equivalence class and a representative of the equivalence class. We most often refer to a diagonal rectangulation using the distinguished representative with edges intersecting the diagonal in equally spaced points.

We now define a map ρ from S_n to the set of diagonal rectangulations of size n. Figure 6 shows the construction of ρ(23154). The map ρ agrees with the map (described in Section 1) in which a diagonal rectangulation is constructed from a permutation by gluing together twin binary trees and then rotating the result.
Our description of ρ matches the description in [10, Section 6] and is essentially equivalent to maps described in [1, Section 3], [2, Section 4], and [8, Section 5]. Let σ = σ_1 · · · σ_n ∈ S_n and S a square in R^2 with bottom-left vertex at (0, 0) and top-right vertex at (n, n). Place n + 1 points at (i, n − i) for i ∈ {0, . . . , n}. Label each of the n spaces between these points in order with an element of [n], starting with 1 in the upper-leftmost space and finishing with n in the lower-rightmost space. We construct ρ(σ) by considering the entries of σ sequentially from left to right. Let T_{i−1} denote the union of the left and lower boundaries of S and the rectangles of ρ(σ) constructed using the first i − 1 entries of σ. In step i of the construction, we form a new rectangle that contains the diagonal label σ_i. We refer to this rectangle as rectangle σ_i. We construct rectangle σ_i as follows: let l be the point at the lower-right end of the diagonal space labeled σ_i. If the point l is contained in T_{i−1}, then l is the lower-right corner of rectangle σ_i. Otherwise, the lower-right corner of rectangle σ_i is the first point of T_{i−1} hit by the downward-pointing vertical ray with base point at l. In the arguments that follow, we will use the observation that, by construction, the left side and bottom of rectangle σ_i are contained in T_{i−1} for all i ∈ [n]. We will also use the observation that, since the interior of each rectangle of a diagonal rectangulation D intersects the upper-left to bottom-right diagonal of S, no set of four rectangles of D shares a vertex.

Theorem 2.1 ([10, Theorem 6.1, Corollary 8.7]). The map ρ restricts to a bijection between twisted Baxter permutations and diagonal rectangulations. The map ρ also restricts to a bijection between Baxter permutations and diagonal rectangulations.

Given a rectangulation R, a line segment that is not contained in the boundary of S and is a maximal (with respect to inclusion) union of edges of rectangles is called a wall of R. Recall that a permutation σ is a twisted Baxter permutation if and only if it avoids the patterns 2-41-3 and 3-41-2. This pattern avoidance is equivalent to the requirement that if σ_i > σ_{i+1}, then either all values numerically between σ_{i+1} and σ_i are left of σ_i in σ, or all of these values are right of σ_{i+1} in σ. We say that two permutations σ and ψ are related by a (3-14-2 ↔ 3-41-2) move if σ contains a subsequence σ_{i_1} σ_{i_2} σ_{i_3} σ_{i_4} that is an occurrence of one of these patterns and switching the positions of the adjacent entries σ_{i_2} and σ_{i_3} in σ results in the permutation ψ. We say that σ and ψ are related by a (2-14-3 ↔ 2-41-3) move if σ and ψ satisfy the same conditions with these patterns.

Given ψ ∈ S_n, define inv(ψ) = {(ψ_i, ψ_j) | i < j and ψ_i > ψ_j}. If σ, ψ ∈ S_n, then we say that σ ≤ ψ in the right weak order if and only if inv(σ) ⊆ inv(ψ). This definition implies that σ ⋖ ψ in the right weak order if and only if ψ can be obtained from σ by transposing adjacent entries σ_i and σ_{i+1} of σ which satisfy σ_i < σ_{i+1} in numerical order.

Figure 6. The map ρ is applied to the permutation 23154.

The adjacency poset of a diagonal rectangulation

In Section 1, we provided a definition of the adjacency poset of a diagonal rectangulation D. At times, we will make use of an equivalent definition.
Given a diagonal rectangulation D of size n in R^2, with bottom-left corner at (0, 0) and top-right corner at (n, n), define the partial order Q on [n] as follows: if there exist a point p in the interior of rectangle x and a point q in the interior of rectangle y such that q − p has positive coordinates, declare x ≤_Q y, and then take the transitive closure of these relations.

Proposition 3.1. Given a diagonal rectangulation D of size n, the adjacency poset P is the poset Q defined above.

Proof. If x ⋖_P y then, by the definition of the adjacency poset, the interior of the bottom (or left side) of rectangle y intersects the interior of the top (or right side) of rectangle x. Thus there exist points p ∈ int(rectangle x) and q ∈ int(rectangle y) such that q − p has positive coordinates. Therefore, by the definition of Q, we have that x ≤_Q y.

If x ⋖_Q y, then there exist points p ∈ int(rectangle x) and q ∈ int(rectangle y) such that q − p has positive coordinates. Consider the line segment joining p to q. If this segment passes through the vertex of some rectangle, then, since D contains only finitely many vertices, we may perturb p or q, obtaining points p′ and q′, so that p′ and q′ are respectively in the interiors of rectangles x and y, the segment joining p′ and q′ contains no vertices of D, and q′ − p′ has positive coordinates. Thus, we may assume that the segment joining p and q contains no vertices of D. The segment passes through the interiors of the sequence of rectangles x = z_0, z_1, . . . , z_{m−1}, z_m = y. For each i ∈ [m], the segment exits rectangle z_{i−1} and enters rectangle z_i at a point in the interior of a side of both rectangles, so z_{i−1} <_P z_i. Therefore x <_P y.

We note that the transitive closure in the definition of Q is required (since we have chosen to refer to each diagonal rectangulation using the representative with edges intersecting the diagonal in equally spaced points). Consider the rectangulation ρ(312465) shown in Figure 10. Since the interior of the right side of rectangle 2 intersects the interior of the left side of rectangle 4, we have that 2 <_P 4. Similarly, 4 <_P 6, so by transitivity 2 <_P 6. However, there do not exist p ∈ int(rectangle 2) and q ∈ int(rectangle 6) such that q − p has positive coordinates.

We give a description of the Hasse diagram of the adjacency poset of a diagonal rectangulation by describing its cover relations.

Figure 7. Configurations in a diagonal rectangulation that correspond to cover relations in the adjacency poset.

Theorem 3.2. Let D be a diagonal rectangulation with adjacency poset P. Then x ⋖_P y if and only if rectangles x and y form one of the configurations shown in Figure 7.

Proof. Let D be a diagonal rectangulation and P the adjacency poset of D. Assume that in D, rectangles x and y form one of the configurations shown in Figure 7. In each configuration, by definition, x <_P y. Assume that rectangles x and y form configuration (i) and that there exists some z ∈ [n] such that x <_P z <_P y. Since z <_P y and P is acyclic, y ≮_P z. Thus rectangle z contains no interior points in the lined region of Figure 8. Similarly, since z ≮_P x, rectangle z contains no interior points in the dotted region of Figure 8. Therefore, any rectangle z such that x <_P z <_P y is completely contained in an unshaded region of Figure 8. However, by the definition of P, no label of a rectangle contained in the lower-right unshaded region of Figure 8 is covered by y. Similarly, in P, no label of a rectangle contained in the upper-left unshaded region of Figure 8 covers x.
Additionally, no label of a rectangle contained in the lower-right unshaded region is covered by the label of a rectangle contained in the upper-left unshaded region. Thus there exists no z such that x <_P z <_P y. Hence x ⋖_P y. For the remaining configurations of Figure 7, similar considerations demonstrate that x ⋖_P y.

To prove the other direction of the theorem, assume that x ⋖_P y. Since the set of linear extensions of P is the fiber ρ^{−1}(D) and x ⋖_P y, there exists a linear extension σ = σ_1 · · · σ_n of P such that x = σ_i and y = σ_{i+1}. Let T_{j−1} denote the union of the left and bottom boundaries of the square S and the partial diagonal rectangulation formed in the construction of ρ(σ) after considering the first j − 1 entries of σ. The bottom and left edge of rectangle σ_j are contained in T_{j−1} for all j ∈ [n]. Using the definition of the adjacency poset from Section 1, since x ⋖_P y, we have that rectangles x and y are adjacent, with rectangle x left of or below rectangle y. Thus, combining these requirements, rectangles x and y form one of the configurations shown in Figure 9. To complete the proof of the theorem, we observe that configurations (a) and (c) of Figure 9 cannot occur in any diagonal rectangulation. In a diagonal rectangulation, the upper-left to bottom-right diagonal of S passes through every rectangle of the rectangulation, but this is impossible in a rectangulation containing either of these configurations.

Characterization of Adjacency Posets

To prove Theorem 1.2, we require the following definitions and results. Given a planar embedding of a poset P, the embedding separates the plane into maximal connected components. Recall that we call the closure of each bounded connected component a region of the embedding.

Figure 11. The shaded region shows S(x). Since y is not contained in S(x) and the left-pointing horizontal ray with base point at y intersects S(x), we say that x is left of y.

Given a planar embedding of a lattice P, for each x ∈ P, define S(x) to be the union of the chains of P containing x and the horizontal line segments whose endpoints are contained in these chains. In Figure 11, the gray region is S(x). We say that x is left of y in the embedding if y is not contained in S(x) and a left-pointing horizontal ray with vertex at y passes through S(x). We similarly define right of, and note that since P is a lattice, x is left of y if and only if y is right of x. Furthermore, if x and y are incomparable in P, then either x is left of y or x is right of y.

Let L = {L_1, . . . , L_l} denote a collection of linear extensions of a poset P. We say that L is a realizer of P if the intersection of these total orders is P. The dimension of a poset P is the size of the smallest realizer. The following is a well-known result, which we will use to find a realizer of an adjacency poset. In the proposition and its proof, given σ = σ_1 · · · σ_n ∈ S_n, we declare σ_i <_σ σ_j if and only if i < j. We will routinely pass between a permutation and its associated total order.

Proposition 4.1. If σ ≤ ψ in the right weak order, then every permutation in the interval [σ, ψ] of the right weak order is a linear extension of the intersection of the total orders σ and ψ.

Proof. Let σ = σ_1 · · · σ_n and ψ = ψ_1 · · · ψ_n. Denote the intersection of the total orders σ and ψ by σ ∩ ψ. Let u = u_1 · · · u_n ∈ [σ, ψ] and assume that u is not a linear extension of σ ∩ ψ. Thus there exist i, j ∈ [n] with i < j such that u_j <_σ u_i and u_j <_ψ u_i. If u_j > u_i in numerical order, then (u_j, u_i) ∈ inv(σ) and (u_j, u_i) ∉ inv(u), contradicting the assumption that σ ≤ u in the right weak order.
If u_j < u_i in numerical order, then (u_i, u_j) ∈ inv(u) and (u_i, u_j) ∉ inv(ψ), contradicting the assumption that u ≤ ψ in the right weak order. Therefore, if u ∈ [σ, ψ], then u is a linear extension of σ ∩ ψ.

Since each congruence class of a lattice congruence on the right weak order is an interval [12, Section 2], and since each fiber of ρ is such a congruence class [10, Proposition 6.3], each fiber of ρ is an interval of the right weak order. Let D be a diagonal rectangulation and let L_1 and L_2 be respectively the minimum and maximum elements in the right weak order on S_n such that ρ(L_1) = ρ(L_2) = D. By Proposition 4.1, and since any poset is determined by its set of linear extensions, L = {L_1, L_2} is a realizer of the adjacency poset of D.

Given a linear extension L = σ_1 · · · σ_n of a poset P on [n], let π_L : [n] → [n] be defined by π_L(x) = i if and only if x = σ_i. The inverse of the permutation σ_1 · · · σ_n is π_L(1) · · · π_L(n). If P has realizer L = {L_1, L_2}, then the projection of L, denoted by π_L(P), is the map from [n] to R^2 given by π_L(x) = (π_{L_1}(x), π_{L_2}(x)). This is an embedding of P into the componentwise order on R^2. To view this embedding of P as a Hasse diagram for P, we take "up" to be the direction of the vector ⟨1, 1⟩.

Theorem 4.2. If P is a lattice with realizer L = {L_1, L_2}, then the embedding of P into the componentwise order on R^2 given by π_L(P) is a planar embedding of P.

The following proposition is [3, p. 32, Exercise 7(a)]. Since every Baxter poset is finite, bounded, and has a planar embedding, this proposition implies that every Baxter poset is a lattice.

Proposition 4.3. A finite planar poset P is a lattice if and only if P is bounded.

The following lemma is [4, Lemma 2.1]:

Lemma 4.4. Let P be a bounded poset such that every chain of P is of finite length. If, for any x and y in P such that x and y both cover some element z, the join x ∨ y exists, then P is a lattice.

We now have the necessary tools to prove our main result.

Proof of Theorem 1.2. Let D be a diagonal rectangulation of size n and P the associated adjacency poset. We first demonstrate that P satisfies the five conditions of Definition 1.1. The rectangle x of D whose lower-left corner coincides with the lower-left corner of S contains interior points below and left of interior points of all other rectangles of D. Thus for every y ∈ [n] − {x}, we have that x <_P y. Similarly, the label of the rectangle of D whose upper-right corner coincides with the upper-right corner of S is greater, in P, than every other element of P. Therefore, P is a bounded poset.

Observe that any rectangle x of D is the left rectangle of at most one of the configurations shown in Figure 7 and the bottom rectangle of at most one of the configurations shown in Figure 7. Thus, x is covered by at most two elements of P. Similarly, x covers at most two elements of P.

To show that P meets Condition 3 of Definition 1.1, assume for a contradiction that P contains a 2-14-3, a 3-14-2, a 2-41-3, or a 3-41-2 chain. This implies that some linear extension σ of P contains this pattern with the "4" and "1" adjacent. By Proposition 2.2, transposing the "4" and "1" in this linear extension results in a permutation σ′ such that ρ(σ) = ρ(σ′). Since the fiber ρ^{−1}(D) is the set of linear extensions of P, the permutation σ′ is also a linear extension of P. However, this contradicts the assumption that the "4" and the "1" are related in P.
Since the labeling of the rectangles of D comes from the map ρ from permutations to diagonal rectangulations, to demonstrate that P meets Condition 4 of Definition 1.1, we rely on observations about this map.

Figure 12. Given that x ⋖_P x_a and x ⋖_P x_r with x_a ≠ x_r, in a diagonal rectangulation D, rectangles x, x_a, and x_r form one of the three configurations shown.

Consider an interval [x, y] of P such that (x, y) is disconnected. There exist x_r ≠ x_a such that x ⋖_P x_r and x ⋖_P x_a. By Theorem 3.2, since no four rectangles of a diagonal rectangulation share a vertex, rectangles x, x_a, and x_r form one of the configurations shown in Figure 12. In Diagram (i), the left side of rectangle x_a is missing to indicate that the lower-left vertex of rectangle x_a coincides with or is left of the upper-left vertex of rectangle x. The bottom of rectangle x_r is missing in Diagram (ii) to similarly indicate that the lower-left vertex of rectangle x_r coincides with or is below the lower-right vertex of rectangle x.

First assume that rectangles x, x_a, and x_r are in the configuration shown in Diagram (i) of Figure 12, and let W be the vertical wall on the right side of rectangle x. The lower-right vertex of rectangle x and the lower-left vertex of rectangle x_r coincide, so rectangle x is the lowermost rectangle on the left side of W. By the definition of ρ, rectangle x + 1 is the uppermost rectangle adjacent to the right side of W, and the lower-left corner of rectangle x + 1 is below the upper-right corner of rectangle x. Since the interiors of the right edge of rectangle x_a and the left edge of rectangle x + 1 intersect, we have that x_a <_P x + 1. Since the upper-right corner of rectangle x + 1 is strictly right of W and above rectangle x_r, we have that x_r <_P x + 1.

We wish to show that x + 1 = y, i.e., that there does not exist z <_P x + 1 such that x_a <_P z and x_r <_P z. We will prove a stronger statement: x_a ∨ x_r exists and x_a ∨ x_r = x + 1. Since x + 1 is an upper bound for x_a and x_r, it suffices to demonstrate that any other upper bound z satisfies x + 1 ≤_P z. To obtain a contradiction, assume that x + 1 ≰_P z for some upper bound z. We use an argument similar to the argument used in the proof of Theorem 3.2. Since x <_P z, we have that z ≮_P x. Thus rectangle z contains no interior points that are both left of the vertical line containing W and below the horizontal line containing the top of rectangle x. Since x + 1 ≰_P z, rectangle z contains no interior points that are both right of the vertical line containing W and above the horizontal line containing the bottom of rectangle x + 1. Thus z is contained in either the region left of the vertical line containing W and above the horizontal line containing the top of rectangle x, or the region right of the vertical line containing W and below the horizontal line containing the bottom of rectangle x + 1. Note that these regions are disjoint, that rectangle x_a is contained in the first region, and that rectangle x_r is contained in the second region. In P, the label of a rectangle contained in the first region cannot cover the label of a rectangle contained in the second region, and vice versa. Thus x_a ≮_P z or x_r ≮_P z, a contradiction. Therefore x_a ∨ x_r = x + 1.

When rectangles x, x_a, and x_r form the configuration shown in Diagram (ii) of Figure 12, by considering the horizontal wall W above rectangle x and the rightmost rectangle above W, rectangle x − 1, we similarly show that y = x − 1 and that x_a ∨ x_r = x − 1.
In the case illustrated in Diagram (iii) of Figure 12, we first observe that, since D is a diagonal rectangulation, the wall above or on the right side of rectangle x extends beyond the upper-right corner of rectangle x. In either case, using the previous arguments, we show that y = x + 1 or y = x − 1 and y = x_a ∨ x_r.

To demonstrate that P meets Condition 5 of Definition 1.1, note that by Condition 1 of the definition, and since we verified that y = x_a ∨ x_r in each case of the proof of Condition 4, Lemma 4.4 implies that P is a lattice. Let L_1 and L_2 be respectively the minimum and maximum elements in the right weak order on S_n such that ρ(L_1) = ρ(L_2) = D. By Proposition 4.1, L = {L_1, L_2} is a realizer of P. By Theorem 4.2, the Hasse diagram obtained from π_L(P) is a planar embedding of P. Let [x, y] be an interval of P such that (x, y) is disconnected. Let x ⋖_P x_l and x ⋖_P x_r, where x_l is left of x_r in the planar Hasse diagram obtained from π_L(P). Let π_L(x_l) = (a, b) and π_L(x_r) = (c, d). Since x_l and x_r are incomparable, with x_l left of x_r in the planar Hasse diagram, we have that a < c and b > d in numerical order. This implies that x_l precedes x_r in L_1 and x_l follows x_r in L_2. Since L_1 ≤ L_2 in the right weak order, (x_l, x_r) ∈ inv(L_2). Thus x_l < x_r in numerical order.

Rectangles x, x_l, and x_r form one of the configurations shown in Figure 12 (with x_l replacing x_a). In every diagram of Figure 12, since each rectangle x_i such that x_l ≤_P x_i <_P y is contained in the region above the horizontal line containing the top of rectangle x and left of the vertical line containing the left side of rectangle y, rectangle x_i intersects the diagonal of S in that region. This implies that x_i < x in numerical order. Additionally, for each x_j such that x_r ≤_P x_j <_P y, since rectangle x_j intersects the diagonal of S in the region right of the vertical line containing the right side of rectangle x and below the horizontal line containing the bottom of rectangle y, we have that x < x_j in numerical order. Thus one connected component of (x, y) contains elements numerically smaller than x and y, while the other connected component contains elements numerically larger than x and y. Since x_l < x_r in numerical order, with x_l contained in the left component of (x, y) and x_r contained in the right component, given w, z ∈ (x, y) such that w is left of z in this planar embedding of P, we have that w < x < z in numerical order.

We have shown that the adjacency poset P satisfies each of the conditions in Definition 1.1, so P is a Baxter poset.

Now let P be a Baxter poset. To demonstrate that P is an adjacency poset, we first show that the set of linear extensions of P is a union of fibers of ρ. In what follows, we assume that P is embedded as described in Condition 5 of Definition 1.1. Let σ = σ_1 · · · σ_n be a linear extension of P and suppose ψ = σ_1 · · · σ_{j−1} σ_{j+1} σ_j σ_{j+2} · · · σ_n is such that ρ(σ) = ρ(ψ). We will show that ψ is also a linear extension of P. Since ρ(σ) = ρ(ψ), and σ ⋖ ψ or ψ ⋖ σ in the right weak order, by Proposition 2.2 the permutations σ and ψ are related by a single (2-41-3 ↔ 2-14-3) or (3-41-2 ↔ 3-14-2) move. Let a σ_j σ_{j+1} b be an occurrence of one of these four patterns in σ such that swapping σ_j and σ_{j+1} is a move. Since σ is a linear extension of P, the permutation ψ is also a linear extension of P if and only if σ_j and σ_{j+1} are incomparable in P.
To proceed via contradiction, assume that σ_j and σ_{j+1} are comparable in P. Because σ_j precedes σ_{j+1} in σ and σ is a linear extension of P, we have that σ_{j+1} ≮_P σ_j. Thus σ_j <_P σ_{j+1}. This implies that σ_j ⋖_P σ_{j+1}, since any σ_k such that σ_j <_P σ_k <_P σ_{j+1} would be between σ_j and σ_{j+1} in every linear extension of P (and in particular in σ). By Condition 3 of Definition 1.1, at least one of {a, b} is incomparable with at least one of {σ_j, σ_{j+1}}. We assume that a is incomparable with σ_j or σ_{j+1}, and note that if b is instead incomparable with σ_j or σ_{j+1}, then the argument is analogous. Since a precedes σ_j in σ, our assumption implies that either a <_P σ_{j+1} and a and σ_j are incomparable, or a is incomparable with both σ_j and σ_{j+1}. In either case, a and σ_j are incomparable. By Proposition 4.3, P is a lattice, so we may consider S(a) and S(σ_j).

First assume that a is left of σ_j, and consider the maximal chain C_1 of P from a to the minimal element of P, denoted 0̂, that follows the right boundary of S(a). Let C_2 denote the maximal chain of P from σ_j to 0̂ that follows the left boundary of S(σ_j). Note that C_1 and C_2 intersect at a ∧ σ_j, and let C′_1 and C′_2 denote the chains from a and σ_j to a ∧ σ_j obtained by truncating C_1 and C_2 respectively. Figure 13 shows an example of the chains C′_1 and C′_2. Each edge of C′_1 and C′_2 is the edge of a region of P that lies right of C′_1 and left of C′_2. Starting at a, traveling down C′_1 to a ∧ σ_j, label the sequence of regions right of and adjacent to C′_1 with R_1, . . . , R_l. Starting at a ∧ σ_j, and traveling up C′_2 to σ_j, continue by labeling the sequence of regions left of and adjacent to C′_2 with R_l, R_{l+1}, . . . , R_m. In Figure 13, l = 4 and m = 6. For each i ∈ [m − 1], by Condition 2 of Definition 1.1, the region R_i shares an edge with the region R_{i+1}. (Otherwise C_1 is not the right boundary of S(a) or C_2 is not the left boundary of S(σ_j).) Since P is a lattice, for i ∈ [m], each region R_i has a minimal element, denoted r_i, contained in the boundary of R_i. (If some region's minimal element is not on C′_1 ∪ C′_2, then again either C_1 is not the right boundary of S(a) or C_2 is not the left boundary of S(σ_j).) For each i ∈ [l], the minimal element r_i is contained in the left side of the region R_{i+1}. Thus, by Condition 5 of Definition 1.1, we have that a < r_1 < · · · < r_l = a ∧ σ_j in numerical order. For each i ∈ {l + 1, . . . , m}, the minimal element r_i is contained in the right side of region R_{i−1}. Thus a ∧ σ_j = r_l < r_{l+1} < · · · < r_m < σ_j in numerical order. Combining these strings of inequalities, we conclude that a < σ_j in numerical order.

In a similar way, construct a sequence of regions S_1, . . . , S_p using the section of the right boundary of S(a) from a to a ∨ σ_{j+1} and the section of the left boundary of S(σ_{j+1}) from σ_{j+1} to a ∨ σ_{j+1}. If a <_P σ_{j+1}, then a ∨ σ_{j+1} = σ_{j+1}. Whether a <_P σ_{j+1} or a and σ_{j+1} are incomparable in P, using the sequence of maximal elements of these regions together with Condition 5 of Definition 1.1, we obtain a chain of inequalities and conclude that a < σ_{j+1} in numerical order. However, combining the conclusions that a < σ_j and a < σ_{j+1} contradicts the assumption that a σ_j σ_{j+1} b is an occurrence of a 2-41-3, a 2-14-3, a 3-41-2, or a 3-14-2 pattern. If σ_j is left of a in P, then to construct the sequence of regions R_1, . . .
If σ_j is left of a in P, then to construct the sequence of regions R_1, . . . , R_m, let C_1 be the right boundary of S(σ_j) and C_2 be the left boundary of S(a). To construct the sequence of regions S_1, . . . , S_p, use the right boundary of S(σ_{j+1}) and the left boundary of S(a). Using these sequences and the corresponding chains of inequalities, we conclude that in numerical order σ_j < a and σ_{j+1} < a. This conclusion again contradicts the assumption that aσ_jσ_{j+1}b is an occurrence of a 2-41-3, a 2-14-3, a 3-41-2 or a 3-14-2 pattern. In both cases, we see that σ_j and σ_{j+1} are incomparable in P. Therefore the set of linear extensions of P is a union of fibers of ρ.

Twisted Baxter and Baxter Permutations from Baxter Posets

Let P be a poset. We say that a subset I of the elements of P is an order ideal of P if and only if for every a ∈ I, if b <_P a, then b ∈ I. We say that an ordering a_1 · · · a_i of a subset of the elements of P is a partial linear extension of P if {a_1, . . . , a_j} is an order ideal of P for all j ∈ [i]. Given a poset P on [n], a permutation σ of [n] is a linear extension of P if and only if it is a partial linear extension of all n elements of P. Given a partial linear extension σ_1 · · · σ_{i−1} of P, we define A_i ⊆ [n] by u ∈ A_i if and only if σ_1 · · · σ_{i−1}u is a partial linear extension of P. We label this set A_i because it forms an antichain (a set of pairwise incomparable elements) of P.

Theorem 5.1. Given a Baxter poset P, the unique twisted Baxter permutation σ = σ_1 · · · σ_n that is a linear extension of P is constructed by choosing σ_i = min(A_i) for each i ∈ [n].

Note that min(A_i) denotes the smallest element of A_i in numerical order. If a Baxter poset P is given a natural embedding, then this selection is equivalent to choosing the leftmost (in the embedding) element of A_i for each i ∈ [n].

Proof. Let P be a Baxter poset and D the associated diagonal rectangulation. By Theorem 1.2, the total order σ is a linear extension of P if and only if ρ(σ) = D. Since ρ restricts to a bijection between diagonal rectangulations and twisted Baxter permutations (Theorem 2.1), there is a unique linear extension σ = σ_1 · · · σ_n of P that is a twisted Baxter permutation. To construct σ one entry at a time, we must describe a method for choosing σ_i from A_i. By Proposition 2.3, the permutation σ is the minimal element of the right weak order such that ρ(σ) = D. That is, σ is the linear extension of P that contains the fewest inversions. Therefore, σ_i = min(A_i) for all i ∈ [n].

The following results will be used in the proof of Theorem 1.4. The next lemma is equivalent to Corollary 4.2 in [10], which states that σ is a Baxter permutation if and only if σ^{−1} is a Baxter permutation.

Lemma 5.2. The permutation σ is a Baxter permutation if and only if σ contains no subsequence σ_iσ_jσ_kσ_l such that |σ_l − σ_i| = 1 and the subsequence is an occurrence of the pattern 2-4-1-3 or the pattern 3-1-4-2. (A brute-force check of this criterion, together with the greedy construction of Theorem 5.1, is sketched in code at the end of this section.)

By Theorem 2.1, there exists a unique linear extension of P that is a Baxter permutation.

Lemma 5.3. Let P be a Baxter poset and σ be the unique Baxter permutation that is a linear extension of P. Then σ respects the arrows of P.

Proof. Let P be a Baxter poset with a natural embedding. Let σ denote a linear extension that does not respect the arrow of some region R of P. Let min_R and max_R respectively denote the minimal and maximal elements of R. By Condition 4 of Definition 1.1, we have that min_R and max_R differ in value by one.
Since σ does not respect the arrow of R, there exists a subsequence min_R σ_i σ_j max_R of σ such that σ_i and σ_j are contained in the boundary of R, one of these is contained in the left component of (min_R, max_R) and the other in the right component of (min_R, max_R), and this subsequence is an occurrence of a 2-4-1-3 or a 3-1-4-2 pattern. Thus, by Lemma 5.2, σ is not a Baxter permutation.

We make several useful observations about the map ρ. Given a diagonal rectangulation D, if W is a horizontal wall of D and rectangle a is the leftmost rectangle below and adjacent to W, then rectangle a − 1 is the rightmost rectangle above and adjacent to W, and a precedes a − 1 in every permutation σ such that ρ(σ) = D. Every other rectangle below and adjacent to W has a label larger than a, and every other rectangle above and adjacent to W has a label smaller than a − 1. Similarly, if W is a vertical wall of D and rectangle a is the lowermost rectangle left of and adjacent to W, then rectangle a + 1 is the uppermost rectangle right of and adjacent to W, and a precedes a + 1 in every permutation σ such that ρ(σ) = D. Additionally, every other rectangle left of and adjacent to W has a label smaller than a, and every other rectangle right of and adjacent to W has a label larger than a + 1. The lemma below follows from the definition of a Baxter permutation, the above observations, and Lemma 5.2.

Lemma 5.4. Let D be a diagonal rectangulation and σ = σ_1 · · · σ_n ∈ S_n such that ρ(σ) = D. If σ is a Baxter permutation, then σ satisfies the following properties:

• If rectangles σ_i and σ_j are adjacent to a horizontal wall W with rectangle σ_i below W and rectangle σ_j above W, then σ_i precedes σ_j in σ, and

• If rectangles σ_i and σ_j are adjacent to a vertical wall W with rectangle σ_i left of W and rectangle σ_j right of W, then σ_i precedes σ_j in σ.

To complete the proof of Theorem 1.4, we refer to a second family of rectangulations, called generic rectangulations. We need generic rectangulations exclusively to prove Lemma 5.7, a lemma about diagonal rectangulations, so we only provide the required background related to generic rectangulations from [11]. We say that a rectangulation R is a generic rectangulation if and only if there exists no set of four rectangles of R that share a vertex. The set of diagonal rectangulations with n rectangles is a subset of the set of generic rectangulations with n rectangles. As with diagonal rectangulations, there is a map γ that takes a permutation on [n] to a generic rectangulation of size n (see [11, Section 3]) and restricts to a bijection between a subset of S_n and generic rectangulations containing n rectangles. We will not need a complete description of γ, so we instead quote the required results. Given a generic rectangulation R and a wall W of R, the wall shuffle of W, denoted σ_W, records the order in which the rectangles adjacent to W appear along W. Specifically, let W be a horizontal wall of R. To find the wall shuffle of W, temporarily label each vertex contained in W as follows. If the vertex is the upper-left vertex of some rectangle x, then label the vertex with x. Otherwise, the vertex is the lower-right vertex of some rectangle y, and we label it with y. The left-to-right ordering of the vertices along W provides an ordering of these vertex labels, and this ordering is σ_W. Similarly, if W is a vertical wall of R, we temporarily label the vertices contained in W. We label a vertex with x if it is the lower-right vertex of rectangle x.
Otherwise, the vertex is the upper-left vertex of some rectangle y, and we label the vertex with y. The bottom-to-top order of these labels along W gives us σ_W. The map γ constructs a generic rectangulation R from a permutation in two steps. Given σ ∈ S_n, we first construct ρ(σ). Then, for each wall of ρ(σ), the vertices are labeled as described above, and the vertices (and the attached edges) are reordered along each wall so that the wall shuffle of each wall is a subsequence of σ. For us, the key point is that, to specify a generic rectangulation, it suffices to identify the associated diagonal rectangulation and an order of the vertices along each wall (i.e. a wall shuffle for each wall). Given a Baxter permutation σ, the conditions given in Lemma 5.4 specify the wall shuffles of the associated generic rectangulation γ(σ). As a result, we can make use of generic rectangulations to prove the following lemma.

Lemma 5.7. Let D be a diagonal rectangulation. Then there is a unique permutation σ such that ρ(σ) = D and such that σ satisfies the properties given in Lemma 5.4. This permutation σ is the Baxter permutation associated with D.

Proof. Let D be a diagonal rectangulation and σ the unique Baxter permutation such that ρ(σ) = D. The permutation σ satisfies the properties given in Lemma 5.4. Assume that there exists a second permutation ψ such that ρ(ψ) = D and ψ satisfies the properties given in Lemma 5.4. Since ρ(σ) = ρ(ψ) and the wall shuffles of γ(σ) agree with the wall shuffles of γ(ψ), we have that γ(σ) = γ(ψ). Thus, by Proposition 5.6, the permutations σ and ψ are related by a sequence of adjacent transpositions in which each transposition is a (2-4-15-3 ↔ 2-4-51-3) move, a (4-2-15-3 ↔ 4-2-51-3) move, a (3-15-2-4 ↔ 3-51-2-4) move, or a (3-15-4-2 ↔ 3-51-4-2) move. This implies that some subsequence of σ is an occurrence of one of these eight patterns. First, assume that σ_iσ_jσ_kσ_{k+1}σ_l is an occurrence of the pattern 2-4-15-3 in σ. This means that σ_k < σ_i < σ_l < σ_j < σ_{k+1} in numerical order. However, this implies that the subsequence σ_jσ_kσ_{k+1}σ_l is an occurrence of the pattern 3-14-2 in σ, contradicting our assumption that σ is a Baxter permutation. If σ contains an occurrence of one of the other seven patterns, then we similarly show that σ is not a Baxter permutation. We conclude that the unique permutation mapping to D under ρ and satisfying the properties of Lemma 5.4 is the Baxter permutation σ.

Lemma 5.8. Let P be a Baxter poset and let σ be a linear extension of P that respects the arrows of P. Then σ satisfies the properties given in Lemma 5.4.

Figure 14: Illustrations for the proof of Lemma 5.8.

Proof. To show that σ satisfies the properties of Lemma 5.4, we will show that σ satisfies these properties for each possible configuration of rectangles adjacent to the wall. First assume that on at least one side of the wall W there is only one adjacent rectangle. Let W be a horizontal wall with a single rectangle, rectangle r_1, below W and a sequence of rectangles r_2, . . . , r_l above W. For all i ∈ {1, . . . , l − 1}, an interior point of rectangle r_i is strictly below and left of an interior point of rectangle r_{i+1}. Thus, by the definition of the adjacency poset and Theorem 1.2, we have that r_1 <_P r_2 <_P · · · <_P r_l. If W is horizontal with a single rectangle, rectangle r_l, above W and a sequence of rectangles r_1, . . . , r_{l−1} below W, then we reach the same conclusion. In either case, in P, the labels of the rectangles adjacent to W form a chain and, in this chain, all labels of rectangles below W precede all labels of rectangles above W.
When W is a vertical wall with a single rectangle either left of or right of W, the argument is the same. In these cases, we conclude that the labels of rectangles adjacent to W form a chain in P and the labels of rectangles left of W precede the labels of rectangles right of W in this chain. Thus every linear extension of P satisfies the properties of Lemma 5.4 for walls that are adjacent to exactly one rectangle on at least one side.

Now assume that on both sides of the wall W there are at least two adjacent rectangles. We will prove the claim that if W is a horizontal wall, then the labels of the rectangles adjacent to W are all contained in the boundary of some region of P. Let W be horizontal and, as illustrated in the left diagram of Figure 14, label from left to right the rectangles adjacent to and below W with the sequence b_1, . . . , b_i. Label the rectangles adjacent to and above W, again from left to right, a_1, . . . , a_j. Since D is diagonal and rectangles b_1 and a_1 are the leftmost rectangles adjacent to W, these rectangles form the configuration shown in Diagram (i) of Figure 7. Thus, by Theorem 3.2, we have that b_1 ⋖_P a_1. If a_1 <_P b_2, then there exists a sequence of elements x_1, . . . , x_l such that a_1 ⋖_P x_1 ⋖_P · · · ⋖_P x_l ⋖_P b_2. Since b_1 ⋖_P a_1 and b_2 <_P a_j, for each k ∈ [l] we have that b_1 <_P x_k <_P a_j. Thus each rectangle x_k is contained either in the region above W and left of the line containing the left side of rectangle a_j, or below W and right of the line containing the right side of rectangle b_1. But in P, no rectangle in the first of these regions covers a rectangle in the second of these regions. We see by this contradiction that a_1 ≮_P b_2. Since b_1 <_P b_2 and a_1 ≮_P b_2, there exists some c such that b_1 ⋖_P c and c ≠ a_1. By Theorem 3.2, rectangle c is adjacent to the right side of rectangle b_1. Since rectangles b_1, a_1 and c form a configuration shown in Diagram (ii) or (iii) of Figure 12, we have that a_1 ∨ c = a_j (as shown in the proof of Theorem 1.2). This implies that b_1 and a_j are contained in a shared region R of the embedded poset. Observe that for each k ∈ [i], the lower-left vertex of rectangle b_k is strictly below and left of the upper-right vertex of rectangle b_i, so b_k ≤_P b_i <_P a_j. Similarly, for each l ∈ [j], we have that a_l ≤_P a_j. For a contradiction, assume that there exists a label of a rectangle adjacent to W that is not contained in the boundary of R. We consider the case in which some a_l is not contained in the boundary of R, as illustrated in the right diagram of Figure 14. Since a_l < b_1 in numerical order, a_l is contained in the left connected component of the interval (b_1, a_j). Since a_l is not contained in the left boundary of R, the element a_l is contained in the left boundary of some other region, R′. Let d denote an element contained in the right boundary of R′. The planarity of the embedding of P implies that d satisfies b_1 <_P d. Thus d ≮_P b_1, implying that no interior points of rectangle d are strictly left of and below the upper-right corner of rectangle b_1. Additionally, a_l ≮_P d, so no interior points of rectangle d are strictly right of and above the lower-left corner of rectangle a_l. Since d and a_l are contained respectively in the right and left boundaries of R′, we have that a_l < d in numerical order.
This implies that rectangle d is contained in the section of the diagonal rectangulation D below the horizontal line containing W and right of the vertical line containing the right side of rectangle b_1. Thus b_1 < d in numerical order. However, this contradicts the assumption that P is embedded naturally in the plane. We conclude that each a_l for l ∈ [j] is contained in the left boundary of R. A similar argument demonstrates that each b_k for k ∈ [i] is contained in the right boundary of R. Thus, the claim holds. Since W is horizontal, b_1 − 1 = a_j, implying that the arrow of R points to the left. By assumption, σ respects the arrow of R, so each b_k occurs before every a_l in σ; i.e., for every horizontal wall, σ satisfies the first condition of Lemma 5.4. A virtually identical argument demonstrates that if W is vertical, and on both sides of W there are at least two adjacent rectangles, then σ satisfies the second condition of the lemma.

Proof of Theorem 1.4. Let P be a Baxter poset, let X be the set of linear extensions of P that respect the arrows of P, and let σ be the Baxter permutation that is a linear extension of P. By Lemma 5.3, the Baxter permutation σ is in X. By Lemma 5.8, each element of X satisfies the properties given in Lemma 5.4. However, by Lemma 5.7, only one linear extension of P satisfies these properties, so X = {σ}.
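Both Lemma 5.2 and Theorem 5.1 are effectively algorithmic, and it may help to see them spelled out in code. The following Python sketch is purely illustrative and is not part of the paper: it brute-forces the Lemma 5.2 criterion and carries out the greedy construction of Theorem 5.1, with the poset supplied as an explicit set of order relations.

```python
from itertools import combinations

def is_baxter(sigma):
    # Lemma 5.2 criterion: sigma (a sequence containing 1..n) is Baxter
    # iff it has no subsequence s_i s_j s_k s_l with |s_l - s_i| = 1
    # forming an occurrence of 2-4-1-3 or 3-1-4-2.
    for i, j, k, l in combinations(range(len(sigma)), 4):
        a, b, c, d = sigma[i], sigma[j], sigma[k], sigma[l]
        if abs(d - a) == 1 and (c < a < d < b or b < d < a < c):
            return False
    return True

def min_linear_extension(n, less_than):
    # Theorem 5.1 construction: repeatedly place the numerically smallest
    # element of the antichain A_i of currently available elements.
    # less_than is a set of pairs (a, b) meaning a <_P b in the poset on [n].
    remaining = set(range(1, n + 1))
    sigma = []
    while remaining:
        available = [u for u in remaining
                     if not any(a in remaining
                                for (a, b) in less_than if b == u)]
        sigma.append(min(available))  # sigma_i = min(A_i)
        remaining.remove(sigma[-1])
    return sigma

# The two classic non-Baxter permutations:
assert not is_baxter((2, 4, 1, 3))
assert not is_baxter((3, 1, 4, 2))
```

By Theorem 5.1, when less_than encodes the order relation of a Baxter poset P, min_linear_extension returns the unique twisted Baxter permutation that is a linear extension of P.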
Digital Technology and Health Workers' Performance: A Case of Hospitals in Nigeria and South Africa

Digital healthcare is a concept that creates an intersection between technology and healthcare in the healthcare system, thus incorporating software, hardware and services. Digital technology plays an increasingly important role in healthcare today. Without doubt, the digital transformation of healthcare has raised several challenges that affect stakeholders, especially healthcare workers and patients. However, effective adoption of digital technology enhances performance and increases efficiency. More so, it has made effective communication between healthcare providers and patients very easy. The paper presents digital technology as a driver of change in the healthcare system, especially in Africa, and its positive impact on health care workers' performance.

Introduction

Medical care quality is increasingly prioritised across the six areas of safety, patient-centredness, efficiency, effectiveness, timeliness and accessibility, as telemedicine, remote diagnostic tools and advanced communication tools have been reported to improve the quality of care and reduce the cost of care. 6 It is instructive to note that the implementation of digital facilities has positive impacts on healthcare, but it also comes with its challenges. Cases include a destructive insulin overdose due to an erroneous bar-coded wristband 7 and the wrong prescription of medication as a result of a computerised pick list. 8 It has been said that the efficacy and quality of a digital system may not be harnessed without change management to enjoy the benefits of a digital health system. 9 It is in this sense that this study examined the implementation of digital facilities and its effect on employees' performance, alongside change management, at selected hospitals in Nigeria and South Africa.

Problem statement

There have been many problems with the systematic management of health records in developing contexts, which has led to continuous data corruption. 10 For instance, a qualitative study in South Africa revealed that the development of mobile health systems is impeded by the non-availability of digital technology and by concerns over the privacy of information or data. 11 South Africa is a middle-income country with free public health care provided by the government, with the aim of supporting poor rural areas; it is the African country most burdened by HIV/AIDS and diabetes.

6 Sukkird, Vatcharapong, and Kunio Shirahada. "Technology challenges to healthcare service innovation in aging Asia: Case of value co-creation in emergency medical support system." Technology in Society 43 (2015): 122-128.
7 McDonald, Clement J. "Computerization can create safety hazards: a bar-coding near miss." Annals of Internal Medicine 144, no. 7 (2006): 510-516.
8 Koppel, Ross, Joshua P. Metlay, Abigail Cohen, Brian Abaluck, A. Russell Localio, Stephen E. Kimmel, and Brian L. Strom. "Role of computerized physician order entry systems in facilitating medication errors." JAMA 293, no. 10 (2005): 1197-1203.
9 Adeleke, Ibrahim Taiwo, Adedeji Olugbenga Adekanye, Abdullahi Daniyan Jibril, Fausat Fadeke Danmallam, Henry Eromosele Inyinbor, and Sunday Akingbola Omokanye. "Research knowledge and behavior of health workers at Federal Medical Centre, Bida: A task before learned mentors." Elective Medicine Journal 2, no. 2 (2014): 105-109.
10 Salleh, Mohd I. M., Raja A.R. Yaacob, and Mohamad S. Saleh. "The effect of performance impact on the integrity management of electronic medical records." Australian Journal of Basic and Applied Sciences 7, no. 6 (2013): 237-245.
11 Leon, Natalie, Helen Schneider, and Emmanuelle Daviaud. "Applying a framework for assessing the health system challenges to scaling up mHealth in South Africa." BMC Medical Informatics and Decision Making 12, no. 1 (2012): 1-12.
"The effect of performance impact on the integrity management of electronic medical records." Australian Journal of Basic and Applied Sciences 7, no. 6 (2013): 237-245. 11 Leon, Natalie, Helen Schneider, and Emmanuelle Daviaud. "Applying a framework for assessing the health system challenges to scaling up mHealth in South Africa." BMC medical informatics and decision making 12, no. 1 (2012): 1-12. African government implemented the use of Electronic Health Record system to improve and manage healthcare services. 12 Despite the implementation of various interventions focused at reinforcing primary health care, the impact on the population is limited. The implementation of EHR and other ICT tools in the public healthcare industry is associated with complexities and challenges. 13 It has been argued that 70% of intervention effort by the South African government failed to deliver the expected outcomes. 14 The major problem has been attributed to deficiencies in the health system, such as poor health information system, inadequate ill-trained medical personnel, under-resourced medical facilities, and the challenge of moving from paper-based system to electronic system. 15 In Similar vein, most health care organizations in Nigeria are predominantly on manual system; 16 health information documents are hugely on paper-based procedure; 17 and health providers have been reported to lack digital skills related to their profession. 18 Consequently, most of the public health organizations are burdened with delayed patients waiting time, and shortage of digitally-skilled physicians. Being the most populous country in Africa, the public healthcare industry does not have the capacity to provide essential medical service for the larger part of its people. Moreover, the health information system in Nigeria is 12 Weeks, Richard Vernon. "Electronic health records: managing the transformation from a paperbased to an electronic system." Journal of Contemporary Management 10, no. 1 (2013): 135-155. 13 Leon, Natalie, Helen Schneider, and Emmanuelle Daviaud. "Applying a framework for assessing the health system challenges to scaling up mHealth in South Africa." BMC medical informatics and decision making 12, no. 1 (2012): 1-12. 14 highly provoking and requires urgent adoption and implementation of digital facilities in order to improve patients waiting time, patients' profile information, scheduling of appointments, and provision of online medical services. Global references indicated that health information technology has the potential to assist in reducing these deficiencies and lowering transaction cost, by moving from manually driven system to automation system. 19 Digital technology in the healthcare industry Digital transformation of life and society in the healthcare industry has generated a wide discourse in recent times. Advanced information communication technology has given rise to progressive transformation of the healthcare industry. 20 Digital transformation has been described as the process of integrating technology into formerly held analogous process. 21 It refers to the transformation from partly digitised to completely digitised business models. 22 According to Iyawa, Herselman and Botha, digital health is the improvement in the provision of health care through information and communication technology to monitor and enhance the wellbeing and health of patients, and that of their families. 
The definition supports the advocacy for change management in the healthcare industry, especially for hospitals to focus on becoming more patient-centric in future. 24 This advocacy has led many healthcare institutions, especially in developed economies, to deploy digital technologies such as Artificial Intelligence (AI). 25 The emergence of AI has made significant and positive contributions to the healthcare industry by providing precise data-driven decisions. 26 This is because data from large domains is used for the early diagnosis of chronic diseases, which include cardiovascular diseases, cancer and diabetes. 27 It is instructive to note that about 10% of patient deaths and 6 to 17% of hospital adverse events are caused by diagnostic mistakes. It is also important to remember that diagnostic mistakes are not always caused by poor medical performance. Medical experts argue that diagnostic errors are caused by the inefficient integration of health information technology, communication breakdowns between doctors, patients and their families, and health work systems poorly designed to support diagnostic processes. 28 The inability to effectively manage health information technology, as a result of increasing data volumes, led to the success of AI in healthcare institutions. This suggests that the potential for disruption in the healthcare industry is enormous. 29 Besides, a study conducted by Aruba revealed that more than 60% of hospitals around the world have introduced the Internet of Things (IoT) in their organisations. 30 Biomarker testing is a medical tool that uses artificial intelligence; it performs a group of tests to identify molecular signs of health so that physicians can recommend the best treatment available to the patient. 31 Natural Language Processing technology is an example of machine learning which is now being used to generate patients' records such as treatment plans, prescriptions and health problems. 32 The Virtual Nurse is another automated system that is driven by digital technology; patients can avoid long queues and expensive trips to healthcare facilities by interacting with it remotely.

Health workers' performance in implementing digital technology

The benefits of digital health facilities have accelerated their adoption by health management institutions and hospitals, especially during the Covid-19 pandemic. 36 Digital technology has become more in demand by hospitals than ever before. The need to ameliorate the challenges of increases in various types of diseases and in population has intensified the use of digital health facilities. However, health workers find it difficult to adopt and provide digital health solutions due to a lack of training on new digital tools, problems with internet connectivity, and poor technical support. 37 This implies that implementing new technology requires training medical health workers in how to use digital tools for improved performance. Ethiopia and Uganda have been successful in the implementation of digital tools to improve health workers' performance in combating diseases and providing care support for patients. Given this, health care workers contributed significantly to the decline in maternal and child morbidity and mortality rates. 38 The use of telemedicine in the US, China, Canada, Australia and Norway has drastically reduced the risk of infection and patient waiting times, 33 with increased online prescription support for patients by physicians.
Evidence indicates the use of the upSCALE digital platform in Mozambique by health workers to provide basic health care. 39

Digital technology and health workers' performance

Discourse on the impact of digital technology on health workers' performance is well explored. Consider, for example, the research study conducted in Canada by O'Connor & O'Reiley (2018) on the infusion of mobile technology by healthcare practitioners in a hospital context. The study established that a significant association exists between mobile health infusion and health practitioner performance. This finding also aligns with the empirical study conducted in Malaysia by Ghaleb, Dominic, Fati, Muneer and Ali on the adoption of Big Data technology in healthcare organizations. 40 The authors found that technology adoption significantly influenced healthcare employees' and organizational performance. The use of mobile phones to develop community health management information was adopted in Zambia. The study demonstrated that the use of mobile phones for data collection, tracking and management of information improved the performance of community health workers. 41 A qualitative study in rural South Africa among patients and health workers revealed that the use of mobile phones in poor and remote areas promotes opportunities and capabilities in accessing health care services. 42 However, the lack of digital facilities and of digital literacy are challenges to the effective implementation of mobile health services. An empirical study by Adeleke found that 98.8% of health workers acknowledged the impact of IT tools on their professional development. 43 The studies by Luthuli (2017) and by Pandey on the transition from a paper-based system to a tablet-and-mobile-based data collection system indicated that there was no significant difference across the three modes of data collection. 44 The authors revealed that despite positive feelings about the movement from a paper-based to a digital system, the health care workers retained and preferred the paper-based system in actual practice. 45 The authors further suggest the need for future study of the transformation from paper-based to digital systems within healthcare organizations. The aim of this study is to investigate digital transformation and health workers' performance in hospitals in Nigeria and South Africa.

St Mary hospital, Marian Hill, South Africa

For over 100 years, St Mary's Hospital, Marian Hill, located in KwaZulu-Natal (KZN), has been providing quality healthcare services for communities residing around the area. The hospital was built by monks who arrived in South Africa in 1882. The 200-bed district hospital serves a population of approximately 3 million. In 2017, the DOH took over St Mary's hospital due to the financial difficulties encountered by the hospital. Fortunately, the DOH intervened when they did, as the closure of this hospital would have severely compromised access to healthcare services for the community and hampered efforts to reduce the burden of disease in the province. The loss of jobs and skills would also have been a problem, as the hospital employees would have been out of employment (Coates IV 2014).

St Joseph hospital, Adazi-Nnukwu, Nigeria

St Joseph hospital, Adazi-Nnukwu, is located in the Aniocha local government area of Anambra State. It was initially built as an outstation clinic under the management of the Missionary Sisters of the Most Holy Rosary in 1938. The hospital under the care of the sisters was registered with 101 beds in 1939.
It has since grown to 188 beds. The missionary sisters played an important role in the formation of the school of midwifery. The hospital continued to run and remained small in terms of structures until February 2012, when the administration of Governor Peter Obi decided to invest in the hospital and midwifery school. Hence, major digital transformations took place. St Mary hospital, Marian Hill, South Africa and St Joseph hospital, Adazi-Nnukwu, Nigeria share a similar history and administrative system. Both hospitals were established and run by missionaries before being taken over by the governments. The uncertainty resulting from acquisition and restructuring can increase the stress levels of employees if not handled effectively. The perspectives of hospital workforces during a redevelopment have been poorly explored. Hospital redevelopment is often considered a physical action rather than an organisational one. Pomare et al. posit that organisational change in hospitals requires not only the physical environment to change; the behavioural operations, the structural relationships and roles, and the organisational culture may also transform at large. 46 According to the Lady Cilento Children's Hospital Clinical Review (2015), in a recent example of a new hospital opening in Australia (the Children's Health Queensland hospital), employees' attitudes shifted from excitement during the early stages of change to frustration as the development progressed. This suggests that the role and support of frontline workers is crucial to the implementation of any change (Lourens and Ballard 2016).

Research methods

This research is framed within the interpretivist philosophical assumption. The interpretivist philosophical worldview allows for the adoption of various means or multiple research approaches to understand a phenomenon or uncover the truth. 47 The adoption of the interpretivist ideology permits the use of a mixed-method research approach for this study. Both quantitative and qualitative research approaches were found suitable for this study. The case-study research design was found appropriate for investigating hospitals in Nigeria and South Africa. The case study research design has become one of the most widely used in technology management research and information system studies, 48 because it allows a better understanding and deeper knowledge of a complex problem in reality. The population of the study includes physicians, midwives, nurses and managers of the hospitals. St Joseph and St Mary hospitals together yield a target population of 540 staff. The sample size was derived by using Krejcie and Morgan's table as illustrated by Sekaran and Bougie, 49 and a sample size of 301 was determined for the survey questionnaire. A purposive sampling technique was adopted in selecting 5 medical staff from each hospital for the purpose of interviews. The 10 selected health workers comprise senior and junior staff who were directly involved in the implementation process. The research questions were derived from an extensive literature review and were critically validated by two experts from the Department of Human Resource Management at Durban University of Technology. The research instruments were measured on a six-point Likert scale ranging from "Strongly Agree" to "Strongly Disagree".
Further, a pilot study with a sample size of 20 was conducted to test the reliability and validity of the items in the questionnaire, and the Cronbach's alpha results indicated that both the independent variable (digital technology) and the dependent variable (workers' performance) were well above the 0.7 minimum threshold. 50 The principles of accountability and authenticity were adopted to determine the reliability and validity of the research questions for the in-depth interviews.

Analysis

The quantitative data was analysed via SPSS version 25, while the qualitative data was analysed through NVivo 12 software. The factor analysis results show KMO values above the accepted minimum of 0.5, with a significant Bartlett's test of sphericity at the 0.05 level. Further, a regression analysis was performed by regressing the dependent variable on the independent variable, as shown in Table 2 below (predictors: (constant), digital technology; dependent variable: health workers' performance). A minimal code sketch of this kind of regression appears at the end of this paper.

As shown in Table 2, the outcome of the regression analysis reveals that the use of digital technology has an influence on health workers' performance at St Joseph hospital, Adazi-Nnukwu. Furthermore, the p-value of 0.007 indicates a statistically significant effect at the 5% level of significance. The coefficient of determination of 0.370 reveals that approximately 37% of the variation observed in the dependent variable (health workers' performance) is explained by the independent variable (use of digital technology). The F value and the p-value (24.322, p < 0.001) in Table 2 show that these regression results are significant. The corresponding analysis reveals that the use of technology has a potential influence on employee performance at St Mary's hospital, Marian Hill. Furthermore, the p-value of 0.014 indicates a statistically significant effect at the 5% level of significance. The coefficient of determination of 0.557 reveals that approximately 55.7% of the variation observed in the dependent variable (health workers' performance) is explained by the independent variable (use of digital technology). The F value and the p-value (39.665, p < 0.001) in Table 3 show that the regression result is significant.

Qualitative analysis

The transcribed data received from the structured questions revealed organisational change through the implementation of digital technology, as shown in Figures 1 and 2 below. The interview responses that emerged from the in-depth interview sessions with regard to the digital transformation process were coded into different themes and sub-themes as shown below.

Figure 1: Digital implementation in St Joseph hospital.

The composition in Figure 1 above presents the components of digital implementation as they influence health workers' performance. The themes that emerged as implemented by St Joseph hospital were internet, nanometre BP apparatus, computers, digital infrared thermometer, electronic health record and billing software. The introduction of digital equipment is the major driver of change at St Joseph's Hospital. Some of the participants identified the digital gadgets and shared their previous experiences with the current administrative system, giving credit to the present administration regarding the new digital implementation in moving from a manual process to a digital one, and how it has improved workers' performance. Some of the participants asserted this:

When I came here, there was only one computer in the hospital and that was in the administration department.
But right now, there are computers at every department. We are working digitally now with many technologies. Right now, there are many computerised systems that doctors work with, even consulting from their phone. We are trying to facilitate patient care. A lot of patients are very satisfied with the level of efficiency in the hospital (Participant 2, interview - St Joseph's).

This comment is buttressed by other participants:

Currently, the organisation has introduced computers to document information, though they are not available to all departments. However, they are available in the medical, pharmacy, data, billing and laboratory departments (Participant 3, interview - St Joseph's).

A lot, everywhere technology has brought about a lot of changes. It is a welcome development, and it is very encouraging. Some of the changes are technology infrastructure, systems, automations and tools (Participant 4, interview - St Joseph's).

The digital infrastructural facilities were the major practical implementations that influenced changes within the organisation. These changes brought a drastic transformation with a significant impact on the patients. Some of the participants agreed that the digital transformation has improved their finance system and staff performance. Technological innovations in the healthcare system have facilitated easier communication in hospitals.

In the area of finance, I would say the change is useful; at least the management can easily see and track the accounts after the billing process. I prefer the current change because, like I said, it makes our job more effective and the patients are very satisfied (Participant 4, interview - St Joseph's).

Yes, I would say beyond measure. The benefit of technology developments can't be overemphasised. It is difficult for someone to evaluate themselves. But in the last two years a lot of changes have taken place, which is unbelievable. We have been able to control the flow of finances (Participant 2, interview - St Joseph's).

My dear, I can tell you categorically now that there is a lot of improvement, because initially when we were using the manual equipment, especially during the billing process, we would write it out with pen before calculating the bill and sometimes there are errors. But right now, every patient's information and billing process is automated. It has decreased patients' long waiting times in all departments. The electronic billing system is a game-changer and brings accuracy and speed (Participant 3, interview - St Joseph's).

One of the participants affirmed that the implementation of the electronic prescription system has also improved doctors' work efficiency:

Currently the doctors have been introduced to prescribing from their system to the pharmacy and to the billing office, and that has been able to reduce unnecessary queues, cut costs, streamline excessive HR and enhance efficiency and improve customer satisfaction (Participant 4, interview - St Joseph's).

Most of the participants also opined that the implementation of the electronic health record has improved patient data and efficiency:

Well, I'm one of the managers here and hence don't treat patients. But for my department I would say the electronic health record (EHR) - this is software that digitalises patient data and has improved our productivity by 90%. We also have billing software, internet and electronic communication. Now we have shorter waiting periods and patients are happier (Participant 3, interview - St Joseph's).

We are working digitally now with many technologies.
Right now, there are many computerised systems that doctors work with, even consulting from their phone. We are trying to facilitate patient care. A lot of patients are very satisfied with the level of efficiency in the hospital (Participant 5, interview - St Joseph's).

With respect to enhancing performance as part of a team, all the interview participants concurred that the introduction of digital facilities had had a significant impact on their performance. Technology has revolutionised the way organisations carry out business transactions; hence organisations must embrace an array of technologies to develop a competitive advantage in the economic marketplace. Digital facilities increase employees' productivity through the use of computer programmes and software which allow employees to process more information than manual methods. Technology has the ability to improve the efficiency of a hospital as an organisation and improve the quality of care delivered to patients. Participants 1 and 5 affirmed this claim below:

I will say that this impacted me positively because it makes my job easier and more efficient. For me technology and training is a driving factor for employees' efficiency (Participant 5, interview - St Joseph's).

Figure 3: Digital implementation in St Mary hospital.

Three sub-themes emerged under digital equipment, which include automated computer systems, payroll and personnel systems, and electronic lab systems. These digital facilities were the most used at St Mary's Hospital. Participants 4 and 5 indicated below:

Over the years with the previous administration, we used to do things manually, and then with the new era, we have transformed from doing things manually to automated computer systems. Just to mention but a few, we have the metro filing - it deals with filing. And with the capturing of patients' information we have a system called Rev-Light. Rev-Light is user-friendly and helps us perform our job effectively and efficiently (Participant 4, interview - St Mary's).

After the change to the current management by the DHA, there were serious transformations in our technology systems to accommodate the new change. Like the introduction of PERSAL, which is an HR system, and BAS - a basic accounting system for payments to suppliers (Participant 5, interview - St Mary's).

Yes, we have improved in terms of service delivery and waiting times. Before, it was taking time to access patients' information manually. It has contributed greatly to my performance. I'm saying this because it is easier to get the required information, and tracking patients' information is not time consuming (Participant 1, interview - St Joseph's).

Furthermore, all the employees were in agreement that the current dispensation is better than the previous era:

This current dispensation is much better because of the digital transformation in the hospital. Previously it was privately owned and things were done manually (Participant 3, interview - St Mary's).

It is important to note that the outcome of the qualitative analysis from the two hospitals supports the result of the quantitative analysis, in which digital technology significantly impacts health workers' performance. According to Laudon and Laudon (2014), information systems and technologies are some of the most important tools available to managers for achieving higher levels of efficiency and productivity in business operations.
It is imperative that today's managers note that technology not only eases work tasks but also promotes flexibility and aids the development of employees. When healthcare workers lack efficient tools, they get frustrated, and so do the patients. Employers must realise that the provision of technology - be it software or hardware - has a direct bearing on the happiness and wellbeing of employees in every organisation. Therefore, healthcare practitioners must endeavour to adapt to new technological changes to improve service delivery.

Discussion of findings

This study aims to establish how health workers' performance can be enhanced through the implementation of digital technology in the healthcare industry as a measure of organisational change strategy. Two hospitals were examined. In a similar vein, findings from the qualitative analysis via interviews revealed that newly implemented digital facilities have a significant and positive impact on health workers' performance. The health workers from the two hospitals affirmed that the movement from paper-based operation to an automated system has significantly improved care delivery. Most of the health workers alluded to the fact that the use of the digital system has reduced patient waiting times and long queues. Doctors' operations have become easier and faster, yielding patient satisfaction. This is consistent with a qualitative study conducted in rural South Africa among patients and health workers, in which the use of mobile phones in poor and remote areas promoted opportunities and capabilities in accessing health care services. 52 The outcome of this research is embedded in Kotter's (1996) model of change, in which the author suggests a people-driven approach as a strategy for implementing a change model. It may be assumed that the success of the digital transformation in the two hospitals is associated with the concept of the need for change.

51 Ghaleb, Ebrahim AA, P. D. D. Dominic, Suliman Mohamed Fati, Amgad Muneer, and Rao Faizan Ali. "The assessment of big data adoption readiness with a technology-organization-environment framework: a perspective towards healthcare employees." Sustainability 13, no. 15 (2021).

Conclusion

Implementing digital technology in healthcare is very complex. It involves large amounts of data, a large number of personnel and many patients to manage. Moreover, healthcare processes are very dynamic. The movement from manual systems to digital ones is increasing within the healthcare domain. Medical practitioners are increasingly engaged with the impact of digital technologies on care delivery. Therefore, this study demonstrated that healthcare organisations should use digital technologies via the implementation of ICT systems and the application of digital medical devices, such as the Electronic Health Record (EHR), electronic billing system, computer systems, electronic lab systems and the internet, for higher innovative clinical performance.

Theoretical implications

Most of the previous literature concentrated on the integration of digital technology within the context of the manufacturing industry. A major contribution of this study is its cross-sectional analysis, through a mixed-method approach, of the implementation of digital technology within healthcare organisations in developing contexts. Our study also contributes to the theoretical literature by establishing the opportunities that digital technologies provide for healthcare institutions. We affirm that the quality of clinical care should focus on patient-centredness, efficiency and effectiveness.
Managerial implications

It is expected that health managers should understand when and how digital technology can provide long-term economic benefits for the organisation, since digitally driven healthcare organisations are likely to achieve improved medical services.

Study limitations

Various studies have identified the challenges of training and of resistance in the implementation of digital technologies in organisations. Future studies may consider the mediating influence of training, and the impact of resistance to change, in the use of digital technology in healthcare organisations.
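For readers who wish to reproduce the style of analysis reported above, the following sketch shows a simple linear regression of performance on technology use. It is illustrative only: the CSV file and column names are hypothetical placeholders, not the authors' data, and the analysis in the paper was run in SPSS rather than Python.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical survey file: one row per respondent, each variable
# averaged over its Likert-scale items.
df = pd.read_csv("survey_responses.csv")

X = sm.add_constant(df["digital_technology"])      # independent variable
model = sm.OLS(df["worker_performance"], X).fit()  # dependent variable

print(model.rsquared)                # coefficient of determination (R^2)
print(model.fvalue, model.f_pvalue)  # overall F test of the model
print(model.params, model.pvalues)   # coefficient and its p-value
```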
Virtual computational chemistry teaching laboratories - hands-on at a distance

The COVID-19 pandemic disrupted chemistry teaching practices globally as many courses were forced online, necessitating adaptation to the digital platform. The biggest impact was to the practical component of the chemistry curriculum - the so-called wet lab. Naively, it would be thought that computer-based teaching labs would have little problem in making the move. However, this is not the case, as there are many unrecognised differences between delivering computer-based teaching in person and virtually: software issues, technology and classroom management. Consequently, relatively few "hands-on" computational chemistry teaching laboratories are delivered online. In this paper we describe these issues in more detail and how they can be addressed, drawing on our experience in delivering a third-year computational chemistry course as well as remote hands-on workshops for the Virtual Winter School on Computational Chemistry and the European BIG-MAP project.

Introduction

The COVID-19 pandemic forced sudden changes in teaching practice, often with very little lead time, thus mobilising the education community to connect and share ideas [1,2,3]. For chemistry, see especially the "Special Issue on Insights Gained While Teaching Chemistry in the Time of COVID-19" [2,3]. Remote chemistry teaching has most difficulty in addressing laboratory work - both what is colloquially known as wet and dry chemistry. For wet labs, COVID-19 provided the biggest shakeup. Naively, one may expect that dry computational chemistry laboratories would have no issues in remote education. However, this turns out not to be the case, as was discovered several years ago when cost-cutting measures found departmental computer teaching laboratories being phased out in favour of centralised facilities [4] and instructors lost control of their teaching platform. This was countered by cloud service providers with offerings particularly suited to computer scientists, such as AWS Educate [5], CoCalc [6], etc. However, for chemistry a range of issues made the switch less straightforward: software licensing, lack of technical expertise of instructors to set up virtual laboratories, technical limitations and classroom management. In this article, we will go through these issues and how they can be addressed, using insight gained while acting on an AWS expert panel for remote learning [1], and demonstrated through our successes in delivering a third-year computational chemistry course at the Australian National University (ANU) and remote hands-on workshops for the Virtual Winter School on Computational Chemistry [7] (hereafter referred to as the Winter School), an initiative that has been running since 2015, as well as the European BIG-MAP project [8].

Virtual computer teaching laboratories

Software environment

Many traditional computational chemistry teaching laboratories are based on students accessing specialised software through departmental computing clusters. Commercial cloud platforms will often not have this specialised licensed software installed, and many licence agreements have limitations on where the software can be installed, requiring legal assurances from the platform provider. The issue of licensing can be circumvented by using institutional rather than commercial cloud services, but as with free software this then raises the issue of software installation.
Commercial cloud providers will provide hardware, but it is left to the client to install the software and act as system administrator, which many educators lack the confidence to do. In this paper the software packages used were Gaussian 16 [9], the Amsterdam Modeling Suite (AMS) [10], Quantum ESPRESSO [11,12] and AiiDA [13,14]. Of these, Gaussian 16 and AMS are commercial software with licensing conditions such that teaching is generally carried out in local teaching labs through graphical user interfaces (GUIs): GaussView [15] for Gaussian, while the AMS suite comes with its own in-built GUI. Quantum ESPRESSO and AiiDA are open-source software, released under the GNU General Public License (GPL) and the MIT license, respectively, and teaching is facilitated through the Quantum Mobile [16,17] virtual machine, with the flexibility of running on local computers and in the cloud.

Network connectivity

Remote computer laboratories tend to have relatively high requirements on internet bandwidth and latency, which may not be available to students who live in geographical regions that are far away from the organisers' servers or who suffer from poor connectivity in general. While these concerns apply to live video transmissions in general, many computational chemistry packages, especially when used for teaching, come with a "native" graphical user interface (GUI) that puts a lot of stress on the communication network when streamed, resulting in very slow, unwieldy responses. This frustrating experience likely explains why hands-on fully remote workshops in computational chemistry, which typically involve some sort of molecule or periodic-systems builder, are not very common. One way of addressing this issue is for the student to use their personal computer, where the aforementioned issues of licensing and software installation come into play (see Case Studies 2 and 3, and the section "Software deployment" below). However, this brings with it its own challenges - for example, the student's personal device may not have the compute power, memory, or disk space required. Another way is to use GUIs that are designed to run in the web browser (see "Browser-based teaching"). An alternative, as chosen for Case Study 1, is to install the specialised software on a compute cloud and have students use the GUI through a Virtual Network Computing (VNC) client. This communication networking solution allows real-time responses while connected to a remote computer, and in recent years has been extended to Virtual Desktop Infrastructure (VDI), in which a cluster of computers can be emulated by virtual machines providing and managing virtual desktops. In this way, the desktop of physical computers at a regular teaching lab can be reproduced in cyberspace, and this was used successfully to run computational chemistry GaussView-based hands-on practicals with both real and virtual students present [18]. However, we do not yet have extensive experience of how well this solution scales, due to bandwidth limitations, with the number of participants beyond about thirty.

Software deployment

In a virtual setting, students use their personal computers, i.e. a wide range of different hardware and operating systems. This makes installation of software a nontrivial task, and a wide range of distribution channels are available (some of which are compared in Table 1).
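Whatever distribution channel is chosen, a short environment-check script distributed to students can help instructors triage installation problems before the first session. The following is a minimal, purely illustrative sketch: the module list, minimum Python version and required executables are placeholders that an instructor would adapt to their own course, not requirements of any package named in this paper.

```python
"""Pre-course environment check distributed to students (illustrative)."""
import importlib
import shutil
import sys

REQUIRED_MODULES = ["numpy", "matplotlib"]  # placeholder course packages
REQUIRED_EXECUTABLES = ["pw.x"]             # e.g. a Quantum ESPRESSO binary

def check_environment():
    ok = sys.version_info >= (3, 8)
    if not ok:
        print("Python >= 3.8 expected, found", sys.version.split()[0])
    for name in REQUIRED_MODULES:
        try:
            module = importlib.import_module(name)
            print(name, getattr(module, "__version__", "unknown version"))
        except ImportError:
            print("missing Python module:", name)
            ok = False
    for exe in REQUIRED_EXECUTABLES:
        path = shutil.which(exe)  # searches PATH; None if not found
        print(exe, "->", path or "NOT FOUND on PATH")
        ok = ok and path is not None
    return ok

if __name__ == "__main__":
    sys.exit(0 if check_environment() else 1)
```

Students run the script once after installation; a non-zero exit status flags at a glance whose setup needs attention.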
Many package managers, such as apt, yum, MacPorts [19] or Homebrew [20], are integrated with a specific operating system, which makes for a great user experience but puts a large burden on the instructor to provide dedicated installation routes for every operating system and test the software on the different architectures (commercial software vendors can make this work, as shown in Case Study 2). Multi-platform package managers, such as Spack [21] (Linux, MacOS) and conda [22] (Linux, MacOS, Windows), improve upon this aspect but first need to be installed by the students, which often requires a certain familiarity with the command line and can lead to interference with existing software on their machine. Container technologies such as Docker [23] are available on all platforms, address the problem of isolation from the host operating system and can be a great solution for providing a uniform software environment to a tech-savvy audience. In our experience, however, the lack of a graphical "desktop" with a familiar user interface can be a barrier for students who do not feel at home on the command line. This is where full-blown virtual machines enter the picture: a computer emulation which mimics a computer and operating system (OS) irrespective of the underlying hardware. This technology underpins many cloud providers, as the same compute cluster can be used to provide on demand any flavour of OS - Mac/Windows/Linux - for as long as it is needed. Using software like VirtualBox [24], virtual machine images can be run on all operating systems and provide both an isolated software environment as well as a familiar graphical user interface. One example is Quantum Mobile [16,17], a virtual machine image based on Ubuntu Linux that comes pre-installed with a wide range of simulation codes, tools for structure analysis and plotting, as well as for the compilation of custom software. The Quantum Mobile setup is modular, allowing instructors to create their own flavour of the image with just the software they need, by picking and running the corresponding ansible roles [25].

Browser-based teaching

Since any computer comes with a web browser preinstalled, why can students not simply use the browser to access a cloud platform that runs the specialised software on dedicated servers? Major advantages of this approach include a homogeneous hardware environment that can be adjusted to fit the needs of the course, a homogeneous software environment and no time spent on software installation. We believe that this approach will gain increasing traction going forward, starting with short courses where any time saved on software installation directly translates to being able to teach more science. Today, however, there are still a couple of barriers to overcome:

1. Many graphical user interfaces for computational chemistry software (e.g. GaussView) do not yet have implementations designed to run in the web browser, and streaming the native applications, e.g. through a VNC client, can be slow (see above). Besides tried-and-true browser-based structure viewers like JSmol [26], the 2011 WebGL standard has given rise to a flurry of 3D structure viewers and editors such as GLmol [27], ChemDoodle [28], NGL [29], and more, which are starting to rival some of the features of native implementations. We expect this trend to continue.

2. End user license agreements for commercial software may put restrictions on where the software can be installed, e.g. preventing installation on commercial clouds.
In some cases, this can be circumvented by using institutional rather than commercial cloud services. We expect that license agreements will adapt to this new reality going forward. We also point out that this issue does not exist for open-source software.

3. Setting up a cloud platform for a course requires technical skills that many instructors lack. Less tech-savvy instructors may decide to turn to commercial pre-built platforms such as WebMO [30]. However, we also see large reductions in the barriers to creating your own scalable and fully customizable platform through recipes like the "Zero to JupyterHub" guide for deploying JupyterHub on a Kubernetes cluster.

Besides these barriers, which we believe to be of a temporary nature, there are some more fundamental aspects of the platform-based approach. First, the platform model puts the burden on the organizers of the course to provide sufficient compute power for each participant. For reference, servers with 2 cores and 4 GB of memory are available for roughly a dollar a day, which may be negligible for a one-day workshop with 100 participants or a tutorial week with 20, but can become expensive for long-running courses or courses that attract a very large audience. Substantial cost savings are possible with autoscaling setups like the one mentioned above, which boot up extra servers only on demand (i.e. when students are actually logged in and using the hardware). And second, one great strength of the platform approach, the high level of technological abstraction it enables, can also be a weakness: when the course finishes and students lose access to the platform, they also lose access to the software without having learned how to set it up on their own hardware. Upon closer inspection, however, this is a matter of priorities: the browser-based approach simply makes the installation of software on the students' machines optional. If learning how to install the code is deemed important, it can be included as a dedicated session at a convenient stage in the course.

Classroom management

A traditional teaching laboratory is generally made up of an instructor with a team of demonstrators roaming the room, one for every 10-20 students or so depending on the demands of the coursework. The ANU COVID-19 protocol for in-person teaching highlighted the extent to which the demonstrators relied on "hands-on" aspects, through the need to see the student's screen and occasionally take over the mouse and keyboard in frustration. Remote learning obviously circumvents this problem but highlights the difficulty of working with a student with little "contact". This is also a frequently raised objection by students to remote instruction that relies solely on video. Some form of interaction is needed. For one-on-one tuition, meeting calls with screen-sharing capability, such as Zoom, WebEx, Skype, or Teams, may be sufficient, but when one-to-many sessions are involved, there need to be strategies for classroom management. The typical chat facility of meeting software is cumbersome, as questions and answers get mixed and confused. Q&A or comment functionality is better and is becoming more frequently used for keeping the communication ordered. However, both approaches are necessarily limited by the individuals' typing speed.
As verbal discussion is still more effective than text chat, the use of "breakout rooms" (meeting offshoots where a subset of participants can go off to a separate session) is gaining popularity, but not all platforms have this capability and the management tools for such sessions are not yet mature: it is still not straightforward to move participants in and out of these rooms, and there is no good communication between them.

Case Study 1 - CHEM3208

CHEM3208 - Molecular Modeling and Computational Chemistry - is a one-semester course for third-year students offered through the Research School of Chemistry at the Australian National University. The quantum chemistry component discussed here was run over 4 weeks for 38 students, with lectures delivered in flipped format. The lectures were prerecorded in compact form, typically 10-15 minute videos, and the three lecture hours per week were spent in a Zoom call, in which the first half hour was silent, giving students time to watch the lecture video and communicate questions synchronously through chat. Microphones were enabled for the second half of the session, which took tutorial form for questions arising from the lecture, often with an exercise or discussion point to initiate conversation.

The practical component was held in hybrid format in a university computer teaching lab seating 30, allowing students remote access through the ANU VDI, which had been newly deployed [33]. The in-person version of the course in previous years had been held weekly, but the ANU COVID-19 requirements regarding deep cleaning meant that only fortnightly 3-hour sessions were possible, and so the course had to be rewritten to account for the reduced time for familiarisation with the software. In order to save valuable lab time, "hands-on demonstrations" that would previously have been held in-session were absorbed into the lecture material. The availability of remote access also meant that the students could access the labs asynchronously, though in-person attendance was mandatory and the practical part of the course was designed such that it could be completed within the designated lab hours.

This course was an affirmation of VDI as a solution. The four remote students, connecting through Zoom on a laptop placed at the front of the laboratory, could see and be part of the room like the in-person attendees but, more importantly, could use the same computer desktops. Minor inconveniences involved reduced visibility of the whiteboard and difficulty attracting the attention of the demonstrators (hand raising being replaced by calling out loudly). However, this was more than compensated for by the chance to discuss their problems amongst their virtual selves and those in the room through the Zoom meeting and screen sharing. Thus, we were able to treat the remote students equally to those in person, though one concern is whether the technology will scale to more students at further distances.

Similarly, in 2020, Ferdinand Grozema, Professor at Delft University of Technology, set up an alternative lab using computational chemistry for first-year bachelor students in Molecular Science and Technology [34]. With just basic quantum chemistry knowledge and a graphical user interface which they could use on their home computers, the students quickly learned how to calculate properties of molecules relevant for opto-electronic materials. Contact during the project was handled via Zoom video sessions and a Slack discussion channel.
Case Study 2 - SCM Workshop

For the 2020 Virtual Winter School on Computational Chemistry we added, as an experiment, a hands-on computer lab to recreate typical "real" winter/summer schools in computational chemistry. For that we showcased the ADF, BAND, DFTB, and ReaxFF modules of the Amsterdam Modeling Suite from SCM (Software for Chemistry & Materials) [35], with whom we had organised their first ever hands-on virtual workshop in 2011 [36]. The issues of software licensing and connectivity were handled by SCM through providing a limited-time licence that could be installed on the student's machine. This approach can be problematic when faced with students with an assortment of machines, but SCM have put a lot of work into robust installation packages for Windows, MacOS, and Linux operating systems, and it was not an issue.

The workshop program consisted of a morning session (10.30-13.00 CET) focusing on molecular systems and an afternoon session (15.30-18.00 CET) on periodic systems, presented by instructors from SCM. Winter School participants were required to register to download the software package and licence, which gave an indication of expected audience numbers. The webpage for the workshop (Ref. 35, Fig. 1) provided presentation slides, hands-on exercises and inputs. Delivery was by live Zoom lecture, with the instructors working through the presentation and exercises and the attendees able to follow and run the exercises from home. Questions were handled through Zoom chat by the SCM staff who were not currently presenting, and for the 60 or so participants this proved to be manageable. There was a slight issue in that such a format did not handle different learning speeds well. However, the workshop had a high level of engagement from the participants and received very positive feedback, as measured through comments in the chat, and such workshops will now be an established part of the Winter School program.

Case Study 3 - Quantum ESPRESSO Workshop

The Quantum ESPRESSO Workshop [37] was one of the hands-on workshops held during the 2021 Virtual Winter School on Computational Chemistry. It was chosen to demonstrate the Quantum Mobile [16,17] platform and the capability of breakout rooms. Quantum Mobile offers a virtual machine using VirtualBox [24], and this was used rather than the cloud solution. This meant that, as with the SCM workshop, connectivity problems were avoided by the programs being run on the local machine of each participant. The workshop was held in a morning (9:00-12:00 CET) session, and instruction was led by a team of Quantum ESPRESSO developers and required pre-watching an introductory lecture from their existing webinar series. Asynchronous learning is gaining popularity, especially when multiple timezones are involved, as live time is not taken up by an activity that does not need to be in-person. The live session could then make the most of important in-person interactions: the instructor started with a quick recap of the lecture for those participants who inevitably had not watched the pre-recorded lecture, followed by live hands-on demonstrations and exercises (handouts and input files being available on the workshop webpage: Ref. 37 and Fig. 2), with one-on-one help enabled through breakout rooms. Registration was limited to 100 participants with four tutors. The timeframe of 3 hours may not have been long enough to effectively use this format. Indeed, one attendee commented, "I'm kind of sad that time flew away so quickly!"
There was no negative feedback, but the hands-on tutors were not kept particularly busy for such an audience size. Nevertheless, the participants valued the opportunity to engage with the experts and the personal attention they received, and the workshop was judged a success based on the feedback collected from attendees.

Case Study 4 - AiiDA tutorial

This hands-on session was part of a workshop that introduced the members of the European BIG-MAP project [8] to a range of software tools for computational materials science, including SimStack [38], ASE [39] and AiiDA [13,14]. The schedule is shown in Fig. 3 and started in the morning with 20-minute introductory presentations of each tool in the plenum. Following this, the ~80 participants were split up into three parallel Zoom sessions for the hands-on work (2 h on each tool). To minimize setup time, the AiiDA session relied on browser-based teaching: participants connected to a JupyterHub server, where they could log in with their email address and a password of their choice. After logging in, each participant was redirected to a Jupyter notebook server running inside a private docker container where all the necessary software for the tutorial had already been installed (based on the AiiDA lab [40]). After a brief introduction, the participants used the Jupyter notebook interface to work through the detailed hands-on materials (https://aiidatutorials.readthedocs.io/en/latest/pages/2020_BIGMAP/index.html), which included instructions on how to run a Quantum ESPRESSO calculation through AiiDA and introduced them to the concept of how AiiDA stores calculation provenance. Besides the presenter, two tutors were available for addressing specific questions in breakout Zoom rooms. The built-in polling feature of Zoom was used to track the progress of participants at defined points during the 2 h period, followed by brief recaps of sections of the tutorial material.

Despite this being our first trial of the fully browser-based approach, we received highly positive feedback via the form circulated after the event: 100% of responses stated that the browser-based approach worked well, 95% found the hands-on sessions easy to follow, and several comments specifically mentioned a preference for the browser-based approach over downloading a virtual machine image to run locally. From the tutors' perspective, the browser-based approach allowed us to essentially eliminate setup-related issues entirely, something we have so far not been able to achieve by other means (VirtualBox helps reduce issues substantially, but some tend to remain, such as BIOS settings to adjust or participants running out of disk space). Running the JupyterHub on an autoscaling Kubernetes cluster on Amazon Web Services allowed us to keep the size (and thus the cost) of the cluster small during the testing period before the workshop, let it grow dynamically during the workshop, and shrink again to a minimal configuration after the workshop had finished (terraform-based setup instructions are available at https://github.com/aiidalab/aiidalab-k8s). The JupyterHub was shut down two days after the event, giving all participants ample time to take their data home.

Fig. 3: Schedule of the BIG-MAP workshop (http://multiscalemodelling.eu/BigMapWorkshop2020).

Conclusions and Outlook

At the time of writing, it is not yet clear how much the universities we return to will look like the ones we left.
However, the COVID-19 pandemic shake-up has forced a rethink of educational practice, and it is generally accepted that there will be change [41]. Indeed, we are already seeing the rise of fully online universities, such as the Open University (http://www.openuniversity.edu) or Woolf University (https://woolf.university). It is fairly certain that teaching will move away from lecture theatres, with instructors distributing video lectures early and focusing more on in-person interactions with the students (see e.g. Dane [42] and references therein). The activation barrier to flipped teaching has been lowered, with the lockdown providing the momentum to overcome it. Sanjay Sarma, the vice president for learning at MIT, which has been making courses available online for free since 2002, has been quoted as saying, "we don't want to waste our proximity on one-way stuff. It has to be two-way learning."

This two-way learning is particularly difficult to deliver for remote laboratory teaching, which during lockdown was either not handled particularly well or avoided completely; this is certainly true for chemistry "wet" labs, but even chemistry "dry" labs come with their own problems. Delivery is very technology-dependent, and video-based tuition can lose interaction and engagement. Including interactivity through email, chat, discussion forums and video calls only goes part of the way. More sophisticated strategies based on spatial social platforms, such as Gather [43], SpatialChat [44], and Virtual Reality [45,46], show promise of effective engagement, as found in our recent experiments in hosting virtual workshops [47]. These workshops were a follow-on activity from our Future of Meetings Symposium [48,49], held last year to explore best practices in remote interactions, and focused specifically on delivering engaging workshops to a small, selected audience through the Gather [43] and Glue [50] platforms. The 3D and pseudo-3D interaction format of these platforms, especially when integrated with digital whiteboards, was generally agreed to provide a more engaging learning environment. While the technology is still maturing, we are encouraged, and we hope that the momentum of innovation and development that the pandemic imparted to the chemical education community will continue to push developments in virtual space and give rise to effective hands-on laboratories even at a distance.
A theoretical investigation of spectra utilization for a CMOS based indirect detector for dual energy applications

Dual energy imaging is a promising method for visualizing masses and microcalcifications in digital mammography. Currently, commercially available detectors may be suitable for dual energy mammographic applications. The scope of this work was to theoretically examine the performance of the RadEye CMOS digital indirect detector under three low- and high-energy spectral pairs. The detector was modeled through linear systems theory. The pixel size was equal to 22.5 μm and the phosphor material of the detector was a 33.9 mg/cm² Gd2O2S:Tb phosphor screen. The examined spectral pairs were (i) a 40 kV W/Ag (0.01 cm) and a 70 kV W/Cu (0.1 cm) target/filter combination, (ii) a 40 kV W/Cd (0.013 cm) and a 70 kV W/Cu (0.1 cm) target/filter combination, and (iii) a 40 kV W/Pd (0.008 cm) and a 70 kV W/Cu (0.1 cm) target/filter combination. For each combination, the detective quantum efficiency (DQE), showing the signal-to-noise ratio transfer, the detector optical gain (DOG), showing the sensitivity of the detector, and the coefficient of variation (CV) of the detector output signal were calculated. The second combination exhibited slightly higher DOG (326 photons per X-ray) and lower CV (0.755%) values. In terms of electron output from the RadEye CMOS, the first two combinations demonstrated comparable DQE values; however, the second combination provided an increase of 6.5% in the electron output.

Introduction

Breast cancer, which is a common cause of death among the female population, may manifest as microcalcifications. Modern breast examination techniques include irradiation with dual energy spectra [1][2][3][4][5]. Dual-energy subtraction imaging techniques offer an alternative approach to the detection and visualization of microcalcifications. With this technique, high- and low-energy images are separately acquired and "subtracted" from each other in a weighted fashion to cancel out the cluttered tissue structure, so as to decrease the obscurity from overlapping tissue structures [6]. Although this technique reduces the contrast-to-noise ratio of the final image, it makes microcalcifications better visualized [1][2][3][4][5].

Digital mammography utilizes direct or indirect detection methods. The latter uses scintillators coupled to amorphous silicon (a-Si) sensors [7][8]. Detector modelling has been carried out to determine the detector design and the incident X-ray spectra for an optimum detector performance. One method to determine the optimum detector parameters is linear cascaded systems (LCS) theory. This theory calculates the output of a detector as a series of cascaded stages [7][8][9][10]. These stages describe the statistics of signal carrier interactions and are divided into gain stages and blur stages. In this study, the aforementioned theory was used in order to investigate the performance of three low-energy (LE) and high-energy (HE) X-ray spectra combinations incident on a commercially available CMOS detector. The performance was evaluated through image quality metrics such as the detective quantum efficiency (DQE), showing the signal-to-noise transfer, the detector optical gain (DOG), showing the detector output per incident X-ray, and the coefficient of variation (CV) of the output signal [7][8][9][10].

Materials and Methods

In this study the LCS theory was used. This theory calculates the output of a detector as a series of cascaded stages.
Every stage has a frequency domain description, in particular the deterministic blur stages [7][8][9][10]. In this work the following stages were considered: the X-ray absorption in the phosphor material, the optical photon production per absorbed X-ray, the optical photon escape and spread to the output, the impingement of the optical photons on the CMOS surface, and the production of electrons at the CMOS output. A more extensive analysis of the above stages can be found in the current literature. Through these stages the total noise power spectrum NPS(u) was calculated, where u is the spatial frequency. In addition, the total signal output in electrons, M, was determined. DOG was calculated as the number of optical photons emitted by the phosphor per X-ray photon incident on the detector [10], and the DQE, which expresses the transfer of the squared signal-to-noise ratio from input to output, was calculated as [11]

DQE(u) = M² MTF²(u) / (Φ NPS(u)),

where MTF(u) is the modulation transfer function and Φ is the total number of X-ray photons incident on the detector.

The X-ray spectra combinations tested were obtained by considering polyenergetic X-rays filtered with various filter materials and thicknesses [12]. An analytical model was developed for the calculation of the calcification signal-to-noise ratio (SNRtc) and the mean glandular dose (MGD) for various LE and HE filter combinations. The filter selection was based on the maximization of the SNRtc/MGD ratio. This work is presented in an accompanying paper [13]. The data used for calculating the equations were obtained from the literature [7,8,10,11]. The model was applied to a commercially available indirect CMOS detector (RadEye CMOS), incorporating a 34 mg/cm² Gd2O2S:Tb phosphor screen in close contact with a 22.5 μm pixel size photodetector array [14,15].

Results and Discussion

The low- and high-energy spectra combinations are presented in Figure 1. It may be observed from the spectra that there is a small spectral overlap in the range between 35 keV and 40 keV. The DOG and CV values for the presented spectra are shown in Table 1. It can be observed from Table 1 that, for the low-energy spectra, the largest CV and the lowest DOG values were calculated for the W/Pd (0.008 cm) target/filter combination. Of the other two LE combinations, the 40 kVp W/Cd (0.013 cm) is slightly better than the 40 kVp W/Ag (0.01 cm), due to its lower calculated CV value. In contrast, the DOG value is higher for the HE spectrum: although the thin Gd2O2S:Tb phosphor screen (34 mg/cm²) of the RadEye CMOS sensor absorbs lower-energy X-rays more efficiently, the phosphor intrinsic gain (optical photons produced per absorbed X-ray) is higher for the higher X-ray energies deposited.

Figure 2 presents the DQE for the low- and high-energy spectra for the detector under consideration. It can be observed from Figure 2 that the W/Cd (0.013 cm) LE target/filter combination exhibits slightly better DQE values per spatial frequency than the other LE combinations. In addition, the lowest DQE values are those of the HE combination, due to the reduced X-ray absorption of the 34 mg/cm² Gd2O2S:Tb phosphor screen.

Conclusions

In this work, the applicability of three target/filter combinations impinging on a commercially available RadEye CMOS detector was evaluated in terms of DOG, CV and DQE. It was found that the 40 kVp W/Cd (0.013 cm) combination provided the best low-energy component. The low DQE and DOG values are mainly attributed to the thin mammographic screen, which is designed for lower X-ray energies.
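To make the bookkeeping of the cascaded stages concrete, the following Python/NumPy sketch chains a toy version of such a model: an absorption stage, an optical gain stage, a Gaussian blur stage, and an electron conversion stage, combined into DOG, CV, and DQE(u). All numerical values (detection efficiency, gains, blur width) are illustrative assumptions, not the parameters of the published RadEye model; Swank gain fluctuations and electronic noise are omitted for brevity.

```python
import numpy as np

# Toy cascaded-systems model (illustrative values only, not the published model).
u = np.linspace(0.01, 10.0, 200)   # spatial frequency (cycles/mm)
phi = 1.0e5                        # assumed number of incident X-ray photons
eta = 0.35                         # assumed quantum detection efficiency of the screen
g_opt = 900.0                      # assumed optical photons per absorbed X-ray
g_e = 0.3                          # assumed electrons per optical photon at the CMOS
g = g_opt * g_e                    # total conversion gain, X-ray to electrons

# Assumed Gaussian screen blur; a measured MTF would be used in practice.
mtf = np.exp(-(u / 3.0) ** 2)

# Mean output signal in electrons (product of the large-area gain stages).
M = phi * eta * g

# Toy NPS: quantum noise amplified through the gain stages and filtered by the
# blur; the additive M term mimics the Poisson noise of the conversion stage.
nps = phi * eta * g ** 2 * mtf ** 2 + M

dqe = M ** 2 * mtf ** 2 / (phi * nps)
dog = eta * g_opt                  # optical photons emitted per incident X-ray
cv = np.sqrt(nps[0]) / M           # rough zero-frequency coefficient of variation

print(f"DOG ~ {dog:.0f} photons/X-ray, DQE(0) ~ {dqe[0]:.2f}, CV ~ {cv:.2%}")
```

With these toy numbers the zero-frequency DQE tends to the quantum detection efficiency of the screen, which is the expected cascaded-systems behaviour when gain fluctuations are neglected.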
A Neural Network-Based Mesh Quality Indicator for Three-Dimensional Cylinder Modelling

Evaluating mesh quality prior to performing the computational fluid dynamics (CFD) simulation is an essential step to ensure the acceptable accuracy of cylinder modelling. However, traditional mesh quality indicators are often insufficient, since they only check geometric information on individual distorted elements. To yield more accurate results, the current evaluation process usually requires careful manual re-evaluation for quality properties such as mesh distribution and local refinement, which heavily increases the meshing overhead. In this paper, we introduce an efficient quality indicator for variable-sized cylinder meshes, consisting of a mesh pre-processing method and a neural network-based indicator, Mesh-Net. We also publish a cylinder mesh benchmark dataset. The proposed indicator is trained to study the role of CFD meshes in the accuracy of numerical simulations. It considers both the effect of element geometry (e.g., orthogonality) and quality properties (e.g., smoothness and distribution). Thereafter, the well-trained indicator is used as a black box to predict the overall quality of the input mesh automatically. Experimental results demonstrate that the proposed indicator is accurate and can be applied in the mesh quality evaluation process without manual interactions.

Introduction

Computational fluid dynamics (CFD) plays a vital role in a broad spectrum of scientific and engineering fields, such as bioengineering, aerospace, energy engineering, and manufacturing [1][2][3]. During the CFD simulation, the quality of the generated mesh directly influences the solution accuracy and error magnitude. Many mesh generation methods have been proposed aiming to generate high-quality meshes [4,5]. Unfortunately, the quality of the initial mesh is usually not acceptable. The minimal mesh quality requirement is seldom achieved except on the most elementary problems [6][7][8]. Therefore, the procedure used to achieve high-quality mesh generation is divided into three steps: initial mesh generation, mesh quality evaluation, and mesh optimisation. In this meshing process, an efficient mesh quality indicator is particularly important. The indicator determines the direction of subsequent quality optimisation and ensures the accuracy of the desired solution. It serves as a basis for assessing the ability of the generated mesh to faithfully represent the physics of the flow.

The high degree of complexity (non-linearity) between mesh quality and numerical accuracy makes quality evaluation an extremely difficult task. It is hard to precisely define the relationship between mesh qualities and their correlations with numerical error [9]. Starting from the observation that regular or equilateral mesh elements are more pleasing, traditional evaluation procedures focus on evaluating the shape of each element. Such a perspective leads to the formulation of quality indicators in terms of elemental geometric information such as area ratio, edge length, volume ratio, aspect ratio, skewness, minimum (maximum) angle, and gamma coefficient [10][11][12]. However, the above geometric-based indicators are often insufficient since: (1) They only yield geometric information on individual distorted mesh elements, and are useless for quality properties such as mesh distribution and local refinement. These properties affect the simulation accuracy in regions near boundary layers and wing-body configurations.
(2) They may give inconsistent evaluation results for the same mesh (see [13,14] and references therein). Moreover, since geometric-based indicators may not guarantee an accurate result, the issue of mesh quality evaluation usually requires careful manual re-evaluation. This process relies heavily on the empirical, descriptive realm of a priori knowledge. As a result, the frequent human-computer interactions needed in the current evaluation process have become a bottleneck to the fully-automatic meshing process and significantly increase the meshing overhead. In order to ensure the cost-efficiency of meshing, it is essential to build an intelligent mesh quality indicator that requires no manual interactions.

In recent years, artificial neural networks have been proven capable of learning complex mappings and replacing human labour in various applications [15][16][17]. The network utilises multiple layers of neural units to learn important features automatically from high-dimensional parameter spaces. By performing an optimisation procedure based on the loss function, the network model is able to approximate the complex and nonlinear mapping from training samples [18]. Despite the widespread success of neural networks in various physical problems, there have been only limited attempts at neural network-based mesh quality evaluation.

In this paper, we propose a mesh quality indicator for three-dimensional cylinders, comprising a point-based mesh pre-processing method, a neural network Mesh-Net, and a cylinder mesh benchmark dataset. The proposed indicator takes mesh files as input and learns the potential correlations between the mesh quality and the simulation error. Compared with traditional quality indicators, which focus on detecting distorted mesh elements, our indicator is more accurate. It considers both the effect of element geometry, such as orthogonality, and quality properties, such as smoothness and mesh distribution. Experimental results demonstrate that the well-trained indicator is able to predict the overall quality of cylinder meshes and achieves an accuracy of up to 98.05%. Moreover, it can be applied in the automatic mesh quality evaluation process without manual interactions, which significantly reduces the meshing overhead. We hope our work can provide future research directions that contribute to efficient mesh generation technology. The proposed benchmark dataset is publicly available at https://github.com/MeshDataset/3D-Cylinder (accessed on 4 October 2021).

The rest of the paper is organised as follows: Section 2 describes related work on existing mesh quality indicators. Section 3 gives details of the proposed neural network-based mesh quality indicator. The experimental results and discussion are presented in Section 4. The conclusion is finally outlined in Section 5.

Traditional Mesh Quality Indicators

It is well known that poorly shaped meshes tend to slow convergence and cause instability during the CFD simulation [19,20]. In order to ensure the accuracy of the numerical solution, many indicators have been proposed to check the mesh quality before simulation. Starting from the observation that regular or equilateral mesh elements are more pleasing, Strang and Fix [21] discussed the minimum angle condition of mesh elements. They stated that the smallest angle of mesh elements should be bounded away from zero.
Berzins [6] supported this view and proved that elements with a relatively small included angle might have a negative effect on the solution of the linear algebra problem. Similar conclusions were proposed by Shewchuk [22], who showed that a prerequisite for high-quality mesh elements is that there should be no large included angles. To make the conclusion more specific, Liu and Joe [23] proposed a quality indicator Q_L, expressed in terms of the volume V and the edge lengths l_i of the examined mesh element, to identify 'sliver' elements. Bank [24] presented a geometric indicator Q_B for CFD mesh quality control, defined in terms of the area A of the mesh element and its edge lengths. Weatherill [25] introduced a similar mesh quality indicator in the evaluation process.

Another indicator, referred to as the Scaled Jacobian quality indicator, was proposed in the CUBIT code for mesh quality [5]. The Scaled Jacobian first computes the triple product at each node of the element corners using the other mesh nodes. It then computes the average of the corner Jacobians. The value of this indicator varies from minus one to plus one. A positive Scaled Jacobian is usually considered the minimum quality for an acceptable computational mesh (called inversion-free). In contrast, negative values of the Scaled Jacobian indicate the presence of distorted elements.

Quality indicators such as Aspect Ratio, Diagonal Ratio, Edge Ratio, and Equiangle Skewness are widely used in CAE software as quality metrics for mesh elements [26]. For example, the Diagonal Ratio Q_DR is the maximum ratio of the element diagonals,

Q_DR = max_i(d_i) / min_i(d_i),

where d_i is the length of the i-th element diagonal. By definition, the higher the metric value, the less regularly shaped the examined element. For equilateral elements (square quadrilateral elements or cubic hexahedra), the Diagonal Ratio Q_DR is 1.

The above quality metrics provide shape specifications for mesh elements employing geometric formulas (the value usually ranges between 0 and 1, with 1 for an equilateral element). However, several sets of numerical results in [13] have demonstrated that employing different quality metrics to evaluate the same element may lead to inconsistent results. This conclusion is also confirmed by Gao et al. [14]. They performed a thorough numerical study to analyse widely-used quality indicators and their correlations with the stability and accuracy of the simulations. Nearly twenty quality indicators were tested on hexahedral elements. It was observed that the correlations among indicators are ambiguous. The derivation of some geometric element-based quality indicators applies only to specific applications. Overall, present-day mesh quality indicators tend to assess geometric imperfections (shape, edge length, included angle, Jacobian) of mesh elements. Other considerations such as mesh density and distribution that ensure desirable simulation accuracy are ignored. This deficiency imposes a burden of careful manual re-evaluation, which significantly increases the meshing overhead [9,27].

Neural Networks for Mesh Quality Evaluation

In recent years, many researchers have attempted to explore new methods for complex physical problems using artificial neural networks (ANNs). The main insight of ANNs is the capability of finding nonlinear approximations to complex functions based on the architecture of interconnected neurons. After suitable training, the ANNs are able to predict the desired output accurately.
As a result, ANNs have been successfully applied to various CFD problems to improve efficiency and reduce overhead [9,18]. The multi-layer perceptron (MLP), which consists of several layers of neurons, is a specific type of ANN [28]. The structure of an MLP is separated into three parts: the input layer, the hidden layers, and the output layer. A multi-layer perceptron with three hidden layers is shown in Figure 1. The neurons in the first hidden layer receive source signals from the input layer and propagate them to the succeeding layers. The signals are passed between all hidden layers (with activation functions) and finally converted into high-level features. The feature values in the output layer indicate the probability of the input belonging to a particular category. In this forward propagation process, the output of each layer is computed as

y_j = σ( Σ_{i=1}^{m} ω_{ji} x_i + b_j ),  j = 1, ..., n,

where n is the number of neural units in the hidden layer, m denotes the number of input units, ω is the weight, and b is the bias. The activation function is represented by σ. The partial derivatives of the loss function with respect to these variables are then computed, and the adjustable variables in the neurons (weights and biases) are optimised via backpropagation to approximate the nonlinear mapping.

To better learn the local and contextual information from input data, convolutional neural networks (CNNs) have been proposed for complex applications such as image classification, regression, and scene recognition [15,16,29]. CNNs employ shift-invariant filters (kernels) followed by pooling units to extract local and global features from feature maps. By minimising the loss function over its many hyperparameters, the network obtains the optimum weights and biases for the problem being solved.

Chen et al. [9,30,31] first introduced neural networks to the mesh quality evaluation task. They proposed an automatic quality indicator for 2D NACA0012 airfoil meshes using CNNs. The indicator takes the geometric characteristics of each mesh element as input (the edge length x, edge length y, and maximum included angle), then feeds them into the constructed neural network to identify poor-quality NACA0012 meshes. However, due to the geometric properties of the input features, the input construction process is computationally expensive, and the proposed method applies only to two-dimensional meshes.

In this paper, we propose a neural network-based mesh quality indicator, accompanied by a benchmark dataset for three-dimensional cylinder meshes. In the mesh pre-processing phase, the indicator first splits the cylinder mesh into mesh surfaces and extracts mesh points from each surface. The proposed neural network Mesh-Net directly takes mesh points as input without geometric calculation. During training, Mesh-Net employs fully-convolutional and global average layers to learn the role of mesh geometry and distribution in the accuracy of the CFD simulation. The well-designed architecture makes it attractive as an indicator for variable-sized three-dimensional cylinders. After suitable training, the indicator is able to predict the overall quality of the input mesh precisely. It can also be applied in the automatic mesh quality evaluation process without manual interactions.

A Neural Network-Based Mesh Quality Indicator for Three-Dimensional Cylinder Modelling

Neural network-based mesh training is the optimisation process by which the relation between the input mesh and the quality prediction is established.
This process usually requires a large number of labelled mesh samples to learn accurately. However, since annotating CFD meshes with simulation accuracy can be time-consuming and expensive, no public three-dimensional mesh dataset has yet emerged. To support our study and address the problem of available mesh datasets, we developed a cylinder mesh benchmark dataset for neural network-based mesh quality evaluation.

Three-Dimensional Cylinder Mesh Benchmark Dataset

In this section, we introduce the process of building the mesh benchmark dataset used for training. Each mesh sample generation can be divided into four steps: (1) modelling, (2) transforming, (3) simulation, and (4) annotation. In the initial modelling step, the geometric model of the three-dimensional cylinder was constructed. Then, we generated meshes that varied in mesh size and deformed them to obtain cylinder meshes of different qualities. To this end, we developed an automatic three-dimensional cylinder mesh generator. The generator takes mesh files as input and transforms the input mesh using point repositioning, curve translation, or mesh surface rotation. Figure 2 illustrates some of the deformed cases. We can see that a large degree of variance in geometric transformations can be achieved. Using this generator, we have collected a large dataset with 20,480 cylinder meshes that span different mesh sizes and contain a wide variety of quality properties. Notice that the obtained non-fixed-size meshes increase the richness of the proposed dataset and make it useful for mesh training tasks involving multiscale cylinder models.

During the simulation step, we performed numerical simulations for each mesh sample on a classical problem. The problem models the steady laminar flow between rotating and stationary concentric cylinders (see material properties in Table 1) [32]. Considering that the inner cylinder has radius r_0, angular velocity w_0, and temperature T_0, while the outer cylinder has radius r_1 and temperature T_1, we calculate the tangential velocity in the annulus at certain radial locations. The governing equations relate the velocity component u_θ, the radius r, the temperature T, and the pressure p, subject to boundary conditions at the two cylinder walls.

In the annotation step, each mesh sample was assigned to one of the following four quality categories:

(1) High-quality Mesh: a class of acceptable meshes with a very small error in the numerical solution.

(2) Non-orthogonal Mesh: occurs when the curves or surfaces of the mesh are not vertically orthogonal. Numerical experiments in [5] show that skewed meshes with poor orthogonality can affect the order of accuracy and the error magnitude. Non-orthogonal meshes also have a negative impact on the convergence speed.

(3) Non-smoothness Mesh: a class of meshes in which the length ratio is distorted, or elements overlap in complex domains. One approach to increase the quality is to smooth a collection of nodes (while preserving mesh connectivity) or to optimise node positions (vertex repositioning) [7,33].

(4) Poor-quality Mesh: represents meshes with poor orthogonality, smoothness, and distribution. According to the analysis in [34], poorly-shaped meshes can cause an ill-conditioned stiffness matrix problem and seriously affect the solutions of the partial differential equations.

To verify the validity of the annotation procedure, we compared the numerical error of the meshes in the four quality categories. Figure 3 shows the numerical error of the 20,480 meshes of different quality categories in the proposed benchmark dataset. We can learn that all high-quality meshes accurately simulate the fluid flow in the cylinder.
For the Non-orthogonal Mesh category, there are small numerical errors (from −5% to 3%) during the CFD simulation. However, these meshes suffer from a slow convergence speed compared with meshes in the High-quality Mesh category. The numerical error of the Non-smoothness Mesh category ranges from 4% to 10%, while the poor-quality meshes leave a larger simulation error (up to 24.2%) compared to the target results.

Overall, we seek to construct a large collection of mesh samples with accurate solution-based labels. Such data are useful for supervised learning and neural network-based mesh quality evaluation. To achieve this, we built a three-dimensional cylinder mesh dataset containing a total of 20,480 meshes belonging to four categories, with an average of 512 meshes per size per category. The names and detailed descriptions of the sizes and quality categories are listed in Tables 2 and 3. Figure 4 shows some mesh samples from the proposed benchmark dataset. The diversity of the meshing ensures the richness and validity of the proposed dataset. We believe that this benchmark dataset contributes to developing advanced mesh understanding algorithms. It can also stimulate innovative research on CFD mesh quality evaluation tasks.

Mesh Pre-Processing

For the CNN-based mesh quality evaluation task, developing a representation scheme applicable to mesh samples is a prerequisite for neural network training. Due to the locally dense nature of CFD meshes, existing three-dimensional quantisation methods (e.g., multi-view or volumetric) do not apply to mesh samples. Point cloud features are able to handle the locally dense areas in underlying meshes. However, traditional point cloud representation ignores the spatial correlation between neighbouring points, which is crucially important for CFD mesh quality evaluation. In our work, we introduce a point-based pre-processing method for cylinder mesh representation. In order to encode the spatial information of mesh points, we first split the cylinder into two-dimensional surfaces along the rotation axis, and then sequentially extract the mesh points from each mesh surface. After that, we combine the obtained point coordinates to form a three-channel point information matrix. Each channel of the matrix represents one of the dimensional coordinates (x, y, or z). The details of the pre-processing method are shown in Figure 5. Since the coordinates of each point are explicitly stored in the mesh source file, we can directly use the source file as training input. Compared to the mesh pre-processing in [9], which represents meshes using specific element features (edge length and included angle), our point-based representation is more efficient. It does not require any additional computation for three-dimensional cylinder meshes, which significantly reduces the pre-processing cost. Moreover, benefiting from the fact that the point information matrix incorporates spatial information, we can easily process input meshes without paying attention to the mesh size.

Normalisation is essentially a linear transformation that proportionally compresses and transforms a vector. This transformation keeps linear combinations and linear relational formulas intact, thus ensuring the robustness of a particular model. After normalising the input data, the search for the optimal mapping in CNNs can be smoother (more likely to converge towards the optimal solution). Before training, we apply standard deviation normalisation to the point information matrix.
The normalisation formula is

x̂ = (x − x̄)/σ_x,  ŷ = (y − ȳ)/σ_y,  ẑ = (z − z̄)/σ_z,

where σ is the standard deviation of the original data in each channel, while x̄, ȳ, z̄ are the mean values of the original data, respectively. Finally, the normalised feature matrix is fed into the proposed network Mesh-Net. The training process stops after converging to a local optimum. Thereafter, the trained network can be used as a black box to analyse the meshing properties (smoothness, orthogonality, and distribution) from the cylinder point features and automatically output the quality of the input mesh.

The Structure of Mesh-Net

We now describe the design of the proposed neural network Mesh-Net. It consists of an input layer, five convolutional layers, and a softmax layer. To keep the training and prediction cost low, we did not consider very deep architectures. The network employs fully convolutional layers with no fully connected layer, which enables it to take input meshes of arbitrary size and produce a fixed-size output. Figure 5 shows the architecture of Mesh-Net. As depicted in Figure 5, the number of channels (feature maps) in the five convolutional layers is 16, 32, 64, 32, and 4, respectively. As for the kernel size, we dynamically adjust the kernel size in each layer to obtain different receptive fields, rather than using fixed kernels. At the beginning of the network, we prefer a relatively large receptive field to obtain more local point information. Inspired by the element dependency in the seven-point difference scheme, we set a 7 × 7 kernel in the first convolutional layer to capture the quality features of 49 adjacent mesh points. In the following layers, we gradually reduce the size of the convolutional kernel to obtain a smaller receptive field in the high-level features. The kernel sizes of the next three layers are 5 × 5, 3 × 3, and 1 × 1, respectively. There is no max-pooling layer in the proposed architecture. Instead, we set the stride in the first three convolutional layers to two to shrink the dimension of the feature maps. It is worth noting that we employ a global average operation to calculate the mean value of the elements across dimensions in the fourth convolutional layer. After global averaging, the compressed feature maps are propagated to the softmax output function.

A loss function is employed during the network training phase to measure the discrepancy between the predicted output and the ground-truth tensor. In this work, we use the cross-entropy cost function L_0 to measure the discrepancy between two probability tensors. L_0 is closely related to the Kullback-Leibler divergence and is given by

L_0 = −(1/n) Σ_i y_i log ŷ_i,

where ŷ represents the approximation of the ground truth y, and n is the number of samples in the mini-batch. Since each part of the network is differentiable, we can compute the derivatives of L_0 with respect to the parameters and update them accordingly. The optimisation can then proceed via backpropagation (gradient descent). The weights of the network are updated iteratively by

ω_{i+1} = ω_i − η ∂L_0/∂ω_i,

where ω_i is the weight in the i-th forward propagation and η > 0 is the learning rate. This process culminates in a vector-valued output with values in [0, 1]. It can be viewed as the neural network approximation of the desired function, or as the probability that the input mesh falls into one of four quality categories: High-quality Mesh, Non-orthogonal Mesh, Non-smoothness Mesh, and Poor-quality Mesh.

Training

As with any neural network, the choice of hyperparameters can strongly affect the prediction performance and the rate of convergence.
In Section 3.3, we have determined hyperparameters including the number of layers, the number of channels per layer, and the kernel sizes. In addition, we need to define some training-related parameters, such as the activation function in each layer, the batch size, and the learning rate. The activation function used in Mesh-Net is responsible for introducing non-linearity into the network. Since the parameter update in each iteration involves the gradient of the activation function, a vanishingly small gradient can lead to slow convergence or trapping in a local optimum [29]. To accelerate the convergence and avoid vanishing gradient issues, we equipped the convolutional layers with a composite function of the ReLU activation function and batch normalisation. Moreover, we use mini-batches to take a single training step and reshuffle the training set in each epoch, after exhausting the entire training set. We find that the stochasticity introduced by shuffling improves the stability and performance in test cases.

The overfitting phenomenon is another problem in the training phase (i.e., the trained network ties too closely to the training set and behaves badly during testing). To tackle this problem, we combine the loss function with a regularisation term:

L = L_0 + (λ/2n) Σ ω²,

where L_0 is the loss function in Equation (15), ω represents the weights in Mesh-Net, n is the batch size, and λ is the regularisation coefficient. We use the Adam optimiser [35] with an exponential learning rate decay, which prevents the training from getting trapped in a local minimum. A comparison of different settings (batch size and learning rate) is shown in Figure 6. To make the best use of Mesh-Net, we set the initial learning rate to 0.0005, while the batch size was set to 32. The training was performed using the open-source machine learning library TensorFlow [36].

Prediction

To demonstrate the capability of Mesh-Net when used as a quality indicator, we compared the predictive power of different classifiers on the three-dimensional benchmark dataset. During the experiment, we randomly shuffled the samples and employed the first 75% of meshes in each size for training and the remaining 25% for testing. For each set, the proportion of the different categories of meshes is equal. We ran the training 10 times and took the average accuracy as the final prediction result. We compared the performance of Mesh-Net with that of three widely-used machine learning algorithms and one multi-layer perceptron (MLP). The machine learning algorithms are the support vector machine (SVM), quadratic discriminant analysis (QDA), and Gaussian Naive Bayes (GNB) [36]. The MLP used in this paper contains five layers, i.e., an input layer, three hidden layers, and an output layer. The numbers of neural units in the three hidden layers are 16, 32, and 64, respectively. The number of neural units in the output layer is 4, which equals the number of quality categories.

Many metrics can be used to measure the performance of neural networks, such as recall, accuracy, and F1-score. Since the number of samples in each category is balanced (512 × 10 per category), we only use accuracy to evaluate the performance of the different classifiers. The accuracy acc is defined as

acc = (TP + TN)/(P + N),

where true positive (TP) is the number of correctly classified positive instances, true negative (TN) is the number of correctly classified negative instances, and P + N represents the total number of instances.
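For readers who want to experiment with this setup, the following TensorFlow/Keras sketch reproduces one plausible reading of the Mesh-Net architecture described above (channel counts 16/32/64/32/4, kernel sizes 7/5/3/1, stride 2 in the first three layers, ReLU plus batch normalisation, a global average before the softmax) together with the L2-regularised training recipe (Adam, initial learning rate 0.0005 with exponential decay, batch size 32). It is an illustrative sketch based on the paper's prose, not the authors' released code; in particular, the kernel size of the fifth layer, the placement of the global average, the decay schedule constants, and the regularisation coefficient are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers

def conv_block(x, channels, kernel, stride=1, l2=1e-4):
    # Convolution + batch normalisation + ReLU, as described for Mesh-Net.
    x = layers.Conv2D(channels, kernel, strides=stride, padding="same",
                      kernel_regularizer=regularizers.l2(l2))(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

def build_mesh_net(num_classes=4):
    # Fully convolutional, so point-information matrices of arbitrary size
    # are accepted; the 3 input channels hold the x, y, z coordinates.
    inputs = tf.keras.Input(shape=(None, None, 3))
    x = conv_block(inputs, 16, 7, stride=2)   # large receptive field first
    x = conv_block(x, 32, 5, stride=2)
    x = conv_block(x, 64, 3, stride=2)
    x = conv_block(x, 32, 1)
    # Fifth convolutional layer (kernel size assumed to be 1x1) maps to 4
    # channels, one per quality category; no fully connected layer is used.
    x = layers.Conv2D(num_classes, 1)(x)
    x = layers.GlobalAveragePooling2D()(x)    # assumed placement of the average
    outputs = layers.Softmax()(x)
    return models.Model(inputs, outputs)

model = build_mesh_net()
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=5e-4, decay_steps=1000, decay_rate=0.96)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr_schedule),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# Integer category labels (0..3) pair with the sparse cross-entropy loss:
# model.fit(train_points, train_labels, batch_size=32, shuffle=True, epochs=...)
```

Because no fully connected layer is present, the same weights apply at any input resolution, which is what enables the full-size test across different mesh sizes reported below.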
Table 4 reports the accuracy of the different methods on the three-dimensional cylinder mesh dataset. Constrained by the classifiers' limitations on the dimensionality of the input samples, we divided the experiments into two parts. The first part is mesh training on fixed-size samples (Size 1 test). In this part, all five classifiers are trained and tested. In the second part, Mesh-Net, which accepts meshes of arbitrary size, is trained to test the overall performance across different mesh sizes (full-size test). As can be seen in Table 4, the accuracy of the machine learning algorithms is relatively low on the fixed-size test. All machine learning classifiers, SVM, QDA, and GNB, show a prediction accuracy of less than 90%. The MLP achieves an accuracy of 95.70% on the mesh quality evaluation task. However, it is clear that the proposed CNN-based indicator is more effective than the widely-used machine learning algorithms and the MLP. It outperforms the other trained classifiers and achieves an accuracy of 98.05% on fixed-size meshes and 96.60% on non-fixed-size meshes.

To better understand the predictions across the different categories, we present the confusion matrix of the full-size test (see Figure 7). We found that meshes with high quality (HQ-M) and non-orthogonality (NO-M) achieve accurate quality prediction: only four meshes (0.31%) were wrongly predicted. For meshes with non-smoothness (NS-M), 45 (3.52%) meshes were misclassified as poor-quality mesh (PQ-M). The PQ-M category was predicted with the lowest accuracy. The results show that 121 (9.45%) poor-quality testing meshes were wrongly classified: thirty-four (2.66%) PQ-M samples were misclassified as NO-M, and 87 (6.8%) PQ-M samples as NS-M. The inaccuracy arises mainly because mesh quality properties such as non-orthogonality and non-smoothness are easily confused, especially when point repositioning or surface rotation happens. However, the incorrect predictions from Mesh-Net still make sense for the meshing procedure: the indicator identifies part of the quality defects in the input mesh and guides the subsequent mesh optimisation.

We note that the computational complexity increases considerably with the number of neural units. The structure of the CNN-based indicator must be intelligent enough to make the classification task possible and simple enough to keep the training and prediction cost low. Thus, we did not consider very deep architectures. Moreover, the proposed network employs fully convolutional layers without fully connected layers, which greatly reduces the number of parameters (by orders of magnitude) compared with the MLP. The introduction of fully convolutional layers also allows the input of meshes with different sizes.

Overall, we propose a CNN-based quality indicator for three-dimensional cylinder meshes. The proposed network Mesh-Net fully exploits the receptive field properties of convolutional neural networks. It employs kernels of different sizes to capture the local and global quality features of the preprocessed mesh. During training, the network learns the relationship between the quality of the cylinder mesh and the error convergence of the CFD simulation. Thereafter, the trained indicator can be used as an intelligent quality control model to evaluate mesh quality before CFD simulations.

Conclusions

Mesh-based methods have proved extremely useful in computational fluid dynamics simulations. During the cylinder simulation, the quality of the preassigned mesh affects the accuracy of the numerical solutions.
Poorly shaped meshes tend to slow convergence or cause instability in the analysis. Many quality indicators have been proposed to serve as quality control by analysing the geometric information of mesh elements. However, these element-based indicators do not necessarily provide reliable guidance for the subsequent optimisation process. They also require frequent human-computer interactions during the evaluation, which significantly increases the meshing overhead. Therefore, it is desirable to develop an intelligent indicator that automatically learns the quality of the mesh. In this paper, we present an efficient mesh quality indicator using convolutional neural networks (CNNs). To support our study, we also release a three-dimensional cylinder mesh dataset, which contains 20,480 meshes of different sizes and qualities. The proposed indicator is trained offline and employs a feedforward approximation to learn mesh quality properties such as orthogonality, smoothness, and mesh distribution. It takes mesh files as input and outputs the overall quality of the input mesh to determine whether it meets the solver's requirements. Experimental results show that the proposed method is accurate, computationally efficient, and straightforward. We believe that applications of deep learning methods to mesh quality problems can address the challenges posed by frequent manual interactions and reduce the meshing cost. We also hope that the release of large-scale datasets can stimulate innovative research on mesh quality evaluation and advance the development of fully automatic mesh generation.
Comprehensive Format of Informed Consent in Research and Practice: A Tool to uphold the Ethical and Moral Standards

Informed consent in research, clinical trials, and practice is a process in which a patient/participant consents to participate in or undergo the proposed procedures after being informed of the procedures, risks, and benefits. Ideally, the patient/participant is expected to give consent only after fully comprehending the information about the procedures, benefits, and risks involved in the research/clinical trial/practice. Thus, many ethical issues are entwined in the process of obtaining proper informed consent. Certain untoward events in the past led to the proposal of guidelines to prevent exploitation and unhealthy practices in the field of life science. Eventually, the practice of obtaining informed consent was emphasized to make sure that a participant's rights were not in jeopardy. Yet, there are flaws in the practical application of obtaining consent due to lack of understanding, barriers in communication, culture, custom, and various other factors. The present article highlights the need for a complete and comprehensive format of recording informed consent without compromising the rights of the individual or the standards of research or practice on ethical and moral grounds.

INTRODUCTION

New advents in science and technology have expanded the horizons of every field, including the field of medicine. Concomitantly, expensive health care, scarcity of the required resources, and the demanding expectations of the public have led to a paradigm shift in the concepts of certain old ethical practices. Thus, new questions concerning ethical principles are being raised time and again to adapt to the changing scenario. Much biomedical research is conducted in developing countries, which are known to have limited resources and populations living in high-risk health conditions. Further, social and cultural factors and beliefs vary, raising ethical concerns such as the standard of care and posttrial obligations. Henceforth, the assurance for conducting research in these countries is being discussed very often. 1

For centuries, general medical practice has been guided by ethical principles, the basics of which can be dated to the Hippocratic code of conduct, which specifies that the physician will use treatment to help the sick according to his ability and judgment, but never with a view to injury and wrongdoing. However, there was a relative paucity of universally agreed guidelines or a framework for the ethical conduct of research, including medical experimentation. The Nuremberg Code for conducting research on human subjects was put forth following the atrocities post-World War II and, in 1964, the Helsinki Declaration was drafted by the World Medical Association. This was the first of its kind, a move toward developing guidelines for ethical regulation globally. 2,3

An important component of conducting research in any setting is obtaining informed consent, as it has been the cornerstone for the ethical conduct and regulation of research. It is the focus of attention in the guidelines for conducting research and in the ethical oversight of research. 3 The basic rights of a person cannot be ignored, since the autonomy and responsibility of every person to decline or take part in research is of extreme importance. The right to make decisions concerning one's own body or health is universally recognized.
Hence, emphasis is placed on the importance of informed consent in research as well as in clinical practice settings, and the need to be enterprising and innovative in obtaining it is justified. 1 The purpose of obtaining informed consent as a protocol for planned treatment differs from that of consent obtained in a research context, because the level of protection for patients differs from that for research subjects. As the levels of protection differ, exceptions to the policy have been allowed for situations in which obtaining consent is impossible or not feasible. As consent should be suitable to varying circumstances, it may be broadly categorized into implied consent, written consent, expressed consent, informed consent, proxy consent, loco parentis, blanket consent, and oral consent. 4 The purpose of this article is to highlight the importance of a complete, comprehensive format of consent, which upholds the rights of individuals without compromising the standards of research on ethical and moral grounds.

Definition

Consent has been defined by Webster's Dictionary as "to give assent or approval." This definition needs to be adapted when applied to various fields, dentistry being no exception. The European Commission on ethical research has defined it as follows: "Informed Consent is the decision, which must be written, dated and signed, to take part in a clinical trial, taken freely after being duly informed of its nature, significance, implications and risks and appropriately documented, by any person capable of giving consent or, where the person is not capable of giving consent, by his or her legal representative; if the person concerned is unable to write, oral consent in the presence of at least one witness may be given in exceptional cases, as provided for in national legislation." 5

The British Dental Association's "ethics in dentistry" advice sheet has defined the process of expressing consent as follows: "A patient gives express consent when he or she indicates orally or in writing, consent to undergo examination or treatment or for personal information to be processed." 5 The Health Care Consent Act, 1996 Ontario has highlighted the salient features of informed consent, which include: (1) the nature of the proposed treatment, (2) expected benefits, (3) material risks and side effects, (4) alternative courses of action, (5) consequences of not having the proposed treatment, (6) answers to any questions the patient has regarding the proposed treatment, and (7) the cost of the treatment. 6

An informed consent form is mandatory when the research/clinical trial involves any human volunteer, whether children, differently-abled individuals, immigrants, or healthy individuals. It is also required whenever personal data, biological samples or specimens, or human genetic material are used or collected. 5

General Format for Consent in Practice and/or Research

Commonly used formats of consent include a statement confirming that the participant has been given an explanation of the proposed treatment plan/clinical trial/research and that his/her participation is voluntary (Fig. 1). There is a provision for the witness to sign the document to authenticate that the above-said protocol was followed in his/her presence. In addition, the document carries the details of the investigator.
The informed consent is considered valid only when the participant, investigator, and witness sign the document at their designated places.

Limitations

In this format, the content of the informed consent can be considered inadequate for the following reasons. It does not provide any written evidence explaining the roles of the participant, investigator, and translator. Further, it lacks a structured format of explanation that enables the participant to read about the proposed study design/treatment plan, the risks involved, and the assurance of confidentiality of identity. There is no separate declaration for the participant, investigator, and translator committing each of them to their duties. Thus, an informed consent that upholds the rights of individuals without compromising the standards of research on ethical and moral grounds is needed. This can be formulated by adapting the guidelines of the Helsinki Declaration.

Importance of having the Consent as per the Helsinki Declaration

If the informed consent is designed as per the norms of the International Declaration of Helsinki, it upholds the safety of those participating in research as well as those seeking treatment in practice. All the details shown in the template have to be filled in for proper documentation. For better understanding, the entire format can be categorized into three parts. The initial part of the document should carry the title of the research/study along with the name, address, and contact details of the principal investigator, and the ethical committee reference number. The second part should consist of the patient information sheet (Fig. 2A), and the consent certificate or declaration should form the last part (Figs 2B to D). The entire informed consent should be printed on the letterhead of the institution or organization carrying out the proposed research or clinical trial. 7,8

The Invitation to the Subjects to participate in the Proposed Study

The participants should be invited to take part in the proposed research/study/clinical trial. The participant must be instructed to take some time to read the information presented, which will explain the details of the project. Then, assure them that they are free to ask the study staff/doctor/investigator questions about any part of the project and clarify their doubts, as it is very important for them to clearly understand and be fully satisfied with the details of the proposed research. This will further help the participants in knowing their involvement in the study. It should be clearly stated that their participation is "entirely voluntary" and that the individual is free to decline to participate; declining will not affect them by any means. It should also be mentioned that the participant is free to withdraw from the study at any point, even after agreeing to take part. Prior approval from the Committee for Human Research/Institutional Ethical Committee of the concerned dental or medical college or hospital has to be obtained. Further, it has to be declared that the proposed study will be conducted according to the ethical guidelines and principles of the International Declaration of Helsinki, the guidelines of the statutory body involved, and the Medical Research Council Ethical Guidelines for Research of the country. A questionnaire-based patient information sheet is usually designed, as it enables the participant to understand better. The language used should be simple. 8
The details to be covered in the questionnaire are described below.

What is the Purpose of the Proposed Research/Study/Clinical Trial?

Describe the details of the study in terms of:
• The aims and objectives of the study.
• Why this study has to be done.
• How this study is intended to be done.
• How the observations of the study are going to be useful to the individual/community.

Why have I been asked to participate?

Inform the participant that he or she has been chosen because he/she fulfills the selection criteria. Explain briefly the aims and objectives of the study on which the selection is based.

What is the Duration of the Proposed Research/Study/Clinical Trial?

The duration required for completion has to be stated clearly to all participants. This is beneficial to both the participant and the investigating team, as it prevents bias due to sample attrition.

What are My Responsibilities as a Participant?

The participants should provide the required information/samples/specimens, whichever is required as per the study/research/clinical trial.

Are there any Benefits for participating in the Proposed Research/Study/Clinical Trial?

It has to be explained to the participants that they may not benefit from the research directly. Their participation would, however, be very valuable, as it contributes to medical/dental knowledge in general. Further, it might lead to the development of new diagnostic or preventive measures and better treatment modalities.

Will I be at Risk during and after the Completion of the Proposed Research/Study/Clinical Trial?

If any risks are involved in the research, they should be clearly explained, along with how they could affect the individual in the future.

Are there any Chances of Me getting injured during or after the Completion of the Proposed Research/Study/Clinical Trial (As a Consequence)?

If applicable, it has to be clearly explained how this would affect the individual's life. If not applicable, assurance has to be given about the same.

Is It Compulsory for all the Invitees to accept and participate?

No, it is never compulsory for the invitees to accept and participate. It is absolutely voluntary. Further, every individual can withdraw from participation at any given point of time.

Will I be penalized for declining or withdrawing from Participation?

None of the invitees or participants will be penalized for declining or withdrawing from participation.
Evaluation of procedures for typing of group B Streptococcus: a retrospective study

Background: This study evaluates two procedures for typing of Streptococcus agalactiae (group B streptococci; GBS) isolates, using retrospective typing data from the period 2010 to 2014 obtained with a commercial latex agglutination test (latex test) and the Lancefield precipitation test (LP test). Furthermore, the genotype distribution of phenotypically non-typable (NT) GBS isolates is presented. We also raise awareness that the difference in typing results obtained by phenotypical methods and genotype-based methods may have implications for vaccine surveillance in case a GBS vaccine is introduced.

Methods: A total of 616 clinical GBS isolates from 2010 to 2014 were tested with both a latex test and the LP test. Among these, 66 isolates were genotyped by PCR, including 41 isolates that were phenotypically NT.

Results: The latex test provided a serotype for 83.8% of the isolates (95% CI [80.7–86.6]) compared to 87.5% (95% CI [84.6–90.0]) obtained by the LP method. The two assays provided identical capsular identification for all sero-typeable isolates (excluding NT isolates). The PCR assay provided a genotype designation for the 41 isolates defined as phenotypically NT.

Discussion: We found that the latex test showed a slightly lower identification percentage than the LP test. Our recommendation is to use latex agglutination as the routine primary assay for GBS surveillance, and then use the more labour-intensive precipitation test on the NT isolates to increase the serotyping rate. A genotype could be assigned to all the phenotypically NT isolates; as a consequence, however, genotyping will overestimate the coverage of possible future capsular polysaccharide based GBS vaccines.

INTRODUCTION

Streptococcus agalactiae (group B streptococcus, GBS) is a well-known pathogen primarily causing infections in newborns and the elderly (Brigtsen et al., 2015; Ballard et al., 2016). The disease in neonates is generally described as occurring in two different varieties (Bulkowstein et al., 2016). Early-onset disease (EOD) occurs in the neonate during the first six days of life, while late-onset disease (LOD) occurs later than the seventh day of life and can develop up to three months of age (Le Doare & Heath, 2013; Vinnemeier et al., 2015). Possible clinical manifestations of GBS infection in neonates are sepsis, meningitis, and pneumonia (Schrag & Verani, 2013). Among adults, GBS may also be associated with invasive infections, particularly in elderly persons with underlying medical conditions (Le Doare & Heath, 2013). Since the introduction of screening programmes for pregnant women (Ballard et al., 2016) in some parts of the developed world, early-onset GBS infections in neonates in particular have been reduced, and for many years the GBS disease incidence has been low (Heath, 2016). In contrast, early-onset GBS infection is still a major, and presumably underestimated, problem in the developing world (Heath, 2016). In recent years, the developed world has also seen an increasing interest in the incidence of invasive GBS infections, in particular among the elderly (Sheppard et al., 2016). Surveillance and identification of GBS in humans are therefore increasingly essential (Ballard et al., 2016; Sheppard et al., 2016).
The GBS are currently divided into ten serotypes based on type-specific capsular antigens, designated Ia, Ib, II, III, IV, V, VI, VII, VIII, and IX (Slotved et al., 2007; Le Doare & Heath, 2013). For decades, the precipitation test, also known as the Lancefield precipitation test (LP test), has been considered the standard method for GBS serotype determination (Slotved, Sauer & Konradsen, 2002). However, the method is time-consuming and therefore not suited for typing large numbers of isolates (Slotved et al., 2003). At present, GBS isolates are in general serotyped by the phenotypical latex agglutination test (latex test) (Afshar et al., 2011), for which several kits are commercially available. Increasingly simple and affordable molecular techniques for genotyping of GBS isolates, predominantly based on PCR assays, have been introduced and are now commonly used (Brigtsen et al., 2015; Sheppard et al., 2016). In recent years, all GBS isolates received at the Statens Serum Institut (SSI) have been typed using both a LP test and a latex test (Slotved, Sauer & Konradsen, 2002; Lambertsen et al., 2010). Furthermore, some of the GBS NT isolates have been tested for genotype using the PCR assay described by Imperi et al. (2010) and Poyart et al. (2007). Using all our GBS typing data from the period 2010 to 2014, we evaluated the phenotypical typing procedure for GBS isolates, based on a comparison of the commercial latex test and the LP test. We furthermore show the genotype distribution of phenotypically NT GBS isolates.

METHODS

This is a retrospective study based on typing data obtained in the period from 2010 to 2014 at the national Neisseria and Streptococcus Reference Laboratory (NSR), SSI. The Danish hospitals are served by regional departments of clinical microbiology, all of which are public. On a voluntary basis, they submit isolates of beta-hemolytic streptococci to the NSR for national surveillance (Lambertsen et al., 2010).

Isolates

The study was based on 616 isolates received at the NSR laboratory (SSI) in the period 2010-2014. The majority of the isolates were from bloodstream infections, and each isolate represented one patient case.

Identification of GBS isolates

The GBS isolates were identified as described by Lambertsen et al. (2010). Briefly, the submitted strains were examined for their characteristic beta-hemolytic colonies on 5% horse blood agar plates (SSI Diagnostica, Hillerød, Denmark), followed by serogrouping with group B latex (Oxoid A/S, Greve, Denmark) as recommended by the manufacturer. Isolates were stored at −80 °C in nutrient beef broth containing 10% glycerol (SSI Diagnostica, Hillerød, Denmark).

Serotyping of the GBS isolates

All isolates were tested both with the LP test and the latex test (SSI Diagnostica, Hillerød, Denmark) (Lambertsen et al., 2010).

Lancefield precipitation test (LP test)

The precipitation test was performed as described by Slotved, Sauer & Konradsen (2002). Briefly, a resuspended, centrifuged overnight broth culture was boiled and treated with 0.2N hydrochloric acid (0.2N HCl) to extract the capsular antigen. The LP test was performed by mixing the extract with serotype-specific GBS antisera (Ia-IX) (SSI Diagnostica, Hillerød, Denmark). If no reaction occurred, an extract with 0.1N HCl was made and tested. Non-serotypeable isolates were designated NT. See SSI Diagnostica (2017) for a video description of the LP test.
Latex agglutination test (latex test)

The latex test was performed with the Streptococcus latex test ImmuLex™ (SSI Diagnostica, Hillerød, Denmark). Briefly, isolates were cultured for 24 h in Todd-Hewitt broth. Ten microlitres of this culture was mixed with 10 microlitres of latex suspension specific to the capsular polysaccharide antigen of each of the serotypes Ia, Ib, and II-IX, and agglutination was read after 5-10 s (Slotved et al., 2003).

PCR test

The multiplex PCR assay and primers (TAG Copenhagen) used in this study were described by Imperi et al. (2010) and Poyart et al. (2007). Briefly, 0.5 ml Chelex solutions were prepared from each isolate. The multiplex PCR was performed using a 20 µl PCR mix containing 10 µl HotstarTaq Mastermix (Qiagen, Hamburg, Germany). The following PCR program was used: 15 min at 95 °C; 35 cycles of 15 s at 95 °C, 50 s at 55 °C, and 60 s at 72 °C; finalized with 10 min at 72 °C. The presence and quality of the expected PCR fragments were tested by gel electrophoresis on 2% E-gels (Invitrogen). In the years 2010 and 2011 we chose to evaluate the PCR on a majority of the phenotypically NT isolates.

A comparison of the latex test with the LP test (Table 1)

With the latex test it was possible to serotype 516 isolates (83.8%; 95% CI [80.7-86.6]), while 100 isolates (16.2%; 95% CI [13.4-19.4]) could not be identified, due to either multiple reactions (cross-reactions) or no reaction. The latex test provided results identical to those obtained by the LP test, except for 11 isolates that were NT by the LP test; six of these 11 isolates were serotype V. Among the 100 isolates that were NT with the latex test, 34 were serotyped with the LP test (Table 1).

Molecular typing by PCR

A total of 66 of the 616 isolates were tested for their genotype. Of these isolates, 25 could be assigned a serotype, while 41 isolates were considered serotype NT (Tables 2 and 3). The 41 NT isolates included nearly all NT isolates for 2010 (24 of 30 in total) and 2011 (15 of 16 in total) (Table 2). Genotyping of the 41 isolates designated NT by the LP test showed a high predominance of serotype V (15 isolates) followed by serotype III (9 isolates) (Table 2). Nearly all phenotypically NT isolates could be genotyped (Table 2). When comparing the genotype and the LP-test serotype of the 25 identified isolates, a difference was noted for two isolates; both were genotype II, while they were serotype III with 0.2N HCl (Table 3). Two isolates identified as genotype V were serotyped as VI and VII with the LP test (Table 3).

DISCUSSION

In general, laboratories use latex tests, and to a minor extent the LP test, for GBS serotyping (Sheppard et al., 2016; Afshar et al., 2011). In this study, we found the latex test able to serotype 83.8% (95% CI [80.7-86.6]) of the isolates, while the LP test was able to serotype 87.5% (95% CI [84.6-90.0]) (Table 1). Both assays provided serotype identification for some isolates that were non-typeable with the other method, although to a varying degree (Table 1; notes to Table 1: if no reaction with Ia to IX was found with 0.2N HCl, then Ia to IX were tested with 0.1N HCl; red numbers represent isolates identified with contradicting typing). We did not find any conflicting test results between the two assays except for the NT isolates.
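For readers who wish to reproduce the reported rates, the following is a minimal sketch (ours, not part of the original study) of how the typing percentages and their 95% confidence intervals can be recomputed. The paper does not state which interval method was used; the Clopper-Pearson exact interval assumed here gives values close to the reported ones, and the LP-test count of 539 is derived from the reported 87.5% of 616 isolates.

```python
# Sketch: point estimate and Clopper-Pearson exact 95% CI for a serotyping rate.
from scipy.stats import beta

def serotyping_rate_ci(typed: int, total: int, alpha: float = 0.05):
    """Return the typing rate and its Clopper-Pearson (exact) confidence interval."""
    p_hat = typed / total
    lower = beta.ppf(alpha / 2, typed, total - typed + 1) if typed > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, typed + 1, total - typed) if typed < total else 1.0
    return p_hat, lower, upper

# Latex test: 516 of 616 isolates serotyped -> ~83.8% (reported 95% CI [80.7-86.6])
# LP test:    539 of 616 isolates serotyped -> ~87.5% (reported 95% CI [84.6-90.0])
for name, typed in [("latex", 516), ("LP", 539)]:
    p, lo, hi = serotyping_rate_ci(typed, 616)
    print(f"{name}: {p:.1%} (95% CI [{lo:.1%}-{hi:.1%}])")
```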
A serotyping identification rate between 80% and 90% is common (Brigtsen et al., 2015), and even lower serotyping rates have been observed (Slotved et al., 2003; Yao et al., 2013). Because the latex test is much easier to perform than the LP test, there is no question which phenotypical method to use for routine serotyping (Brigtsen et al., 2015; Sheppard et al., 2016). At the NSR laboratory (SSI) we have chosen the following procedure for phenotypical serotyping (Fig. 1): we start with the latex test; if this method provides a serotype identification, the identification is accepted and no further serotype testing is performed. If the latex test shows either cross-reactions or non-typeability, we proceed to test the isolate using the LP test, first with the 0.2N HCl extract and then, if necessary, with the 0.1N HCl extract. If the LP test provides a specific serotype, this is accepted; otherwise the isolate is defined as non-typeable. This procedure provides a serotyping percentage of 89.3% (95% CI [86.6-91.6]) (Fig. 1); a sketch of the decision logic is given below.

In recent years several studies (Brigtsen et al., 2015; Sheppard et al., 2016) have presented molecular methods for typing of GBS isolates, and a standard GBS PCR method has been described (Imperi et al., 2010). The most recently described molecular GBS typing method is whole-genome sequencing, which has the advantage that, besides providing information on the capsular genes, it can also provide the multilocus sequence type, analyses of relatedness to other sequenced isolates, and detailed phylogenetic analyses (Sheppard et al., 2016). Our PCR assay provided a genotype for all 41 phenotypically NT isolates (Table 2). According to other studies, nearly 100% of GBS isolates can be genotyped by molecular methods (Brigtsen et al., 2015; Yao et al., 2013; Sheppard et al., 2016). This typing rate is much higher than the approximately 90% rate obtained by phenotypical assays (Brigtsen et al., 2015) (Table 1).

The two GBS vaccines currently in Phase 2 trials are capsular polysaccharide conjugate vaccines, while another vaccine based on GBS surface proteins is in a Phase 1 trial (Heath, 2016). The capsular polysaccharide based vaccines cover either serotype III or serotypes Ia, Ib, and III (Heath, 2016). Evaluating the predicted vaccine coverage for Danish invasive GBS isolates from 2010 and 2011 in this study, we found that genotyping suggested an apparent increase in predicted vaccine coverage of 6.9% in 2010 and 4.4% in 2011. However, as these isolates were only typeable by molecular methods, the type identification reflects a lack of phenotypical expression and therefore a possible lack of vaccine coverage.

In conclusion, in this study we found that the latex test and the LP test showed similar identification percentages. Because of the greater workload of the LP test, we recommend that this method be used only for latex-test NT isolates (Fig. 1). Molecular typing methods are advantageous for the surveillance of GBS infections in terms of evaluating transmission chains and describing, e.g., early-onset neonatal infections (Bergseng et al., 2009). In contrast, phenotypical methods must be applied when evaluating possible vaccine coverage or vaccine failures, as well as in planning future capsular polysaccharide vaccines. Therefore, appropriate typing methods must be chosen according to the purpose of surveillance.
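The decision logic of the phenotypical procedure in Fig. 1 can be summarized in a few lines of code. This is a minimal sketch under stated assumptions: the three test functions are hypothetical stand-ins for the laboratory assays (not SSI software), each assumed to return a single serotype string, or None in case of cross-reactions or no reaction.

```python
# Sketch of the Fig. 1 serotyping procedure: latex test first, then LP test with
# 0.2N HCl extract, then 0.1N HCl, and finally non-typeable (NT).
SEROTYPES = ["Ia", "Ib", "II", "III", "IV", "V", "VI", "VII", "VIII", "IX"]

def assign_serotype(isolate, latex_test, lp_test_02n, lp_test_01n) -> str:
    # Step 1: latex agglutination; accept any single unambiguous serotype.
    result = latex_test(isolate)
    if result in SEROTYPES:
        return result
    # Step 2: Lancefield precipitation with the 0.2N HCl extract.
    result = lp_test_02n(isolate)
    if result in SEROTYPES:
        return result
    # Step 3: repeat the precipitation with a 0.1N HCl extract.
    result = lp_test_01n(isolate)
    if result in SEROTYPES:
        return result
    # Step 4: non-typeable by all phenotypical assays.
    return "NT"
```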
Based on our findings, we suggest that general surveillance can be performed either with the phenotypical procedure shown in Fig. 1 or with molecular techniques such as those described by Sheppard et al. (2016), depending on laboratory capacity and cost. Coverage studies providing data for possible future polysaccharide based GBS vaccines will require a phenotypical procedure. If a polysaccharide GBS vaccine is implemented in the future, phenotypical procedures will be necessary for evaluating vaccine failure in patients with infections caused by vaccine serotypes. Molecular techniques, including multilocus sequence typing (MLST), are necessary if information on clonal relations is needed, e.g., for identification of transmission chains in case of outbreaks or clustered infections.
Caspase-8-mediated PAR-4 cleavage is required for TNFα-induced apoptosis

Fabian Treude 1, Ferdinand Kappes 1, Dirk Fahrenkamp 1, Gerhard Muller-Newen 1, Federico Dajas-Bailador 2, Oliver H. Kramer 3, Bernhard Luscher 1, Jorg Hartkamp 1
1 Institute of Biochemistry and Molecular Biology, Medical School, RWTH Aachen University, Aachen, Germany
2 Faculty of Medicine and Health Sciences, Queens Medical Centre, University of Nottingham, Nottingham, U.K.
3 Institute of Toxicology, University Medical Center Mainz, Mainz, Germany
Correspondence: Jorg Hartkamp
Keywords: PAR-4, apoptosis, caspase-8, tumor suppressor, TNFα
Received: November 27, 2013; Accepted: January 27, 2014; Published: January 29, 2014

Abstract

The tumor suppressor protein prostate apoptosis response-4 (PAR-4) is silenced in a subset of human cancers, and its down-regulation serves as a mechanism for cancer cell survival following chemotherapy. PAR-4 re-expression selectively causes apoptosis in cancer cells, but how its pro-apoptotic functions are controlled and executed precisely is currently unknown. We demonstrate here that UV-induced apoptosis results in rapid caspase-dependent PAR-4 cleavage at EEPD131↓G, a sequence that was preferentially recognized by caspase-8. To investigate the effect of this cleavage event on cell growth, we established stable cell lines that express wild-type PAR-4 or the caspase-cleavage-resistant mutant PAR-4 D131G under the control of a doxycycline-inducible promoter. Induction of the wild-type protein, but not the mutant, interfered with cell proliferation, predominantly through induction of apoptosis. We further demonstrate that TNFα-induced apoptosis leads to caspase-8-dependent PAR-4 cleavage followed by nuclear accumulation of the C-terminal PAR-4 (132-340) fragment, which then induces apoptosis. Taken together, our results indicate that the mechanism by which PAR-4 orchestrates the apoptotic process requires cleavage by caspase-8.

INTRODUCTION

The tumor suppressor protein prostate apoptosis response-4 (PAR-4) was initially discovered as a pro-apoptotic protein in prostate cancer cells undergoing apoptosis [1]. Mice lacking Par-4 are prone to enhanced tumor development and develop spontaneous tumors, as well as displaying an increased susceptibility to hormone- or chemical-induced cancers [2]. Consistent with its role as a tumor suppressor, PAR-4 expression is silenced in a well-defined subset of cancers including renal cancers, neuroblastomas, endometrial carcinomas, lung adenocarcinomas, and prostate carcinomas [3-7]. In addition, recent findings by Alvarez and coworkers document that down-regulation of PAR-4 is necessary for tumor cell survival and recurrence of breast cancer following targeted therapy in mouse models and in patients [8]. Down-regulation of Par-4 by oncogenic Ras expression has been shown to require the MEK/ERK MAPK pathway [9], and consistent with this, Par-4 knockout cooperates with oncogenic Kras to induce lung adenocarcinomas in mice [6]. Moreover, Par-4 was found to be an essential regulator of HrasG12V-dependent oncogenic growth in a genome-wide RNAi screen [10]. The protein encoded by the PAR-4 gene contains a unique and central SAC (Selective for Apoptosis of Cancer cells) domain encompassing a nuclear localization sequence (NLS), and a C-terminal leucine zipper domain (LZ); both are 100% conserved between human and rodent orthologs [reviewed in 11].
Interaction with several proteins, including the atypical PKCs (aPKCs), the Wilms' tumor 1 (WT1) protein, and DLK/ZIP kinase, has been shown to require the leucine zipper domain of PAR-4 [12-14]. Binding of PAR-4 results in enzymatic inhibition of the aPKC isoforms PKCζ and PKCλ/τ, whereas the interaction with DLK/ZIP kinase and WT1 suggests discrete nuclear functions for PAR-4. The central SAC domain was identified by serial deletions of PAR-4 and has been described as indispensable for the pro-apoptotic activities of PAR-4 [15]. It includes a nuclear localization sequence, which promotes nuclear entry, and over-expression of this core domain alone induces apoptosis in a variety of cancer cells but does not cause cell death in normal or immortalized cells [15]. Moreover, transgenic mice that ubiquitously express the SAC domain of Par-4 are resistant to the development of spontaneous as well as oncogene-induced tumors [16]. These data demonstrate an essential role of the PAR-4 SAC domain in its pro-apoptotic and tumor suppressor activities, but how these activities are regulated remains elusive.

Here we show that UV-induced apoptosis leads to caspase-dependent cleavage of PAR-4 at EEPD131↓G, generating two PAR-4 fragments, the first comprising amino acids 1-131 and the second comprising amino acids 132-340. This cleavage separates the N-terminal part from the C-terminal region that contains the NLS, SAC, and leucine zipper domains. We further demonstrate that TNFα-induced processing of PAR-4 requires caspase-8 and leads to nuclear translocation of the C-terminal part of PAR-4, thereby inducing apoptosis. In summary, we demonstrate that PAR-4 is a novel caspase-8 substrate and provide evidence that PAR-4 cleavage downstream of caspase-8 is required for TNFα-induced apoptosis.

UV-induced apoptosis results in caspase-dependent PAR-4 cleavage at EEPD131↓G

Previous findings indicated that PAR-4 selectively induces apoptosis in cancer cell lines including HeLa cells [11]. To further evaluate these findings, we treated HeLa cells with UV and analyzed the lysates at the indicated time points, using PARP-1 cleavage as a marker for caspase activity (Fig 1A). Within 3 hours of UV treatment, efficient PARP-1 cleavage was detectable, and at the same time a PAR-4 fragment of ~17 kDa became visible with a PAR-4 amino-terminal antibody, suggesting that this protein may be cleaved during apoptosis (Fig 1A). To investigate whether PAR-4 is hydrolyzed by caspases, HeLa cells were treated with UV in the presence or absence of Z-VAD-FMK, a potent pan-specific caspase inhibitor [22]. Pre-incubation with Z-VAD-FMK prevented PAR-4 and PARP-1 cleavage in HeLa cells, indicating that UV-induced PAR-4 hydrolysis is caspase-dependent (Fig 1B). To analyze whether UV-mediated PAR-4 processing was species-specific, we overexpressed human and rat PAR-4 in HeLa cells and treated the cells with UV. Figure 1C shows that UV treatment resulted in the generation of a ~17 kDa N-terminal and a ~28 kDa C-terminal fragment for human PAR-4, and a ~15 kDa N-terminal and a ~30 kDa C-terminal fragment for rat Par-4, indicating the existence of a single cleavage site in both species. We scanned the PAR-4 sequence for potential caspase cleavage sites on the CASVM server (Server for SVM prediction of caspase substrate cleavage sites; www.casbase.org), which revealed a potential cleavage site at EEPD131↓G in the human protein [23].
To validate this finding, we mutated Asp131 to Gly, overexpressed PAR-4 and PAR-4 D131G in HeLa cells, and incubated them for the indicated times after UV treatment. Figure 1D demonstrates that the PAR-4 D131G mutant was resistant to UV-induced processing and no cleavage products were generated. These data confirmed the existence of a single caspase cleavage site at residue EEPD131↓G in human PAR-4. The cleavage site separates the N-terminal region from the SAC and leucine zipper domains (Fig 1E). This sequence is conserved in rat and murine Par-4, albeit slightly shifted towards the N-terminus, explaining at least in part the altered mobility of cleaved rat Par-4 (Fig 1C and 1E).

Inducible expression of PAR-4 but not PAR-4 D131G interferes with cell proliferation

We next asked whether the observed PAR-4 cleavage has any biological effects. Therefore, we generated multiple HeLa Flp-In T-REx cell clones, which express either PAR-4 wild-type or PAR-4 D131G from the identical locus after the addition of doxycycline (Fig 2A). Subsequent analysis of the growth characteristics of stable cell clones in colony formation assays revealed a marked reduction in colony number and colony size upon induced expression of wild-type PAR-4 but not PAR-4 D131G (Fig 2B). This was observed with four individual clones, shown here for two clones each expressing empty vector (#2 and #4), PAR-4 wild-type (#4 and #6), or PAR-4 D131G (#1 and #2). Although the inducible expression of PAR-4 wild-type and PAR-4 D131G was comparable (Fig 2A), we noted that expression of PAR-4 wild-type led to the generation of a caspase cleavage fragment (Fig 2A). This suggested that moderate overexpression of PAR-4 was sufficient to induce caspase activation, as observed previously [15]. Therefore, we compared the capacity of PAR-4 wild-type and PAR-4 D131G to induce apoptosis. Figure 2C illustrates that only expression of PAR-4 wild-type, but not the caspase-cleavage-resistant mutant, led to increased PARP-1 cleavage, indicating that caspase processing of PAR-4 is necessary to activate its pro-apoptotic properties.

PAR-4 is a substrate of caspase-8

In order to identify caspases capable of cleaving PAR-4, immunoprecipitated Flag-tagged PAR-4 was subjected to a caspase cleavage assay with recombinant caspases 1 to 10 (Fig 3A). Caspase-1, -7, and -8 were able to cleave PAR-4 to various degrees in vitro, with caspase-8 being the most efficient at hydrolyzing full-length PAR-4 (Fig 3A). The tumor necrosis factor (TNFα) receptor family is an established mediator of the extrinsic apoptotic pathway and stimulates apoptosis through death-inducing signaling complex (DISC) formation, which includes engagement and activation of caspase-8 [24]. To study the role of caspase-8 in PAR-4 processing, we stimulated HeLa S3 cells for various times with TNFα and cycloheximide and found that TNFα-induced signaling led to simultaneous PAR-4 and PARP-1 cleavage (Fig 3B). Next, we investigated whether caspase-8 is required for TNFα/CHX-induced PAR-4 cleavage. For this purpose, we created HeLa S3 cell lines using lentiviral delivery of shRNA constructs expressing either of two caspase-8-specific shRNAs (sh-caspase-8 #1, #3) or a non-silencing shRNA serving as a control (sh-control). The expression of caspase-8 was strongly reduced in HeLa S3 cells transduced with caspase-8 shRNA #1 and #3 (Fig 3C, upper panel).
Stimulation with TNFα/CHX induced PAR-4 and PARP-1 cleavage only in the presence of caspase-8, indicating that PAR-4 is downstream of caspase-8 (Fig 3C, lower panel). Together these findings suggest that PAR-4 is a direct target of caspase-8. Recently, Chaudhry and coworkers showed that PAR-4 is a substrate of caspase-3 and demonstrated that PAR-4 cleavage does not occur after cisplatin treatment of caspase-3-deficient MCF-7 cells [25]. As our in vitro experiment showed only very weak activity of caspase-3 towards PAR-4 (Fig 3A), we addressed the role of caspase-3 in our cells. We therefore measured TNFα-induced PAR-4 cleavage in caspase-3-deficient MCF-7 cells and in caspase-3-reconstituted cells (Fig 3D). Stimulation of MCF-7 cells with TNFα led to PAR-4 cleavage regardless of whether caspase-3 was absent or present, indicating that TNFα-induced PAR-4 processing is caspase-3-independent (Fig 3D). Moreover, pre-treatment of MCF-7 cells with the caspase-8-specific inhibitor Z-IETD-FMK demonstrated that TNFα-induced PAR-4 cleavage was caspase-8-dependent (Fig 3E).

Caspase-8-mediated cleavage of PAR-4 leads to apoptosis and to nuclear accumulation of the C-terminal fragment of PAR-4

To further investigate the functional consequences of caspase-8-mediated PAR-4 processing, we co-expressed wild-type PAR-4 and caspase-8 in HEK 293 cells. Forced expression of caspase-8 or PAR-4 on their own has been shown to trigger apoptosis, and we therefore carefully titrated the amounts to generate conditions under which overexpression of each alone does not result in induction of apoptosis. Figure 4A demonstrates that expression of caspase-8 or PAR-4 alone does not induce apoptosis, but co-expression of the two proteins induced PAR-4 and PARP-1 cleavage, indicating induction of apoptotic cell death. In contrast, co-expression of caspase-3 and PAR-4 did not result in PAR-4 cleavage or induction of cell death (Fig 4A), again underscoring the functional relation between caspase-8 and PAR-4. Induction of apoptosis in cancer cell lines by expression of the central SAC domain of PAR-4 has been shown to require nuclear localization (Fig 1E) [15]. To study the localization of the PAR-4 cleavage product containing the SAC and leucine zipper domains, we generated PAR-4 mutants with a C-terminal eCFP tag (Fig 4B). Whereas PAR-4 wild-type and PAR-4 D131G localized to the cytosol as expected, the PAR-4 mutant lacking the amino-terminal part localized to the nucleus (Fig 4C). Moreover, stimulation with TNFα/CHX or UV resulted in nuclear accumulation of PAR-4 wild-type, but this was prevented in cells expressing PAR-4 D131G (Fig 4D). These data indicate that caspase-8-mediated processing of PAR-4 might result in nuclear accumulation of the C-terminal fragment of PAR-4 and induction of cell death.

TNFα-induced apoptosis requires caspase-8-mediated processing of PAR-4

Next, we analyzed whether caspase-8-mediated PAR-4 cleavage is required to trigger TNFα-induced cell death in caspase-3-deficient MCF-7 cells. We therefore generated caspase-8-deficient MCF-7 cell lines and control cell lines using lentiviral delivery as described above (for knockdown efficiency see Fig 5A, left panel). The cells were then treated with CHX and TNFα/CHX, and induction of apoptosis was measured by PARP-1 cleavage. While sh-control cells underwent apoptosis and showed PAR-4 processing after TNFα/CHX stimulation, caspase-8-deficient cells failed to do so (Fig 5A, right panel).
CHX treatment alone was not sufficient to induce PAR-4 processing and apoptosis. To investigate whether PAR-4 expression is required for the induction of apoptosis in response to TNFα/CHX, we compared PAR-4-deficient with control MCF-7 cells stimulated with TNFα/CHX. Apoptosis was induced in sh-control cells but was significantly inhibited in PAR-4-depleted cells (Fig 5B), and similar results were also obtained in HeLa S3 cells (data not shown). To expand on these findings, we analyzed the localization of endogenous PAR-4 after TNFα/CHX-induced apoptosis with a C-terminal PAR-4 antibody. Under apoptotic conditions PAR-4 localized to the nucleus, and this effect was largely inhibited in caspase-8-knockdown cells (Fig 5C). These results suggest that PAR-4 cleavage is a direct consequence of caspase-8 activation and is required for nuclear accumulation and induction of apoptosis mediated by the C-terminal fragment of PAR-4.

DISCUSSION

PAR-4 is a multi-domain protein and functions as a tumor suppressor in a subset of human cancers. It has pro-apoptotic activities, but the signaling pathways functioning upstream of PAR-4 are ill-defined. In this study, we found that PAR-4 is cleaved upon UV- and TNFα-induced apoptosis at EEPD131↓G, and this cleavage site was preferentially recognized by caspase-8. Furthermore, caspase-8-mediated PAR-4 cleavage is critical in regulating cell death triggered by TNFα, which indicates that PAR-4 functions downstream of caspase-8.

Like many proteases, caspases display cleavage-site specificity and share some degree of amino acid preference adjacent to the site of hydrolysis, providing a measure of substrate selectivity. Caspases have a strict requirement for an aspartate in the P1 position, with P1-P1' being the cleavable bond (P4-P3-P2-P1↓P1'). Differences in the amino acids at the P4, P3, and P2 positions mainly determine caspase specificity [26]. The optimal caspase-8 cleavage site was determined to require P4 (L, V, D, E), P3 (E), P2 (I, T, V), P1 (D), and P1' (G, S) [27,28]. The PAR-4 cleavage site EEPD131↓G fulfills these requirements except at position P2. Human and rodent PAR-4 cleavage site motifs are conserved except at position P1' (Fig 1E). The P1' position requires small, uncharged amino acids (Gly, Ser, Ala); instead of the Gly at P1' in human PAR-4, mouse and rat Par-4 contain a Ser, which also fits the amino acids required for a bona fide caspase-8 substrate. The Cys protease caspase-8 initiates apoptotic cell death in response to cell-surface activation of TNF death receptors by undergoing autocleavage and then initiating processing of the executioner caspases-3 and -7 [29]. Although large-scale proteomics studies have shown that caspases cut hundreds of proteins, generally at a single site, only a few proteins, such as Bid, p28 Bap31, RIP-1, osteopontin, and CYLD, are reported as caspase-8 substrates [30-34]. The tumor suppressor protein PAR-4 is predominantly an intrinsically disordered protein, with ordered segments in its C-terminal domains [35]. In this study we demonstrate that caspase-8 cleaves PAR-4 after Asp 131, thereby separating the unstructured N-terminus from the C-terminal part, which includes the NLS-containing SAC domain and the leucine zipper. Therefore, the C-terminal fragment possesses all the domains required for nuclear translocation and induction of apoptosis (Fig 1E).
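As an illustration of the consensus just described, the short sketch below scans sequence fragments for the optimal caspase-8 motif and for a version with the P2 position relaxed. This is our illustration, not the CASVM method used in the paper, and the five-residue fragments are drawn from the cleavage-site motifs discussed above rather than from full PAR-4 sequences.

```python
# Sketch: match the optimal caspase-8 consensus P4(L/V/D/E)-P3(E)-P2(I/T/V)-P1(D)-P1'(G/S)
# against the human (EEPD|G) and rodent (EEPD|S) PAR-4 cleavage-site fragments.
import re

STRICT = re.compile(r"[LVDE]E[ITV]D[GS]")   # full consensus from [27,28]
RELAXED = re.compile(r"[LVDE]E.D[GS]")      # same consensus with P2 relaxed

for name, fragment in [("human PAR-4 site", "EEPDG"), ("rat Par-4 site", "EEPDS")]:
    strict_hit = bool(STRICT.search(fragment))
    relaxed_hit = bool(RELAXED.search(fragment))
    print(f"{name}: strict consensus={strict_hit}, P2-relaxed={relaxed_hit}")
# Both fragments fail the strict consensus (Pro at P2, as the text notes)
# but match once the P2 position is relaxed.
```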
Previous studies have shown that nuclear entry of the SAC domain is essential for PAR-4-induced apoptosis [15]. Our own immunofluorescence data demonstrate that cleavage of PAR-4 markedly enhances nuclear targeting of the C-terminal cleavage product, but how this shuttling process is regulated is still unknown. One possible mechanism is provided by 14-3-3 proteins, which are phospho-serine/phospho-threonine binding proteins. PAR-4 has been shown to associate with the mainly cytosolic 14-3-3 sigma isoform [36,37], which also interacts with and sequesters the transcription factor YAP in the cytosol, thereby preventing it from activating p73-induced apoptosis in the nucleus [38]. It can therefore be speculated that caspase-8-mediated hydrolysis of PAR-4 interferes with 14-3-3-mediated cytoplasmic retention of PAR-4, thereby inducing nuclear targeting of the C-terminal cleavage product.

In a recent study by Chaudhry et al., the authors identified PAR-4 as a substrate of caspase-3 during apoptosis and demonstrated that cisplatin-induced PAR-4 cleavage is abrogated in caspase-3-deficient MCF-7 cells [25]. We analyzed the ability of caspases 1 to 10 to hydrolyze PAR-4 in vitro. Only caspase-1, -7, and -8 were able to efficiently cleave PAR-4, while caspase-3 showed only very weak activity. Moreover, only co-expression of PAR-4 with caspase-8, but not with caspase-3, led to PAR-4 cleavage and induction of apoptosis in HEK 293 cells. To verify these data we also utilized caspase-3-deficient MCF-7 breast cancer cells and analyzed PAR-4 cleavage after stimulation of TNF death receptors. The inflammatory response of cells to the pleiotropic cytokine TNFα can be switched to apoptosis by the addition of protein synthesis inhibitors, which shut down the synthesis of the endogenous caspase-8 inhibitor c-FLIP, leading to caspase-8 activation [39]. Our combined results demonstrate that TNFα/CHX-induced PAR-4 cleavage in MCF-7 cells requires caspase-8 but is caspase-3-independent. Together, our data support a critical role for caspase-8 in TNFα-induced hydrolysis of PAR-4.

As PAR-4 functions as a tumor suppressor in a subset of human cancers [11] and can be cleaved by caspase-8, our findings might help explain some of the controversial functions of caspase-8 in tumorigenesis [40]. Caspase-8 has been reported to be silenced in a subset of human cancers owing to gene deletion, mutation, or promoter hypermethylation, all resulting in a reduced capacity to trigger apoptosis [reviewed in 40]. This strongly suggests that caspase-8 possesses tumor suppressor functions, and indeed caspase-8 deficiency facilitates cellular transformation [41]. Thus, we speculate that the role of caspase-8 deficiency in tumorigenesis may be in part due to its failure to cleave PAR-4 and induce its translocation and activation.

In summary, our data demonstrate that PAR-4 is a novel substrate of the initiator caspase-8 and is cleaved during TNFα- and UV-induced apoptosis. Furthermore, we provide evidence that regulation of PAR-4 through its hydrolysis by caspase-8 during TNFα-induced apoptosis is an essential step in the induction of cell death in some cancer cells. Our observations therefore reveal a novel mechanism for the regulation of the pro-apoptotic properties of the tumor suppressor protein PAR-4, and future studies will address which pathways act downstream of caspase-8/PAR-4.
Stable cell lines

HeLa Flp-In T-REx cells have been described previously [19] and were transfected with pcDNA5/FRT/TO-PAR-4wt or the respective mutant together with pOG44 (Invitrogen). The transfected cells were selected in media containing 5 µg/ml blasticidin and 100 µg/ml hygromycin. Monoclonal cell lines were established after initial selection. Protein expression was induced by treating the cells with 100 ng/ml doxycycline for 72 hours.

Colony formation assay

2 × 10² cells expressing pcDNA5/FRT/TO-PAR-4wt, pcDNA5/FRT/TO-PAR-4-D131G, or the vector control were seeded in 6 cm dishes in duplicate. Protein expression was induced by addition of 100 ng/ml doxycycline, with medium changes every three days. On day 12, the cells were washed once in PBS and subsequently stained with 0.2% methylene blue in methanol for 30 minutes. After washing, dishes were dried and photographed for documentation.

Indirect immunofluorescence and confocal microscopy

Cells were grown on glass coverslips (18 mm) in 12-well plates, washed with PBS, and fixed with 4% paraformaldehyde/PBS for 30 min. Cells were permeabilized with PBS containing 0.1% Triton X-100 for 30 min and blocked in 3% bovine serum albumin (BSA) in PBS for 1 h. PAR-4 was stained with PAR-4-specific antibodies (Abcam, 1:100) and visualized with secondary Alexa Fluor 555-conjugated antibodies (1:1000). Hoechst was added, and coverslips were mounted with ImmuMount (Thermo Scientific). Images were examined with a Zeiss LSM 710 confocal microscope with an LD C-Apochromat 40×/1.1 water objective. ZEN 2009 software (Zeiss) was used for image editing.
Assessment on the lung injury of mice posed by airborne PM2.5 collected from a developing area in China and associated molecular mechanisms: from histopathology to integrated transcriptome analysis

Some epidemiological investigations have revealed that airborne fine particulate matter (PM2.5) can induce adverse effects on the human respiratory system. However, experimental evidence of the harmful effects of PM2.5 from mid-scale cities in China, and of the associated molecular mechanisms, is still scarce. In this study, we aimed to evaluate the adverse effects on the lung of PM2.5 collected from a mid-scale city of China and to elucidate the underlying molecular mechanisms through integrated mRNA-seq and microRNA-seq analysis. We exposed male mice for 8 weeks (298.52 ± ) using a whole-body exposure system. Micro-CT and histopathological analyses were performed to determine the morphological and histopathological changes of lung tissues induced by PM2.5. Transcriptome (both mRNA and microRNA) sequencing and immunohistochemistry assays were performed to reveal the underlying mechanisms. The contents of PAHs adsorbed in the PM2.5, as well as the Pearson correlation between them and the target genes and microRNAs, were examined.

... asthma [5], and chronic obstructive pulmonary disease (COPD) [6]. Moreover, patients with lung diseases face a two- to three-fold higher risk of death than the normal population [7]. Hence, the World Health Organization cites PM2.5 pollution as the fifth leading risk factor for mortality and economic loss. In China, ambient PM2.5 was responsible for 1.2 million premature deaths, ranking as the fourth leading health risk factor [8,9]. From the public health point of view, it is imperative to comprehensively elucidate the causal relationship between PM2.5 pollution and lung injury.

There have been several publications on the adverse effects of PM2.5 on the respiratory system using mice as model animals with oropharyngeal aspiration (OPA) or intranasal administration. Balb/c mice exposed to PM2.5 (2.5-20 µg per mouse) through OPA for 21 days showed obvious collagen deposition around the small airways [10]. Acute OPA of PM2.5 extract in mice may induce greater lung neutrophilia and inflammation [11]. C57BL/6 mice exposed to PM2.5 (3 mg/kg) by OPA showed adversely affected prenatal lung development in the offspring [12]. C57BL/6J mice showed significant inflammation and incipient fibrosis after direct intranasal administration of PM2.5 (100 µg/day) for 4 weeks [13]. Obvious fibrous cap thickening was observed in apoe-/- mice exposed to PM2.5 by intranasal instillation [14]. In general, direct administration of PM2.5 suspension may under- or overestimate the health risk. There have been only a few investigations using whole-body inhalational exposure. Yuan et al. (2020) found that C57BL/6 mice whole-body exposed to PM2.5 (59.77 µg/m3) sampled from Beijing, China exhibited severe lung injury and fibrosis [11]. They also demonstrated that treatment with locally PM2.5-polluted air in Beijing, China strikingly induced lung oxidative stress and injury in mice after 3 weeks and 6 months of exposure, respectively [15,16]. Zhou et al. (2019) exposed C57BL/6 mice to PM2.5 collected from Shijiazhuang, China, and found a significant increase in circulating white blood cells and inflammation in the lungs of mice from the exposure groups [17]. All of these investigations were performed to estimate the adverse effects of PM2.5 on the lung in the Beijing-Tianjin-Hebei region, China, and the underlying mechanisms have not been comprehensively elucidated.
Evidence has shown that air pollution is much more serious in economically developing cities than in the economically developed mega-cities in China [18-20]. However, assessment of the health risk of PM2.5 pollution in these mid-scale cities is scarce. An increase of 10 µg/m3 in PM2.5 was associated with a 15-27% increase in lung cancer mortality [21]. Moreover, the annual mean concentration of PM2.5 in mid-scale cities in China was markedly higher than the limit of 10 µg/m3 suggested by the WHO [19,22]. Hence, it is warranted to investigate the consequences for the pulmonary system posed by PM2.5 in this region using a "real-world" exposure system.

High-throughput sequencing is a precise method to measure global gene expression profiles by RNA-sequencing of both mRNA and microRNA. Recently, this tool has been used to illuminate the molecular mechanisms of cytotoxicity induced by PM2.5 [23,24]. There has been only one publication on the potential mechanism of lung injury posed by airborne PM2.5 in Beijing, China using transcriptome analysis [25]. Meanwhile, some evidence suggests that microRNA deregulation is likely to be associated with respiratory diseases induced by PM [26,27]. Hence, integrated analysis of mRNA and microRNA to elucidate the underlying mechanisms of the lung injury posed by PM2.5 is imperative.

In summary, we for the first time established a whole-body exposure system to investigate the adverse effects posed by PM2.5 sampled from Baoji (33.35°-35.06° N, 106.18°-108.03° E), an inland city located in the mid-west part of China, using mice exposed for 40 days. The objectives were to evaluate: (1) histopathological changes of lung tissue; (2) morphological changes of the lung; (3) underlying mechanisms using mRNA-seq, microRNA-seq, and immunohistochemical analysis; (4) potential biomarkers of the lung injury posed by PM2.5; and (5) the dominant components in PM2.5 associated with lung injury.

PM2.5-induced lung injury in mice

Micro-CT was performed to determine the potential effects of PM2.5 on lung morphology in vivo after the exposure, before the histopathology assay. The tomograms showed higher density and confluent opacity of the lung tissues of mice from the exposure groups (Figure 1d and e), indicating remarkable pulmonary inflammation induced by PM2.5, while no morphological changes were observed in the control (Figure 1a and b). Moreover, we created 3D reconstructions of the lung tissue of the mice to estimate the pulmonary function injury posed by PM2.5. As shown in Figure 1c and f, the effective pulmonary function of the exposure group markedly shrank compared to the control group. Similar results of higher lung density and deteriorated pulmonary function were found in previous publications using intranasal administration [10,13,28]. However, the adverse effects found in this work were much more obvious than the previous results, even though the exposure concentration (289.52 µg/m3) was much lower than that (equivalent to 400-1543 µg/m3) in the previous work [10,13], indicating that the exposure route may markedly impact the assessment of PM2.5 toxicity in the respiratory system. Meanwhile, the PM2.5 particles used previously were obtained from Shanghai, China, one of the mega-cities located in the Yangtze River Delta, which suggests that the hazard level of the PM2.5 collected from the developing area in this work may be higher than that from a developed city. As shown in Figure 2,
obvious alveolar interval thickening, inflammatory cell infiltration, and alveolar structure damage (Figure 2b, e, h) were observed in the pulmonary alveoli from the exposure group, compared with the vacuolated, thin-walled alveolar cavities of the intact structure in the control (Figure 2a, d, g). Marked bronchiolar epithelial hyperplasia and endoluminal inflammatory cell infiltration were seen in the lungs of PM2.5-treated mice (Figure 2c, f, i), while the simple ciliated columnar epithelium of the bronchi, characteristic of normal pulmonary structure, was observed in control mice (Figure 2a, d, g). The results found in this work are consistent with previous investigations [15,17,27,29]. Given that pulmonary histopathological injury is irreversible, long-term exposure to PM2.5 may pose a severe threat to the human respiratory system, since the pollution is persistent.

Since the prevalence of pulmonary fibrosis (PF), a critical interstitial lung disease, is significantly associated with long-term exposure to PM2.5 [1], we determined lung fibrosis in mice from the control and exposure groups using Masson trichrome staining. An obvious increase in lung fibrosis was observed in the exposure group (Figure 2k) compared with the control (Figure 2j), and the statistical results showed a 2.56-fold increase (p < 0.05) in the exposure group relative to the control (Figure 2l). To further elucidate the immune response activated by PM2.5 exposure in the mouse respiratory system, the relative abundances of CD45+ leucocytes, as well as of epithelial cells (CD45-/CD31-/CD326+), in the lung tissue of mice were determined using flow cytometry. As shown in Figure 3, the relative abundance of leucocytes in the exposure group was significantly higher than in the control (Figure 3a-c). However, there was no statistical difference in epithelial cell abundance between the PM2.5-treated and control groups (Figure 3d-f). These results indicate that the immune system response may be the dominant mechanism of the effects of PM2.5, in parallel with previous work [30,31]. However, the bronchiolar epithelial hyperplasia was not reflected in the relative abundance of epithelial cells, which may be ascribed to interference from the overwhelming numerical advantage of the leucocytes. Further experiments (e.g., single-cell sequencing) will be performed to elucidate the underlying mechanism of the differences in epithelial cells.

PM2.5-induced changes of the gene expression profile in mouse lung

To investigate the underlying molecular mechanisms of the lung injury posed by PM2.5, RNA-sequencing was performed to analyze the whole-genome expression profiling changes of the mouse lung tissues with or without exposure to PM2.5. As shown in the figure (Table S3), the results were parallel to those of a previous study of mice exposed to PM2.5 collected from Beijing, China [25]. Meanwhile, the top 10 up- and down-regulated genes are listed in Table S4.

PM2.5-induced changes of the microRNA expression profile in mouse lung

According to the microRNA-sequencing analysis, a total of 70 microRNAs (53 up-regulated and 17 down-regulated) were identified in the comparison of control and exposure groups, and the fold changes are shown in Figure S5. As shown in Table S5, the top 20 enriched KEGG pathways determined by KEGG annotation, including the GnRH signaling pathway, HIF-1 signaling pathway, and MAPK signaling pathway, were identified.
Similar KEGG pathway results (i.e., HIF-1 signaling pathway, insulin signaling pathway, and MAPK signaling pathway) were found in a previous investigation of PM2.5 collected from Shijiazhuang, Hebei, China, using microRNA microarray analysis [27]. The top 10 up- and down-regulated microRNAs are listed in Table S6.

Integrated analysis of gene expression and microRNA

To further elucidate the molecular mechanisms, an integrated analysis of the mRNA and microRNA expression data was performed. A total of 3024 mRNAs (1625 up-regulated and 1399 down-regulated) and 100 microRNAs (59 up-regulated and 41 down-regulated) were identified in the comparison of control and exposure groups (Figure S6b and Table S7), consistent with the KEGG annotations based on mRNA expression levels alone, but more robust through the integrated analysis. GO annotation found that biological processes (BP) including immune system process, regulation of biological quality, and regulation of immune system process were significantly enriched (Figure S6c).

To further validate the causal relationship between the enriched KEGG pathways and the lung injury posed by PM2.5, nine representative pathways involved in the immune system, including the B cell receptor signaling pathway, cell adhesion molecules (CAMs), and antigen processing and presentation, were chosen (Figure 4a). The input genes associated with the enriched KEGG pathways annotated by the integrated analysis are shown in Table S7. As shown in Figure 4b, we selected 27 genes to explore the mechanisms of these pathways in PM2.5-exposed lung tissue. Among these genes (shown in Table 1), Cd72, Cd81, Cd19, Pik3cd, and Ppp3cc were involved in the B cell receptor signaling pathway, while Cd80, Cd22, H2-M2, H2-T24, and H2-T3 were involved in cell adhesion molecules (CAMs). Tapbp and Klrc1 were involved in antigen processing and presentation, Prf1 in graft-versus-host disease, and Jak3 and Tnfrsf13c in primary immunodeficiency. Angpt2, Pdgfra, P2ry1, and Ig were associated with the Rap1 signaling pathway. Ptpn7 and Fgf22 were related to the MAPK signaling pathway. Rasal1 and Lamc1 were involved in the Ras signaling pathway and small cell lung cancer, respectively. The genes Met, Ralgds, and Cd8b1 were identified as involved in multiple pathways. The associated microRNAs (30 in total) are mapped in Figure 4b and Table 1.

RT-qPCR analysis was performed to validate the transcriptomics results. As shown in Table 1, most of the selected genes and microRNAs showed results similar to those of the transcriptomics. The genes Cd72, Cd19, and Pik3cd, involved in the B cell receptor signaling pathway, were significantly up-regulated in the exposure group, while Cd81 was markedly down-regulated (Figure 4c). The mRNA transcript levels of Cd22 and H2-M2, related to CAMs, increased strikingly, while the level of H2-T24 decreased markedly (Figure 4c). In addition, the associated microRNAs involved in the B cell receptor signaling pathway, including 203-5p, 7a-5p, and 92a-1-5p, were significantly up-regulated in the PM2.5-exposed group. Obvious down-regulation of microRNAs 149-5p, 328-3p, 466i-5p, and 24-3p was observed in the exposure group (Figure 4d). Among the CAMs, we found that microRNAs 674-3p and 1247-5p increased remarkably, while 486a-3p was significantly down-regulated in the exposure group (Figure 4d).
The protein expression of CD19, CD81, and PIK3CD, involved in the B cell receptor signaling pathway, was examined by immunohistochemistry to confirm the changes in the associated genes.

Discussion

Evidence has suggested that the adverse effects of PM2.5 on lung tissue may be associated with immune and inflammatory responses [11,15,32]. Hence, the enrichment of the B cell receptor signaling pathway, an important pathway responsible for the immune response [33], in the integrated analysis was foreseeable, and was also reported in a previous RNA-sequencing study [25]. The transcriptional levels of the target genes involved in this pathway were also significantly up- or down-regulated by PM2.5. Among them, Cd72, a gene encoding a type II transmembrane protein belonging to the C-type lectin family [34], has been shown to play an important role in controlling the magnitude of B cell responses [35] and hence in regulating immune system homeostasis [36]. The overexpression of Cd72 found in the PM2.5-exposed group in this work may promote B cell survival and proliferation, enhance the release of CD23, and thereby activate the immune response [37-39]. In addition, anti-CD72 monoclonal antibodies (mAbs) can activate CD19 tyrosine phosphorylation [40]; a similar result was observed in this work (the up-regulation of Cd19 induced by PM2.5). It has been reported that mutation of the gene Pik3cd is associated with the prevalence of systemic lupus erythematosus, a typical immune system disease [41]. Hence, the marked increase of the mRNA level of Pik3cd in lung tissue from the exposure group might be related to the immune response induced by PM2.5. Meanwhile, the microRNAs associated with Cd72, 149-5p and 328-3p, were significantly down-regulated by PM2.5. It has been reported that these two microRNAs may be responsible for lung inflammatory and fibrotic pathology in mice [42]. The microRNAs related to Pik3cd (203-5p and 7a-5p) were markedly up-regulated in the lungs of mice from the exposure group. Based on the KEGG and GO annotations, microRNA 203-5p is involved in the B cell receptor signaling pathway (Figure S7a). Therefore, the increased expression of the genes Cd72, Cd19, and Pik3cd involved in the B cell receptor signaling pathway may contribute to the lung injury, including inflammation and fibrosis, induced by PM2.5.

Surprisingly, we found for the first time that the most significantly enriched KEGG pathway of DEGs in the integrated analysis was cell adhesion molecules (CAMs), which mediate the processes of cell recruitment and homing and play an important role in the inflammatory process [43]. Once the inflammatory process of the pulmonary immune system is triggered by PM2.5, adhesion molecule genes are up-regulated in endothelial and immune cells to mediate leukocyte adhesion and subsequent migration to inflammation sites [44]. From the results of RT-qPCR quantification and transcriptomics analysis, significant increases in the expression levels of the genes Cd22 and H2-M2 were observed in the PM2.5-treated group. The gene Cd22, an immunoglobulin superfamily cell-surface molecule that serves as an adhesion receptor for sialic acid-bearing ligands [45], has been shown to activate B cells and regulate antigen receptor signaling in vitro [46]. Similarly, the microRNA associated with Cd22, 3110-5p, was significantly stimulated in the exposure group. Thus, our findings indicate that CAMs may be a key pathway responsible for the adverse effects of PM2.5 on the respiratory system.
According to the KEGG and GO annotations (Figure S7b), the genes Cd22 and H2-M2 may serve as feasible biomarkers. Furthermore, the target genes involved in the non-significantly (P > 0.05) enriched KEGG pathways (i.e., antigen processing and presentation, graft-versus-host disease, primary immunodeficiency, Rap1 signaling pathway, MAPK signaling pathway, Ras signaling pathway, small cell lung cancer, and PI3K-Akt signaling pathway) were also significantly up- or down-regulated by PM2.5, suggesting that these pathways, related to the immune response of lung tissue, should also be considered in further investigations.

A number of investigations have demonstrated that long-term exposure to PAHs increases the risk of developing lung cancer [47-49]. CCA was performed using the concentrations of PAHs as environmental factors. DBP has been evaluated as having 100 times the carcinogenic potency of BaP in PAH mixtures [50]. The potency value of dibenzo[a,h]anthracene (DBA), which is estimated to be associated with human cancer [51], has been reported to be up to 10 times that of BaP [52]. To further explore the covariation between the PAHs and the genetic indicators, Pearson's correlation was determined. An obvious covariation was observed between the target genes, including Cd72, Cd19, Pik3cd, Cd22, and H2-M2, together with the associated microRNAs, and the PAHs detected in the PM2.5 (Figure S9, p < 0.05). This result suggests that the genetic indicators we selected show significant covariation with environmental pollutants, and that they may therefore be used as biomarkers to indicate the health risk posed by these types of pollutants.

From the public health point of view, the PM2.5 samples used in this work were collected from Baoji, a mid-scale city located in a developing area. The main PAH components detected in the PM2.5 were similar to those of mega-cities (e.g., Beijing, Nanjing), while their concentrations, especially for BaP, DBA, and DBP, were statistically higher than in the developed areas [53-55]. Note that the concentrations of PM2.5 in all regions of China are far above the threshold proposed by the WHO (35 µg/m³) and even the more permissive limit adopted by China (75 µg/m³), in spite of the general decreasing trend observed in recent years [56]. Meanwhile, the energy structure differs considerably between the mid-scale cities and the mega-cities of China [57], which directly results in variation of the occurrence levels of the organic pollutants (PAHs) derived from fossil fuel combustion [58]. Combined with the obvious adverse effects of the PM2.5 collected from Baoji city on the respiratory system found in this work, the health risk posed by PM2.5 in the mid-scale cities of China deserves more attention.

Conclusion

This work for the first time assessed the lung injury posed by PM2.5 collected from Baoji, a representative mid-scale city of China. We also elucidated the underlying mechanisms through an integrated analysis of the mRNA-seq and microRNA-seq profiles. Obvious lung injuries, including pulmonary dysfunction, inflammatory response, and pulmonary fibrosis, were observed in the lung tissues of mice from the exposure group. As revealed by KEGG annotations, the main pathways induced by PM2.5 were immune system-associated pathways, especially the B cell receptor signaling pathway and cell adhesion molecules (CAMs). Moreover, the expression levels of the key genes and microRNAs involved in these pathways, as well as the associated protein expression, were verified.
The results of this work may provide deeper insight into the mechanisms of the pulmonary toxicity posed by PM2.5 in developing areas. The stronger adverse effects on lung tissue of PM2.5 from a mid-scale city of China, compared with that from mega-cities, suggest that the potential health risk of PM2.5 in developing areas deserves more concern.

PM2.5 particle collection and analysis

Ambient PM2.5 was collected on quartz filters (2 µm pore size) by a High-Volume Ultrafine Particle Sampler (Ju Kang Technology, China) at a 5 L/min flow rate in Baoji, Shaanxi, China, maintained for 24 h. PM2.5 was extracted as described previously [59]. Briefly, the filters were chopped into small fragments, sonicated in ice for 30 min in a 50-mL centrifuge tube, and then shaken for 20 min. The process was repeated three times. The mixture was centrifuged at 8000 rpm for 10 min, and the precipitate was transferred into a new tube and dried by lyophilization. The powder was stored at -80 °C until the experiments. The concentrations of seventeen EPA priority polycyclic aromatic hydrocarbons (PAHs) bound to the PM2.5 were determined by gas chromatography coupled with mass spectrometry (GC-MS, Agilent GC 6890, MS 5973, USA) following a previous publication [60]. The details are given in the supporting information. Instrumental and method detection limits are shown in Table S1.

Animals and whole-body inhalation

Male Balb/C mice at 8 weeks of age, with a body mass of 20 ± 2 g, were purchased from Chengdu Dashuo Company (Chengdu, Sichuan, China) and acclimated for a week before exposure in the SPF-level animal house of West China Hospital. Subsequently, twenty mice were divided randomly into two groups (n = 10 per group) and exposed to either filtered air (control) or concentrated ambient PM2.5 (exposure) in a "real-world" exposure system for 40 days (Figure S2). According to previously reported criteria, the daily ventilation volume of an adult male Balb/C mouse is 0.0864 m³ [61]. A previous investigation revealed that 75% of inhaled PM2.5 particles enter and reach the alveoli [62]. Hence, the daily exposure dose of PM2.5 for one mouse was estimated as 298.52 µg/m³ × 0.0864 m³ × 75% ≈ 19.3 µg (a worked version of this arithmetic is sketched below).

Lung tissue preparation and histopathology assay

After exposure, the mice were anesthetized with pentobarbital sodium. Blood samples were collected from the cardiac vein. Serum was obtained by centrifugation at 3000 rpm for 15 min, with no anticoagulant added, to determine the routine blood indexes. The lung tissue of each mouse was immediately separated. After washing with precooled phosphate-buffered saline (PBS) three times, the left lung tissues were divided into two parts. One part was used for flow cytometry analysis, and the other was immediately fixed with 10% formaldehyde solution and embedded in paraffin. Paraffin-embedded tissues (60 °C) were sectioned consecutively at a thickness of 5 µm. The slices were then deparaffinized and stained with hematoxylin and eosin (H&E) or Masson trichrome. More than five sections per lung tissue of each mouse were examined with an optical microscope to determine the histopathology of the exposure group. The right lung tissues were kept at -80 °C for further analysis.
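To make the exposure-dose arithmetic above easy to check, here is a minimal sketch in Python; the numbers are the values cited in this section, and the variable names are ours.

```python
# Daily exposure dose = chamber concentration x daily ventilation volume
# x alveolar deposition fraction, using the values cited above.
concentration = 298.52   # mean PM2.5 concentration in the chamber, ug/m^3
ventilation = 0.0864     # daily ventilation volume of an adult Balb/C mouse, m^3
deposition = 0.75        # fraction of inhaled PM2.5 reaching the alveoli

daily_dose = concentration * ventilation * deposition  # ug per mouse per day
total_dose = daily_dose * 40                           # over the 40-day exposure
print(f"daily dose = {daily_dose:.2f} ug, 40-day total = {total_dose:.1f} ug")
# daily dose = 19.34 ug, 40-day total = 773.8 ug
```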
Flow cytometry analysis

Flow cytometry analysis was performed as previously described. Briefly, one part of the left lung tissue was minced on ice and then digested with 0.1% Type I and Type collagenase in 4 mL of PBS at 37 °C in an incubator, with shaking, for 1 h. After complete digestion, the cell suspension was filtered before staining and analysis.

RNA-sequencing and data analysis

The paired-end RNA reads were aligned to the reference genome using Hisat2 v2.0.5. Gene expression levels were quantified by featureCounts v1.5.0-p3. The microRNA reads were quality-controlled using miRBase 20.0 as reference, with the software miRDeep2. Potential microRNA secondary structures were obtained using srna-tools-cli. MicroRNA expression levels were estimated by TPM (transcripts per million). Differential expression of mRNA and microRNA between the exposure and control groups was then analyzed.

Quantitative real-time PCR analysis

To verify the transcriptomics data, 27 genes and 29 microRNAs were chosen as candidates for RT-qPCR analysis. Primer sequences are listed in Table S2. Glyceraldehyde 3-phosphate dehydrogenase (GAPDH) and β-actin were used as endogenous controls to monitor the quality of the target genes. MicroRNA U6 was used as the internal standard to monitor the quality of the target microRNAs. The variation in endogenous control expression was below 10% for all groups. Amplification was performed using SYBR Green PCR master mix (Applied Biosystems) according to the manufacturer's instructions. Gene and microRNA expression was quantified using the 2^-ΔΔCt method suggested by Applied Biosystems (Foster City, CA, USA). The fold change between the exposure group and the control was calculated from the geometric mean of the relative expression normalized by the two housekeeping genes (β-actin and GAPDH) [63] (a brief sketch of this calculation is given below).

Statistical analysis

The statistical program SPSS 18.0 (Chicago, IL, USA) was used to analyze all collected data. Five or more replicates of each parameter were determined to reduce the variability of the results. All data are expressed as mean ± standard deviation (S.D.). A two-tailed Student's t-test (95% confidence interval) was used to examine the significance of differences between the control and exposure groups.

Consent for publication

Not applicable.

Availability of data and materials

Not applicable.
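As a brief illustration of the 2^-ΔΔCt quantification described in the RT-qPCR section above, the following sketch computes a fold change normalised to the mean Ct of the housekeeping genes (averaging Ct values corresponds to taking the geometric mean of their expression levels); all Ct values here are invented purely for illustration.

```python
import numpy as np

def fold_change(ct_target_exp, ct_target_ctrl, ct_ref_exp, ct_ref_ctrl):
    """Fold change by the 2^-ddCt method: dCt = Ct(target) - Ct(reference),
    ddCt = dCt(exposure) - dCt(control), fold change = 2^-ddCt."""
    dct_exp = np.mean(ct_target_exp) - np.mean(ct_ref_exp)
    dct_ctrl = np.mean(ct_target_ctrl) - np.mean(ct_ref_ctrl)
    return 2.0 ** -(dct_exp - dct_ctrl)

# Made-up Ct values (replicates per group) for one target gene:
exp_ct   = [22.1, 22.4, 21.9]   # target gene, exposure group
ctrl_ct  = [24.0, 23.8, 24.2]   # target gene, control group
ref_exp  = [18.0, 18.1, 17.9]   # mean Ct of b-actin/GAPDH, exposure group
ref_ctrl = [18.1, 17.9, 18.0]   # mean Ct of b-actin/GAPDH, control group
print(f"fold change = {fold_change(exp_ct, ctrl_ct, ref_exp, ref_ctrl):.2f}")
```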
5,801.6
2020-10-23T00:00:00.000
[ "Medicine", "Biology" ]
Chemical composition and antibacterial properties of essential oil and fatty acids of different parts of Ligularia persica Boiss. Objective: The objective of this research was to investigate the chemical composition and antibacterial activities of the fatty acids and essential oil from various parts of Ligularia persica Boiss (L. persica) growing wild in the north of Iran. Materials and Methods: Essential oils were extracted using a Clevenger-type apparatus. Antibacterial activity was tested on two Gram-positive and two Gram-negative bacteria using the microdilution method. Results: GC and GC/MS analysis of the oils resulted in detection of 94%, 96%, 93%, and 99% of the total essential oil of the flowers, stems, roots and leaves, respectively. The main components of the flower oil were cis-ocimene (15.4%), β-myrcene (4.4%), β-ocimene (3.9%), and γ-terpinene (5.0%). The major constituents of the stem oil were β-phellandrene (5.4%), β-cymene (7.0%) and valencene (3.9%). The main compounds of the root oil were fukinanolide (17.0%), α-phellandrene (11.5%) and β-selinene (5.0%), and in the case of the leaf oil cis-ocimene (4.8%), β-ocimene (4.9%), and linolenic acid methyl ester (4.7%). Analysis by GC-FID and GC-MS of the fatty-acid composition of the different parts of L. persica showed that the major components were linoleic acid (11.3-31.6%), linolenic acid (4.7-21.8%) and palmitic acid (7.2-23.2%). Saturated fatty acids were found in lower amounts than unsaturated ones. The lowest minimum inhibitory concentration (MIC) of the L. persica oils was 7.16 µg/mL, against Pseudomonas aeruginosa. Conclusion: Our study indicated that the essential oils from L. persica stems and flowers showed a high inhibitory effect on the Gram-negative bacteria. The results also showed that the fatty acids from the stems and leaves contained a high amount of polyunsaturated fatty acids (PUFAs).

Introduction

For centuries, essential oils have been used for the treatment of infections and diseases in different parts of the world (Rios and Recio, 2005). Nowadays, the use of essential oils is growing and there is a wide range of applications for them (e.g. in the food and beverage industry, and as fragrances in perfumes and cosmetics), but the oils also cover a broad spectrum of biological activities, which has aroused researchers' interest. In the past two decades, there has been a lot of research on the antimicrobial activity of essential oils. The main constituents of some plant essential oils are thymol, carvacrol, linalool and eugenol, which have been shown to have a wide spectrum of antimicrobial activities (Kalemba and Kunicke, 2003; Dorman and Deans, 2000). Recently, the antibacterial properties and potential use of essential oils in foods have been investigated (Burt, 2004). The antimicrobial activities of spices and herbs have been known for several centuries (Bagamboula et al., 2003). Essential oils and their components are becoming increasingly popular as natural antimicrobial agents for a wide variety of purposes, including food preservation, complementary medicine and natural therapeutics. At present, essential oils are used by the flavoring industry for flavor enhancement and for their antioxidant effects (Cosentino et al., 2003). Fatty acids also have a wide range of functions (Elias, 1983). For example, some polyunsaturated fatty acids such as nervonic acid, linoleic acid and arachidic acid are vital for human growth (Carvalho et al., 2006). Ligularia persica Boiss (L. persica) is an important species of the Compositae family.
According to Flora Iranica, there is only one species of Ligularia in Iran, which is endemic to the north of Iran. The local names of this genus are "Zabantala" and "Pirsonbol" (Rechinger, 1989). Ligularia species are used in traditional medicine, for example in the treatment of coughs, inflammation, jaundice, scarlet fever, rheumatoid arthritis, and hepatic diseases (Xie et al., 2010). Up to now, several phytochemical studies have identified various compounds such as steroids, alkaloids, flavonoids, lignans, sesquiterpenoids, and terpenoids in Ligularia species (Yang et al., 2011). The secondary metabolites reported from L. persica have anti-bacterial, anti-lung cancer, anti-stomach cancer, anti-hepatotoxicity, anti-thrombotic, anti-coagulation and anti-insect activities (Yang et al., 2011). Extraction of the roots of L. persica and chromatographic separation revealed one new derivative of tovarol, four new derivatives of shiromodiol and eudesmol, bakkenolide A, and four known eremophilane derivatives (Marco et al., 1991). There is one report on the chemical composition and antimicrobial activities of the aerial parts of L. persica in the literature (Mirjalili and Yousefzadi, 2012). However, no previous work has been conducted on the different parts of this plant, and there is no report on the fatty acid composition or the antibacterial activity of the essential oils of the different parts of L. persica. Therefore, the aim of this research was to analyze the chemical constituents and fatty acids of different parts of L. persica and to investigate the antibacterial activity of the essential oils of the different parts of the plant.

Materials and methods

Plant material

L. persica was collected during the flowering stage in July 2012 from Pole Zangule, located in the central Alborz Mountains (Mazandaran province, north of Iran). The specimen was identified and authenticated by a taxonomist, Dr Alireza Naqinezhad, and a voucher herbarium specimen was deposited in the herbarium of the Department of Biology, University of Mazandaran (No. 1505). The plant material was air-dried at room temperature and protected from light for one week.

Isolation of essential oil

Different parts of L. persica (50 g) were subjected to hydro-distillation for 2 hours using a Clevenger-type apparatus. The obtained essential oil was dried over anhydrous sodium sulphate, filtered and stored at +4 °C until analysis.

Oil extraction and fatty acid methylation

Dried ground plant materials (different parts of L. persica) were extracted with hexane using a Soxhlet apparatus (70 °C, 8 hours) to obtain the fatty components. After removing the hexane using a rotary evaporator, the oily mixtures were derivatized to their methyl esters by transesterification with 2 M methanolic KOH at 70 °C for 15 minutes (Tavakoli et al., 2012; Paquat, 1992). The organic phases were analyzed by the GC-FID and GC-MS systems.

Analysis of the essential oil and fatty acids

GC-FID analysis

The GC analysis of the essential oil and fatty acids was performed using an Agilent Technologies 7890A gas chromatographic (GC) system equipped with an FID detector. Compounds were separated on a DB-5 fused-silica capillary column (60 m long, 250 µm i.d., 0.25 µm film thickness; Agilent Technologies). A sample of 1.0 µL was injected in the split mode with a split ratio of 1:5. The oven temperature was programmed to rise from 50 to 240 °C at a rate of 4 °C/min.
GC-MS analysis

The GC-MS analysis was performed with an Agilent Technologies 5975C mass-selective detector coupled to an Agilent Technologies 7890A gas chromatograph. For GC-MS detection, an electron ionization system with an ionization energy of 70 eV was used. The column oven temperature program was the same as in the GC analysis. Helium was used as the carrier gas at a flow rate of 1.0 mL/min. The mass range was 30-600 m/z, while the injector and MS transfer line temperatures were set at 220 °C and 250 °C, respectively.

Compound identification

The oil components were identified by calculation of their retention indices under temperature-programmed conditions for n-alkanes (C6-C23) and the oil on the DB-5 column under the same conditions. Identification of individual compounds was done by comparison of their mass spectra with those of the internal reference mass spectra library (Wiley 7.0n, NIST 08) or with authentic compounds, and confirmed by comparing their retention indices with those of authentic compounds or with those reported in the literature (Davies, 1990; Shibamoto, 1987; Adams, 2007).

Antimicrobial activity

Microbial strains

The essential oils were tested against two Gram-positive bacteria, Staphylococcus aureus ATCC 25923 and Streptococcus sobrinus ATCC 27609, and two Gram-negative bacteria, Escherichia coli ATCC 25922 and Pseudomonas aeruginosa ATCC 27853.

Microdilution broth method

The microdilution susceptibility assay was performed using the NCCLS method for the determination of the minimum inhibitory concentration (MIC) and minimum bactericidal concentration (MBC) (Wayne, 1999). Dilutions were prepared in 96-well microtiter plates to obtain final concentrations ranging from 0 to 4,000 µg/mL. All tests were performed in BHI broth medium. Bacterial cell numbers were adjusted to approximately 1 × 10^8 CFU (colony-forming units)/mL. The 96-well plates were prepared by dispensing 95 µL of nutrient broth and 5 µL of the inoculum into each well. The final volume in each well was 200 µL. The plates were incubated at 37 °C for 24 hours. Gentamicin was used as a positive standard to control the sensitivity of the microorganisms. Growth was indicated by the presence of a white pellet on the well bottom. The MIC was taken as the highest dilution (lowest concentration) showing complete inhibition of the tested strains.

Fatty acid composition

The analysis of the fatty acids obtained from different parts of L. persica revealed the presence of over 19 compounds, as shown in Table 2. The major components were linoleic acid (10.9-31.6%), linolenic acid (4.7-21.8%) and palmitic acid (7.2-23.2%). The results demonstrated that the quantities of unsaturated fatty acids (20.4-54.7%) were higher than those of saturated fatty acids (9.1-28.9%).

Inhibition of bacterial growth

The antibacterial activity of the essential oils from L. persica against a panel of pathogenic microorganisms was assessed by measurement of the minimum inhibitory concentration (MIC). The results are presented in Table 3. It can be concluded that the essential oil of the root has the highest antibacterial activity and the oil of the leaves the least efficient antibacterial activity among the different parts. The Gram-negative bacterium that exhibited the highest sensitivity to the tested oils was Pseudomonas aeruginosa. The essential oil from the stems showed the highest antibacterial effect against Pseudomonas aeruginosa (7.16 µg/mL in terms of MIC), and the least antibacterial activity was seen for the leaf essential oil against Staphylococcus aureus (375 µg/mL in terms of MIC).
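To illustrate how an MIC is read off a dilution series such as the one described in the microdilution method above, here is a small Python sketch; the helper function and the example readout are hypothetical and are not data from this study.

```python
def mic(concs_ugml, growth_observed):
    """MIC from a dilution series: the lowest tested concentration showing
    complete inhibition, requiring all higher concentrations to inhibit as
    well (assumes growth is monotone in concentration)."""
    wells = sorted(zip(concs_ugml, growth_observed), reverse=True)
    mic_value = None
    for conc, grew in wells:        # scan from highest to lowest concentration
        if grew:
            break                   # growth reappears: stop scanning
        mic_value = conc
    return mic_value

# Hypothetical two-fold dilution series (ug/mL) and observed growth
# (True = white pellet on the well bottom):
concs  = [500, 250, 125, 62.5, 31.25, 15.6, 7.8, 3.9]
growth = [False, False, False, False, False, False, False, True]
print(mic(concs, growth))           # -> 7.8
```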
Discussion

A comparison with the reported chemical composition of the aerial parts of L. persica showed that similar compositions were obtained (Mirjalili and Yousefzadi, 2012). In general, monoterpenes and sesquiterpenes were more abundant than the other compounds. In addition, the presence of significant amounts of various bioactive constituents indicates a possible industrial use of these plants. Fukinanolide (bakkenolide A; 17.0%), the most abundant sesquiterpene in the roots, and α-pinene have recently been introduced as powerful anti-microbial and anti-tumor agents (Rustaiyan et al., 1999). Cis-ocimene, the most abundant chemical in the flowers (15.4%), is used as a raw material in perfumes and cosmetics. Therefore, the essential oils of L. persica are suitable as natural supplement sources for the food, cosmetic and pharmaceutical industries. In addition, the amounts of unsaturated fatty acids in the leaves and stems were higher than in the flowers and roots. Unsaturated fatty acids play a crucial role in human nutrition and health. Polyunsaturated fatty acids (PUFAs) have been considered health-promoting nutrients in recent years. A growing body of studies illustrates the benefits of PUFAs in alleviating cardiovascular and inflammatory diseases, heart disease, atherosclerosis, autoimmune disorders, diabetes and other diseases (Finley, 2001). Our study reported the secondary metabolites in the essential oil and fatty acids extracted from different parts of Ligularia persica, as well as their antibacterial activities. These results indicate that L. persica may be a rich source of natural products with biological activities.
2,552.2
2016-04-01T00:00:00.000
[ "Chemistry", "Environmental Science", "Medicine" ]
Body-Ordered Approximations of Atomic Properties

We show that the local density of states (LDOS) of a wide class of tight-binding models has a weak body-order expansion. Specifically, we prove that the resulting body-order expansion for analytic observables such as the electron density or the energy has an exponential rate of convergence both at finite Fermi-temperature as well as for insulators at zero Fermi-temperature. We discuss potential consequences of this observation for modelling the potential energy landscape, as well as for solving the electronic structure problem.

Introduction

An atomistic potential energy landscape (PEL) is a mapping assigning energies E(r), or local energy contributions, to atomic structures r = {r_ℓ}_{ℓ∈Λ} ∈ (R^d)^Λ, where Λ is a general (possibly infinite) index set. High-fidelity models are provided by the Born-Oppenheimer PEL associated with ab initio electronic structure models such as tight-binding, Kohn-Sham density functional theory (DFT), Hartree-Fock, or even lower-level quantum chemistry models [38,48,54,58,73,94]. Even now, however, the high computational cost of electronic structure models severely limits their applicability in material modelling to thousands of atoms for static and hundreds of atoms for long-time dynamic simulations.

There is a long and successful history of using surrogate models for the simulation of materials, devised to remain computationally tractable while capturing as much detail of the reference ab initio PEL as possible. Empirical interatomic potentials are purely phenomenological and are able to capture a minimal subset of desired properties of the PEL, severely limiting their transferability [23,86]. The rapid growth in computational resources increased both the desire and the possibility to match as much of an ab initio PEL as possible. A continuous increase in the complexity of parameterisations since the 1990s [6,7,36] has over time naturally led to a new generation of "machine-learned interatomic potentials" employing universal approximators instead of empirical mechanistic models. Early examples include symmetric polynomials [11,80], artificial neural networks [8] and kernel methods [5]. A striking case is the Gaussian approximation potential for Silicon [4], capturing the vast majority of the PEL of Silicon of interest for material applications.

The purpose of the present work is, first, to rigorously evaluate some of the implicit or explicit assumptions underlying this latest class of interatomic potential models, as well as more general models for atomic properties. Specifically, we will identify natural modelling parameters as approximation parameters and rigorously establish convergence. Secondly, our results indicate that nonlinearities are an important feature, highlighting some superior theoretical properties. Finally, unlike existing nonlinear models, we will identify explicit low-dimensional nonlinear parameterisations yet prove that they are systematic. In addition to justifying and supporting the development of new models for general atomic properties, our results establish generic properties of ab initio models that have broader consequences, e.g. for the study of the mechanical properties of atomistic materials [15,17,32,93]. The application of our results to the construction and analysis of practical parameterisations (approximation schemes) that exploit our results will be pursued elsewhere.
Our overarching principle is to search for representations of properties of ab initio models in terms of simple components, where "simple" is of course highly context-specific. To illustrate this point, let us focus on modelling the potential energy landscape (PEL), which motivated this work in the first place. Pragmatically, we require that these simple components are easier to analyse and manipulate analytically, or to fit, than the PEL. For many materials (at least as long as Coulomb interaction does not play a role), the first step is to decompose the PEL into site energy contributions,

E(r) = Σ_{ℓ∈Λ} E_ℓ(r),   (1.1)

where one assumes that each E_ℓ is local, i.e., it depends only weakly on atoms far away. In previous works we have made this rigorous for the case of tight-binding models of varying complexity [14,16,17,93]. In practice, one may therefore truncate the interaction by admitting only those atoms r_k with r_ℓk := |r_k − r_ℓ| < r_cut as arguments. Typical cutoff radii range from 5 Å to 8 Å, which means that on the order of 30 to 100 atoms still make important contributions. Thus the site energy E_ℓ is still an extremely high-dimensional object, and short of identifying low-dimensional features it would be practically impossible to approximate it numerically, due to the curse of dimensionality.

A classical example that illustrates our search for such low-dimensional features is the embedded atom model (EAM) [23], which assigns to each atom ℓ ∈ Λ a site energy

E_ℓ^eam(r) = (1/2) Σ_{k≠ℓ} φ(r_ℓk) + F( Σ_{k≠ℓ} ρ(r_ℓk) ).

While the site energy E_ℓ^eam remains high-dimensional, the representation is in terms of three one-dimensional functions φ, ρ, F, which are easily represented, for example, in terms of splines with relatively few parameters (a small numerical sketch is given at the end of this subsection). Such a low-dimensional representation significantly simplifies parameter estimation and vastly improves generalisation of the model outside a training set. Unfortunately, the EAM model and its immediate generalisations [6] have limited ability to capture a complex ab initio PEL. Still, this example inspires our search for representations of the PEL involving parameters that are

• low-dimensional,
• short-ranged.

Following our work on locality of interaction [14,16,17,93] we will focus on a class of tight-binding models as the ab initio reference model. These can be seen either as discrete approximations to density functional theory [38] or alternatively as electronic structure toy models sharing many similarities with the more complex Kohn-Sham DFT and Hartree-Fock models. To control the dimensionality of representations, a natural idea is to consider a body-order expansion,

E_ℓ(r) ≈ V_0 + Σ_{k_1} V_1(r_ℓk_1) + Σ_{k_1<k_2} V_2(r_ℓk_1, r_ℓk_2) + ... + Σ_{k_1<...<k_N} V_N(r_ℓk_1, ..., r_ℓk_N),

where r_ℓk := r_k − r_ℓ and we say that V_n(r_ℓk_1, ..., r_ℓk_n) is an (n+1)-body potential modelling the interaction of a centre atom ℓ and n neighbouring atoms {k_1, ..., k_n}. This expansion was traditionally truncated at body-order three (N = 2) due to the exponential increase in computational cost with N. However, it was recently demonstrated by Shapeev's moment tensor potentials (MTPs) [80] and Drautz's atomic cluster expansion (ACE) [25] that a careful reformulation leads to models with at most linear N-dependence. Indeed, algorithms proposed in [2,80] suggest that the computational cost may even be N-independent, but this has not been proven. Even more striking is the fact that the MTP and ACE models, which are both linear models based on a body-ordered approximation, currently appear to outperform the most advanced nonlinear models in regression and generalisation tests [66,106].
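The following minimal sketch evaluates the EAM site energy above for a toy configuration; the particular choices of φ, ρ and F are ours, purely for illustration.

```python
import numpy as np

def eam_site_energy(r_center, r_neighbours, phi, rho, F):
    """EAM site energy: half-sum of a pair potential over neighbours plus
    an embedding function F of the summed electron-density contributions."""
    d = np.linalg.norm(np.asarray(r_neighbours) - np.asarray(r_center), axis=1)
    return 0.5 * np.sum(phi(d)) + F(np.sum(rho(d)))

# Toy choices for the three one-dimensional functions (illustrative only):
phi = lambda r: np.exp(-2.0 * r)   # pair potential
rho = lambda r: np.exp(-r)         # electron-density contribution
F   = lambda x: -np.sqrt(x)        # embedding function

neighbours = [[1.0, 0.0, 0.0], [0.0, 1.1, 0.0], [-1.05, 0.0, 0.0]]
print(eam_site_energy([0.0, 0.0, 0.0], neighbours, phi, rho, F))
```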
These recent successes are in stark contrast with the "folklore" that body-order expansions generally converge slowly, if at all [10,25,27,46,86]. The fallacy in those observations is typically that they implicitly assume a vacuum cluster expansion (cf. § 2.2). Indeed, our first set of main results in § 2.4 will be to demonstrate that a rapidly convergent body-order approximation can be constructed if one accounts for the chemical environment of the material. We will precisely characterise the convergence of such an approximation as N → ∞, in terms of the Fermi-temperature and the band gap of the material. In the simplest scheme we consider, we achieve this by considering atomic properties [O(H)]_ℓℓ, where H is a tight-binding Hamiltonian and O an analytic function. Approximating O by a polynomial p on the spectrum σ(H) results in an approximation of the atomic property [p(H)]_ℓℓ, which is naturally "body-ordered". To obtain quasi-optimal approximation results, naive polynomial approximation schemes (e.g. Chebyshev) are suitable only in the simplest scenarios. For the insulating case we leverage potential theory techniques which in particular yield quasi-optimal approximation rates on unions of disconnected domains. Our main results are obtained by converting these into approximation results on atomic properties, analysing their qualitative features, and taking care to obtain sharp estimates in the zero-Fermi-temperature limit.

These initial results provide strong evidence for the accuracy of a linear body-order approximation in relatively simple scenarios, and would for example be useful in a study of the mechanical response of single crystals with a limited selection of possible defects. However, they come with limitations that we discuss in the main text. In response, we consider a much more general framework, generalising the theory of bond order potentials [55], that incorporates our linear body-ordered model as well as a range of nonlinear models. We will highlight a specific nonlinear construction with significantly improved theoretical properties over the linear scheme. For both the linear and nonlinear body-ordered approximation schemes we prove that they inherit regularity, symmetries and locality of the original quantity of interest.

Finally, we consider the case of self-consistent tight-binding models such as DFTB [33,59,78]. In this case the highly nonlinear charge equilibration leads in principle to arbitrarily complex intermixing of the nuclei information, and thus arbitrarily high body-order. However, our results on the body-ordered approximations for linear tight-binding models mean that each iteration of the self-consistent field (SCF) iteration can be expressed in terms of a low body-ordered and local interaction scheme. This leads us to propose a self-similar compositional representation of atomic properties that is highly reminiscent of recurrent neural network architectures. Each "layer" of this representation remains "simple" in the sense that we specified above.

Tight binding model

We suppose Λ is a finite or countable index set. For ℓ ∈ Λ, we denote the state of atom ℓ by u_ℓ = (r_ℓ, v_ℓ, Z_ℓ), where r_ℓ ∈ R^d denotes the position, v_ℓ the effective potential, and Z_ℓ the atomic species of ℓ. Moreover, we define r_ℓk := r_k − r_ℓ, r_ℓk := |r_ℓk|, and u_ℓk := (r_ℓk, v_ℓ, v_k, Z_ℓ, Z_k). For functions f of the relative atomic states u_ℓk, the gradient denotes the gradient with respect to the spatial variable:

∇f(u_ℓk) := ∇_ξ f((ξ, v_ℓ, v_k, Z_ℓ, Z_k)) evaluated at ξ = r_ℓk.
The whole configuration is denoted by u = (r, v, Z) = ({r_ℓ}_{ℓ∈Λ}, {v_ℓ}_{ℓ∈Λ}, {Z_ℓ}_{ℓ∈Λ}). For a given configuration u, the tight binding Hamiltonian takes the following form:

(TB) For ℓ, k ∈ Λ and N_b atomic orbitals per atom, we suppose that

H(u)_ℓk = h(u_ℓk) + Σ_m t(u_ℓm, u_km),

where h and t have values in R^{N_b×N_b}, are independent of the effective potential v, and are continuously differentiable with

|h(u_ℓk)| + |∇h(u_ℓk)| ≤ h_0 e^{−γ_0 r_ℓk},  |t(u_ℓm, u_km)| + |∇t(u_ℓm, u_km)| ≤ h_0 e^{−γ_0 (r_ℓm + r_km)},

for some h_0, γ_0 > 0. Moreover, we suppose the Hamiltonian satisfies the natural symmetries: H(u) is symmetric, invariant under isometries and permutations of the configuration, and the functions h, t are independent of the atomic sites ℓ, k, m ∈ Λ themselves (2.3).

Remarks: (ii) Pointwise bounds on |h(u_ℓk)| and |t(u_ℓm, u_km)| are normally automatically satisfied since most linear tight binding models impose finite cut-off radii. Moreover, the assumption on the derivatives |∇h(u_ℓk)| and |∇t(u_ℓm, u_km)| states that there are no long-range interactions in the model. In particular, we are assuming that Coulomb interactions have been screened, a typical assumption in many practical tight binding codes [20,68,71]. (iii) The Hamiltonian is symmetric and thus the spectrum is real. (iv) The operators H(u) and H(Qu) are similar for an isometry Q, and thus have the same spectra. (vi) The entries H(u)_ℓk ∈ R^{N_b×N_b} will be denoted H(u)^{ab}_ℓk for 1 ≤ a, b ≤ N_b. When clear from the context, we drop the argument (u) in the notation.

The assumptions (TB) define a general three-centre tight binding model, whereas if t ≡ 0, a simplification made in the majority of tight binding codes, we say (TB) is a two-centre model [38]. The choice of potential in (TB) defines a hierarchy of tight binding models. If v = const, (TB) defines a linear tight binding model, a simple yet common model [14,16,17,70]. In this case, we implicitly assume that the Coulomb interactions have been screened, a typical assumption made in practice for a wide variety of materials [20,68,71,72]. Supposing instead that v is a function of a self-consistent electronic density, we arrive at a non-linear model such as DFTB [33,59,78]. Abstract variants of these nonlinear models have been analysed, for example, in [93,99]. Through much of this article we will treat r, v as independent inputs into the Hamiltonian, but will discuss their connection and self-consistency in § 2.7.

For a finite system u (that is, with Λ a finite set), we consider analytic observables of the density of states [14,93]: for functions O : R → R that can be analytically continued into an open neighbourhood of σ(H(u)), we consider

O(u) := Σ_s O(λ_s),

where (λ_s, ψ_s) are normalised eigenpairs of H(u). Many properties of the system, including the particle number functional and Helmholtz free energy, may be written in this form [14,16,70,93]. By distributing these quantities amongst atomic positions, we obtain a well-known spatial decomposition [14,16,35,38],

O(u) = Σ_{ℓ∈Λ} O_ℓ(u),  O_ℓ(u) := tr [O(H(u))]_ℓℓ = Σ_s O(λ_s) |[ψ_s]_ℓ|².   (2.4)

For infinite systems, we may define O_ℓ(u) through the thermodynamic limit [14,16] or via the holomorphic functional calculus; see § 4.1.2 for further details. When discussing derivatives of the local observables, we will simplify the notation for the partial derivatives with respect to the atomic degrees of freedom (2.5).

Local observables

Although the results in this paper apply to general analytic observables, our primary interest is in applying them to two special cases. A local observable of particular importance is the electron density; for inverse Fermi-temperature β ∈ (0, +∞] and fixed chemical potential μ, we use the notation of (2.4) to define

ρ_ℓ = F_ℓ^β(u) := tr [f^β(H(u))]_ℓℓ,  where f^β(z) := (1 + e^{β(z−μ)})^{−1} is the Fermi-Dirac function.   (2.6)

Throughout this paper F^β(u) := (F_ℓ^β(u))_{ℓ∈Λ} will denote a vector, and so (2.6) reads ρ = F^β(u).
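As a concrete toy instance of (TB) and the spatial decomposition (2.4), the following sketch assembles a two-centre Hamiltonian (t ≡ 0, one orbital per atom, with an exponential hopping of our own choosing) and evaluates local observables through the spectral decomposition; none of the specific choices below come from the paper.

```python
import numpy as np

def tb_hamiltonian(positions, onsite=0.0, gamma=2.0):
    """Minimal two-centre tight-binding Hamiltonian (t = 0, N_b = 1):
    off-diagonal entries decay exponentially in the interatomic distance,
    mimicking the bound |h(u_lk)| <= h_0 exp(-gamma_0 r_lk)."""
    r = np.abs(positions[:, None] - positions[None, :])
    H = np.exp(-gamma * r)
    np.fill_diagonal(H, onsite)
    return H

positions = np.arange(10, dtype=float)   # a 10-atom chain with unit spacing
H = tb_hamiltonian(positions)

# Local observable O_l(u) = [O(H)]_ll via the spectral decomposition (2.4):
lam, psi = np.linalg.eigh(H)             # eigenpairs (lam_s, psi_s)
O = np.tanh                              # any analytic test observable
O_local = (psi**2) @ O(lam)              # vector of [O(H)]_ll over sites l
print(O_local)
```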
In § 2.7, we consider the case where the effective potential is a function of the electron density (2.6) (that is, v = w(ρ) for some w : R^Λ → R^Λ), which leads to the self-consistent local observables

O_ℓ(u(ρ)),  where u(ρ) := (r, w(ρ), Z).

Remark 2. All the results of this paper also hold for the off-diagonal entries of the density matrix (ρ_ℓk := tr [f^β(H(u))]_ℓk) without any additional work. This fact will be clear from the proofs. It is likely, though, that additional properties related to the off-diagonal decay (near-sightedness) and spatial regularity further improve the "sparsity" of the density matrix. A complete analysis would go beyond the scope of this work.

The second observable we are particularly interested in is the site energy, which allows us to decompose the total potential energy landscape into localised "atomic" contributions. In the grand potential model for the electrons, which is appropriate for large or infinite condensed phase systems [14], it is defined as

G_ℓ^β(u) := tr [g^β(H(u))]_ℓℓ,

where g^β is the grand-potential integrand associated with f^β. The total grand potential is Σ_ℓ G_ℓ^β(u) [14,70]. For β < ∞, the functions F^β(·) and G^β(·) are analytic in a strip of width πβ^{−1} about the real axis [17, Lemma 5.1]. To define the zero-Fermi-temperature observables, we assume that μ lies in a spectral gap (μ ∉ σ(H(u)); see § 2.1.3). In this case, F^β(·) and G^β(·) extend to analytic functions in a neighbourhood of σ(H(u)) for all β ∈ (0, ∞]. In order to describe the relationship between the various constants in our estimates and the inverse Fermi-temperature or spectral gap (in the case of insulators), we will state all of our results for O^β = F^β or G^β. Other analytic quantities of interest can be treated similarly, with constants depending, e.g., on the region of analyticity of the corresponding function z ↦ O(z).

Metals, insulators, and defects

As we can see from (2.4), the structure of the spectrum σ(H(u)) plays a key role in the analysis. Firstly, by (TB), H(u) is a bounded self-adjoint operator on ℓ²(Λ × {1, ..., N_b}) and thus the spectrum is real and contained in some bounded interval. In order to keep the mathematical results general, we will not impose any further restrictions on the spectrum. However, to illustrate the main ideas, we briefly describe typical spectra seen in metals and insulating systems. In the case where u describes a multi-lattice in R^d formed by taking the union of finitely many shifted Bravais lattices, the spectrum σ(H(u)) is the union of finitely many continuous energy bands [57]. That is, there exist continuous functions ε_α : BZ → R on the Brillouin zone BZ, a compact connected subset of R^d, such that

σ(H(u)) = ⋃_α ε_α(BZ).

In particular, in this case, σ(H(u)) = σ_ess(H(u)) is the union of finitely many intervals on the real line. The band structure {ε_α} relative to the position of the chemical potential μ determines the electronic properties of the system [89]. In metals μ lies within a band, whereas for insulators μ lies between two bands in a spectral gap. Schematic plots of these two situations are given in Figure 1. In particular, if u_ref describes a multi-lattice, then, since local perturbations in the defect core are of finite rank, the essential spectrum is unchanged and we obtain finitely many eigenvalues bounded away from the spectral bands. Moreover, a small global perturbation can only result in a small change in the spectrum. Again, a schematic plot of this situation is given in Figure 2.

For the remainder of this paper, we consider the following notation (Definition 1): the spectrum satisfies

σ(H(u)) ⊂ I^− ∪ {λ_1, ..., λ_J} ∪ I^+,   (2.9)

for compact intervals I^± and finitely many discrete eigenvalues λ_j, with max I^− < μ < min I^+.
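A corresponding sketch for the finite-temperature electron density (2.6): the diagonal of the Fermi-Dirac function of a symmetric test Hamiltonian. The random matrix here is a purely illustrative stand-in for H(u).

```python
import numpy as np

def fermi_dirac(z, beta, mu):
    """Fermi-Dirac occupation f_beta(z) = (1 + exp(beta (z - mu)))^-1."""
    return 1.0 / (1.0 + np.exp(beta * (z - mu)))

def local_density(H, beta, mu):
    """Electron density rho_l = [f_beta(H)]_ll, evaluated through the
    eigendecomposition, cf. (2.6)."""
    lam, psi = np.linalg.eigh(H)
    return (psi**2) @ fermi_dirac(lam, beta, mu)

rng = np.random.default_rng(0)
H = rng.standard_normal((8, 8))
H = 0.5 * (H + H.T)                      # symmetric, hence real spectrum
print(local_density(H, beta=10.0, mu=0.0))
```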
Moreover, we define g := min I^+ − max I^− ≥ 0, the gap between the essential spectral bands, and g_def, the (smaller) gap about μ that remains once the discrete eigenvalues {λ_j} are taken into account (2.10). The constants in Definition 1 are also displayed in Figure 2. The constant g in Definition 1 is slightly arbitrary in the sense that, as long as g is chosen compatibly with the constant δ from Proposition 2.1, there exists a finite set {λ_j} as in (2.9). Choosing a smaller g reduces the size of the set {λ_j}.

Vacuum cluster expansion

For a system of M identical particles X_1, ..., X_M, a maximal body-order N, and a permutation invariant energy E = E({X_1, ..., X_M}), we may consider the vacuum cluster expansion

E ≈ Σ_j V^(1)(X_j) + Σ_{j_1<j_2} V^(2)(X_{j_1}, X_{j_2}) + ... + Σ_{j_1<...<j_N} V^(N)(X_{j_1}, ..., X_{j_N}),   (2.12)

where the n-body interaction potentials V^(n) are defined by considering all isolated clusters of j ≤ n atoms:

V^(n)(X_1, ..., X_n) := Σ_{K ⊆ {1,...,n}} (−1)^{n−#K} E({X_k : k ∈ K}).

The expansion (2.12) is exact for N = M. The vacuum cluster expansion is the traditional and, arguably, the most natural many-body expansion of a potential energy landscape. However, in many systems it converges extremely slowly with respect to the body-order N and is thus computationally impractical. An intuitive explanation for this slow convergence is that, when defining the body-order expansion in this way, we are building an interaction law for a condensed or possibly even crystalline phase material from clusters in vacuum, where the bonding chemistry is significantly different. Although this observation appears to be "common knowledge", we were unable to find references that provide clear evidence for it. However, some limited discussions and further references can be found in [10,25,27,46,86]. Our own approach employs an entirely different mechanism, which in particular incorporates environment information and leads to exponential convergence of an N-body approximation. Technically, our approximation is not an expansion; that is, the n-body terms V^(n) of the classical cluster expansion are replaced by terms that depend also on the highest body-order N. We will provide a more technical discussion contrasting our results with the vacuum cluster expansion in § 2.6.

A general framework

Before we consider two specific body-ordered approximations, we present a general framework which both incorporates many (linear-scaling) electronic structure methods from the literature (e.g. the kernel polynomial method (KPM) [82], bond-order potentials (BOP) [26,39,55,74], and quadrature-based methods [69,87,88]), and illustrates the key features needed for a convergent scheme. To that end, we introduce the local density of states (LDOS) [38], which is the (positive) measure D_ℓ supported on σ(H) such that

∫ x^n dD_ℓ(x) = tr [H^n]_ℓℓ  for n ∈ N_0.   (2.13)

Existence and uniqueness follow from the spectral theorem for normal operators (e.g. see [1, Theorem 6.3.3] or [92]). In particular, (2.4) may be written as the integral O_ℓ(u) = ∫ O dD_ℓ. Then, on constructing a (possibly signed) unit measure D_{N,ℓ} with exact first N moments (that is, ∫ x^n dD_{N,ℓ}(x) = tr[H^n]_ℓℓ for n = 1, ..., N), we may define the approximate local observable O_{N,ℓ}(u) := ∫ O dD_{N,ℓ}, and obtain the general error estimate

|O_ℓ(u) − O_{N,ℓ}(u)| ≤ ‖D_ℓ − D_{N,ℓ}‖_op · inf_{p∈P_N} ‖O − p‖_∞,   (2.14)

where P_N denotes the set of polynomials of degree at most N, and ‖·‖_op is the operator norm on a function space (S, ‖·‖_∞). For example, we may take S to be the set of functions analytic on an open set containing C_ℓ, a contour encircling supp(D_ℓ − D_{N,ℓ}). Alternatively, we may consider S = L^∞(supp(D_ℓ − D_{N,ℓ})), leading to the total variation operator norm. Equation (2.14) highlights the key generic features that are crucial ingredients in obtaining convergence results:

• Analyticity. The potential theory results of § 4.1.5 connect the asymptotic convergence rates for polynomial approximation to the size and shape of the region of analyticity of O.
• Spectral Pollution. While supp D_ℓ ⊂ σ(H), this need not be true for D_{N,ℓ}. Indeed, if supp D_{N,ℓ} introduces additional points within the band gap, this may significantly slow the convergence of the polynomial approximation; cf. § 2.6.
• Regularity of D_{N,ℓ}. Roughly speaking, the first term of (2.14) measures how "well-behaved" D_{N,ℓ} is. In particular, if D_{N,ℓ} is positive, then this term is bounded independently of N, whereas if D_{N,ℓ} is a general signed measure, then this factor contributes to the asymptotic convergence behaviour.
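The moment condition (2.13) is easy to verify numerically: for a small symmetric test matrix, the LDOS weights |[ψ_s]_ℓ|² at the eigenvalues reproduce the diagonal matrix powers [H^n]_ℓℓ. A short sketch, with a random test Hamiltonian of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(1)
H = rng.standard_normal((6, 6))
H = 0.5 * (H + H.T)
lam, psi = np.linalg.eigh(H)
l = 2                                  # site index

# LDOS of site l: point masses of weight |psi_s[l]|^2 at lam_s, cf. (2.13)
weights = psi[l, :]**2

for n in range(4):
    moment_matrix = np.linalg.matrix_power(H, n)[l, l]  # [H^n]_ll
    moment_ldos = np.sum(weights * lam**n)              # int x^n dD_l(x)
    print(n, np.isclose(moment_matrix, moment_ldos))    # True for every n
```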
In the sections to follow, we introduce linear (§ 2.4) and nonlinear (§ 2.5) approximation schemes that fit into this general framework. Moreover, in § 2.6, we also write the vacuum cluster expansion as an integral against an approximate LDOS. In order to complement the intuitive explanation for the slow convergence of the vacuum cluster expansion, we investigate which of the requirements listed above fail. In the appendices, we review other approximation schemes that fit into this general framework, such as the quadrature method (Appendix D), numerical bond order potentials (Appendix E), and the kernel polynomial method (Appendix F).

Linear body-ordered approximation

We will construct two distinct but related many-body approximation models. To construct our first model we exploit the observation that polynomial approximations of an analytic function correspond to body-order expansions of an observable. An intuitive approach is to write the local observable in terms of its Chebyshev expansion and truncate at some maximal polynomial degree. The corresponding projection operator is a simple example of the kernel polynomial method (KPM) [82] and the basis for analytic bond order potentials (BOP) [74]. We discuss in Appendix F that these schemes put more emphasis on the approximation of the local density of states (LDOS) and, in particular, exploit particular features of the Chebyshev polynomials to obtain a positive approximate LDOS. Since our focus is instead on the approximation of observables, we employ a different approach that is tailored to specific properties of the band structure and leads to superior convergence rates for these quantities.

For a set of N + 1 interpolation points X_N = {x_j}_{j=0}^N and a complex-valued function O defined on X_N, we denote by I_{X_N} O the degree-N polynomial interpolant of x ↦ O(x) on X_N. This gives rise to the body-ordered approximation

O_{N,ℓ}(u) := tr [ (I_{X_N} O)(H(u)) ]_ℓℓ.   (2.15)

We may connect (2.15) to the general framework in § 2.3 by defining

D_{N,ℓ}^lin := Σ_{j=0}^N tr [L_j(H)]_ℓℓ δ_{x_j},   (2.16)

where L_j are the node polynomials corresponding to X_N = {x_j}_{j=0}^N (that is, L_j are the polynomials of degree N with L_j(x_i) = δ_ij) (2.17).

Proposition 2.2. The approximation (2.15) has finite body order. Indeed, each term in the expansion

[H^n]_ℓℓ = Σ_{ℓ_1, ..., ℓ_{n−1}} H_{ℓℓ_1} H_{ℓ_1ℓ_2} ⋯ H_{ℓ_{n−1}ℓ}   (2.18)

depends on the central atom ℓ, the n − 1 neighbouring sites ℓ_1, ..., ℓ_{n−1}, and the at most n additional sites arising from the three-centre summation in the tight binding Hamiltonian (TB). In particular, (2.15) has body order at most 2N. See § 4.2 for a complete proof including an explicit definition of the potentials V_{nN}.
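The following sketch implements the linear scheme (2.15) for a toy Hamiltonian, using Chebyshev interpolation points scaled to an interval containing σ(H); the setup is ours and is meant only to exhibit the decay of the error in N for an analytic observable (for insulators, the paper's adapted interpolation sets would replace the Chebyshev points).

```python
import numpy as np
from numpy.polynomial import chebyshev as Ch

def linear_body_ordered(H, O, N):
    """Linear scheme (2.15): interpolate O at N+1 Chebyshev points on an
    interval containing sigma(H), then evaluate the diagonal of the
    interpolating polynomial applied to H via the spectral calculus."""
    lam, psi = np.linalg.eigh(H)
    a, b = lam.min(), lam.max()
    x = 0.5 * (a + b) + 0.5 * (b - a) * Ch.chebpts1(N + 1)
    coef = Ch.chebfit(x, O(x), N)        # degree-N interpolant I_X O
    p = Ch.chebval(lam, coef)            # p(H) acts as p on the eigenvalues
    return (psi**2) @ p                  # vector of [p(H)]_ll

rng = np.random.default_rng(2)
H = rng.standard_normal((8, 8))
H = 0.5 * (H + H.T)
O = np.tanh
lam, psi = np.linalg.eigh(H)
exact = (psi**2) @ O(lam)
for N in (2, 4, 8, 12):
    err = np.max(np.abs(linear_body_ordered(H, O, N) - exact))
    print(N, err)    # error decreases rapidly in N for analytic O
```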
If one uses Chebyshev points as the basis for the body-ordered approximation (2.15), the rates of convergence depend on the size of the largest Bernstein ellipse (that is, an ellipse with foci ±1) contained in the region of analyticity of z ↦ O(z) [95]. This leads to an exponentially convergent body-order expansion in the metallic finite-temperature case (see § 4.1.4 for the details). However, the resulting estimates deteriorate in the zero-temperature limit. Instead, we apply results of potential theory to construct interpolation sets X_N that are adapted to the spectral properties of the system (see § 4.1.5 for examples) and (i) do not suffer from spectral pollution, and (ii) (asymptotically) minimise the total variation of D_{N,ℓ}^lin, which, in this context, is the Lebesgue constant [95] for the interpolation operator I_{X_N}. This leads to rapid convergence of the body-order approximation based on (2.15).

Theorem 2.3. The interpolation sets X_N depend only on the intervals I^−, I^+ from Definition 1 (see also Figure 2), and

|O_ℓ^β(u) − O_{N,ℓ}^β(u)| ≤ C_1 e^{−γ_N N} ≤ C_2 e^{−ηN},

where O^β = F^β or G^β and C_1, C_2, η > 0 are independent of N. The asymptotic convergence rate γ := lim_{N→∞} γ_N is positive and degrades as the band gap g closes (at finite β it remains bounded below in terms of β^{−1}). In this asymptotic relation, we assume that the limit g → 0 is approached symmetrically about the chemical potential μ.

Remark 3. Higher derivatives may be treated similarly under the assumption that higher derivatives of the tight binding Hamiltonian (TB) exist and are short-ranged.

The role of the point spectrum

We now turn towards the important scenario when a localised defect is embedded within a homogeneous crystalline solid. Recall from § 2.1.3 (see in particular Fig. 2) that this gives rise to a discrete spectrum, which "pollutes" the band gap [70]. Thus, the spectral gap is reduced and a naive application of Theorem 2.3 leads to a reduction in the convergence rate of the body-ordered approximation. We now improve these estimates by showing that, away from the defect, we obtain improved pre-asymptotics, reminiscent of similar results for locality of interaction [17]. In the results that follow, we fix u satisfying Definition 1. While improved estimates may be obtained by choosing the discrete eigenvalues {λ_j} as interpolation points, leading to asymptotic exponents that are independent of the defect, in practice this requires full knowledge of the point spectrum. Since the point spectrum within the spectral gap depends on the whole atomic configuration, the approximate quantities of interest corresponding to these interpolation operators would no longer satisfy Proposition 2.2.

Remark 4. This phenomenon has been observed in the context of Krylov subspace methods for solving linear equations Ax = b, where outlying eigenvalues delay the convergence by O(1) steps without affecting the asymptotic rate [30]. Indeed, since the residual after n steps may be written as r_n = p_n(A) r_0, where p_n is a polynomial of degree n, there is a close link between polynomial approximation and the convergence of Krylov methods. On the other hand, we may use the exponential localisation of the eigenvectors corresponding to isolated eigenvalues to obtain pre-factors that decay exponentially as the distance |r_ℓ| from the defect core grows.

Theorem 2.4 (informal). For O^β = F^β or G^β, the approximation error at site ℓ satisfies an estimate with rate γ_N^def (accounting for the defect states) together with a term whose pre-factor decays exponentially in the distance of r_ℓ from the defect core and whose rate γ_N is the defect-free rate of Theorem 2.3; the constants C_3, C_4 > 0 are independent of N. The asymptotic convergence rate γ_def := lim_{N→∞} γ_N^def is positive. In these asymptotic relations, we assume that the limits g_def, g → 0 are approached symmetrically about the chemical potential μ.

In practice, Theorem 2.4 means that, for atomic sites away from the defect core, the observed pre-asymptotic error estimates may be significantly better than the asymptotic convergence rates obtained in Theorem 2.3.
Remark 5. (Locality) (i) By Theorem 2.4 and the locality estimates for the exact observables O^β [17], we immediately obtain corresponding locality estimates for the approximate quantities. (ii) We investigate another type of locality in Appendix B, where we show that various truncation operators result in approximation schemes that only depend on a small atomic neighbourhood of the central site. An exponential rate of convergence as the truncation radius tends to infinity is obtained.

Remark 6. (Connection to the general framework) The fact that the exponents in Theorem 2.4 depend on the discrete eigenvalues of H(u) can be seen from the general estimate (2.14) applied to the approximate LDOS D_{N,ℓ}^lin from (2.16):

• Spectral Pollution. We choose the interpolation points so that the support of D_{N,ℓ}^lin lies within σ(H(u)), and so spectral pollution does not play a role.
• Regularity of D_{N,ℓ}^lin. The total variation of D_{N,ℓ}^lin can be estimated by the Lebesgue constant [95] for the interpolation operator I_{X_N}. This quantity depends on the discrete eigenvalues within the band gap.

A non-linear representation

The method presented in § 2.4 approximates local quantities of interest by approximating the integrand O : C → C with polynomials. As we have seen, this leads to approximation schemes that are linear functions of the spatial correlations {[H^n]_ℓℓ}_{n∈N}. In this section, we construct a non-linear approximation related to bond-order potentials (BOP) [26,39,55] and show that the added non-linearity leads to improved asymptotic error estimates that are independent of the discrete spectra lying within the band gap. In this way, the nonlinearity captures "spectral information" from H rather than only approximating O : C → C without reference to the Hamiltonian.

Applying the recursion method [49,50], a reformulation of the Lanczos process [61], we obtain a tri-diagonal (Jacobi) operator T on ℓ²(N_0) whose spectral measure is the LDOS D_ℓ [91] (see § 4.3.1 for the details). We then truncate T by taking its N × N principal submatrix T_N and define the approximate LDOS D_{N,ℓ}^nonlin as the spectral measure of T_N. By showing that the first N moments of D_{N,ℓ}^nonlin are exact, we are able to apply (2.14) to obtain the error estimates of Theorem 2.5. The asymptotic behaviour of the exponent in these estimates follows by proving that the spectral pollution of D_{N,ℓ}^nonlin in the band gap is sufficiently mild.

Remark 7. It is important to note that the nonlinear map defining this scheme can be constructed without knowledge of H; by contrast, as we have seen, if the discrete eigenvalues are known a priori, then Theorem 2.5 is immediate from Theorem 2.4 by adding finitely many additional interpolation points on the discrete spectrum. In particular, the fact that this is a material-agnostic nonlinearity has potentially far-reaching consequences for material modelling.

Remark 9. (Quadrature Method) Alternatively, we may use the sequence of orthogonal polynomials [40] corresponding to D_ℓ as the basis for a Gauss quadrature rule to evaluate local observables. This procedure, called the quadrature method [51,69], is a precursor of the bond order potentials. As outlined in Appendix D, we show that it produces an alternative scheme also satisfying Theorem 2.5. The linear-scaling spectral Gauss quadrature (LSSGQ) method [87] is based upon this idea, albeit in the context of finite-difference approximations to the DFT Hamiltonian. However, since the resulting discrete Hamiltonian in [87] is banded, the analysis of the present work may be readily applied. Therefore, Theorem 2.5 provides rigorous justification for the exponential rate of convergence with increasing body-order (number of quadrature points), complementing the intuitive explanations and numerical experiments of [87]. Since the convergence results are independent of system size, we obtain a linear-scaling method, a result that complements the intuitive explanation [87, (56)] and the numerical evidence [87, Fig. 5].
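A minimal sketch of the recursion-method construction behind D_{N,ℓ}^nonlin: a Lanczos iteration started from the unit vector e_ℓ produces the Jacobi matrix, whose N × N truncation yields an N-point quadrature for the LDOS. The test matrix and observable are our own illustrative choices, and no reorthogonalisation is performed, which is adequate only for small N.

```python
import numpy as np

def nonlinear_body_ordered(H, O, l, N):
    """Recursion method: Lanczos from e_l builds the Jacobi matrix T whose
    spectral measure at the first coordinate is the LDOS D_l; truncating T
    to N x N and integrating O against its spectral measure gives an
    N-point Gauss-type quadrature for int O dD_l."""
    n = H.shape[0]
    a, b = np.zeros(N), np.zeros(N - 1)
    q_prev, q = np.zeros(n), np.zeros(n)
    q[l] = 1.0
    beta = 0.0
    for j in range(N):
        w = H @ q - beta * q_prev      # three-term recurrence
        a[j] = q @ w
        w -= a[j] * q
        if j < N - 1:
            beta = np.linalg.norm(w)
            b[j] = beta
            q_prev, q = q, w / beta
    T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
    theta, V = np.linalg.eigh(T)
    return np.sum(V[0, :]**2 * O(theta))   # int O dD_{N,l}^nonlin

rng = np.random.default_rng(3)
H = rng.standard_normal((20, 20))
H = 0.5 * (H + H.T)
lam, psi = np.linalg.eigh(H)
exact = psi[0, :]**2 @ np.tanh(lam)        # [tanh(H)]_00
for N in (2, 4, 8, 12):
    print(N, abs(nonlinear_body_ordered(H, np.tanh, 0, N) - exact))
```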
Remark 10. (Convergence of Derivatives) In this more complicated nonlinear setting, obtaining results such as (2.21) is more subtle. We require an additional assumption on D , which we believe may typically be satisfied, but we currently cannot justify it and have therefore postponed this discussion to Appendix C. We briefly mention, however, that if D is absolutely continuous (e.g., in periodic systems), we obtain

The vacuum cluster expansion revisited

For ℓ ∈ Λ, we denote by H ;K the Hamiltonian matrix corresponding to the finite subsystem { } ∪ K ⊂ Λ: for k 1 , k 2 ∈ { } ∪ K , For an observable O, the vacuum cluster expansion as detailed in § 2.2 is constructed as follows: Therefore, on defining the spectral measure D ;K in terms of the normalised eigenpairs of H ;K , we may write the vacuum cluster expansion as in § 2.3: While D N ,vac is a generalised signed measure (with values in R ∪ {±∞}), all moments are finite. More specifically, if we absorb the effective potential and two-centre terms into the three-centre summation by writing H k 1 k 2 = ∑ m H k 1 k 2 m , see (4.16), we have Equation (2.30), which follows from the proof of Proposition 2.2, see (4.19). In particular, the first N moments of D N ,vac are exact. Therefore, we may apply the general error estimate (2.14) and describe the various features of D N ,vac which provide mathematical intuition for the slow convergence of the vacuum cluster expansion:
• Spectral Pollution. When splitting the system up into arbitrary subsystems, as is the case in the vacuum cluster expansion, one expects significant spectral pollution in the band gaps, leading to a reduction in the convergence rate,
• Regularity of D N ,vac . The approximate LDOS is a linear combination of countably many Dirac deltas and does not have bounded variation. Moreover, D N ,vac has values in R ∪ {±∞}.

Self-consistency

Throughout this section, we suppose that the effective potential is a function of a self-consistent electron density: that is, (2.6) becomes the following nonlinear equation: where u(ρ) := (r, w(ρ), Z). We shall assume that the effective potential satisfies the following:

Remark 11. (i) For a smooth function w : R → R, the effective potential w(ρ) := w(ρ ) satisfies (EP). This leads to the simplest abstract nonlinear tight binding models discussed in [93,99]. A potential involving terms r m e −τ r m (for some τ > 0) also fits into this general framework. This setting already covers many important modelling scenarios and also serves as a crucial stepping stone towards charge equilibration under full Coulomb interaction, which goes beyond the scope of the present work.

The main result of this section is the following: if there exists a self-consistent solution ρ to (2.31), then we can approximate ρ with self-consistent solutions to the following approximate self-consistency equation, for sufficiently large N . The operator I X N F β is a linear body-ordered approximation of the form we analysed in detail in § 2.4. To do this, we require a natural stability assumption on the electronic structure problem, which was employed for example in [93,99,100]:

Remark 12.
(Stability) (i) The stability condition of Theorem 2.6 is a minimal starting assumption that naturally arises from the analysis in [93,99,100]. For example, if ρ is a stable self-consistent electron density, then there exists φ (m) ∈ ℓ2 (Λ) such that [93]: (ii) As noted in [99] (in a slightly simpler setting), the stability condition of Theorem 2.6 is automatically satisfied for multi-lattices with ∇w positive semi-definite. In fact, in this case the stability operator is negative semi-definite. Here γ N are the constants from Theorem 2.3 applied to u(ρ ∗ ).

In order for this result to be of practical use, we need to solve the non-linear equation (2.32) for the electron density via a self-consistent field (SCF) procedure. Supposing we have the electron density ρ i and corresponding state u i := u(ρ i ) after i iterations, we diagonalise the Hamiltonian H(u i ) and hence evaluate the output density ρ out = I X N F β (u i ). At this point, since the simple iteration ρ i+1 = ρ out does not converge in general, a mixing strategy, possibly combined with Anderson acceleration [19], is used in order to compute the next iterate. The analysis of such mixing schemes is a major topic in electronic structure theory and numerical analysis in general, and so we only present a small step in this direction. A more thorough treatment of these SCF results is beyond the scope of this work. See [12,53,63] for recent results in the context of Hartree-Fock and Kohn-Sham density functional theory. For a recent review of SCF in the context of density functional theory, see [101].

Remark 13. It is clear from the proofs of Theorems 2.6 and 2.9 that, as long as the approximate scheme F β,N satisfies the analogous estimate, then we may approximate (2.31) with approximate self-consistent solutions ρ N = F β,N (u(ρ N )). In particular, as long as we have the estimate from Remark 10 (see Appendix C for the technical details), then we may use the nonlinear approximation scheme N from Theorem 2.5 in Theorems 2.6 and 2.9. In this case, we obtain error estimates that are (asymptotically) independent of the discrete spectrum.

Remark 14. In the linear-scaling spectral Gauss quadrature (LSSGQ) method [87], a self-consistent field iteration analogous to (2.32) is proposed. In particular, with the caveats outlined in Remark 13 taken into consideration, Theorem 2.6 goes some way towards rigorously justifying the exponential rate of convergence observed numerically in [87, Fig. 4].

Conclusions and Discussion

The main contribution of this work is a sequence of rigorous results about body-ordered approximations of a wide class of properties extracted from tight-binding models for condensed phase systems, the primary example being the potential energy landscape. Our results demonstrate that exponentially fast convergence can be obtained, provided that the chemical environment is taken into account. In the spirit of our previous results on the locality of interaction [16,17,93], these provide further theoretical justification (albeit qualitative) for widely assumed properties of atomic interactions. More broadly, our analysis illustrates how to construct general low-dimensional but systematic representations of high-dimensional complex properties of atomistic systems. Our results, as well as potential generalisations, serve as a starting point towards a rigorous end-to-end theory of multi-scale and coarse-grained models, including but not limited to machine-learned potential energy landscapes.
In the following paragraphs we make further remarks on the potential applications of our results, and on some apparent limitations of our analysis.

Representation of atomic properties

Our initial motivation for studying the body-order expansion was to explain the (unreasonable?) success of machine-learned interatomic potentials [5,8,80]; our remarks will focus on this topic, however in principle they apply more generally. Briefly, given an ab initio potential energy landscape (PEL) E QM for some material, one formulates a parameterised interatomic potential and then "learns" the parameters θ by fitting them to observations of the reference PEL E QM . A great variety of such parameterisations exist, including but not limited to neural networks [8], kernel methods [5] and symmetric polynomials [2,25,80]. Symmetric polynomials are linear regression schemes where each basis function has a natural body-order attached to it. It is particularly striking that for very low body-orders of four to six these schemes are able to match and often outperform the more complex nonlinear regression schemes [66,80,106]. Our analysis in the previous sections provides a partial explanation for these results, by justifying why one may expect that a reference ab initio PEL intrinsically has a low body-order. Moreover, classical approximation theory can now be applied to the body-ordered components, as they are finite-dimensional, to obtain new approximation results where the curse of dimensionality is alleviated.

Our results on nonlinear representations are less directly applicable to existing MLIPs, but rather suggest new directions to explore. Still, some connections can be made. The BOP-type construction of § 2.5 points towards a blending of machine-learning and BOP techniques that has not, to the best of our knowledge, been explored. A second interesting connection is to the overlap-matrix based fingerprint descriptors (OMFPs) introduced in [105], where a global spectrum for a small subcluster is used as a descriptor, while (3.1) can be understood as taking the projected spectrum as the descriptor. Thus, Theorem 2.5 suggests (1) an interesting modification of OMFPs which comes with guaranteed completeness to describe atomic properties; and (2) a possible pathway towards proving completeness of the original OMFPs. Finally, our self-consistent representation of § 2.7 motivates how to construct compositional models, reminiscent of artificial neural networks, but with minimal nonlinearity that is moreover physically interpretable. Although we did not pursue it in the present work, this is a particularly promising starting point to incorporate meaningful electrostatic interaction into the MLIPs framework.

Linear body-ordered approximation: the pre-asymptotic regime

Possibly the most significant limitation of our analysis of the linear body-ordered approximation scheme is that the estimates deteriorate when defects cause a pollution of the point spectrum. Here, we briefly demonstrate that this appears to be an asymptotic effect, while in the pre-asymptotic regime this deterioration is not noticeable. To explore this we choose a union of intervals E ⊇ σ (H) and a polynomial P N of degree N and note We then construct interpolation sets (Fejér sets) such that the corresponding polynomial interpolant gives the optimal asymptotic approximation rates (for details of this construction, see § 4.1.5-§ 4.1.8).
We then contrast this with a best L ∞ (E)-approximation, and with the nonlinear approximation scheme from Theorem 2.5. We will observe that the non-linearity leads to improved asymptotic but comparable pre-asymptotic approximation errors.

Fig. 3. Approximation errors for Chebyshev projection (green), polynomial interpolation in Fejér sets on E j (black), best L ∞ (E j ) polynomial approximation (blue), and, for j = 2, errors in the nonlinear approximation scheme (red). We also plot the corresponding predicted asymptotic rates (from (4.5) and (4.15)).

As a representative scenario we consider the Fermi-Dirac distribution F β (z) = (1 + e βz ) −1 with β = 100, and both the "defect-free" case E 1 and the case E 2 obtained by adding the interval [c, d]. Then, for fixed polynomial degree N and j ∈ {1, 2}, we construct the (N + 1)-point Fejér set for E j and the corresponding polynomial interpolant I j,N F β . Moreover, we consider a polynomial P j,N of degree N minimising the right hand side of (3.2) for E = E j . Then, in Figure 3, we plot the errors F β − I j,N F β L ∞ (E j ) and F β − P j,N L ∞ (E j ) for both j = 1 (Fig. 3a) and j = 2 (Fig. 3b) against the polynomial degree N , together with the theoretical asymptotic convergence rates for best L ∞ (E j ) polynomial approximation (4.15). What we observe is that, as expected, introducing the interval [c, d] into the approximation domain drastically affects the asymptotic convergence rate and the errors in the approximation based on interpolation. While the best approximation errors follow the asymptotic rate for larger polynomial degree, it appears that, pre-asymptotically, the errors are significantly reduced. We also see that the approximation errors are significantly better than the general error estimate. Moreover, in Figure 3b, we plot the errors when using a nonlinear approximation scheme satisfying Theorem 2.5. In this simple experiment, we consider a Gauss quadrature rule N corresponding to a model measure D . While D does not correspond to a physically relevant Hamiltonian, the same procedure may be carried out for any measure supported on E 1 with supp D ∩ [c, d] finite. Then, plotting the errors |F β − N |, we observe improved asymptotic convergence rates that agree with that of the "defect-free" case from Figure 3a. However, the improvement is only observed in the asymptotic regime, which corresponds to body-orders never reached in practice.

Preliminaries

Here, we introduce the concepts needed in the proofs of the main results.

Hermite integral formula

If, in addition, C encircles {z}, then The proof of these facts is a simple application of Cauchy's integral formula [3,95].

Resolvent calculus

Here C is a simple closed positively oriented contour (or system of contours) contained in the region of analyticity of O and encircling the spectrum σ (H). The following Combes-Thomas resolvent estimate [21] will play a key role in the analysis: Then, there exists a constant C > 0 such that, where γ CT := c min{1, d} and c > 0 depends on h 0 , γ 0 , d and min ≠k r k . Proof. A proof with γ CT depending instead on dist(z, σ (H(u))) can be found in [16]. A low-rank update formula leads to the improved "defect-independent" result [17] where the exponent only depends on the distance between z and the reference spectrum. See [93] for an explicit description of γ CT in terms of the constants γ 0 , d and the non-interpenetration constant min ≠k r k .
A key observation for arguments involving forces (or more generally, derivatives of the analytic quantities of interest) is that the Combes-Thomas estimate allows us to bound derivatives of the resolvent operator: here γ CT is the Combes-Thomas constant from Lemma 1 and γ 0 is the constant from (TB). Proof. This result can be found in the previous works [14,16,17], but we give a brief sketch for completeness. Derivatives of the resolvent have the following form: The result follows by applying the Combes-Thomas resolvent estimates together with the fact that the Hamiltonian is short-ranged (TB). Assuming that the Hamiltonian has higher derivatives that are also short-ranged, higher order derivatives of the resolvent can be treated similarly [16].

Local observables

Firstly, we note that F β ( · ) is analytic away from the simple poles at μ + iπβ −1 (2Z + 1). Moreover, G β ( · ) can be analytically continued onto the open set C \ {μ + ir : r ∈ R, |r| ≥ πβ −1 } [17]. Therefore, we may consider (4.3) with O = F β or G β and a contour C β encircling σ (H) and avoiding {μ + ir : r ∈ R, |r| ≥ πβ −1 }. Therefore, we may choose C β so that the constant d, from Lemma 1, is proportional to β −1 . Moreover, if there is a spectral gap, the constant d is uniformly bounded below by a positive constant multiple of g as β → ∞. In the case of insulators at zero Fermi-temperature, we take C ∞ encircling σ (H(u)) ∩ (−∞, μ) and avoiding the rest of the spectrum. Therefore, we may choose C ∞ so that the constant d, from Lemma 1, is proportional to g.

Chebyshev Projection and Interpolation in Chebyshev Points

For O β = F β or G β , these estimates give an exponential rate of convergence with exponent depending on ∼ β −1 . Indeed, after scaling H so that the spectrum is contained in [−1, 1], we conclude by directly applying (4.5). The same estimate also holds for I N (or any polynomial). For full details of all the statements made in this subsection, see [95].

Classical logarithmic potential theory

In this section, we give a very brief introduction to classical potential theory in order to lay out the key notation. For a more thorough treatment, see [75] or [37,62,76,95]. It can be seen from the Hermite integral formula (4.2) that the approximation error for polynomial interpolation may be determined by taking the ratio of the size of the node polynomial ω X at the approximation points to the size of ω X along an appropriately chosen contour. Logarithmic potential theory provides an elegant mechanism for choosing the interpolation points so that the asymptotic behaviour of ω X can be described. We suppose that E ⊂ C is a compact set. We will see that choosing the interpolation nodes so as to maximise the geometric mean of pairwise distances provides a particularly good approximation scheme: Any set F n ⊂ E attaining this maximum is known as a Fekete set. It can be shown that the quantities δ n (E) form a decreasing sequence and thus converge to what is known as the transfinite diameter: τ (E) := lim n→∞ δ n (E). We let ω n (z) denote the node polynomial corresponding to a Fekete set and note that Therefore, rearranging (4.8), we obtain lim n→∞ ‖ω n ‖ 1/n L ∞ (E) ≥ τ (E). In fact, this inequality can be replaced with equality, showing that Fekete sets allow us to describe the asymptotic behaviour of the node polynomials on the domain of approximation.
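The geometric mean of pairwise distances from (4.7) is easy to evaluate for concrete node families. The following sketch (our toy illustration on E = [−1, 1]; the comparison sets are our choice) computes it for Chebyshev and equi-spaced points.

```python
import numpy as np
from itertools import combinations

def geometric_mean_distance(pts):
    """Geometric mean of pairwise distances, cf. the quantity in (4.7)."""
    logs = [np.log(abs(p - q)) for p, q in combinations(pts, 2)]
    return float(np.exp(np.mean(logs)))

for n in [5, 10, 20, 40, 80]:
    cheb = np.cos(np.arange(n) * np.pi / (n - 1))  # Chebyshev points on [-1, 1]
    equi = np.linspace(-1.0, 1.0, n)               # equi-spaced comparison set
    print(n, geometric_mean_distance(cheb), geometric_mean_distance(equi))
```

Running this, the Chebyshev values approach cap([−1, 1]) = 1/2, the transfinite diameter of the interval, while one can check that the equi-spaced values converge to a strictly smaller limit (≈ 0.45), reflecting the fact that the uniform distribution is not the equilibrium measure.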
To extend these results, it is useful to recast the maximisation problem (4.7) into the following minimisation problem, describing the minimal logarithmic energy attained by n particles lying in E with the repelling force 1/|z i − z j | between particles i and j lying at positions z i and z j , respectively: Fekete sets can therefore be seen as minimal energy configurations and described by the normalised counting measure ν n := (1/n) ∑ n j=1 δ z j where F n = {z j } n j=1 . The minimisation problem (4.9) may be extended to general unit Borel measures μ supported on E by defining the logarithmic potential and corresponding total energy. The infimum of the energy over the space of unit Borel measures supported on E, known as the Robin constant for E, will be denoted −∞ < V E ≤ +∞. The capacity of E is defined as cap(E) := e −V E and is equal to the transfinite diameter [34]. Using a compactness argument, it can be shown that there exists an equilibrium measure ω E with I (ω E ) = V E and, in the case V E < ∞, by the strict convexity of the integral, ω E is unique [77]. In addition, U ω E (z) ≤ V E for all z ∈ C, with equality holding on E except on a set of capacity zero (we say this property holds quasi-everywhere). Moreover, if cap E > 0, then it can be shown that the normalised counting measures ν n corresponding to a sequence of Fekete sets weak-* converge to ω E . Since U ν n (z) = (1/n) log(1/|ω n (z)|), the weak-* convergence allows one to conclude that, uniformly on compact subsets of C \ E. Here, we have defined the Green's function g E (z) := V E − U ω E (z), which describes the asymptotic behaviour of the node polynomials corresponding to Fekete sets. We therefore wish to understand the Green's function g E .

Fig. 4. The Schwarz-Christoffel mapping G E for E = [−1, −ε] ∪ [ε, 1]; we also plot the image of a 10 × 10 equi-spaced grid. A parameter problem is solved in order to obtain z 3 , and thus ω 3 and ω 2 = ω 4 , whereas the other constants are fixed. Here, we take z 1 = −1, z 2 = −ε, z 4 = ε, z 5 = 1, ω 1 = iπ, ω 5 = 0 with ε = 0.3.

Construction of the Green's function

Now we restrict our attention to the particular case where E ⊂ R is a union of finitely many compact intervals of non-zero length. It can be shown that the Green's function g E satisfies the following Dirichlet problem on C \ E [75]: In fact, it can be shown that (4.11) admits a unique solution [75] and thus (4.11) is an alternative definition of the Green's function. Using this characterisation, it is possible to explicitly construct the Green's function g E as follows. In the upper half plane, G E (min E) = iπ and G E (max E) = 0. Using the symmetry of E with respect to the real axis, we may extend Re(G E (z)) to the whole complex plane via the Schwarz reflection principle. Then, one can easily verify that this analytic continuation satisfies (4.11). Since the image of G E is a (generalised) polygon, z → G E (z) is an example of a Schwarz-Christoffel mapping [29]. See Figure 4 for the case E = [−1, −ε] ∪ [ε, 1]. We shall briefly discuss the construction of the Schwarz-Christoffel mapping G E for E = [−1, ε − ] ∪ [ε + , 1]. We define the pre-vertices z 1 = −1, z 2 = ε − , z 4 = ε + , z 5 = 1 and wish to construct a conformal map G E with G E (z k ) = ω k as in Figure 4. For simplicity, we also define z 0 := −∞ and z 6 := ∞ and observe that, because the image is a polygon, arg G ′ E (z) must be constant on each interval (z k−1 , z k ) and (z k , z k+1 ), and α k π is the interior angle of the infinite slit strip at vertex ω k (that is, α 1 = α 2 = α 4 = α 5 = 1/2 and α 3 = 2).
After defining z α := |z| α e iα arg z where arg z ∈ (−π, π], we can see that for z ∈ (z k−1 , z k ), we have arg ∏ 5 j=k (z − z j ) α j −1 = ∑ 5 j=k (α j − 1)π, and so the jump in the argument of z → ∏ 5 j=1 (z − z j ) α j −1 is (1 − α k )π at z k , as in (4.12). Therefore, integrating this expression, we obtain Since G E (1) = A, we take A = 0 (to ensure (4.11c) holds). Moreover, since the real part of the integral is ∼ log |z| as |z| → ∞, we apply (4.11b) to conclude B = 1. Finally, we can choose z 3 such that Re G E (z) = 0 for all z ∈ E; that is, For more details, see [37]. We use the Schwarz-Christoffel toolbox [29] in Matlab to evaluate (4.13) and plot Figure 5. For the simple case E := [−1, 1], by the same analysis, we can disregard z 2 , z 3 , z 4 and ω 2 , ω 3 , ω 4 and integrate the corresponding expression to obtain the closed form G [−1,1] (z) = log(z + √(z − 1) √(z + 1)). A similar analysis allows one to construct conformal maps from the upper half plane to the interior of any polygon. For further details, rigorous proofs and numerical considerations, see [31].

Interpolation nodes

The only difficulty in obtaining (4.10) in practice is the fact that Fekete sets are difficult to compute. An alternative, based on the Schwarz-Christoffel mapping G E , is given by Fejér points. For equally spaced points {ζ j } n j=1 on the interval i[0, π], the n th Fejér set is defined by {G −1 E (ζ j )} n j=1 . Fejér sets are also asymptotically optimal in the sense that (4.10) is satisfied, where ω n is now the node polynomial corresponding to the n-point Fejér set. Another approach is to use Leja points, which are generated by the following algorithm: for fixed z 1 , . . . , z n , the next interpolation node z n+1 is constructed by maximising ∏ n j=1 |z j − z| over all z ∈ E. Sets of this form are also asymptotically optimal [90] for any choice of z 1 ∈ E. Since we have fixed the previous nodes z 1 , . . . , z n , the maximisation problem for constructing z n+1 is much simpler than that of (4.7). More generally, if the normalised counting measure corresponding to a sequence of sets {z j } n j=1 ⊂ E weak-* converges to the equilibrium measure ω E , then the corresponding node polynomials satisfy (4.10).

Fig. 5. Equi-potential curves C r k := {z ∈ C : e g E (z) = r k } for both metals (a) and insulators (b), where (1/2)(r k − r −1 k ) = kπ/β for k ∈ {1, 2, 3, 4, 5} and β = 10. In the case of metals (a), the equi-potential curves agree with Bernstein ellipses. We also plot the poles of F β ( · ) which determine the maximal admissible integration contours: for (a), we can take contours C r for all r < r 1 and, for (b), the contour C r 2 can be used for all positive Fermi-temperatures (we have chosen the gap carefully so that C r 2 self-intersects at μ). Shown in black crosses are 30 Fejér points in each case. To create these plots we consider an integral formula for the Green's function z → g E (z) [37] and use the Schwarz-Christoffel Matlab toolbox [28,29] to approximate these integrals.

For the simple case where E = [−1, 1], many systems of zeros or maxima of sequences of orthogonal polynomials are asymptotically optimal in the sense of (4.10). In fact, since the equilibrium measure for [−1, 1] is the arcsine measure [76] dω [−1,1] (x) = dx/(π √(1 − x 2 )), any sequence of sets with this limiting distribution is asymptotically optimal. An example of particular interest are the Chebyshev points {cos( jπ/n)} 0≤ j≤n given by the n + 1 extreme points of the Chebyshev polynomials defined by T n (cos θ) = cos nθ.
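The Leja construction just described is straightforward to implement greedily. The following sketch (the band endpoints and grid resolution are illustrative choices) generates Leja points on a discretisation of a two-band set E, working in log-space for numerical stability.

```python
import numpy as np

def leja_points(E_grid, n):
    """Greedy Leja construction on a discretised domain: each new node
    maximises the product of distances to the existing nodes."""
    pts = [E_grid[np.argmax(np.abs(E_grid))]]  # start from a point of maximal modulus
    for _ in range(n - 1):
        # sum of log-distances to the current node set; existing nodes give
        # -inf and are therefore never selected again
        logprod = np.sum(np.log(np.abs(E_grid[:, None] - np.array(pts))), axis=1)
        pts.append(E_grid[np.argmax(logprod)])
    return np.array(pts)

# Two spectral bands separated by a gap (illustrative endpoints)
E = np.concatenate([np.linspace(-1.0, -0.2, 2000), np.linspace(0.2, 1.0, 2000)])
print(np.sort(leja_points(E, 12)))
```

As the text notes, each step is a simple one-dimensional maximisation, in contrast to the coupled optimisation (4.7) required for Fekete sets.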
Asymptotically optimal polynomial approximations

Suppose that E is the union of finitely many compact intervals of non-zero length and O : E → C extends to an analytic function in an open neighbourhood of E. On defining C γ := {z ∈ C : g E (z) = γ }, we denote by γ the maximal constant for which O is analytic on the interior of C γ . We let P N be the best L ∞ (E)-approximation to O in the space of polynomials of degree at most N and suppose that I N is a polynomial interpolation operator in N + 1 points satisfying (4.10). Then, the Green's function g E determines the asymptotic rate of approximation not only for polynomial interpolation, but also for best approximation: (4.15) For a proof that the asymptotic rate of best approximation is given by the Green's function, see [76]. The result for polynomial interpolation uses the Hermite integral formula and (4.10); see (4.20) and (4.22) below.

Linear body-order approximation

In this section, we use the classical logarithmic potential theory from § 4.1.5 to prove the approximation error bounds for interpolation. However, we first show that polynomial approximations lead to body-order approximations:

Proof of Proposition 2.2. We first simplify the notation by absorbing the effective potential and two-centre terms into the three-centre summation: here the first two terms in the outer summation are c 0 and c 1 H . Now, for a fixed body-order (n + 1), and k 1 < · · · < k n with k l ≠ , we construct V nN (u ; u k 1 , . . . , u k n ) by collecting all terms in (4.17) with 0 ≤ j ≤ |X | − 1 and { , 1 , . . . , j−1 , m 1 , . . . , m j } = { , k 1 , . . . , k n }. In particular, the maximal body-order in this expression is 2(|X | − 1) for three-centre models and |X | − 1 in the two-centre case.

Proof of Theorem 2.3. We let ω N (x) := ∏ j (x − x N j ) be the node polynomial for X N := {x N j } N j=0 . Again, we fix the configuration u and consider H := H(u). Supposing that C is a simple closed positively oriented contour encircling σ (H), we apply the Hermite integral formula (4.2) to obtain (4.20), where At this point we apply standard results of classical logarithmic potential theory (see § 4.1.5 or [62]) and conclude by noting that, if the interpolation points are asymptotically distributed according to the equilibrium distribution corresponding to E := I − ∪ I + , then, after applying (4.10), we have that Here, the equilibrium distribution and the Green's function g E (z) are concepts introduced in § 4.1.5 and § 4.1.6. Therefore, by choosing the contour C := {ξ ∈ C : g E (ξ ) = γ } for 0 < γ < g E (μ + iπβ −1 ), the asymptotic exponent in the approximation error is γ . The maximal asymptotic convergence rate is given by g E (μ + iπβ −1 ) since C must be contained in the region of analyticity of O β and the first singularity of O β is at μ + iπβ −1 . Examples of the equi-potential level sets C are given in Figure 5. Using the Green's function results of § 4.1.6, this maximal rate can be expressed in terms of G E , the integral (4.13). The asymptotic behaviour of this maximal asymptotic convergence rate for the separate β → ∞ and g → 0 limits can be found in [37,81]. Here, we consider the β −1 + g → 0 limit where the gap remains symmetric about the chemical potential μ. For ζ ∈ μ + i[0, πβ −1 ], we have c −1 ≤ |√(ζ ± 1)| ≤ c, and so the integral in (4.23) has the same asymptotic behaviour as the expression obtained after the change of variables ζ ′ = (ζ − ε − )/(ε + − ε − ).
Since the integrands are uniformly bounded along the domain of integration, the claim follows. The constant pre-factor in (4.21) is inversely proportional to the distance dist(C , σ (H)) between the contour C = {g E = γ } and the spectrum σ (H). In particular, since g E is uniformly Lipschitz with constant L > 0 on the compact region bounded by C , there exist λ ∈ σ (H) and ξ ∈ C such that Therefore, choosing γ to be a constant multiple of g E (μ + iπβ −1 ), we conclude that the constant pre-factor C satisfies C ∼ (g + β −1 ) −1 as g + β −1 → 0. To extend the body-order expansion results to derivatives (in particular, to forces), we write the quantities of interest using resolvent calculus, apply Lemma 2 to bound the derivatives of the resolvent, and use the Hermite integral formula (4.20) to conclude: for C 1 , C 2 simple closed positively oriented contours encircling the spectrum σ (H(u)) and C 1 , respectively, we have (4.25). We conclude by choosing appropriate contours C l = {g E = γ l } for l = 1, 2 and applying (4.22).

The role of the point spectrum

To begin this section, we sketch the proof of Proposition 2.1.

Proof of Proposition 2.1. (i) Sup-norm perturbations. We suppose that sup k |r k − r ref k | ≤ δ. Therefore, applying standard results from perturbation theory [56, p. 291], we obtain a bound of size Cδ. (ii) Finite rank perturbations. The finite rank perturbation result has been presented in [70] in a slightly different setting. We sketch the main idea here for completeness. Since the essential spectrum is stable under compact (in particular, finite rank) perturbations [56], the set in question is both compact and discrete, and therefore finite.

Proof of Theorem 2.4. Suppose that C is a simple closed contour encircling the spectrum σ (H(u)) and (λ s , ψ s ) are normalised eigenpairs corresponding to the finitely many eigenvalues outside I − ∪ I + . Therefore, we have that (4.27) The first term of (4.27) may be treated in the same way as in the proof of Theorem 2.3. Moreover, derivatives of this term may be treated in the same way as in (4.25). It is therefore sufficient to bound the remaining term and its derivative. Firstly, we note that the eigenvectors corresponding to isolated eigenvalues in the spectral gap have the following decay [17]: for C a simple closed positively oriented contour (or system of contours) encircling the {λ s }, we have (4.28), where γ CT is the Combes-Thomas constant from Lemma 1 with d = dist(C , σ (H(u))). The constant pre-factor in (4.28) depends on the distance between the contour and the defect spectrum of H(u). Similar estimates hold for the derivatives. For full details on the derivation of (4.28), see [17, (5.18)-(5.21)]. Therefore, combining (4.28) and the Hermite integral formula, we conclude as in the proof of Theorem 2.3.

Non-linear body-order approximation

In this section, we prove Theorem 2.5 by applying the recursion method to reformulate the problem into a semi-infinite linear chain and replacing the far-field with vacuum.

Recursion method

In what follows, we briefly introduce the recursion method [49,50], a reformulation of the Lanczos process [61], which generates a tri-diagonal (Jacobi) operator T [91] whose spectral measure is D , together with the corresponding sequence of orthogonal polynomials [40]. This process provides the basis for constructing approximations to the LDOS, giving rise to nonlinear approximation schemes satisfying Theorem 2.5. Recall that D is the LDOS satisfying (2.13).
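Before the formal construction in the next paragraph, the following sketch shows the process in code: the Lanczos iteration started from the basis vector of a chosen site generates exactly the Jacobi coefficients (a n , b n ) of T. The toy Hamiltonian (a nearest-neighbour chain with one modified on-site entry) is our assumption for illustration.

```python
import numpy as np

def recursion_coefficients(H, site, N):
    """Lanczos/recursion method: starting from the basis vector e_site,
    generate the coefficients (a_n, b_n) of the tridiagonal operator T
    whose spectral measure is the LDOS at that site.
    NB: no reorthogonalisation, so take N modest in floating point."""
    v_prev = np.zeros(H.shape[0])
    v = np.zeros(H.shape[0]); v[site] = 1.0
    a, b = [], []
    for _ in range(N):
        w = H @ v
        a.append(v @ w)
        w = w - a[-1] * v - (b[-1] * v_prev if b else 0.0)
        b.append(np.linalg.norm(w))
        v_prev, v = v, w / b[-1]
    return np.array(a), np.array(b)

# Toy Hamiltonian: nearest-neighbour chain with an on-site defect (assumed model)
M = 200
H = np.diag(np.full(M - 1, -1.0), 1) + np.diag(np.full(M - 1, -1.0), -1)
H[M // 2, M // 2] = 0.8
a, b = recursion_coefficients(H, M // 2, 20)
```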
We start by defining p 0 := 1, a 0 := ∫ x dD (x) and b 1 p 1 (x) := x − a 0 , where b 1 is the normalising constant ensuring ∫ p 1 (x) 2 dD (x) = 1. Then, supposing we have defined a 0 , a 1 , b 1 , . . . , a n , b n and the polynomials p 0 (x), . . . , p n (x), we set Then { p n } is a sequence of orthogonal polynomials with respect to D (that is, ∫ p n p m dD = δ nm ) and we have that (see Lemma D.1 for a proof). Moreover, we denote by T the infinite symmetric tridiagonal matrix on N 0 with diagonal (a n ) n∈N 0 and off-diagonal (b n ) n∈N .

Remark 15. It will also prove convenient for us to renormalise the orthogonal polynomials by defining P n (x) := b n p n (x) and b 0 := 1; that is, (4.34). One advantage of this formulation is that it explicitly defines the coefficients {b n }. Therefore, if we have the first 2N + 1 moments [H] , . . . , [H 2N +1 ] , it is possible to evaluate Q 2N +1 (H) (that is, ∫ Q 2N +1 dD ) for all polynomials Q 2N +1 of degree at most 2N + 1, and thus to compute T N . In particular, for a fixed observable of interest O, we may write (4.35).

Remark 16. In Appendix E we introduce more complex bond order potential (BOP) schemes based on the recursion method and show that they also satisfy Theorem 2.5.

Remark 18. In Appendix D we show that the eigenvalues of T N (z) are distinct for z in some open neighbourhood, U 0 ⊂ U , of R 2N +1 , which leads to the following alternative proof. On U 0 , the eigenvalues and corresponding left and right eigenvectors can be chosen to be analytic: there exist analytic functions ε j , ψ j , φ j for j = 0, . . . , N such that (More precisely, we apply [44, Theorem 2] to obtain analytic functions ψ j , φ j of each variable z 0 , . . . , z 2N +1 separately and then apply Hartogs' theorem [60] to conclude that ψ j , φ j are analytic as functions on U ⊂ C 2N +1 .) Therefore, the nonlinear method discussed in this section can also be written in a form which is an analytic function on {z ∈ U 0 : O analytic at ε j (z) for each j} (as it is a finite combination of analytic functions only involving products, compositions and sums).

Self-consistent tight binding models

We start with the following preliminary lemma:

Proof. First, we denote the inverse of T and its matrix entries by T −1 : ℓ2 (Λ) → ℓ2 (Λ) and T −1 k , respectively. Then, applying the Combes-Thomas estimate to T yields the off-diagonal decay estimate |T −1 k | ≤ C e −γ CT r k for some C, γ CT > 0 [93]. Due to the off-diagonal decay properties of the matrix entries, the operators T , T −1 : ℓ∞ (Λ) → ℓ∞ (Λ) given by the corresponding matrix-vector products are well defined bounded linear operators with norms sup ∈Λ ∑ k∈Λ |T k | and sup ∈Λ ∑ k∈Λ |T −1 k |, respectively. To conclude, we note that T −1 is the inverse of T . Here, we have exchanged the summations over k and m by applying the dominated convergence theorem.

Throughout the following proofs, we denote by B r (ρ) the open ball of radius r about ρ with respect to the ℓ∞ -norm. Moreover, we briefly note that the stability operator can be written as the product L (ρ) := F (ρ)∇w(ρ), where [93]: here C is a simple closed contour encircling the spectrum σ (H(u(ρ))).

Proof of Theorem 2.6. Since ρ → F β (u(ρ)) is C 2 , and (I − L (ρ ∗ )) −1 is a bounded linear operator, we necessarily have that (I − L (ρ)) −1 is a bounded linear operator for all ρ ∈ B r (ρ ∗ ) for some r > 0. By applying Theorem 2.3, together with the assumption (EP), we obtain the corresponding estimate for all ρ ∈ B r (ρ ∗ ).
As a direct consequence, we have In particular, for such N , the operator I − L N (ρ) : ℓ2 → ℓ2 is invertible with inverse bounded above in operator norm independently of N . We now show that I − L N (ρ) satisfies the assumptions of Lemma 3. Using (4.47) and (EP), together with the Combes-Thomas estimate (Lemma 1), we conclude that, for all ρ ∈ B r (ρ ∗ ), I − L N (ρ) extends to an invertible bounded linear operator ℓ∞ → ℓ∞ and thus its inverse (I − L N (ρ)) −1 : ℓ∞ → ℓ∞ is bounded. Now, the mapping ρ → ρ − I X N F β (u(ρ)) from ℓ∞ to ℓ∞ is continuously differentiable on B r (ρ ∗ ) and the derivative at ρ ∗ is invertible (that is, (I − L N (ρ ∗ )) −1 : ℓ∞ → ℓ∞ is a well defined bounded linear operator). Since the map ρ → I X N F β (u(ρ)) is C 2 , its derivative L N is locally Lipschitz about ρ ∗ and so there exists L > 0 such that Moreover, by Theorem 2.3, we have that In particular, we may choose N sufficiently large such that 2b N L < 1 and t N as stated. Thus, the Newton iteration with initial point ρ 0 := ρ ∗ converges to a unique fixed point ρ N = I X N F β (u(ρ N )) in B t N (ρ ∗ ) [102,104]. That is, ‖ρ N − ρ ∗ ‖ ∞ ≤ t N ≤ 2b N . Here, we have used the fact that 1 − √(1 − x) ≤ x for all 0 ≤ x ≤ 1.

Proof of Proposition 2.9. We proceed in the same way as in the proof of Theorem 2.6. In particular, since ρ N is stable, if ‖ρ 0 − ρ N ‖ ∞ is sufficiently small, the same argument applies. Therefore, as long as ‖ρ 0 − ρ N ‖ ∞ is sufficiently small, we may apply the Newton iteration starting from ρ 0 to conclude.

Proof of Corollary 2.7. As a direct consequence of (4.51), we have the claimed bound. Here, we have applied the standard convergence result (Theorem 2.3) with fixed effective potential.

Appendix A. Notation

Here we summarise the key notation:

Suppose γ N (r c ) and γ def N (r c ) are the rates of approximation from Theorems 2.4 and 2.5 when applied to H r c . Then γ N (r c ) → γ N and γ def N (r c ) → γ def N as r c → ∞, with an exponential rate.

Proof. We first note that Therefore, applying (TB), we obtain To conclude, we choose a suitable contour C and apply the Combes-Thomas estimate (Lemma 1) together with (B.4): As a direct consequence of (B.4), we also have [56]. This means that, for sufficiently large r c , we obtain the same rates of approximation when applying Theorems 2.4 and 2.5 to H r c .

B.2. Truncation

One downside of the banded approximation is that the truncation radius depends on the maximal polynomial degree (e.g. see (B.2)). In this section, we consider truncation schemes that only depend on finitely many atomic sites, independent of the polynomial degree: here the restriction of the Hamiltonian has been introduced in (2.26). On defining the quantities, where the operators I X N are given by Theorem 2.3, we obtain a sparse representation of the N -body approximation depending only on finitely many atomic sites, independently of the maximal body-order N .

Proposition B.2. Suppose u satisfies Definition 1. Fix 0 < β ≤ ∞ and suppose that, if β = ∞, then g, g def > 0. Then,

Proof. Applying the Hermite integral formula (4.1) directly, we conclude that I X N O β (z) is bounded uniformly in N along a suitably chosen contour C := {g E = γ } (examples of such contours are given in Figure 5). It is important to note that the contour C must be chosen to encircle both σ (H) and σ ( H r c ). In the following, we let γ CT be the Combes-Thomas exponent from Lemma 1 corresponding to H.
Similarly to (B.7), we obtain the analogous bound. This concludes the proof. The fact that the exponents of Proposition B.2 are independent of the defect states within the band gap is in the same spirit as the improved locality estimates of [17].

Remark 19. (Divide-and-conquer Methods) This truncation scheme is closely related to the divide-and-conquer method for solving the electronic structure problem [103]. In this context the system is split into many subsystems that are only related through a global choice of Fermi level. In our notation, this method consists of constructing N DAC smaller Hamiltonians H r c , j centred on the atoms j (for j = 1, . . . , N DAC ) and approximating the quantities O (u) for in a small neighbourhood of j by calculating tr O( H r c , j ). That is, the eigenvalue problem for the whole system is approximated by solving N DAC smaller eigenvalue problems in parallel. In particular, this method leads to linear scaling algorithms [42]. Proposition B.2 then ensures that the error in this approximation decays exponentially with the distance between and the exterior of the subsystem centred on j . A similar error analysis in the context of divide-and-conquer methods in Kohn-Sham density functional theory can be found in [18].

Remark 21. (Non-linear schemes) One may be tempted to approximate the Hamiltonian with the truncation H r c and then apply the nonlinear scheme of Theorem 2.5. In doing so, we obtain the following error estimates: A problem with this analysis is that the constant γ N (r c ) in (B.13) arises by applying Theorem 2.5 to H r c rather than to the original system H. In particular, this means that γ N (r c ) depends on the spectral properties of H r c rather than of H. Since spectral pollution is known to occur when applying naive truncation schemes [64], the choice of H r c is important for the analysis. In particular, it is not clear that γ N (r c ) → γ N in general. This is in contrast to the result of Proposition B.1.

Appendix C. Convergence of Derivatives in the Nonlinear Approximation Scheme

As mentioned in Remark 10, the results of this section depend on the "regularity" properties of D : here g ν ≥ g E is the minimal carrier Green's function of ν [85]. Under the regularity condition of Definition 2, we obtain results analogous to (2.21):

Theorem C.1. Suppose that u satisfies Definition 1 and ∈ Λ is such that D ∈ Reg. Then, with the notation of Theorem 2.5, we in addition have

More generally, if the regularity assumption is not satisfied, it may still be the case that Theorem C.1 holds but with reduced locality exponent η. To formulate this result, we require the notion of minimal carrier capacity: (ii) If c ν > 0, then there exists a minimal carrier equilibrium distribution ω ν , a (uniquely defined) unit measure with supp ω ν ⊂ E satisfying (v) Suppose c ν > 0. Then, on defining ν n to be the discrete unit measure giving equal weight to each of the zeros of p n ( · ; ν), the condition that, where ω E is the equilibrium distribution for E, is equivalent to ν ∈ Reg [85, Thm. 3.1.4]. In particular, this justifies (4.10). We therefore arrive at the corresponding result for ∈ Λ for which the corresponding LDOS has positive minimal carrier capacity:

Proposition C.2. Suppose that u satisfies Definition 1 and ∈ Λ such that c D > 0. Then, with the notation of Theorem 2.5, we in addition have the same estimate, where η > 0 is the constant from Theorem C.1.
The proofs of Theorem C.1 and Proposition C.2 follow from the following estimates on the derivatives of the recursion coefficients {a n , b n } and the locality of the tridiagonal operators T N , together with the asymptotic upper bounds (that is, Definition 2 or Remark 22). In the following, we denote by T ∞ the infinite symmetric matrix on N 0 with diagonal (a n ) n∈N 0 and off-diagonal (b n ) n∈N . (i) For each r ∈ N, we have γ r,N ∼ d N as d N → 0.

Remark 24. The fact that g σ (T ∞ ) does not depend on the discrete eigenvalues of T ∞ means that, asymptotically, the locality estimates do not depend on defect states in the band gap arising due to perturbations satisfying Proposition 2.1, for example. Indeed, this has been shown more generally for operators with off-diagonal decay [17]. We show an alternative proof using logarithmic potential theory.

We will assume Lemmas C.4 and C.3 for now and return to their proofs below. We first add a constant multiple of the identity, cI , to the operators {T N } so that the spectra are contained in an interval bounded away from {0}. Moreover, we translate the integrand by the same constant: Õ(z) := O(z − c). Then, we extend T N to an operator on ℓ2 (N 0 ) by defining [T N ψ] i = ∑ N j=0 [T N ] i j ψ j for 0 ≤ i ≤ N and [T N ψ] i = 0 otherwise. We therefore choose a simple closed contour (or system of contours) C encircling ∪ N σ (T N ) so that Therefore, applying Lemma C.3, a simple calculation reveals that

Proof of Lemma C.3. The proof follows from the following identities: To do this, it will be convenient to renormalise the orthogonal polynomials as in Remark 15 (that is, we consider P n (x) := b n p n (x)). Moreover, we define b −1 := 1. Using the shorthand ∂ := ∂/∂u m , we therefore obtain: ∂b −1 = ∂b 0 = 0, ∂ P −1 (x) = ∂ P 0 (x) = 0, and the recurrence for all n ≥ 0. By noting ∂ P 1 (x) = −∂a 0 and applying (C.8), we can see that ∂ P n is a polynomial of degree n − 1 for all n ≥ 0. Therefore, since P n is orthogonal to all polynomials of degree at most n − 1, we have the identity which concludes the proof of (C.7). To prove a similar formula for the derivatives of a n , we first state a useful identity which will be proved after the conclusion of the proof of (C.7): Therefore, we have that (C.12) Applying (C.11) for k ≤ n − 1, we can see that ∂a n can be written as a combination for some coefficients d 1,k , d 0,k . Using (C.11) and assuming the result for k ≤ n − 1, we have the claim for all k ≤ n − 1.

Proof of (C.11). We have that, where l.o.t. ("lower order terms") denotes a polynomial of degree strictly less than n that changes from one line to the next. That is, since c 11 = −∂a 0 = ∂ a 0 b 0 b 0 , we apply an inductive argument to conclude that

Proof of Lemma C.4. The first statement is the Combes-Thomas resolvent estimate (Lemma 1) for tridiagonal operators (which, in particular, satisfy the off-diagonal decay assumptions of Lemma 1). To obtain the asymptotic estimates of (ii), we apply a different approach based on the banded structure of the operators. Since T N is tri-diagonal, [(T N ) n ] i j = 0 if |i − j| > n. Therefore, for any polynomial P of degree at most |i − j| − 1, we have [9] (C.19). We may apply the results of logarithmic potential theory (see (4.15)) to conclude. Here, it is important that |σ (T ∞ ) \ σ (T N )| remains bounded independently of N so that, asymptotically, (C.19) has exponential decay with exponent g σ (T ∞ ) . The fact that |σ (T ∞ ) \ σ (T N )| is uniformly bounded can easily be shown when considering the sequence of orthogonal polynomials generated by T ∞ .
A full proof is given in parts (ii) and (iv) of Lemma D.1.

Proof. The ideas behind the proofs are standard in the theory of Gauss quadrature (e.g. see [40]) but, for the convenience of the reader, they are collected together in § D.3.

Remark 25. The quadrature rule discussed in this section can be seen as the exact integral with respect to the following approximate LDOS. This measure has unit mass by Lemma D.1.

D.1. Error Estimates

Applying Remark 25, together with (2.14), we have, for every polynomial P 2N +1 of degree at most 2N + 1, Now, since σ (H) ⊂ I − ∪ {λ j } ∪ I + where {λ j } is a finite set, we may apply part (iv) of Lemma D.1 to conclude that the number of points in X N \ (I − ∪ I + ) is bounded independently of N . Accordingly, we may apply (4.15) with E = I − ∪ I + to obtain the following asymptotic bound, where O is analytic on {z : g E (z) < γ }. In particular, we obtain the stated asymptotic behaviour [24,85]. In this paper, we only require the much milder property that the number of eigenvalues in the gap remains uniformly bounded in the limit N → ∞. For a more general discussion of spectral pollution, see [13,64].

D.3. Proof of Lemma D.1

Proof of (i). First note that ∫ p 0 p 1 dD = 0. We assume that p 0 , . . . , p n are mutually orthogonal with respect to D , and conclude by applying (D.5). Equation (D.5) also justifies the tri-diagonal structure (4.31).

Proof of (ii). We may rewrite the recurrence relation (4.29) as x p(x) = T N p(x) + b N +1 p N +1 (x) e N , where p(x) := (1, p 1 (x), . . . , p N (x)) T , [e N ] j = δ j N , and T N is the tri-diagonal matrix (4.31). In particular, each ε j ∈ X N is an eigenvalue of T N (with eigenvector p(ε j )).

Proof of (iii). Since T N is symmetric, the spectrum is real. Now, for each ε j ∈ X N = σ (T N ), the matrix (T N − ε j ) ¬N ¬0 formed by removing the N th row and 0 th column is lower-triangular with diagonal (b 1 , . . . , b N ). Since each b i > 0, (T N − ε j ) ¬N ¬0 has full rank and thus ε j is a simple eigenvalue of T N .

Proof of (iv). Suppose that (after possibly relabelling) ε 0 , ε 1 ∈ X N ∩ [a, b]. After defining R(x) := ∏ N j=2 (x − ε j ), a polynomial of degree N − 1, and noting (x − ε 0 )(x − ε 1 ) > 0 on supp D , we obtain a contradiction to part (i).

Proof of (v). We may write P 2N +1 = p N +1 q N + r N where q N , r N are polynomials of degree at most N , and note that [ p N +1 (H) q N (H)] = 0 by (i) and P 2N +1 (ε j ) = r N (ε j ) since X N is the set of zeros of p N +1 . Therefore, (D.10) follows. In (D.9) we have used the fact that polynomial interpolation in N + 1 distinct points is exact for polynomials of degree at most N .

Proof of (vi). ℓ j (x) 2 is a polynomial of degree 2N and so, by (v), we have the claim. Moreover, ∑ N j=0 ℓ j (x) is a polynomial of degree N equal to one on X N (a set of N + 1 distinct points), and so ∑ N j=0 ℓ j (x) ≡ 1. Finally, ∑ N j=0 w j = ∫ ∑ N j=0 ℓ j (x) dD (x) = 1.

Appendix E. Numerical Bond-Order Potentials (BOP)

In mathematical terms, the idea behind BOP methods is to replace the local density of states (LDOS) with an approximation using only the information from the truncated tri-diagonal matrix T N (and possibly additional hyper-parameters). Since the first N coefficients contain the same information as the first 2N + 1 moments [H] , . . . , [H 2N +1 ] , this approach is closely related to the method of moments [22]. Equivalently, the resolvent [(z − H) −1 ] , which can be written conveniently as the continued fraction expansion (E.1), is replaced with an approximation G N only involving the coefficients from T N .
For example, for fixed terminator t ∞ , we may define (E.2). Truncating (E.1) to level N , which is equivalent to replacing the far-field of the linear chain with vacuum and choosing t ∞ = 0, results in a rational approximation to the resolvent and thus a discrete approximation to the LDOS. We have seen that truncation of the continued fraction in this way leads to an approximation scheme satisfying Theorem 2.5. Alternatively, the far-field may be replaced with a constant linear chain with a N + j = a ∞ and b N + j = b ∞ for all j ≥ 1, leading to the square root terminator [38,49,97]. More generally, one may choose any "approximate" local density of states D̃ and construct a corresponding terminator that encodes the information from D̃ [52,65]. For example, D̃(x) := (1/(b ∞ π)) √(1 − ((x − a ∞ )/(2b ∞ )) 2 ) results in the square root terminator. While we are unaware of any rigorous results, there is numerical evidence [52] to suggest that the error in the approximation scheme is related to the smoothness of the difference D − D̃. Equivalently, we may choose any bounded symmetric tri-diagonal (Jacobi) operator T̃ N with diagonal a 0 , a 1 , . . . , a N , ã N +1 , . . . and off-diagonal b 1 , . . . , b N , b̃ N +1 , . . . . That is, we may evaluate the recursion method exactly to level N and append the far-field boundary condition {ã n , b̃ n } n≥N +1 to the semi-infinite linear chain. This approach also includes the case t ∞ = 0 as in § 4.3 by choosing ã n = b̃ n = 0 for all n. With this in hand, we define the approximate quantities, where D 2N +1,BOP is the appropriate spectral measure corresponding to T̃ N .

E.1. Error estimates

Since [(T̃ N ) n ] 00 = [(T N ) n ] 00 = [(T ∞ ) n ] 00 is independent of the far-field coefficients {ã j , b̃ j } for all n ≤ 2N + 1, we can immediately see that the first 2N + 1 moments of D 2N +1,BOP agree with those of D . In particular, we may immediately apply (2.14) to obtain error estimates that depend on supp(D − D 2N +1,BOP ). Therefore, as long as the far-field boundary condition is chosen so that there are only finitely many discrete eigenvalues in the band gap, independent of N , the more complicated BOP schemes converge at least as quickly as the t ∞ = 0 case. Intuitively, if the far-field boundary condition is chosen to capture the behaviour of the LDOS (e.g. the type and location of band-edge singularities), then the integration against the signed measure D − D 2N +1,BOP as in (2.14) may lead to improved error estimates. A rigorous error analysis to this effect is left for future work.

E.2. Analyticity

Since T̃ N is bounded and symmetric, the spectrum σ (T̃ N ) is contained in a bounded interval of the real line. In particular, we can apply the same arguments as in (4.44) to conclude that (E.3) defines a nonlinear approximation scheme given by an analytic function on an open subset of C 2N +1 .

Appendix F. Kernel Polynomial Method & Analytic Bond Order Potentials

We first introduce the Kernel Polynomial Method (KPM) for approximating the LDOS [82,83,98]. In this section, we scale the spectrum and assume that σ (H) ⊂ [−1, 1]. For a sequence of kernels K N (x, y), we define the approximate quantities of interest. However, truncation of the Chebyshev series in this way leads to artificial oscillations in the approximate LDOS known as Gibbs oscillations [43]. Moreover, without damping these oscillations, the approximate LDOS need not be positive. However, on suitably damping the series, we obtain a positive approximate LDOS [98], where the damping coefficients d n := (1 − n/N ) reduce the effect of Gibbs ringing. In practice, one may instead choose the Jackson kernel [47].
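To make the KPM construction concrete, the following sketch is a minimal illustration (the chain Hamiltonian, grid and the choice N = 64 are our assumptions, and the Jackson damping coefficients are taken from the standard KPM literature rather than stated in this paper): it computes damped Chebyshev moments of a toy Hamiltonian and reconstructs a smoothed LDOS.

```python
import numpy as np

def kpm_ldos(H, site, N, xs):
    """KPM approximation of the LDOS at `site` with Jackson damping.
    Assumes the spectrum of H has already been scaled into [-1, 1]."""
    e = np.zeros(H.shape[0]); e[site] = 1.0
    # Chebyshev moments mu_n = [T_n(H)]_{site,site} via the three-term recurrence
    t_prev, t_cur = e.copy(), H @ e
    mu = [e @ t_prev, e @ t_cur]
    for _ in range(2, N + 1):
        t_prev, t_cur = t_cur, 2.0 * (H @ t_cur) - t_prev
        mu.append(e @ t_cur)
    mu = np.array(mu)
    # Jackson damping coefficients (standard KPM choice)
    n = np.arange(N + 1)
    g = ((N - n + 1) * np.cos(np.pi * n / (N + 1))
         + np.sin(np.pi * n / (N + 1)) / np.tan(np.pi / (N + 1))) / (N + 1)
    w = np.where(n == 0, 1.0, 2.0)                   # series weights
    T = np.cos(n[:, None] * np.arccos(xs)[None, :])  # T_n evaluated on xs
    return (w * g * mu) @ T / (np.pi * np.sqrt(1.0 - xs ** 2))

# Toy banded Hamiltonian with spectrum inside [-1, 1] (assumed model)
M = 400
H = 0.45 * (np.diag(np.ones(M - 1), 1) + np.diag(np.ones(M - 1), -1))
xs = np.linspace(-0.99, 0.99, 201)
D = kpm_ldos(H, M // 2, 64, xs)
```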
The problem with the above analysis in practice is that the damping factors we have introduced mean that more moments [H n ] are required in order to obtain good approximations to the LDOS. Instead, analytic BOP methods [74,79] compute the first N rows of the tridiagonal operator T ∞ , thus obtaining the first 2N + 1 moments exactly. Then, a far-field boundary condition (such as a constant infinite linear chain) is appended to form a corresponding Jacobi operator T̃ N as in Appendix E. Now, since higher order moments of T̃ N can be efficiently computed, we may evaluate the corresponding approximate LDOS. Efficient implementation of analytic BOP methods can be carried out using the BOPfox program [47].
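To close this appendix, here is a sketch combining the pieces above; the coefficient values, the choice a ∞ = 0, b ∞ = 1 and the evaluation height are our illustrative assumptions. Recursion coefficients feed the truncated continued fraction (E.1), terminated either by vacuum (t ∞ = 0) or by the square-root terminator of a constant chain.

```python
import numpy as np

def sqrt_terminator(z, a_inf, b_inf):
    """Terminator of a constant semi-infinite chain: the decaying root of
    t = 1 / (z - a_inf - b_inf**2 * t)."""
    disc = np.sqrt((z - a_inf) ** 2 - 4.0 * b_inf ** 2 + 0j)
    r1 = ((z - a_inf) - disc) / (2.0 * b_inf ** 2)
    r2 = ((z - a_inf) + disc) / (2.0 * b_inf ** 2)
    return r1 if abs(r1) < abs(r2) else r2  # t(z) -> 0 as |z| -> infinity

def cf_resolvent(z, a, b, t):
    """Truncated continued fraction with the far field replaced by t."""
    g = t
    for an, bn in zip(a[::-1], b[::-1]):
        g = 1.0 / (z - an - bn ** 2 * g)
    return g

# Assumed coefficients: exact recursion to level N on top of a constant chain
a = np.array([0.4, 0.1, 0.0, 0.0, 0.0])  # a_0, ..., a_4 (toy values)
b = np.array([1.0, 1.0, 1.0, 1.0, 1.0])  # b_1, ..., b_5 (toy values)

z = 0.3 + 1e-2j
g_vac = cf_resolvent(z, a, b, 0.0)                          # vacuum: t = 0
g_sqr = cf_resolvent(z, a, b, sqrt_terminator(z, 0.0, 1.0))  # square-root terminator
print(-g_vac.imag / np.pi, -g_sqr.imag / np.pi)              # approximate LDOS at Re z
```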
Sample Shuttling Relaxometry of Contrast Agents: NMRD Profiles above 1 T with a Single Device

Nuclear magnetic relaxation dispersion (NMRD) profiles are essential tools to evaluate the efficiency and investigate the properties of magnetic compounds used as contrast agents for magnetic resonance imaging (MRI), namely gadolinium chelates and superparamagnetic iron oxide particles. These curves represent the evolution of proton relaxation rates with the magnetic field. NMRD profiles are unparalleled for probing extensively the spectral density function involved in the relaxation of water in the presence of the paramagnetic ion or the magnetic nanoparticles. This makes such profiles an excellent test of the adequacy of a theoretical relaxation model and allows for a predictive approach to the development and optimization of contrast agents. From a practical point of view they also allow one to evaluate the efficiency of a contrast agent in a certain range of magnetic fields. Nowadays, these curves are recorded with commercial fast field cycling devices, often limited to a maximum Larmor frequency of 40 MHz (0.94 T). In this article, relaxation data were acquired on a wide range of magnetic fields, from 3.5 × 10−4 to 14 T, for a gadolinium-based contrast agent and for PEGylated iron oxide nanoparticles. We show that the low-field NMRD curves can be completed with high-field data obtained on a shuttle apparatus using the superconductive magnet of a high-field spectrometer. This allows a better characterization of the contrast agents at relevant magnetic fields for clinical and preclinical MRI, but also refines the experimental data that could be used for the validation of relaxation models.

Introduction

Nuclear magnetic resonance (NMR) relaxometry consists in the measurement of the relaxation times (T 1 , T 2 ) of a nucleus observable in NMR as a function of the magnetic field. The dependence of the relaxation rates (R 1 = 1/T 1 and R 2 = 1/T 2 ) on the magnetic field bears important information since it gives access to the spectral density and thus to the mechanism of relaxation [1,2]. This allows one to probe the molecular dynamics of different systems such as proteins, polymers, and water trapped in porous systems. Relaxometry provides sufficiently extensive experimental datasets so that the relaxation mechanisms can be determined in yet poorly characterized systems containing magnetic entities. This is especially true for water proton relaxation induced by paramagnetic ions [3] and superparamagnetic particles used as contrast agents for magnetic resonance imaging [4].
The curves representing the evolution of relaxation rates with the magnetic field are called nuclear magnetic relaxation dispersion (NMRD) profiles. The term "dispersion" indicates that for many systems the rates decrease with increasing field, which reflects the dispersion of the spectral density function for increasing Larmor frequencies. Experimentally, the R 1 NMRD curves are often measured with the fast field-cycling (FFC) technique, and can be complemented by conventional R 1 measurements on NMR devices working at a single field. Recording R 2 rates requires the application of a series of refocusing radiofrequency pulses, which is difficult on fast field-cycling systems. R 2 NMRD profiles are thus obtained through measurements on a series of instruments, relaxometers and spectrometers. The use of high-field spectrometers already allows one to probe the high-field region of the NMRD profiles [5-9]. However, it necessitates access to numerous instruments and can be time consuming.

Using FFC, the sample is submitted to sudden changes of magnetic field from the polarization field B pol to the relaxation field B rel , which causes relaxation, without using any excitation pulse. The detection is always done at the same field B det , whatever the relaxation field B rel is. This allows the use of a single probe tuned to the resonance frequency of the detection field B det , where a 90° pulse has to be used to record a free induction decay. In available commercial devices, called FFC relaxometers, the change in magnetic field is caused by a change in the electrical current circulating in an electromagnet. The technique is demanding for both the magnet and its cooling system. Indeed, the Joule effect is considerable and the heat produced at the magnet must be evacuated through an appropriate cooling system to ensure magnetic field stability. As a consequence, the maximum electromagnet field accessible is limited: magnetic fields can be as low as 0.23 mT but are often limited to 1 T. However, the field change can be very fast (≈1 ms), which allows the measurement of short T 1 . In order to reach higher fields, the sample shuttle technique can be used: it consists in moving the sample to different regions of the stray field of a strong superconducting magnet, usually the magnet of a commercial high-field NMR spectrometer. The motion of the shuttle can be driven by a pneumatic system [10-12] or a motorized apparatus [13,14]. The speed at which the field can be changed is limited by the motion of the sample and is thus slower than in FFC relaxometers (≈50-100 ms), so that the measurement of short T 1 is challenging. However, the maximum magnetic field accessible is that of a high-field NMR magnet. In the case of contrast agents, this covers, in principle, all fields accessible to MRI. Excellent descriptions of the FFC techniques and of some applications can be found in the literature [15-17].

The development of the theories describing the relaxation induced by paramagnetic and superparamagnetic contrast agents is closely related to the measurement of NMRD profiles of aqueous suspensions of these magnetic systems. Indeed, these curves constitute unique experimental data to test the adequacy of the theoretical models through a simple fitting of the NMRD profile. From a fundamental point of view, T 1 data at fields larger than 1 T can therefore be necessary to test and develop relaxation models for magnetic contrast agents.
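As a minimal illustration of how a T 1 value emerges from a field-cycling experiment (all rate constants and magnetization values below are invented for the example), the longitudinal magnetization during the relaxation interval can be modelled as a single-exponential evolution from the polarization value towards the equilibrium value at B rel :

```python
import numpy as np

# Toy FFC cycle: polarize at B_pol, relax at B_rel for a delay tau, detect at B_det
R1 = 12.0                  # s^-1, assumed relaxation rate at B_rel
M_pol, M_eq = 1.0, 0.25    # magnetizations (arb. units) at B_pol and at B_rel

tau = np.linspace(0.0, 0.5, 20)                 # relaxation delays (s)
M = M_eq + (M_pol - M_eq) * np.exp(-R1 * tau)   # signal sampled at detection
# Fitting M(tau) with a single exponential recovers R1 = 1/T1 at B_rel;
# repeating over many B_rel values traces out the NMRD profile R1(B).
```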
From a more practical point of view, NMRD profiles of contrast agents also provide at a glance their relaxation efficiency at different Larmor frequencies, which is valuable for MRI at a given field. However, most new MRI systems operate at 3 T, typical small-animal MRI systems operate at 7 T, while some new devices reach 21 T. Such fields are not accessible to commercial FFC equipment, which is often limited to 1 T, while some recent hybrid systems using a superconducting magnet reach 3 T. In this communication, we show that both the low-field NMRD profiles (obtained with a commercial device) and the high-field profiles (recorded with a shuttle system) are necessary for the evaluation of contrast agent efficiency as well as for the development of relaxation models for magnetic contrast agents.
Materials and methods
Oleylamine- and PEG-coated ultrasmall superparamagnetic iron oxide particles (USPIOs) were synthesized using a two-step reaction based on a modification of a recently published method [18]. First, 1.042 g Fe(acac)3 were added to 30 mL of oleylamine. The solution was gradually heated to 128°C at a rate of 363.5°C/h under a N2 flow, followed by a temperature increase to 180°C over a period of 1 h, and finally heating to 270°C at a ramping rate of 396°C/h, after which the heating appliance was removed. The solution was left to cool to room temperature and the oleylamine-iron oxide nanoparticles precipitated upon the addition of 30 mL of ethanol, followed by centrifugation at 9000 g for 4 min. The supernatant was discarded and the process repeated with another 35 mL of ethanol, then a further 56 mL. The resulting particles had a core of 5.2 ± 0.7 nm, based on the statistical analysis of 100 particles observed by transmission electron microscopy (TEM). The process of functionalization with PEG(5)-BP [polyethylene glycol (5 kDa)-bisphosphonate] allowed for high yields to be reached in a short time and at room temperature. First, 1 mg oleylamine-coated USPIOs and 10 mg PEG(5)-BP were added to 1 mL of dichloromethane in an open glass vial, and the mixture was sonicated for ~15 min until the solvent had evaporated. 2 mL of water were added to the remaining residue, resulting in a clear brown solution. The mixture was washed with 2 mL of hexane to remove the oleylamine, followed by removal of hexane by evaporation under a N2 flow. This process was repeated two more times. The final mixture was filtered through a 0.2 μm hydrophilic polytetrafluoroethylene filter, followed by several cycles of washing/concentrating using a Vivaspin 2 centrifugal filter (30 kDa molecular weight cut-off) with water to remove excess PEG(5)-BP, leaving an amber dispersion of PEG(5)-BP-USPIOs. After PEGylation, there was no significant difference in the core size of the PEG(5)-BP-USPIOs (5.2 ± 0.6 nm based on the statistical analysis of 100 particles), and the hydrodynamic diameter, measured using dynamic light scattering (DLS), was 30 ± 9 nm. GadoSpin™ P was purchased from Miltenyi Biotec GmbH (Germany). It is a polymeric gadolinium-based contrast agent (intended for MRI of small animals) containing several gadolinium chelates (Gd-DTPA-pentaamide) bound to a polymer backbone. The synthesis protocol is described in [19]. The molecular weight of the molecule is about 200 kDa. The lyophilized compound was reconstituted with 850 μL of physiological saline solution, which provided a final Gd³⁺ concentration of 20 mM. The stock solution was further diluted in the same buffer to reach a final Gd³⁺
concentration of 0.5 mM for sample shuttling measurements. Low-field NMRD profiles (T1) of aqueous suspensions were measured from 0.015 to 40 MHz with a Spinmaster FFC relaxometer (STELAR, Mede, Italy) at 25°C using 600 μL of suspension in a dedicated NMR tube. The high-field parts of the NMRD profiles were recorded on a Bruker Avance IIIHD 600 MHz spectrometer equipped with a pneumatic sample shuttle already described in detail [11,12]. The magnetic field above the magnetic center was measured in steps of 1 mm using a homemade device with two calibrated triple-axis Hall probes (Senis) with a precision of 0.1 %. A CH3A10mE3D transducer was used for measurements from 0.05 to 2 T, while a 03A05F-A20T0K5Q transducer was used between 1 and 13 T. A systematic error of up to 3 % cannot be excluded and is reported in Figs. 1 and 2. The magnetic field for relaxation was constant within 1 % for all measurements carried out at each magnetic field (this corresponds to a displacement of the top position by less than 1 mm). The system includes a triple-resonance (1H, 13C, 15N) probe with a z-gradient. The low volume of the sample (100 μL) and the fact that detection on the proton channel is performed with the outer coil make this system mostly immune to radiation damping. Longitudinal relaxation rates at 14.1 T were measured using saturation recovery experiments. All other relaxation rates were measured with a sequence given as supporting information (Fig. S1). After a 5 s delay for polarization at 14.1 T, the longitudinal polarization is inverted every other scan before the transfer to the desired low-field spot; the sample shuttle is transferred back to 14.1 T after the relaxation delay for detection (see Fig. S1). The measured intensity can be fitted by a single exponential, which decays towards zero. In the case of the Gadospin study, two sub-spectra (with and without inversion at high field) were fitted independently, as the measurement of signal differences was challenging in the absence of lock (no D2O was added to the sample). All experiments were repeated three times; error bars represent the standard deviation of the three (six) fitted relaxation rates for USPIO (respectively, Gadospin) samples. The error was larger at low fields because of relaxation occurring during the transport of the sample, especially for small relaxation times. All exponential fits were carried out with the T1/T2 module of the Topspin software. The iron and gadolinium concentrations of the samples were determined by Inductively Coupled Plasma-Atomic Emission Spectrometry (ICP-AES) after microwave digestion in a mixture of nitric acid and hydrogen peroxide. All the relaxation data are presented as relaxivities r1, defined as the relaxation rate R1 = 1/T1 normalized by the iron (or gadolinium) concentration. Figures 1 and 2, respectively, present the T1 NMRD profiles of the PEG(5)-BP-USPIO particles and of the Gadospin™ paramagnetic contrast agent. For both compounds, the data obtained at high field with the shuttle system are in good agreement with the low-field data measured with the commercial FFC device. This can be verified in particular by inspection of the data obtained with both instruments between 20 and 40 MHz. This zone is crucial since it allows for a direct comparison of the results obtained with the two techniques. The concentration used for the high-field NMRDs was smaller than for the low-field NMRDs since the shuttle times are rather large compared to the electronic switching times.
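As an illustration of the fitting described above, the sketch below fits a single exponential decaying towards zero and normalizes the resulting rate by the metal concentration to obtain a relaxivity. The delays and intensities are synthetic, a diamagnetic-background subtraction is omitted in line with the definition of r1 used in this work, and the actual fits were performed with the T1/T2 module of Topspin:

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, i0, r1):
    """Single exponential decaying towards zero, as used for shuttle data."""
    return i0 * np.exp(-r1 * t)

# Hypothetical relaxation delays (s) and noisy intensities for illustration.
np.random.seed(0)
t = np.array([0.01, 0.05, 0.1, 0.2, 0.4, 0.8, 1.6])
i = 0.95 * np.exp(-2.5 * t) + 0.01 * np.random.randn(t.size)

popt, pcov = curve_fit(decay, t, i, p0=(1.0, 1.0))
r1 = popt[1]                            # longitudinal rate, s^-1
r1_err = np.sqrt(np.diag(pcov))[1]      # 1-sigma uncertainty on the rate

# Relaxivity: rate normalized by metal concentration (here 0.5 mM Gd3+).
concentration_mm = 0.5
r1_relaxivity = r1 / concentration_mm   # s^-1 mM^-1
print(r1_relaxivity)
```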
As relaxation must be minimal during these time intervals, samples with longer relaxation times must be used with the shuttle device. It is worth noting that the 600 MHz measurement, obtained by a conventional saturation-recovery sequence in the normal configuration of the spectrometer, is clearly compatible with the data obtained at lower fields with the shuttle system. The relaxation data of the superparamagnetic nanoparticles were fitted with the theory developed by Roch et al. [4] with a water diffusion coefficient at 25°C of 2.3 × 10⁻⁹ m² s⁻¹. The fitted parameters are provided in Table 1.
Results and discussion
The NMRD profile of the gadolinium compound was fitted with the Solomon-Bloembergen-Morgan (SBM) inner-sphere relaxation theory with additional contributions from outer-sphere [3] and second-sphere relaxation [20-22]. Simple Lorentzian spectral density functions were used in the SBM equations even if, for macromolecules, the Lipari-Szabó spectral density functions could be more appropriate [6,23,24]. However, this approach, which uses a global rotation time, a correlation time reflecting rapid local motions, and a general order parameter, would add two parameters to the fitting of the NMRD profile, for a total of 8 parameters. Moreover, to confirm that the Lipari-Szabó model is adequate, additional 17O NMR measurements would be needed. This is beyond the scope of this communication and therefore we chose to use the Lorentzian spectral density function in the SBM equations. It is worth noting that even the SBM equations are sometimes unable to fit the low-field part of NMRD profiles for slowly rotating systems [25]. Second-sphere relaxation was introduced in order to take into account the contribution from water molecules of the second coordination sphere of the ion. Indeed, some of the water molecules are not freely diffusing around the complex but hydrogen-bonded to polar groups of the ligand. This latter relaxation term is not always easy to define but was shown to be non-negligible for many Mn²⁺ and Gd³⁺ complexes. Its accurate description is difficult without further pH and temperature dependence studies of the relaxation rates. Therefore, and even if it constitutes an approximation, we used the same distance of approach (0.36 nm) for the outer- and second-sphere contributions, which is a reasonable approximation for Gd³⁺ complexes [20-22]. As our introduction of the second-sphere relaxation is only approximate, the number of water molecules of the second sphere contributing to relaxation (qSS) was not forced to be an integer. The fixed parameters of the fit were: spin (Gd³⁺) = 3.5, number of coordinated water molecules q = 1, distance of closest approach for the inner sphere = 0.31 nm, distance of closest approach for the outer and second sphere = 0.36 nm, diffusion coefficient at 25°C = 2.3 × 10⁻⁹ m² s⁻¹. As usual for gadolinium, the scalar contribution was neglected. The fitted parameters are provided in Table 1. The profile of the iron oxide particles was fitted using the theory of Roch et al. Msat is the saturation magnetization of the particle, R is the radius of the particle, τN is the Néel relaxation time and p is an empirical parameter related to the anisotropy of the crystal. The Gadospin NMRD profile was fitted with the Solomon-Bloembergen-Morgan inner-sphere relaxation theory with additional contributions from outer-sphere and second-sphere relaxation. τR is the rotational correlation time of the Gd³⁺
individual complexes, τM is the coordinated water residence time, τSO is the zero-field electron relaxation time, τV is the correlation time associated with the modulation of the electron relaxation, qSS is the number of water molecules in the second sphere and τSS the correlation time of the interaction of Gd³⁺ with water molecules belonging to the second sphere. The high-field data show that the relaxivities of both compounds decrease at high fields. This effect is well known, and unavoidable, for superparamagnetic particles. Such a decrease could be shifted to higher fields for gadolinium-based contrast agents by slowing down the rotational tumbling of the gadolinium complex. For example, this is achieved by grafting gadolinium chelates to nanoparticles and macromolecules, such as dendrimers and polymers, which can even result in the appearance of a relaxivity peak at high fields [6,26-29]. In the case of Gadospin™, the rotation of the complex is too fast (τR = 0.48 ns) to maintain high longitudinal relaxivities at high fields. This indicates a high flexibility of the gadolinium complex in the polymeric backbone, which is not beneficial in this case. From a fundamental point of view, the agreement between theories and experimental data is satisfactory. The long exchange time obtained from the fitting is compatible with previous results for pentaamide derivatives [30]. In our opinion, it is clear that the very high-field data, which are usually not included in the analysis of NMRD profiles, contain a large amount of information, and that including them in the fitting is more demanding for the theory. For example, a systematic deviation can be observed between theory and experimental results at the highest fields (300 MHz and above) for Gadospin™, which could mean that the usual Lorentzian spectral density functions are not adequate and that the Lipari-Szabó spectral density functions must be used instead. Similarly, the high-field dispersion of the NMRD profile of the PEGylated superparamagnetic particles is not perfectly fitted by the Roch model [4]. This could be caused by the size distribution of the particles, which is not taken into account in this relaxation theory but was previously shown to influence relaxation [31-33]. The introduction of the influence of the particle size distribution in the NMRD fitting could be really interesting, since it would provide an estimation of the polydispersity of the particles, which is crucial for biomedical applications. The development of such a model would require the whole NMRD profile, and especially the high-field dispersion obtained on the shuttle device. However, even the current fitting procedure of the NMRD profile can bring interesting information about the sample: the size obtained by the NMRD fitting is 9.28 nm, while the size obtained by TEM was 5.2 nm. Moreover, the magnetization provided by the fitting (Mv = 189,000 A/m) is significantly smaller than the bulk magnetization of magnetite (Mv = 380,000 A/m). This is a clear sign of clustering of the iron oxide cores in a polymer matrix [34], which is supported by the large hydrodynamic size (30 nm) of the system. Indeed, a single core with a 5 kDa PEG coating would present a smaller hydrodynamic size.
Conclusion
NMRD profiles ranging from very low fields to very high fields were measured with a commercial fast field-cycling device and a shuttle apparatus for PEGylated superparamagnetic iron oxide particles and a polymeric gadolinium chelate.
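For reference, the Lorentzian spectral density questioned above is simple to write down. The sketch below uses the standard SBM combination rule for the correlation times; apart from τR = 0.48 ns, which echoes the fitted value quoted above, all numbers are placeholders and not the fitted parameters of Table 1:

```python
import numpy as np

def lorentzian_j(omega, tau_c):
    """Lorentzian spectral density J(omega) = tau_c / (1 + (omega*tau_c)^2)."""
    return tau_c / (1.0 + (omega * tau_c) ** 2)

def effective_tau(tau_r, tau_m, tau_s):
    """Standard SBM combination of rotation, exchange, and electron
    relaxation correlation times: 1/tau_c = 1/tau_r + 1/tau_m + 1/tau_s."""
    return 1.0 / (1.0 / tau_r + 1.0 / tau_m + 1.0 / tau_s)

# Placeholder correlation times (s); only tau_r mirrors the quoted fit.
tau_c = effective_tau(tau_r=0.48e-9, tau_m=1e-6, tau_s=1e-9)
freqs_hz = np.logspace(4, 9, 200)           # proton Larmor frequencies
j = lorentzian_j(2 * np.pi * freqs_hz, tau_c)
# The dispersion of j with frequency mirrors the decrease of r1 at high field.
```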
The agreement between the FFC and shuttle-based measurements at their intersection is good. The data were compared to fits of the usual relaxation theories with a rather good agreement, even if the high-field results are not perfectly reproduced by the fitting. This shows that the complete NMRD profile should be used when trying to refine relaxation theories. Indeed, these data bear relevant information and are therefore a more demanding test of the theory in question. Last but not least, they also allow for the direct evaluation of the efficiency of a potential contrast agent at all fields, including fields typical of clinical and small-animal MRI.
High yield recombinant penicillin G amidase production and export into the growth medium using Bacillus megaterium
Background
During the last years, B. megaterium was continuously developed as a production host for the secretion of proteins into the growth medium. Here, recombinant production and export of B. megaterium ATCC14945 penicillin G amidase (PGA), which is used in the reverse synthesis of β-lactam antibiotics, were systematically improved.
Results
For this purpose, the PGA leader peptide was replaced by the B. megaterium LipA counterpart. A production strain deficient in the extracellular protease NprM and in xylose utilization to prevent gene inducer deprivation was constructed and employed. A buffered mineral medium containing calcium ions and defined amino acid supplements for optimal PGA production was developed in microscale cultivations and scaled up to a 2 L bioreactor. Productivities of up to 40 mg PGA per liter of growth medium were reached.
Conclusion
The combination of genetic and medium optimization led to an overall 7-fold improvement of PGA production and export in B. megaterium. The exclusion of certain amino acids from the minimal medium led for the first time to higher volumetric PGA activities than obtained for complex medium cultivations.
Background
The Gram-positive bacterium Bacillus megaterium has several advantages over other microbial host systems for the production and secretion of recombinant proteins [1]. In contrast to Escherichia coli, it has a high capacity for protein export [2]. Compared to Bacillus subtilis, B. megaterium reveals a useful plasmid stability and a low intrinsic protease activity [2]. Important prerequisites for a biotechnological application of this organism include an efficient transformation system, multiple compatible, freely replicating plasmids, and the possibility to integrate a heterologous gene into the genome [3,4]. Recently, production of heterologous exoproteins by B. megaterium was further improved by use of the exoprotease NprM-deficient B. megaterium strain MS941 [5,6] and by the coexpression of the signal peptidase gene sipM [7]. However, some bottlenecks were still observed for the production and secretion of some of the studied heterologous proteins. The multidomain and high molecular weight dextransucrase DsrS (Mr = 180,000) from Leuconostoc mesenteroides aggregated extracellularly during high cell density cultivation [1]. The heterologous gene of the Thermobifida fusca hydrolase (tfh) was only successfully expressed in B. megaterium after its codon bias was adapted to the B. megaterium codon usage [8]. Other unknown limiting factors contained in the employed semi-defined medium repressed protein production and secretion in high cell density cultivation [8]. Here, we report on the expression of the penicillin G amidase gene (pga) isolated from B. megaterium ATCC14945 in derivatives of B. megaterium DSM319. This homologous penicillin G amidase (PGA) has a relative molecular mass of 90,000, consisting of two autocatalytically processed subunits (α, β) [9]. The function of PGA in nature is not yet fully understood. B. megaterium may produce PGA extracellularly to degrade phenylacetylated compounds in order to generate phenylacetic acid (PAA), which can be used as a carbon source [10]. In industry, PGA is used for the production of new β-lactam antibiotics. It hydrolyzes penicillin G, yielding phenyl acetate and 6-aminopenicillanic acid (6-APA).
The 6-APA provides the molecular core of all β-lactams, to which D-amino acid derivatives can be coupled to create novel antibiotics, e.g., amoxicillin. PGA of B. megaterium is industrially used for the outlined reverse synthesis reaction due to its higher synthesis rate compared to E. coli PGA [11,12]. The intensively studied E. coli PGA is predominantly exported into the periplasm [10]. In contrast, using B. megaterium to secrete homologous B. megaterium PGA directly into the growth medium should facilitate its purification and consequently decrease the downstream processing and final production costs. In this contribution, we tested directed molecular strategies for the stepwise improvement of PGA production and secretion using B. megaterium.
Rationale of the experimental approach for PGA production in B. megaterium
First, in order to stabilize the desired product PGA in the growth medium, the influence of calcium ions and of the extracellular protease NprM on enzyme stability and activity was investigated. Subsequently, the leader peptide of the extracellular lipase LipA from B. megaterium was tested for the improvement of PGA export. Gene induction using the xylA promoter was analyzed in a xylA mutant strain to prevent inducer utilization. Finally, medium optimization and upscaling were approached systematically.
Increased recombinant PGA production and secretion using B. megaterium by the addition of calcium ions
Previous investigations of homologous PGA production in E. coli identified calcium as an important factor for protein folding and maturation [13,14]. An amino acid sequence alignment of PGA from B. megaterium and E. coli showed that all active site amino acids were conserved. An overall amino acid sequence identity of 28.4 % was observed. Although the degree of sequence identity is low, functionally and structurally important amino acids were found conserved, indicating homology at the structural level. Hence, the influence of calcium ions on the activity of B. megaterium PGA was tested. The complete pga gene was cloned into the BsrGI/SacI site of pMM1522, placing its expression under control of the xylose-inducible promoter. The new vector pRBBm23 was transformed into protoplasted B. megaterium MS941 cells. This B. megaterium strain is deficient in the major extracellular protease NprM due to deletion of the corresponding gene. Significant stabilization of exported proteins by B. megaterium MS941 was reported before [5,7]. The influence of different calcium ion concentrations on the secretion of recombinant PGA was tested in shaking flask cultivations. Comparing the addition of various calcium ion concentrations to the complex LB growth medium demonstrated that 2.5 mM CaCl2 was optimal for PGA production (Fig. 1). Three hours after induction, 189.4, 489.9, and 287.3 U PGA g CDW⁻¹ were measured in the growth medium containing none, 2.5, and 5 mM CaCl2, respectively. The addition of 2.5 mM CaCl2 increased the amount of secreted PGA 2.6-fold compared to the culture without CaCl2 addition. Furthermore, addition of 5 mM CaCl2 resulted in lower amounts of biomass, which is probably due to growth inhibition by higher concentrations of calcium ions (data not shown). Therefore, 2.5 mM CaCl2 was added to the growth medium for recombinant PGA production in all following experiments.
Characterization of secreted B. megaterium PGA
The pga gene was initially cloned with the 5' region encoding its native signal peptide SPpga.
SDS-PAGE analysis of the extracellular proteins of recombinant B. megaterium carrying pRBBm23 (encoding SPpga-PGA) revealed two subunits of PGA with relative molecular masses (Mr) of 27,000 (α-chain) and 57,000 (β-chain) (Fig. 1). The N-terminal amino acid analysis of both recombinant exported proteins indicated that the α-chain started at amino acid residue 25 (GEDKNEGVKVVR), while the N-terminal amino acid sequence of the β-chain, SNAAIVGSEKSATGN, corresponded to residues 266 to 279. Hence, the α- and β-subunits of PGA range from residue 25 to 265 and from 266 to 802, with calculated molecular masses of 27,753 Da and 61,394 Da, respectively. These calculated masses corresponded well to the experimentally observed masses of the subunits and agree perfectly with the report by Panbangred et al. [9]. The native signal peptide sequence was deduced as MKTKWLISVIILFVFIFPQNLVFA.
The signal peptide of the extracellular lipase LipA increases PGA export in B. megaterium
In previous works, the signal peptide of the B. megaterium extracellular esterase LipA (SPlipA) was successfully used for the secretion of Lactobacillus reuteri levansucrase [6] and T. fusca hydrolase [8]. In order to analyze the efficiency of the LipA signal peptide for the secretion of recombinant B. megaterium PGA, protein secretion mediated by SPlipA and by the natural signal peptide (SPpga) was compared. B. megaterium strain MS941 carrying the plasmid pRBBm49, encoding a SPlipA-PGA fusion, or the plasmid pRBBm23, encoding the native PGA (SPpga-PGA), respectively, was cultivated in LB medium. A maximum of 380.0 and 230.0 U PGA g CDW⁻¹ was measured for the exported PGA using SPlipA and SPpga, respectively. Hence, changing the original signal peptide of PGA to the one of LipA improved the amount of secreted PGA 1.7-fold (Tab. 2).
Construction of a B. megaterium strain deficient in the utilization of the gene expression inducer xylose
HPLC analysis of the growth medium of batch cultivations with MS941 carrying pRBBm23 (encoding SPpga-PGA) in A5 medium indicated the utilization of xylose as carbon source after the majority of glucose in the growth medium was consumed (data not shown). In order to improve target gene induction efficiency, a constant level of the inducer xylose during cultivation had to be guaranteed. This was achieved by constructing a stable strain deficient in xylose utilization [4]. In agreement with this assumption, the use of the xylA knock-out mutant strain B. megaterium WH323 in protein production using the xylose-inducible promoter resulted in higher yields of intracellularly produced heterologous protein [15]. However, a major drawback of WH323 was an increased secretion of the neutral extracellular protease NprM [8]. B. megaterium MS941 employed in this study lacks NprM [1,5]. Hence, a strain deficient in xylose utilization based on B. megaterium MS941 was constructed by interrupting the gene encoding the xylose isomerase XylA with the cat gene mediating chloramphenicol resistance. The new strain was named YYBm1. To compare their sugar metabolization, B. megaterium strains MS941, WH320, YYBm1, and WH323 were cultivated in minimal medium with glucose as sole carbon source. When the glucose in the growth medium was consumed, all B. megaterium strains stopped growing and entered the stationary phase. After addition of xylose as second carbon source, the strains MS941 and WH320 entered a second exponential growth phase, whereas cells of the strains YYBm1 and WH323 died.
Hence, YYBm1 and WH323 were unable to utilize xylose as carbon source (Fig. 2). Consequently, the xylA nprM double mutant YYBm1 revealed the expected phenotype. When tested in protein production experiments, YYBm1 secreted 390.0 U PGA per gram CDW compared to 380.0 U PGA per gram CDW by MS941 (Tab. 2). Comparing the two strains for the export of PGA carrying its natural leader peptide, an increase of 1.2-fold in the specific activity was observed (Tab. 2).
Figure 1 Influence of calcium ions on PGA production and export in B. megaterium. MS941 carrying pRBBm23 (encoding SPpga-PGA) was cultivated in LB medium with the indicated concentrations of CaCl2. Proteins from 1.5 mL of cell-free growth medium were precipitated by ammonium sulfate, analyzed by SDS-PAGE and stained with Coomassie Brilliant Blue G250. Lane M shows the Precision Plus Protein Standard (Bio-Rad, Muenchen, Germany).
Figure 2 B. megaterium YYBm1 is deficient in xylose utilization.
Next, early and late induction of gene expression by the addition of xylose were compared. When the inducer xylose was added right at the beginning of the cultivation, the maximal specific activity was reached 7.5 h after the start of cultivation. Similar final activities were reached when xylose was added at an OD578nm of 0.4 (data not shown). An induction of gene expression at higher optical density, e.g. at OD578nm 4, led to a faster appearance of PGA activity after induction, but just half the amount of PGA was obtained compared to the early induction (data not shown). Hence, 5 g L⁻¹ xylose was added right at the beginning of the cultivation.
Optimization of the complex growth medium
Next, the effects of the addition of tryptones from two different companies to the complex growth medium were investigated. PGA secretion by MS941 carrying pRBBm49 (encoding SPlipA-PGA) in LB medium was improved 1.8-fold to 36.0 mg L⁻¹ by utilizing tryptone from Oxoid (Wesel, Germany) instead of that from Bacto (Heidelberg, Germany) (Tab. 2). These two tryptones varied in the concentrations of the contained amino acids, especially in the amounts of arginine, aspartic acid and tyrosine. The used Oxoid versus Bacto tryptone contains 5.53 % versus 3.03 % arginine, 7.31 % versus 6.11 % aspartic acid, and 3.1 % versus 1.42 % tyrosine, respectively. 1.8 times more PGA (41 mg L⁻¹) was secreted by YYBm1 carrying pRBBm49 (encoding SPlipA-PGA) in LB medium utilizing Oxoid tryptone compared to Bacto tryptone (Tab. 2). For MS941 and YYBm1, a maximal OD578nm of 14 was reached during cultivation with Oxoid tryptone. Only OD578nm values of 4 and 6 were reached by MS941 and YYBm1, respectively, when grown in LB containing Bacto tryptone. Interestingly, in contrast to the volumetric activity, the specific activity is 1.4- and 2-fold higher for PGA obtained from cultivations of B. megaterium MS941 and YYBm1 using tryptone from Bacto instead of Oxoid, respectively (Tab. 2). Another difference in cultivation with these two media was the production of an extracellular immune inhibitor A metalloprotease-like protein Q73BM2 (Mr = 84,400) in the presence of tryptone from Oxoid (Fig. 4). The production of this protein was observed before for B. megaterium by Wang et al. (2006) [16]. The protein was identified using the MASCOT program with MALDI-TOF/MS data and the strain-specific protein database "bmgMECI".
From complex to mineral medium
For the control and subsequent directed optimization of the fermentation process, defined mineral media are desired.
Moreover, these mineral media are usually less cost-intensive than complex media. Therefore, we systematically developed a mineral medium for PGA production and export in B. megaterium. First, the previously developed semi-defined A5 medium [1] containing 0.5 g L⁻¹ yeast extract and a newly developed mineral medium based on MOPSO buffer were tested in comparison to complex medium. Growth and secretion of PGA were initially compared for the different media in shaking flask cultivations of B. megaterium MS941 carrying pRBBm23 (encoding SPpga-PGA) (Fig. 4). Using complex medium, a maximal specific PGA activity of 131 U g CDW⁻¹ was reached 5 h after induction of pga expression. A cultivation in semi-defined A5 medium led to a drastic 22-fold reduction (maximum of 6.0 U g CDW⁻¹), while in MOPSO-derived medium the specific PGA activity was reduced 9.4-fold (maximum of 14 U g CDW⁻¹) (Fig. 3). Although the MOPSO-derived mineral medium was a protein- and amino acid-free medium, similar cell densities were reached compared to complex medium. In addition, higher specific PGA activities compared to the semi-defined A5 medium were achieved. Hence, we started to optimize the MOPSO-based medium by systematic supplementation of nutrients to increase PGA production and export. Acevedo et al. (1973) and Pinotti et al. (2000) showed that for high production of PGA in B. megaterium ATCC14945 certain amino acids were required [17,18]. Hence, for improving the productivity in minimal medium, the influence of amino acid addition on PGA secretion was investigated. Free amino acids such as arginine, proline, histidine, and asparagine were selectively added to the medium, including glucose as carbon source and casein as nitrogen source [18]. Growth and PGA production of B. megaterium YYBm1 carrying pRBBm49 (encoding SPlipA-PGA) were systematically investigated in 96-well microtiter plates. The expression of pga was induced at the beginning of cultivation. First, the cell growth and protein production characteristics were compared to shaking flask cultivations using LB medium. Similar cell growth curves and comparable amounts of enzyme were achieved at the end of the cultivations (Fig. 5). Hence, the microtiter plates allow a cultivation comparable to shaking flasks, with the advantage of high throughput. Next, according to their corresponding metabolic pathways [19], the 20 amino acids to be added were grouped into 7 families: I. glycine and serine; II. valine, leucine and isoleucine; III. alanine; IV. glutamine, glutamic acid, proline, and arginine; V. histidine; VI. lysine, threonine, methionine, aspartic acid, cysteine, and asparagine; VII. phenylalanine, tyrosine, and tryptophan. Seven different combinations of amino acid solutions were prepared, each time excluding one group of amino acids. When group II, IV or VII was excluded, the specific activity of PGA increased up to 1.9-, 1.8- and 2.5-fold, respectively. The highest increase in PGA production was observed when the amino acids from group VII were excluded. Group VII contains the aromatic amino acids (F, Y, W), which are usually produced from the pentose phosphate pathway. The minimal medium supplemented with all amino acids excluding group VII was chosen for the described scale-up experiments from microtiter plates via shaking flasks to the bioreactor. Next, the amount of added amino acid solution was optimized in shaking flask cultivations (Fig. 6). B.
megaterium strain YYBm1 carrying plasmid pRBBm49 (encoding SPlipA-PGA) was cultivated in 100 mL minimal medium with final concentrations of none, 0.5×, 1×, and 2× of the amino acid solution excluding the group VII amino acids. The 2× addition of the amino acid solution led to an increased final cell density at the end of the cultivation. However, optimal PGA production was obtained in minimal medium with 1× addition of the amino acid solution, which was also verified by SDS-PAGE analysis of extracellular proteins. These results indicated that amino acids were essential for PGA production, but a higher concentration of amino acids, here 2×, limited PGA production.
Upscale of PGA production using B. megaterium to a 2 L bioreactor
Finally, this optimized minimal medium containing 1× amino acid solution excluding group VII amino acids was used for an upscale in a pH-controlled 2 L bioreactor (Fig. 7). As control, LB complex medium with tryptone from Bacto was also tested in the bioreactor. 29.0 mg L⁻¹ PGA was produced by YYBm1 carrying pRBBm49 (encoding SPlipA-PGA) using the optimized minimal medium. This was only a slight 1.1-fold increase compared to PGA production in the complex medium. For the first time, a higher volumetric productivity was reached in a batch cultivation using a defined minimal medium compared to an undefined complex medium. However, after cultivation in LB medium the specific PGA activity was still 2 times higher than after cultivation in minimal medium due to the 2 times higher biomass production in minimal medium. No PGA precursor was observed in the medium.
Figure: Comparison of growth media for PGA production and export using B. megaterium.
Next, the obtained improvements in the bioreactor were compared to a bioreactor cultivation performed at the beginning of the study. This comparison of the described complex and minimal medium with a pH-controlled batch cultivation of B. megaterium strain MS941 carrying pRBBm23 (encoding SPpga-PGA) using A5 semi-defined medium excluding calcium ions (Fig. 7) provided insights into the improvement process via the different described steps. In cultivations using either LB or minimal medium, PGA secretion started in the exponential phase, whereas in a cultivation using semi-defined A5 medium it started in the stationary phase. Finally, only 4.2 mg PGA per liter of growth medium was obtained using strain MS941 carrying pRBBm23 (encoding SPpga-PGA) in A5 medium.
Figure 4 Comparison of different leader peptides for the production and export of B. megaterium PGA. PGA was produced in shaking flask cultivations of B. megaterium MS941 and YYBm1 carrying either pRBBm23 (encoding SPpga-PGA) or pRBBm49 (encoding SPlipA-PGA) in LB medium containing tryptone from different companies. At an OD578nm of 0.4, pga expression was induced by the addition of 5 g L⁻¹ xylose to the growth medium. Samples were taken at various time points after induction. Proteins from 10 μL of unconcentrated growth medium were separated by SDS-PAGE and stained with Coomassie Brilliant Blue G250. Biomass concentration and PGA volumetric activity 24 h after induction of recombinant gene expression are shown.
Hence, using the newly constructed strain YYBm1 deficient in xylose utilization, the signal peptide of LipA, and an optimized minimal medium supplemented with calcium
ions and a defined mix of amino acids, the volumetric PGA productivity was improved 7-fold, resulting in 29.0 mg PGA per liter of growth medium.
Discussion
We systematically optimized B. megaterium for the recombinant production of PGA. Some unexpected observations were made. A potential metalloprotease was exclusively produced by B. megaterium MS941 and YYBm1 cultivated in medium containing tryptone from Oxoid and not in the presence of Bacto tryptone. The Oxoid tryptone was characterized by its higher content of arginine, aspartic acid, and tyrosine. This might have provided an external stimulus of unknown nature which induced expression of the metalloprotease gene. Determination of the corresponding mRNA levels via Northern blot analysis might help to shed some light on the observed phenomenon. Similarly, PGA production was also influenced by the tryptone source as well as by the amino acid composition and content of the growth medium. The observed production pattern might be the result of a complex interplay of various factors influencing growth, protein production and export as well as stress responses. Usually, complex media provide better growth due to the supply of the full set of known and unknown essential growth factors. Nevertheless, the supply of C-, N-, and S-sources and other growth factors in excess often causes stress and other regulatory responses to optimize the bacterial metabolism towards the environmental stimuli. As a consequence, intracellular amino acid synthesis and protein production and export might be decreased. A systems biotechnology approach with the systematic high-throughput determination of the transcriptome, cytoplasmic proteome, secretome and especially the metabolome for the various growth and protein production conditions will finally help us to determine the exact cellular parameters involved in the observed protein production behaviour. This information might provide a solid basis for the further directed metabolomic engineering of B. megaterium for optimal protein production and export.
Figure 5 Cultivation and PGA production in microtiter plates. YYBm1 carrying pRBBm49 (encoding SPlipA-PGA) was cultivated in LB medium using microtiter plates and shaking flasks. OD578nm from microtiter plate cultivation was measured with a spectrophotometer and a Multiskan Ascent photometer. PGA activity measurements were performed as described in material and methods.
Figure 6 The influence of the concentration of amino acid supplementation on cell dry weight and PGA activity. Shaking flask cultivation of YYBm1 carrying pRBBm49 (encoding SPlipA-PGA) was employed.
In contrast to the complex explanations for the outlined observations, the more efficient PGA production and secretion via an induction of pga gene expression at low cell densities compared to high cell densities might simply be caused by the longer gene induction and protein production time. This phenomenon was observed before by our group [8]. Once one step of protein production, like protein export, becomes limiting, longer protein production times increase the overall product yield. Nevertheless, the PGA amounts produced in this study (~2000 U L⁻¹) did not completely reach those of previously described B. megaterium PGA production strains (~3000 U L⁻¹ in [17], 9000 U L⁻¹ in [9]) or that of the published intracellular production of the enzyme in E.
coli (~30,000 U L⁻¹ in [20]). The outlined enzyme activity results are not simple to compare since absolute protein amounts are not given for the mentioned PGA productions. Therefore, the observed differences between the various B. megaterium production strains might be due to differences in the employed enzymatic test systems. Currently, intracellular protein production in E. coli is still more efficient compared to recombinant protein production and export with B. megaterium. Limitations in upscaling protein production processes including protein export were observed for B. megaterium [1,6]. Again, a systems biology approach should help us to identify existing bottlenecks and allow for systematic bioengineering solutions.
Conclusion
A systematic improvement of the recombinant production and export of B. megaterium ATCC14945 penicillin G amidase using B. megaterium was performed. The addition of 2.5 mM calcium ions increased the specific activity by 2.6-fold. Exchange of its natural signal peptide for the one of the B. megaterium extracellular lipase LipA increased secretion by 1.7-fold. A B. megaterium strain deficient in the extracellular protease NprM and in xylose utilization (ΔxylA) was developed, allowing for stable extracellular proteins and long-time induction of gene expression by xylose. Next, a defined minimal medium with defined amino acid additions for high-yield PGA production was developed. Finally, PGA production was successfully scaled up to 2 L controlled batch fermentations.
Figure 7 Upscaling of PGA production and export using B. megaterium and a 2 L bioreactor. The pH-controlled batch cultivation of B. megaterium YYBm1 carrying pRBBm49 (encoding SPlipA-PGA) was performed in complex medium (square) and optimized minimal medium (circle). B. megaterium MS941 carrying pRBBm23 (encoding SPpga-PGA) was grown in semi-defined A5 medium (triangle). For induction of recombinant gene expression, 5 g L⁻¹ xylose was added at the beginning of the cultivation. Samples were taken at the indicated time points to determine cell dry weight (open) and PGA volumetric activity (solid).
Plasmids and strains
All strains, plasmids and primers (biomers, Ulm, Germany) used in this study are listed in Table 1. Molecular biology methods were described previously [21]. The complete wild-type pga gene including its native signal peptide was amplified by PCR using isolated genomic DNA from B. megaterium ATCC14945 as template and the primers pga_23_for and pga_23_rev. After BsrGI/SacI digestion of the PCR product, it was cloned into the BsrGI/SacI site of pMM1522 [6]. The new vector was named pRBBm23. The pga gene was combined with the signal peptide of LipA by cloning the PCR product generated using B. megaterium ATCC14945 DNA and the primers pga_49_for and pga_49_rev into BglII/EagI-cut pMM1525 [6], generating pRBBm49. A strain deficient in xylose utilization was generated from B. megaterium MS941 by integration of the chloramphenicol resistance-mediating cat gene into the chromosomal copy of the xylA gene via a double crossover [22]. For the necessary construct, the xylA gene was amplified by PCR from B. megaterium MS941 genomic DNA using the primers xylA_as and xylA_s. After digestion of the PCR product, the DNA fragment was cloned into the SacI/SacI sites of pHBIntE [23]. The resulting plasmid was called pYYBm4. The plasmid contained a temperature-sensitive origin of replication.
The cat gene was amplified by PCR from pHV33 [24] using the primers cml_as and cml_s and cloned into the NdeI/XbaI sites of pYYBm4. The resulting plasmid was called pYYBm8. The constructed plasmid was transformed into protoplasted B. megaterium MS941. Cells were grown at the permissive temperature of 30°C [3]. The double crossover was achieved by dividing the chromosomal integration process into two screenable steps: first, single-crossover recombination was achieved by cultivation of the culture at 42°C and addition of 3 mg L⁻¹ chloramphenicol; second, excision of the carrier replicon was screened via isolation of chloramphenicol-resistant bacteria deficient in xylose utilization. The new strain B. megaterium YYBm1 grew on chloramphenicol M9 agar plates and exclusively used glucose as carbon source. The constructed expression plasmids pRBBm23 and pRBBm49 were transformed into B. megaterium strains MS941, YYBm1, WH320, and WH323 by protoplast transformation [3]. All used strains are derivatives of the wild-type strain DSM319. MS941 has a defined deletion in the gene of the major extracellular protease NprM [5]. WH323 is derived from WH320 (a chemically obtained β-galactosidase-deficient mutant of DSM319) by insertion of the E. coli lacZ gene into the xylA gene. Hence, YYBm1 and WH323 do not consume xylose as carbon source.
Cultivation
B. megaterium precultures were cultivated in 50 mL of the indicated medium at 37°C and 120 rpm for 16 h. For microtiter plate cultivation, 200 μL of culture medium with an adjusted initial OD578nm of 0.1 to 0.2 was transferred to a 96-well microtiter plate, except for the outer wells, which were filled with water because of evaporation. The plate was cultivated in a Fluoroskan Ascent fluorescence reader (Thermo Electron Corporation, Dreieich, Germany) at 37°C and 1,020 rpm with an orbital shaking diameter of 1 mm, as described previously [26]. For shaking flask cultivation, B. megaterium strains were grown in 500 mL baffled Erlenmeyer flasks with 100 mL medium at 37°C and 250 rpm. Expression of the pga gene was induced by addition of 5 g L⁻¹ xylose to the growth medium. For bioreactor cultivation, a Biostat B2 (B. Braun, Melsungen, Germany) with 2 L working volume connected to an exhaust gas analysis unit (S710, Sick Maihak, Germany) was used. The bioreactor was inoculated with 1 % (v/v) cells and cultivated at 37°C with the pH controlled at 7, as previously described [1,8].
Analytical procedures
In microtiter plate cultivation, OD580nm was measured in the Multiskan Ascent photometer (Thermo Electron Corporation, Dreieich, Germany). The relationship between OD580nm measured from the microtiter plate and OD578nm measured from a 1 cm cuvette was determined as OD578nm = 3.719 × OD580nm. For shaking flask and bioreactor cultivation, samples for biomass, metabolites, and PGA activity were taken at regular intervals. The OD578nm was measured in triplicate with an Ultrospec 3100 pro spectrophotometer (Amersham Pharmacia, UK). The relationship between CDW and OD578nm was determined as CDW [g L⁻¹] = 0.395 × OD578nm for YYBm1 and as CDW [g L⁻¹] = 0.334 × OD578nm for MS941. The concentration of glucose and metabolites was determined by HPLC (Shimadzu, Japan) using an Aminex HPX-87H column (Bio-Rad, USA) and 10 mM H2SO4 as the mobile phase at a flow rate of 600. SDS-PAGE was performed using a Mini Protean 3 apparatus (Bio-Rad, USA). Proteins were stained with Coomassie Brilliant Blue G250.
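For convenience, the calibration relations above can be collected into a small helper. This is a minimal sketch that uses only the conversion factors quoted in this section; the strain name simply selects the corresponding CDW factor:

```python
# Calibration relations quoted above for converting optical densities
# to cell dry weight (CDW); the CDW factors are strain specific.
OD_PLATE_TO_CUVETTE = 3.719                     # OD578nm = 3.719 * OD580nm
CDW_FACTOR = {"YYBm1": 0.395, "MS941": 0.334}   # g/L per OD578nm unit

def od578_from_plate(od580_plate: float) -> float:
    """Convert a microtiter-plate OD580nm reading to cuvette OD578nm."""
    return OD_PLATE_TO_CUVETTE * od580_plate

def cdw_from_od578(od578: float, strain: str) -> float:
    """Cell dry weight (g/L) from cuvette OD578nm for a given strain."""
    return CDW_FACTOR[strain] * od578

# Example: a plate reading of 0.5 for strain YYBm1.
print(cdw_from_od578(od578_from_plate(0.5), "YYBm1"))  # ~0.73 g/L
```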
For N-terminal sequencing, the separated proteins were transferred onto a polyvinylidene difluoride (PVDF) membrane using a Trans-Blot Semi-Dry Transfer Cell (Bio-Rad, Munich, Germany) as described by the manufacturer, and the N-terminal amino acid sequence was determined by Edman degradation. Directly after sampling, PGA activity was measured spectrophotometrically (Ultrospec 3100 pro, Amersham Biosciences, Sweden) via the hydrolysis of 6-nitro-3-phenylacetamido-benzoic acid (NIPAB) as described previously [20]. The NIPAB solution was freshly prepared by dissolving 60 mg of 6-nitro-3-phenylacetamido-benzoic acid in 100 mL of 50 mM sodium phosphate buffer. After addition of the enzyme sample, the absorption was immediately measured at 405 nm and 37°C for 60 s, after a 20 s delay, against a standard without addition of enzyme. One unit was defined as the amount of enzyme that caused the release of 1 μmol 6-nitrophenol per minute under the test conditions. The extinction coefficient of 6-nitrophenol is 8.98 cm² μmol⁻¹. The purified enzyme has a specific activity of 45 U mg⁻¹ protein [27]. Standard deviations of the performed experiments were below 10 %.
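The unit definition above translates directly into a rate calculation from the measured absorbance slope via the Beer-Lambert law. The sketch below assumes a 1 cm optical path and a hypothetical split between assay and sample volumes, neither of which is specified in the text:

```python
# Activity from the NIPAB assay: one unit releases 1 umol of chromophore
# per minute. Extinction coefficient from the text: 8.98 cm^2/umol,
# which is equivalent to 8.98 mL umol^-1 cm^-1.
EPSILON = 8.98   # mL umol^-1 cm^-1
PATH_CM = 1.0    # assumed cuvette path length (not stated in the text)

def volumetric_activity(dA_per_min: float, v_assay_ml: float,
                        v_sample_ml: float) -> float:
    """Enzyme activity in U per mL of sample from the A405 slope.

    dA_per_min  : absorbance change per minute at 405 nm
    v_assay_ml  : total assay volume in the cuvette (hypothetical)
    v_sample_ml : enzyme sample volume added (hypothetical)
    """
    umol_per_min = dA_per_min * v_assay_ml / (EPSILON * PATH_CM)
    return umol_per_min / v_sample_ml

# Example with made-up numbers: slope 0.09/min, 1 mL assay, 50 uL sample.
print(volumetric_activity(0.09, 1.0, 0.05))  # ~0.20 U/mL
```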
Propulsion for Spacecraft using On-board Laser Reflection and Absorption
In this paper, a new laser-based, on-board propulsion system is proposed. This system utilizes laser reflection and absorption to obtain a net momentum. The proposed system is also unique in the sense that no on-board propellant is required. Further, the power consumed can ideally be recycled 100%. This makes it a viable mode of propulsion for long-distance trips to planets or other solar systems. Further, it has been calculated that, depending upon the lasers used, this system can propel a 4000 kg spacecraft with an acceleration of 100-10,000 m s⁻². This means a 4000 kg mass can be propelled to one-third the speed of light in just 5 × 10⁴ seconds, or roughly 13.9 hours. This is a big jump from current technology, which cannot reach such speeds, let alone so quickly. This makes this system of propulsion our best bet for sub-solar-system travel. Further, the proposed setup is expected to be improved upon.
Theory and Setup
The proposed system uses lasers as the mode of propulsion. It uses a concept developed and discussed by R. L. Forward in a 1984 paper [1]. The proposed setup includes one absorbing surface, an inclined reflecting surface, and another reflecting surface in front of the inclined plane, as shown in Figure 1. The idea is to mount lasers on one surface, which is ideally made of an ideal laser absorber. The lasers are fired normal to this surface, producing a force F1 on the surface (actually, on the spacecraft), as shown in Figure 1. The lasers are reflected by the ideally reflecting inclined plane, which produces a force F2 on the inclined plane, normal to it. The lasers then fall on the third surface, which is again a perfect reflector. This reflecting surface experiences a force F3 normal to it. Due to reflection, a force F4 is again applied on this reflecting surface, which is equal to F3. The reflected beam again falls on the reflecting inclined surface, again applying a force F5 perpendicular to the surface. Finally, the lasers reach the absorbing surface, applying a force F6 equal to F1. The absorbed lasers are then converted back to electric potential and stored in batteries. In the calculation part, it is shown that there is a net force propelling the system, whose magnitude and direction depend upon the angle of the inclined plane and the power of the lasers.
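The force magnitudes used in the calculations below follow from elementary radiation pressure: an absorbed beam of power P delivers a force P/c, and a beam reflected by a perfect mirror inclined at angle α delivers 2(P/c)cos²α normal to that mirror. A small sketch of these standard relations, which are not specific to this paper:

```python
# Elementary radiation-pressure relations used throughout the derivation.
import math

C = 3.0e8  # speed of light in m/s (the rounded value used in this paper)

def force_absorption(power_w: float) -> float:
    """Force (N) on a perfectly absorbing surface hit normally."""
    return power_w / C

def force_reflection(power_w: float, alpha_deg: float) -> float:
    """Force (N) normal to a perfect mirror inclined at alpha degrees."""
    a = math.radians(alpha_deg)
    return 2.0 * power_w / C * math.cos(a) ** 2

print(force_absorption(3e14))        # 1e6 N, the F used below
print(force_reflection(3e14, 45.0))  # 1e6 N normal to a 45-degree mirror
```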
Calculations
Let F1 be the force produced on surface one due to the firing of the lasers, F2 the net force produced when the lasers hit the inclined plane and reflect, F3 the force produced on the third surface, F4 the force produced on the third surface upon reflection of the lasers, F5 the net force produced when the reflected lasers hit the inclined plane and reflect, and F6 the force on the absorbing surface when the lasers are absorbed. The forces produced upon firing, reflecting, and absorbing lasers normal to a plane are equal in magnitude. This gives F1 = F3 = F4 = F6 = F.
The force normal to the inclined plane can be given as F cos²α, where α is the angle of the inclined plane with respect to the lasers from the first surface. As the laser is reflected, the net force on the inclined plane is 2F cos²α. The lasers reflected from the third surface fall on the inclined reflecting plane to further apply a force 2F cos²(90° − α) = 2F sin²α. The net force on the absorbing surface after one complete cycle of laser reflection is F1 + F6 = 2F. The net force on the reflecting surface (third surface) after one complete cycle is F3 + F4 = 2F. To maximize the net force on the inclined plane, let α = 45°. The net force on the inclined plane is then 2F cos²α + 2F sin²α = 2F(cos²α + sin²α) = 2F.
Resolving this force vector into its components gives: the magnitude of the force along the x-axis is 2F cos 45° = √2 F, directed exactly opposite to the force applied on the third surface, as shown in Figure 2; the magnitude of the force along the y-axis is 2F cos 45° = √2 F, directed exactly opposite to the force applied on the first surface, as shown in Figure 2. Therefore, taking √2 ≈ 1.4, the net force on the system after one laser cycle has the components Fr = 2F − √2 F = F(2 − 1.4) ≈ 0.6F along the x-axis and Fp = 2F − √2 F = F(2 − 1.4) ≈ 0.6F along the y-axis. Note that the angle between Fr and Fp is 90°. Therefore, the magnitude of the net force on the spacecraft is √(Fr² + Fp²) = √2 × 0.6F ≈ 0.85F.
This gives 85% of the force to the spacecraft, as opposed to direct laser propulsion. However, this makes it more viable because, in the ideal setup, the fired lasers are converted back to electrical potential and stored again, while still giving a net force of 0.85F.
Calculations for a 300-trillion-watt laser system
If the lasers used are of combined power 3 × 10¹⁴ W, the force F will be F = P/c = 3 × 10¹⁴ / (3 × 10⁸) = 10⁶ N. The net force on the spacecraft is Fn = 0.85 × 10⁶ = 8.5 × 10⁵ N. Let the mass of the spacecraft be around 8500 kg. This gives an acceleration of a = 10² m s⁻² to a spacecraft of 8500 kg mass. Using the classical, non-modified equation of motion v = at, to reach one-third the speed of light the given system will take t = 10⁸ / 10² = 10⁶ seconds, which is almost 11 and a half days.
Calculations for a 3000-trillion-watt laser system and 4250 kg mass
With a combined laser power of 3 × 10¹⁵ W, the force is F = P/c = 10⁷ N, the net force is Fn = 0.85 × 10⁷ = 8.5 × 10⁶ N, and a 4250 kg spacecraft is therefore accelerated at a = Fn/m = 2 × 10³ m s⁻². Using the classical, non-modified equation of motion v = at, to reach one-third the speed of light the given system will take t = 10⁸ / (2 × 10³) = 50,000 seconds, or 13.89 hours.
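The numbers in both worked examples can be reproduced with the short script below. It simply chains F = P/c, the geometric factor of 0.85 derived above, a = Fn/m, and t = v/a with v = c/3, and, like the paper, uses non-relativistic kinematics throughout:

```python
# Reproduce the paper's two worked examples end to end.
C = 3.0e8  # m/s, rounded speed of light as used in the text

def trip_time(power_w: float, mass_kg: float,
              target_v: float = C / 3.0) -> tuple[float, float, float]:
    """Return (net force N, acceleration m/s^2, time s) for the setup."""
    f = power_w / C     # photon force of the beam, F = P/c
    f_net = 0.85 * f    # geometric factor derived above
    a = f_net / mass_kg  # Newtonian acceleration
    return f_net, a, target_v / a

for power, mass in [(3e14, 8500.0), (3e15, 4250.0)]:
    f_net, a, t = trip_time(power, mass)
    print(f"P={power:.0e} W, m={mass:.0f} kg -> a={a:.0f} m/s^2, "
          f"t={t:.0f} s ({t/3600:.1f} h)")
# First case:  a=100 m/s^2,  t=1e6 s (~11.6 days)
# Second case: a=2000 m/s^2, t=5e4 s (~13.9 hours)
```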
QoS Improvement Using In-Network Caching Based on Clustering and Popularity Heuristics in CCN
Content-Centric Networking (CCN) has emerged as a potential Internet architecture that supports a name-based content retrieval mechanism, in contrast to the current host-location-oriented IP architecture. The in-network caching capability of CCN ensures higher content availability and lower network delay, and leads to server load reduction. It was observed that caching the contents on each intermediate node does not use the network resources efficiently. Hence, efficient content caching decisions are crucial to improve the Quality-of-Service (QoS) for the end-user devices and the network performance. Towards this, a novel content caching scheme is proposed in this paper. The proposed scheme first clusters the network nodes based on the hop count and bandwidth parameters to reduce content redundancy and caching operations. Then, the scheme takes content placement decisions using the cluster information, the content popularity, and the hop count parameters, where the caching probability improves as the content traverses toward the requester. Hence, using the proposed heuristics, the popular contents are placed near the edges of the network to achieve a high cache hit ratio. Once the cache becomes full, the scheme implements the Least-Frequently-Used (LFU) replacement scheme to substitute the least accessed content in the network routers. Extensive simulations are conducted and the performance of the proposed scheme is investigated under different network parameters, demonstrating the superiority of the proposed strategy with respect to the peer competing strategies.
Introduction
The Internet was initially designed as a "collection of hosts" used to access available resources distributed in the network. The traditional TCP/IP Internet architecture supports a host-centric content retrieval mechanism, where the contents are accessed using the IP addresses of network nodes. The Internet has become a global infrastructure and, with its tremendous growth in applications, the IP-based network traffic was estimated to reach 4712 exabytes per year by the end of 2022 [1]. Moreover, modern Internet applications [2,3] impose intensive Quality-of-Service (QoS) requirements on content retrieval operations, such as minimal content access delay, low network traffic, and effective use of available network resources. Various techniques for quality improvement in the IP-based environment have been applied in recent research, for example by Tiwari et al. [4,5]. However, the patch-based TCP/IP architecture has started showing its limitations for current Internet applications and their new requirements due to its host-centric nature [6,7]. In this context, Content-Centric Networking (CCN) has been proposed as a clean-slate architecture for the future Internet [8]. CCN supports a content-name-based data retrieval mechanism instead of searching for the IP address-based host in the network to access the required data. Thus, in CCN the data can be retrieved from any network node that has a copy of the requested content. Furthermore, CCN offers an in-network caching capability, and the requested contents can be served from the origin servers or from the cache of nearby intermediate network routers. The underlying content caching improves QoS for the end-users by minimizing content retrieval delay, reducing the load on the network nodes, and lowering traffic during data dissemination [9,10].
The in-network content caching policy takes decisions related to the selection of suitable locations for content placement and the selection of older contents for replacement operations when the cache becomes full. These caching policies are generally categorized into on-path and off-path caching schemes [11]. In on-path schemes [12], the content is cached in the intermediary routers that forward the content from the content provider towards the requester. Recently, several on-path caching schemes have been proposed by various researchers that take content placement decisions based on content popularity [13,14], node importance [10,15], content age [16], and distance-based parameters [17,18], etc. Contrarily, the off-path schemes can place the content in any network router, which may or may not lie on the content delivery route. Generally, the off-path caching schemes consider a hash-based mechanism for content caching decisions, such as [19-21]. Due to the hash-based content caching decisions, most of the off-path caching schemes suffer from higher network traffic and increased path stretch. Additionally, these schemes do not consider the content popularity or topological information during content placement decisions. In contrast to these schemes, the on-path schemes create less communication overhead and lower computational complexity during content caching decisions. Therefore, the on-path caching schemes are widely implemented in CCN. After an exhaustive analysis of the existing on-path caching strategies, there are mainly two reasons that motivated us towards the proposed content caching scheme.
• Network traffic and redundancy: The conventional on-path caching policy of CCN, called ubiquitous caching [22], allows each intermediary router in the retrieval path to temporarily store the incoming contents. This increases the availability of contents near the end-user devices and reduces content retrieval delay up to a certain extent. However, the scheme suffers from high content redundancy, as the same content is placed in all the on-path routers during content forwarding. Due to this, other content requests need to be served by the server, which causes excessive network traffic due to poor cache diversity. This leads to degraded network performance and QoS for end-user devices. Therefore, although caching of contents in the intermediate routers improves network performance, the determination of appropriate network routers and the selection of contents for the caching operations is an open research gap that needs to be addressed.
• Content retrieval delay: Most of the existing on-path caching schemes take autonomous caching decisions. Before forwarding the content to downstream nodes, each on-path router needs to perform certain computations for content caching decisions. This excessive computation for content caching becomes an obstruction to real-time content delivery and also causes excessive consumption of computational resources in the network routers. Therefore, it is essential to reduce the computational delay during caching decisions, and the suitable contents need to be placed in appropriate network routers.
With these motivations, the objective of this paper is to propose an efficient content caching scheme that reduces the content retrieval delay and resource consumption to offer improved network performance in CCN networks. Towards this, the proposed scheme provides a two-fold content caching strategy.
First, it partitions the network nodes into non-overlapping clusters using the topological information of the network. The clustering is performed to reduce content placement/replacement operations and to decrease computational latency in the network routers. During content retrieval, at most one copy of the incoming content is cached in the cluster from which the request was generated. The intermediate routers on the path that do not belong to the requester's cluster cannot cache the forwarded contents. Hence, the computational latency is significantly reduced for the network routers. Secondly, to make caching decisions, the proposed scheme considers the content popularity and the hop count information to place popular contents near the end-user devices. When an intra-cluster router caches the incoming content, the remaining routers of that cluster just forward the content towards the requester without further caching operations. Thus, the proposed heuristics also control excessive content redundancy and lead to comprehensive use of the caching capacities of the network. The major contributions of the paper are as follows:

• A clustering-based in-network content caching scheme is proposed for CCN to improve QoS for end-user devices and to make comprehensive use of cache space. By clustering the network nodes, the proposed scheme constrains excessive caching operations and content redundancy in the network.

• The proposed caching scheme considers content popularity and hop-count metrics along with the cluster information for caching decisions. Using these heuristics, the caching probability increases for frequently accessed contents near the end-user devices, reducing content access delay.

• The performance of the proposed caching scheme is examined through extensive simulations on a realistic network topology. Simulation results show the necessity of the proposed clustering-based caching scheme, since the conventional scheme does not achieve a considerable hit rate in the network. Moreover, the proposed scheme demonstrates a significant decrease in content retrieval delay and network traffic compared with the existing caching strategies.

The organization of the remaining paper is as follows. The next section (Section 2) provides an overview of CCN. Section 3 discusses a brief survey of the prior related works. The system model is presented in Section 4. In Section 5, the novel clustering and caching schemes are proposed. The performance of the proposed scheme is evaluated and compared with peer caching schemes in Section 6. Finally, the paper is concluded in Section 7.

Overview of CCN Architecture

This section briefly describes the CCN architecture and its operations to provide the foundation for further discussions. As CCN is a data-centric network, the content retrieval mechanism relies on two types of messages: the Interest message and the Content message [23]. The end-user device generates an Interest message to request a specific content, and the in-network router/provider replies with the corresponding Content message. For the routing and caching operations, each router maintains a Forwarding Information Base (FIB), a Content Store (CS), and a Pending Interest Table (PIT) [24]. The FIB contains the interface information used to forward the Interest message towards the content source. The incoming content can be cached in the CS of on-path routers based on the caching policy.
When a router receives an Interest message from one or more interfaces, the information of those pending Interest messages and their interfaces is stored in the PIT. On receiving an Interest message from the end-user device, the network router first searches its CS for the requested content. If a cache hit occurs, then the Content message is created by the router and forwarded towards the end-user device through the interface on which the Interest message arrived. If a cache miss occurs, then the router inspects its PIT. If a matching entry is found in the PIT, then the interface information of the incoming Interest message is aggregated in the PIT and the message is discarded from the network. Otherwise, a record is created in the PIT and the Interest message is forwarded towards the source using the FIB. When an intermediate router receives a Content message, it checks its PIT for matching records. If an entry is found, then the router forwards the Content message toward the interfaces mentioned in the PIT and caches the Content message in its CS based on the content placement and replacement policies. After content forwarding, the router removes the entries for that Content message from the PIT.

Literature Review

In-network content caching is an inherent characteristic of the CCN architecture that raises several challenges during content placement and replacement operations. To improve the network performance and QoS for the end-user devices, various content caching schemes have been proposed by the research community [25,26]. The traditional Leave-Copy-Everywhere (LCE) [27] caching scheme places the content in each intermediate router throughout the delivery path. The scheme caches the contents near the end-user devices and reduces content retrieval delay for future Interest messages. However, this excessive caching causes high energy consumption and frequent cache replacement operations. Moreover, the excessive content redundancy also increases the cache miss probability, as the cache size is limited in realistic networks. Therefore, a trade-off exists between caching and no-caching operations. Excessive caching operations can reduce latency to a certain extent but cause extreme exploitation of network resources. On the other side, no caching in the network routers leads to higher delays and network traffic. Hence, it is necessary to focus on frequently requested contents and suitable locations for optimal network performance. For content placement decisions, a random probability-based caching scheme called RandProb is proposed in [28]. The scheme randomly places the incoming contents in the on-path routers and does not incur significant computational latency during caching decisions. To reduce cache replacements, the Leave-Copy-Down (LCD) scheme is suggested in [29], which places the accessed content one hop downstream from the content provider. With this, frequently accessed contents are gradually placed towards the edges of the network. The ProbCache caching strategy [18] approximates the caching capacity of the path and multiplexes the contents between the server and the end-user device (requester). Using the proposed mechanism, the ProbCache scheme fairly allocates the network resources among different network flows. However, these caching schemes [18,27-29] do not consider the router's characteristics and content popularity during caching decisions and are hence unable to make efficient use of caching resources.
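To make these baseline decision rules concrete, the following is a minimal sketch of the placement choices reviewed above. The function name, its parameters, and the scheme labels are illustrative simplifications; in a real router, the decision sits inside the Content message processing pipeline.

```python
# A simplified sketch of the baseline on-path placement decisions (LCE, LCD,
# and random probability-based caching). Names and signatures are illustrative only.
import random

def should_cache(scheme: str, hops_from_provider: int, p: float = 0.5) -> bool:
    """Decide whether the current on-path router caches the incoming content."""
    if scheme == "LCE":       # Leave-Copy-Everywhere: every on-path router caches
        return True
    if scheme == "LCD":       # Leave-Copy-Down: only the router one hop below the provider
        return hops_from_provider == 1
    if scheme == "RandProb":  # random probability-based placement
        return random.random() < p
    raise ValueError(f"unknown scheme: {scheme}")
```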
To increase the cache hit probability on routers that observe high network traffic, various centrality-based caching schemes have also been proposed [30]. A betweenness-centrality-based caching approach is suggested in [31] that eliminates the uncertainty of random-probability-based content placement decisions and shows improved caching gains. An in-depth comparison of several centrality-metric-based caching mechanisms has been performed in [15], involving Degree Centrality (DC-based), Stress Centrality, Betweenness Centrality, etc. The results illustrate that degree centrality is a simple and effective parameter for efficient cache use. The CPNDD (Content Placement based on Normalized Node Degree and Distance) caching scheme [17] shows that considering a single parameter for caching decisions does not achieve significant performance gains. The scheme suggests jointly considering the degree centrality and hop count parameters for content placement decisions. Using these parameters, the caching probability increases in routers that have a high degree centrality and are far from the content provider. The results show an improved cache hit ratio and a reduction in server load compared to the LCE and DC-based caching strategies. Various researchers have also recommended considering the content popularity for caching decisions in the network. Towards this, in the Most-Popular Content Caching (MPC) scheme [32], each router computes content access frequencies autonomously. When a content becomes popular enough, the router recommends that its adjacent routers cache the popular content in their storage. Using this approach, the cache redundancy increases for popular contents in the network. The Content Popularity and User Location (CPUL)-based caching scheme [33] divides the contents into popular and normal contents using a centralized server. The scheme then makes caching decisions based on the type of content and the user location in the network. However, as defined in the scheme, the determination of content popularity on a centralized server raises scalability concerns for large-scale networks. The Dynamic Popularity Window-based Caching Scheme (DPWCS) [14] proposes implementing a large popularity window in each network router, which is used to determine the popularity of contents. The scheme identifies popular contents based on the request distribution model, the caching capacity of the routers, and the number of distinct contents in the network. One of our prior works, Tiwari et al. [34], discusses a content Popularity and Distance-based Caching scheme (PDC) for content placement/replacement decisions. The scheme jointly considers the content popularity and hop-count-based distance attributes during content caching in the network and shows improved network performance compared to the conventional LCE and DC-based caching strategies. However, most of the above-discussed caching schemes [14,15,17,27-29,34] make autonomous caching decisions, where routers do not cooperate in content placement operations. Although autonomous content caching reduces communication overhead in the network, these schemes suffer from high content redundancy and frequent cache replacement operations. Moreover, many schemes consider at most one parameter for their caching decisions, such as node centrality, content popularity, or hop count [18,29-32].
Due to this, these schemes suffer from load imbalance, as the routers that are near the server or have a higher degree centrality experience more caching operations than the other routers in the network. To alleviate the load-imbalance issues and reduce excessive caching operations, several cluster-based caching schemes have also been proposed for CCN [35-38]. The Hierarchical Cluster-based Caching (HCC) scheme [35] partitions the network routers into core routers and edge routers. The core routers do not have caching capability, and only a few selected edge routers can cache the contents. For caching decisions, the scheme jointly considers the node degree centrality, hop-count, and delay metrics. In [36], the authors propose k-split and k-medoid clustering schemes to partition the network. The scheme performs hash-based caching operations and thus does not consider content or router characteristics during content placement decisions. The scheme in [37] creates a fixed number of partitions in the network based on the hop count information and performs caching operations using the partition information and the content popularity in the network. A cluster-based scalable scheme is suggested in [38] that combines physical routers together so that they are seen as a single unit by outside nodes; internally, however, the traffic load is distributed among the physical routers. Once the cache of a network router becomes full, an older content needs to be evicted to accommodate the incoming content. Generally, this cache replacement operation is performed using the First-In-First-Out (FIFO), Least-Recently-Used (LRU), Least-Frequently-Used (LFU), or optimal cache replacement strategies [39,40]. As discussed in [39,41], the optimal replacement scheme achieves improved network performance compared to its peers. However, the implementation of the optimal strategy is not feasible, as the content request pattern cannot be predicted in realistic network topologies. Due to this, the LRU and LFU algorithms are widely implemented alongside content placement schemes, owing to their sensitivity to the content access pattern and content popularity, respectively. The distinguishing features of the reviewed caching strategies are summarized in Table 1. As shown in Table 1, in most of the existing on-path caching schemes the routers make caching decisions independently and do not cooperate with each other. This leads to an excessive number of caching operations and increases duplicate contents in the network. Due to this, the existing schemes achieve limited gains in network performance. Additionally, the existing clustering-based caching schemes have not explored the joint effect of content popularity and distance attributes on caching performance. Therefore, a novel network clustering scheme is proposed in this paper for efficient use of the caching resources and improved QoS for the end-users. The proposed scheme considers hop-count and link bandwidth information to form tightly coupled clusters. Then, the proposed caching scheme jointly considers the cluster information, the content popularity, and the distance to the content provider for caching decisions. With this, the popular contents are placed near the end-users with fairly multiplexed content redundancy along the path. This makes the proposed scheme suitable for CCN-based applications.
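Since the proposed scheme later pairs its placement heuristic with LFU replacement, a minimal sketch of an LFU-managed content store is given below. The class and method names are illustrative assumptions, not an API of any CCN implementation.

```python
# A minimal sketch of an LFU-managed content store. When the store is full,
# the content with the lowest access count is evicted.
class LFUContentStore:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = {}   # content name -> payload
        self.freq = {}    # content name -> access count

    def get(self, name):
        """Return the cached payload (a cache hit bumps the access count)."""
        if name in self.store:
            self.freq[name] += 1
            return self.store[name]
        return None       # cache miss

    def insert_lfu(self, name, payload):
        """Insert a content, evicting the least-frequently-used one if full."""
        if name in self.store:
            return
        if len(self.store) >= self.capacity:
            victim = min(self.freq, key=self.freq.get)   # least-frequently used
            del self.store[victim]
            del self.freq[victim]
        self.store[name] = payload
        self.freq[name] = 1
```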
System Model and Assumptions

Let G(V, E) be a network topology, where V denotes the set of nodes and E denotes the set of connections used for Interest/Content message forwarding among the nodes. Figure 1 illustrates an example of the network topology. Here, U_i represents the i-th end-user device, which generates Interest messages in the network. R_i denotes the i-th router in the network; the routers perform Interest/Content message forwarding and caching operations. The notation serv denotes a server in the network, and each server works as an Interest message sink that satisfies all Interest messages. In the system, all the network routers have caching capability (assumed for simplicity, although it is not necessary), and the decisions related to content placement depend on several parameters, as described in Section 5. Our recent studies [14,34] established effective heuristics for the determination of content popularity that can assist in computing the content access frequencies. However, these previously suggested schemes make autonomous caching decisions and leave scope for improvement through cooperation among network nodes. To simplify further discussions, the notations used in the model are defined in Table 2. It has been assumed that the content packets are of fixed size and that the content access pattern follows the Zipf distribution model [15,42]. The Zipf distribution is widely used in large-scale networks to model realistic network traffic patterns, as it assigns ranks to the contents based on their popularity. Here, content popularity is defined as the content access frequency over the catalogue [10]. It has also been assumed that the proposed scheme implements the request-response model [43] of Content-Centric Networking. In this model, the Content message follows the same route through which the Interest message arrived at the content provider. In general, these assumptions are unbiased given the location independence and name-based routing features of CCN. As shown in Figure 1, the network has been partitioned into three clusters, namely C_1, C_2, and C_3, using the proposed network clustering scheme elaborated in the subsequent section. Cluster C_1 contains routers R_1, R_2, and R_3 and the end-user devices U_1 to U_6. In other words, {R_1, R_2, R_3, U_1, U_2, ..., U_6} ⊆ C_1. Similarly, {R_4, R_5, U_6, ..., U_11} ⊆ C_2 and {R_6, R_7, U_12} ⊆ C_3. Suppose the end-user device U_3 generates an Interest message for the content name "\prefix\xyz" and forwards this message towards the server. Let us assume that the Interest message follows the path U_3 → R_1 → R_2 → R_5 → R_7 → serv and that no intermediate router has a copy of the requested content. Then, the server prepares the corresponding Content message with the required payload and transmits it in the backward direction towards U_3. In the proposed caching scheme, at most one copy of the incoming content is cached in the cluster from which its request was generated (C_1, as U_3 ∈ C_1). As the Interest message for content "\prefix\xyz" was generated from U_3 ∈ C_1, the on-path routers R_1 and R_2 make content placement decisions based on the content popularity and the hop count parameters (discussed in Section 5.5). Thus, the remaining intermediate routers on the path (R_5 and R_7) simply forward the content "\prefix\xyz" towards U_3 without any caching operation, as R_5, R_7 ∉ C_1.
Therefore, the content redundancy and the number of caching operations are reduced significantly in the network. It is argued that this leads to lower content retrieval delay, reduced network traffic, and improved QoS for the end-user devices.

Table 2: Notations used in the system model.
λ: Request rate from each end-user device per unit time.
H(I_j): Number of in-network routers and servers traversed by the Interest message I_j.
H(D_j): Number of in-network routers and servers traversed by the Content message D_j.
Hop(R_i, R_j): Number of in-network routers between the routers R_i and R_j.
Min(B(R_i, R_j)): Minimum bandwidth among the intermediate links between R_i and R_j.
α: Exponent value of the Zipf distribution.
Clus(I_j): Unique identification number of the cluster in which I_j is generated.
η: Boolean variable that controls intra-cluster caching operations.
T_R: Threshold value for caching decisions in the network routers.
|Ctlg|: Number of distinct contents in the network.

For caching decisions, the content popularity and hop count metrics are determined using the following concepts:

Content popularity determination using the Popularity Table: According to the Zipf distribution, there are always only a few requests for the unpopular contents in the network. If the caching scheme does not consider content access patterns during placement decisions, then the unpopular contents may be stored for long durations in the network routers without being accessed again. This leads to poor use of network resources, as the cache miss probability increases due to the caching of unpopular contents. Moreover, it has also been observed that a few routers of high importance receive a greater number of Interest messages than the other routers in the network. To resolve these issues, our previous work [17] suggested integrating a large Popularity Table into each network router. This table is used to determine the content access frequency. The Popularity Table stores only the names of the requested contents in the slots of PT_{R_i}; hence, it imposes negligible space overhead on the routers. When the Popularity Table reaches its maximum size (Max(|PT_{R_i}|)), a First-In-First-Out (FIFO) replacement mechanism is used to evict the oldest content request from the table in order to store the incoming request information. During caching decisions, the router computes the popularity of the incoming content by counting its occurrences in the Popularity Table. Figure 2 illustrates the working of the Popularity Table. Suppose the maximum size of the Popularity Table, Max(|PT_{R_i}|), is 5. Figure 2a shows the structure of a Popularity Table, implemented in a specific router R_i, after the arrival of Interest messages I_1, I_4, and I_3 in sequence. As shown in the figure, only the names of the requested contents (Name(I_i)) are stored in the Popularity Table; therefore, this structure does not cause significant storage overhead in the cache. In Figure 2a, two slots of the Popularity Table are empty, which is described as Max(|PT_{R_i}|) = 5 and |PT_{R_i}| = 3. After the arrival of Interest messages I_2 and I_4, the empty slots of the Popularity Table are filled, as demonstrated in Figure 2b, and the structure reaches its maximum capacity (Max(|PT_{R_i}|) = |PT_{R_i}| = 5). When a new Interest message (I_5) arrives, the router determines that the Popularity Table has no free slot; hence, the FIFO replacement algorithm is used to evict the oldest content name from the Popularity Table to store the information of the incoming Interest message.
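The behaviour just described can be captured in a few lines. The following is a minimal sketch, assuming content names are plain strings; the class and method names are illustrative only.

```python
# A minimal sketch of the Popularity Table: a fixed-size FIFO window of
# requested content names, used to estimate access frequency.
from collections import deque

class PopularityTable:
    def __init__(self, max_slots: int):
        # A deque with maxlen evicts the oldest name automatically (FIFO)
        self.slots = deque(maxlen=max_slots)

    def record(self, name: str) -> None:
        """Store the name of an incoming Interest; evicts the oldest if full."""
        self.slots.append(name)

    def popularity(self, name: str) -> int:
        """Occurrences of `name` in the window (raw access frequency)."""
        return sum(1 for n in self.slots if n == name)

# Walking through Figure 2 with Max(|PT|) = 5:
pt = PopularityTable(5)
for interest in ["I1", "I4", "I3", "I2", "I4"]:
    pt.record(interest)
pt.record("I5")              # table full: the oldest entry "I1" is evicted (FIFO)
print(pt.popularity("I4"))   # -> 2
```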
Thus, the information of the oldest Interest message (I_1) is replaced with Name(I_5), as shown in Figure 2c, and Name(I_4) (slot 2) now becomes the oldest entry, to be evicted on a future Interest message arrival. Figure 2. An illustration of the management of Interest message information in the Popularity Table.

Hop count monitoring: The hop count is a simple and effective metric for increasing the caching probability towards the edges of the network [18,34]. The hop count metric for an Interest/Content message is computed as the number of hops (routers/servers) traversed by the message to reach the content provider/requester, respectively.

Proposed Caching Scheme

In this section, the proposed network clustering scheme is discussed in Section 5.1. Section 5.2 defines the updated structures of the Interest and Content messages used for the caching decisions. Then, the proposed Interest and Content message processing mechanisms are introduced in Sections 5.3 and 5.4, respectively.

Proposed Clustering Scheme

Algorithm 1 shows the proposed clustering mechanism used to form the clusters. The intra-cluster nodes collaborate with each other to make caching decisions without any additional communication overhead. In the proposed clustering strategy, initially the top k routers are identified according to their degree centrality in the network. The degree centrality is computed as the total number of inbound and outbound links connected to a router. The optimal number of clusters is obtained by observing the network performance (in terms of cache hit ratio) for different numbers of clusters. Therefore, the network clustering is dynamic and changes for different network topologies. These k routers are designated as the initial centroids (Centroid_i ∈ C_i) before the clustering of the network nodes begins. Using the degree centrality metric, the clusters become tightly coupled, as a greater number of routers are adjacent to the centroids. This is described in steps 1 and 2 of Algorithm 1. It would also be interesting to analyze other metrics for the selection of initial centroids, such as betweenness centrality and closeness centrality. However, earlier works [15,44] in this direction have shown that node degree centrality is a sufficiently good criterion for network clustering. Additionally, the time complexity of determining the degree centrality in a network topology is O(V^2), which is much lower than the time complexity of computing the betweenness and closeness centrality measures, which is O(VE + V^2). Therefore, the degree centrality measure is used to select the initial centroids.

Algorithm 1: Proposed network clustering scheme
1. Sort the routers in decreasing order of degree centrality.
2. Designate the top k routers with the highest degree centrality as the initial centroids (Centroid_i ∈ C_i).
3. Iterate steps 3(a), 3(b), and 4 while the centroids keep changing:
(a) Determine the distance between each router R_j and each centroid Centroid_i using Equation (1).
(b) Assign each router R_j to the closest centroid Centroid_i, i.e., R_j ∈ C_i.
4. In each cluster, determine the new centroid Centroid_i as the router that has the minimum distance from the intra-cluster routers.

The scheme thus determines the distance of each router R_j from all the centroids (Centroid_i; i ∈ {1, 2, ..., k}), as illustrated in step 3(a). The distance between a centroid Centroid_i and the router R_j is determined using the hop count and bandwidth parameters, as defined in Equation (1).
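A minimal sketch of this loop is given below. Equation (1) itself is not reproduced in this excerpt; the hops-over-bandwidth ratio used here is an assumption consistent with the description that follows (the distance grows with the hop count and shrinks as the minimum link bandwidth increases), and all function names are illustrative.

```python
def dist(hops: int, min_bw: float) -> float:
    # Assumed form of Equation (1): more hops -> larger distance,
    # higher bottleneck bandwidth -> smaller distance.
    return hops / min_bw

def cluster(routers, k, hop, min_bw, degree):
    """hop(a, b): hop count; min_bw(a, b): bottleneck bandwidth; degree(r): degree centrality."""
    centroids = sorted(routers, key=degree, reverse=True)[:k]        # steps 1-2
    while True:
        clusters = {c: [] for c in centroids}                        # steps 3(a)-3(b)
        for r in routers:
            closest = min(centroids, key=lambda c: dist(hop(c, r), min_bw(c, r)))
            clusters[closest].append(r)
        new_centroids = [                                            # step 4 (medoid update)
            min(members, key=lambda m: sum(dist(hop(m, r), min_bw(m, r)) for r in members))
            for members in clusters.values() if members              # assumes non-empty clusters
        ]
        if set(new_centroids) == set(centroids):                     # converged: no change
            return clusters
        centroids = new_centroids
```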
The probability of associating a router with a specific cluster increases as the number of hops between the router and the centroid decreases. The value of the distance parameter Dist(Centroid_i, R_j) decreases as the bandwidth between the centroid and the router increases. Therefore, using Equation (1), each router is assigned to a centroid that has the minimum hop count from the router and is connected through high-bandwidth links, forming tightly coupled clusters (step 3(b)). This improves the efficiency of content forwarding between nodes within a cluster, using higher-bandwidth connections. After each iteration of cluster formation, the router that has the minimum distance (computed using Equation (1)) from its intra-cluster routers is designated as the new centroid of its cluster. If the centroids have changed compared with the previous iteration, then step 3 is executed again. Otherwise, if there is no change in the centroids, the cluster formation process is complete and the routers are partitioned into k clusters. After the clustering of the network routers, the end-user devices connected to the edge routers also become part of their respective clusters.

Structure of Interest and Content Messages

The proposed caching scheme considers the cluster information, content popularity, and hop count parameters for caching decisions. Therefore, the structures of the Interest and Content messages are updated to store the information for these parameters. Towards this, each Interest message I_j is extended with the novel fields H(I_j) and Clus(I_j), as shown below.

Structure of the Interest message: | Name(I_j) | H(I_j) | Clus(I_j) | ... |

Here, the name of the requested content is stored in the Name(I_j) field. The H(I_j) field stores the total number of hops traversed by the Interest message I_j. The Clus(I_j) field contains the unique identification number of the cluster in which I_j is generated by the end-user device U_u. This cluster identification id is identical for all the end-user devices and routers that are grouped together in a cluster, and differs between clusters. As the content caching operations are performed while the Content (Data) message D_j is being forwarded towards the end-user devices, the H(I_j), Clus(I_j), and H(D_j) fields are appended to D_j for efficient caching decisions. The structure of the Content message is illustrated below.

Structure of the Content message: | Name(D_j) | H(I_j) | Clus(I_j) | H(D_j) | ... |

The name of the requested content is stored in the Name(D_j) field. The H(I_j) field contains the hop count traversed by the Interest message I_j from the end-user device to the content provider. The values of the H(I_j) and Clus(I_j) fields in D_j are replicated from the Interest message I_j, and the count of hops traversed by D_j is stored in the field H(D_j).

Interest Message Forwarding Mechanism

In this section, the Interest message forwarding and processing mechanisms are discussed and summarized in Algorithm 2 (Interest message forwarding mechanism). As shown in step 1 of the algorithm, when an end-user device U_u requires a content (Data) D_j, it prepares the corresponding Interest message I_j with the requested content name as Name(I_j) and initializes the H(I_j) field to 0.
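These extended structures can be sketched directly. The field names follow the paper, while the payload field and the default values are simplifications; the Clus(D_j) and η fields used later in Algorithm 3 are included here as well.

```python
# A minimal sketch of the extended Interest and Content message structures.
from dataclasses import dataclass

@dataclass
class Interest:
    name: str          # Name(I_j): requested content name
    h: int = 0         # H(I_j): hops traversed by the Interest so far
    clus: int = -1     # Clus(I_j): cluster id of the requesting end-user device

@dataclass
class Content:
    name: str          # Name(D_j)
    h_i: int           # H(I_j), replicated from the Interest
    clus_i: int        # Clus(I_j), replicated from the Interest
    h_d: int = 0       # H(D_j): hops traversed by the Content so far
    clus_p: int = -1   # Clus(D_j): cluster id of the content provider
    eta: bool = True   # η: caching still enabled for the requester's cluster
    payload: bytes = b""
```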
The network is already clustered according to the proposed clustering scheme, and each cluster has a unique identification number that is the same for all the intra-cluster nodes (end-users and routers). Therefore, the device U_u writes its cluster identification id in the Clus(I_j) field of I_j and forwards it to the adjacent router R_i (step 2). On receiving the message I_j, each on-path router R_i increases the value of the H(I_j) field by 1 (step 3(a)) and inserts the requested content name Name(I_j) into its Popularity Table according to the FIFO replacement mechanism, as shown in step 3(b). Then, R_i searches its cache for the requested content; if the content exists, Algorithm 3 (Content message forwarding and caching mechanism, discussed in Section 5.4) is executed. Otherwise, the traditional Interest message forwarding process is executed, as illustrated in steps 3(d) to 3(f) and elaborated in Section 2.

Algorithm 2: Interest message forwarding mechanism (U_u, I_j, R_i, R_m)
1. U_u prepares an Interest message I_j to retrieve the content D_j and initializes H(I_j) = 0.
2. U_u writes its unique cluster identification id in the Clus(I_j) field of I_j and forwards it to its adjacent upstream router R_i.
3. Any intermediate router R_i performs the following steps after receiving I_j:
(a) Update the value of the H(I_j) field as H(I_j) = H(I_j) + 1.
(b) Insert Name(I_j) into slot s of PT_{R_i}, where s represents the next empty slot in the Popularity Table of R_i (applying FIFO replacement when the table is full).
(c) If the requested content exists in CS(R_i), then proceed to Algorithm 3 (Content message forwarding and caching mechanism).
(d) Else, if the PIT of R_i has a record for I_j, then aggregate I_j in its PIT.
(e) Else, search the FIB of R_i to forward I_j to the appropriate upstream router. If an entry is found, then forward I_j accordingly and create an entry in the PIT.
(f) Else, discard I_j from the network.

Algorithm 3: Content message forwarding and caching mechanism (U_u, D_j, R_m/serv, R_y)
1. If the requested content exists in CS(R_m) or I_j reaches the server (serv), then the following steps are performed:
(a) Prepare a Content message D_j, initializing the corresponding field Name(D_j) and the requested payload.
(b) Replicate the values of the Clus(I_j) and H(I_j) fields from I_j to the corresponding fields of D_j.
(c) Initialize H(D_j) = 0.
(d) The content provider (R_m/serv) writes its unique cluster identification id (Clus(R_m)/Clus(serv)) in the Clus(D_j) field of D_j.
(e) Initialize the boolean field η as TRUE.
(f) Transmit D_j towards U_u.
2. When D_j reaches an intermediate router R_y, then R_y performs the following steps for caching decisions and content forwarding towards U_u.
3. Update the value of the H(D_j) field as H(D_j) = H(D_j) + 1.
4. If Clus(R_y) ≠ Clus(I_j), or Clus(D_j) = Clus(I_j), then skip the caching operation and only forward D_j.
5. Else, (a) determine the popularity of D_j by counting the occurrences of Name(D_j) in PT_{R_y}; (b) compute Caching_Gain as the product of the content popularity and the normalized hop count H(D_j)/H(I_j); (c) if T_R ≤ Caching_Gain and η = TRUE, then cache D_j in CS(R_y) using the LFU cache replacement strategy and reset η = FALSE.
6. R_y forwards D_j towards U_u using its PIT.

Content Message Forwarding and Caching Mechanism

This section elaborates the Content message forwarding and caching mechanism, which is summarized in Algorithm 3 (Content message forwarding and caching mechanism). When the requested content is found in the cache of router R_m, or the Interest message I_j reaches the server (serv), then R_m/serv prepares a Content message D_j with the requested payload, as shown in step 1 of Algorithm 3. Then, the content provider (R_m/serv) replicates the values of the Clus(I_j) and H(I_j) fields from I_j to the corresponding fields of D_j and resets the value of H(D_j) to 0.
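As a minimal sketch of Algorithm 2, the router-side processing might look as follows. The router object with its pt (the Popularity Table sketched earlier), cs (the LFU content store), pit, and fib members, and the make_content helper, are hypothetical simplifications.

```python
# A simplified sketch of step 3 of Algorithm 2 (per-router Interest processing).
def on_interest(router, interest, from_iface):
    interest.h += 1                                   # step 3(a)
    router.pt.record(interest.name)                   # step 3(b): FIFO popularity window
    if router.cs.get(interest.name) is not None:      # step 3(c): cache hit
        return make_content(router, interest)         #   hand over to Algorithm 3
    if interest.name in router.pit:                   # step 3(d): aggregate in the PIT
        router.pit[interest.name].add(from_iface)
        return None
    next_iface = router.fib.get(interest.name)        # step 3(e): FIB lookup
    if next_iface is not None:
        router.pit[interest.name] = {from_iface}
        router.forward(interest, next_iface)
        return None
    return None                                       # step 3(f): no route, discard
```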
Subsequently, R_m/serv writes its unique cluster identification id in the Clus(D_j) field of D_j and sets the boolean variable η to TRUE, which indicates that caching is enabled for the content in the on-path routers (steps 1(d) and 1(e)). The content provider then forwards the message towards its requester U_u. Along the path, each intermediate router R_y performs steps 2 to 6 for content caching and forwarding operations. As illustrated in step 3, the on-path router R_y increases the hop count value of the H(D_j) field by 1. In the proposed caching scheme, at most one copy of the content is cached, and only in those routers R_y that belong to the cluster from which the request was generated (Clus(I_j) = Clus(R_y)). The routers that belong to other intermediate clusters perform content forwarding without caching. This approach minimizes the computational and caching delay, as shown in step 4. Moreover, to reduce cache replacements and content redundancy, the content is also not cached in the intermediate routers if the content provider (R_m/serv) and the requester (U_u) exist in the same cluster (Clus(D_j) = Clus(I_j)), as shown in step 4. Otherwise, if the Interest message was generated in a different cluster from the content provider, the following steps are performed. For the caching decision in R_y (with Clus(R_y) = Clus(I_j)), the popularity of D_j is determined by counting the occurrences of requests for D_j in PT_{R_y}, as mentioned in step 5(a). Then, the Caching_Gain is computed as the product of the content popularity and the normalized hop count parameter (step 5(b)), where the normalized hop count is the ratio of H(D_j) to H(I_j). According to step 5(b), the Caching_Gain increases with an increase in the content popularity and in the distance traversed by the Content message D_j. Therefore, the popular contents are placed near the edges of the network with a higher probability, and the excessive content redundancy is controlled using the proposed clustering-based mechanism. Once the cache of the intermediate router is full, the LFU replacement algorithm is used to substitute the least popular content with an incoming content that has Caching_Gain ≥ T_R (threshold). The content caching operation is performed only when the value of η is TRUE, which indicates that the content D_j has not yet been cached in the cluster Clus(R_y). To ensure that at most one router caches the incoming content D_j in the requester's cluster, the value of η is reset to FALSE after content caching. Finally, each intermediate router R_y forwards the Content message towards the requester U_u, irrespective of the caching decision, as defined in step 6.

An Illustration of the Proposed Content Message Forwarding and Caching Mechanism

As discussed in Section 4, suppose the network is partitioned into three clusters, as shown in Figure 1, and an Interest message for "\prefix\xyz" (denoted I_i from now on) is generated by U_3 and forwarded in the network through the route U_3 → R_1 → R_2 → R_5 → R_7 → serv. As shown in Section 4, in the proposed caching scheme the content caching decisions are taken by R_1 and R_2, based on the content popularity and hop count parameters, because the request was generated from cluster C_1. Suppose the size of the Popularity Table is 10 in both R_1 and R_2, and the counts of Interest messages for I_i in the Popularity Tables are 5 (PT_{R_1}(Name(D_i))) and 6 (PT_{R_2}(Name(D_i))), respectively.
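Before completing this walk-through, the caching decision of Algorithm 3 (steps 3 to 6) can be sketched as follows. The popularity term is normalized by the Popularity Table size, which is an interpretation chosen to match the worked example below; router.clus, insert_lfu, and forward_via_pit are hypothetical helpers (the second corresponds to the LFU store sketched earlier).

```python
# A simplified sketch of per-router Content processing (Algorithm 3, steps 3-6).
def on_content(router, content, t_r: float) -> None:
    content.h_d += 1                                              # step 3
    in_requesting_cluster = (router.clus == content.clus_i)
    provider_in_same_cluster = (content.clus_p == content.clus_i)
    if in_requesting_cluster and not provider_in_same_cluster:    # step 4 passed
        # step 5(a): popularity, normalized by the Popularity Table size (assumption)
        pop = router.pt.popularity(content.name) / router.pt.slots.maxlen
        caching_gain = pop * (content.h_d / content.h_i)          # step 5(b)
        if caching_gain >= t_r and content.eta:                   # step 5(c)
            router.cs.insert_lfu(content.name, content.payload)   # LFU replacement if full
            content.eta = False    # at most one cached copy per requesting cluster
    router.forward_via_pit(content)                               # step 6: always forward
```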
As the requested content is fetched from the server, the value of H(I_i) is 5. The value of H(D_i) is 4 at router R_1 and 3 at router R_2. The Caching_Gain at router R_2 is then computed using step 5(b) of Algorithm 3, with the popularity normalized by the Popularity Table size, as follows: Caching_Gain(R_2) = (6/10) × (3/5) = 0.36. Suppose the value of T_R is 0.4; then, according to step 5(c), the content is not cached in R_2, because T_R > Caching_Gain. The Content message D_i is then forwarded towards R_1 with η = TRUE. On receiving D_i, R_1 computes the Caching_Gain as follows: Caching_Gain(R_1) = (5/10) × (4/5) = 0.40. In this case, T_R ≤ Caching_Gain. Therefore, the content is placed in the cache of R_1 and then forwarded to the end-user device U_3. Conversely, if the content had been cached in R_2 after the computation of its Caching_Gain, then the value of η would have become FALSE and router R_1 would not cache the content. Therefore, the proposed caching scheme ensures that at most one copy of the incoming Content message is cached in the routers of the requesting cluster, which increases content diversity in the network. As the proposed scheme does not consider the router's importance (such as degree centrality, betweenness centrality, etc.) during content placement decisions, the network load is not concentrated on a few network routers. Moreover, the proposed caching scheme does not require cluster heads for Interest/Content message forwarding and caching operations. Thus, the network traffic and computations are distributed among the network routers, and the scheme does not suffer from load balancing or bottleneck issues.

Performance Evaluation

This section first discusses the simulation environment and the values of its parameters. After this, the performance of the proposed caching scheme is evaluated in terms of the cache hit ratio, average network hop count, delay, and network traffic metrics. The obtained results are then compared with peer caching schemes: the traditional caching strategy (LCE) [27], DC-based [15], FGPC [13], and the recently proposed CPNDD [17] and PDC [34] schemes.

Simulation Environment

The ndnSIM simulation tool [45] is used to examine the performance of the proposed and peer caching schemes in the CCN environment. For the simulation setup, a network topology is built based on the Abilene network [46]. The Abilene network topology is deployed in the United States for connectivity among academic institutions, universities, and other affiliated organizations, across the District of Columbia and Puerto Rico. The performance of most of the existing and recent caching schemes, such as the DC-based scheme [15] and the PDC [34] and CPNDD [17] strategies, has also been examined on the Abilene network topology. Therefore, this topology is used for the performance evaluation of the caching solutions. The network topology connects the nodes using connections of up to 10 Mbps (the bandwidth of the network connections ranges between 1 and 10 Mbps) with a link delay of 10 ms. It contains 167 nodes, comprising 133 end-user devices (requesters), 33 routers, and 1 content server. The topology has 11 core routers and 22 edge routers. The edge routers are directly connected with the end-user devices, and each end-user is connected to exactly one of the edge routers. The server (serv) stores 5000 contents altogether that can be requested in the network; hence, the content catalogue size |Ctlg| is 5000. The payload size of each Content message is 1 KB.
The cache size of the in-network routers is set to 1% (|CS(R_i)| = 50) and 2% (|CS(R_i)| = 100) of the content catalogue size to obtain realistic results under different simulation configurations. The content access pattern follows a Zipf distribution with skewness parameter α = 0.7 [34]. The Interest message generation frequency λ is 50/s for each end-user device, and nearly 1 million content requests are generated over 1000 STU (Simulation Time Units) during the performance evaluation of the content caching strategies. One of our prior works [34] suggested that the size of the Popularity Table should be directly proportional to the content catalogue size. Hence, for a reliable and accurate determination of the content popularities, the size of the Popularity Table is set to 10% of the content catalogue (Max(|PT_{R_i}|) = 0.1 × |Ctlg| = 500) for each router, for effective content caching decisions. It has also been observed that increasing the size of the Popularity Table beyond this value does not increase the QoS for requesters in a linear manner, while it does increase the computational overhead in the network routers. Therefore, the Popularity Table is implemented with 500 slots in each network router to determine the content access frequencies reliably. Before the performance evaluations, the Abilene network topology is clustered into different numbers of non-overlapping clusters (k = {1, 2, 3, ..., 7}) using the proposed clustering mechanism. To determine the appropriate number of clusters k, the cache hit ratio has been computed with |Ctlg| = 5000, |CS(R_i)| = 50, α = 0.7, λ = 50/s, and |PT_{R_i}| = 500 for k = {1, 2, 3, ..., 7}. The average cache hit ratio (%) obtained for the different numbers of clusters is illustrated in Figure 3. As shown in Figure 3, the optimal cache hit ratio is achieved when k = 5; thus, the network is partitioned into 5 clusters. To determine the optimal threshold value T_R for caching decisions, simulation runs are performed for different threshold values in the range T_R = 0.1-10.0 with the above-mentioned network configuration. The average network delay metric is used to examine the optimal value of T_R, and the minimum value of this metric is achieved with T_R = 1.5. Hence, this value is used during the comparison of the proposed caching scheme with the peer strategies. Although the threshold value and the number of clusters have been selected based on an empirical study on a standard network topology and may change for other CCN topologies, this provides a good foundation for evaluating the performance of the proposed caching scheme on large-scale CCN-enabled networks.

Performance Evaluation of Caching Schemes: Cache Hit Ratio (%)

A cache hit occurs when an incoming Interest message is satisfied using a cached copy from the network routers. Contrarily, if the requested content is not found in the CS of the router, a cache miss happens. The network cache hit ratio (%) is the percentage ratio of the number of cache hits to the total number of Interest messages received by all the routers in the network. An increase in the cache hit ratio decreases the content retrieval delay and the load on the servers. The cache hit ratio represents the effectiveness of a caching scheme in reducing redundant traffic in the network. The gain in cache hit ratio is computed as the difference between the average cache hit ratio achieved by the proposed scheme and that of the existing caching schemes.
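The assumed request workload can be reproduced with a short script; a minimal sketch, independent of ndnSIM, is shown below. The function name and default arguments are illustrative.

```python
# A minimal sketch of the workload model: content requests drawn from a Zipf
# distribution over a 5000-content catalogue with skewness alpha = 0.7.
import numpy as np

def zipf_request_stream(catalog_size=5000, alpha=0.7, n_requests=10, seed=0):
    rng = np.random.default_rng(seed)
    ranks = np.arange(1, catalog_size + 1)
    probs = ranks ** (-alpha)
    probs /= probs.sum()                  # normalize into a probability distribution
    # rank 1 = most popular content; low ranks are requested far more often
    return rng.choice(ranks, size=n_requests, p=probs)

print(zipf_request_stream())              # e.g., an array of requested content ranks
```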
Figure 4 shows the average hit ratio obtained by the various caching schemes when the caching capacity of the in-network routers is 50 (1% of |Ctlg|). In the beginning, the cache hit ratio of all the schemes is low, because the in-network caches are initially empty and the required contents are retrieved from the server. In this scenario, the traditional LCE caching scheme, FGPC, and the DC-based scheme show a poor hit ratio due to their underlying heuristics, and the proposed scheme outperforms them by achieving gains of up to 4.1%, 4.5%, and 3.7% over them, respectively. The proposed scheme also shows gains of up to 1.5% and 2.3% over the recently proposed CPNDD and PDC caching strategies, respectively. Figure 5 illustrates the average cache hit ratio when the caching capacity of the network routers increases to 100 (2% of |Ctlg|). In this case, the proposed and existing caching schemes show a significant improvement in the cache hit ratio compared to the previous simulation scenario, where |CS(R_i)| was 50. With the larger caching capacity, the proposed scheme shows gains in hit ratio of up to 5.0%, 4.3%, 5.4%, 3.2%, and 1.8% over the LCE, DC-based, FGPC, PDC, and CPNDD caching schemes, respectively. This gain is achieved because the proposed clustering-based caching scheme places popular contents near the edge routers with reduced intra-cluster content redundancy, so that more space is available for content caching. Thus, the available cache space is fairly used by the popular contents in the network.

Performance Evaluation of Caching Schemes: Average Network Hop Count

Figure 6 shows the average network hop count observed for the proposed and peer caching schemes under identical simulation conditions with |CS(R_i)| = 50. As the proposed scheme places popular contents in the routers and evicts less popular contents during cache replacement decisions, more requests are served by the intermediate routers than by the server. Hence, the content retrieval path is shortened and the QoS for the end-user devices improves. During the simulations, the proposed caching scheme reduces the average network hop count by up to 13.2%, 12.0%, 13.4%, 7.7%, and 6.2% compared to the LCE, DC-based, FGPC, PDC, and CPNDD caching strategies, respectively. Figure 7 shows the average network hop count experienced by the end-user devices when the caching capacity of the in-network routers is increased to 100 contents, with the other simulation parameters unchanged. In these runs, similarly to the previous results, the proposed scheme shows a 7.1-15.1% reduction in the average hop count metric compared to the peer caching schemes. These results show that the proposed strategy effectively reduces the number of hops needed to retrieve the required content compared with the other schemes.

Performance Evaluation of Caching Schemes: Average Network Delay (in Microseconds)

The average network delay is determined as the total time (in microseconds) between preparing the Interest message and receiving the requested content. It also includes the request retransmission delay if the content is not received within the defined duration. This metric represents the performance of the network from the perspective of the end-user devices. A reduction in the average network delay signifies improved network performance, as the content is retrieved from nearby routers. Figures 8 and 9 show the average network delay observed under the different caching schemes for caching capacities of 50 and 100, respectively. As expected, the proposed caching scheme shows the lowest average network delay, as it focuses on caching the popular contents near the edges of the network with reduced content duplication.
Performance Evaluation of Caching Schemes: Average Network Traffic (in KB/s)

The average network traffic is computed as the total amount of data on the network connections per unit time, expressed in KB/s. This metric is used to examine the efficiency of the caching schemes and of the content transmissions in the network. The proposed clustering-based caching scheme does not flood the Interest messages in the network and supports efficient caching decisions using the network clusters, content popularity, and distance parameters. Therefore, the network traffic is reduced for identical content transmissions, and more diverse contents are accessed from nearby devices. The percentage reduction in average network traffic is determined using Equation (5), where the variables %T_reduc, T(P.S.), and T(E.S.) denote the percentage reduction in average network traffic, the average network traffic observed under the proposed scheme, and that under an existing peer scheme, respectively:

%T_reduc = ((T(E.S.) − T(P.S.)) / T(E.S.)) × 100.    (5)

Figure 10 shows the simulation results for the average network traffic with |CS(R_i)| = 50. The results display how the proposed caching mechanism effectively reduces the traffic and load on the network connections. In this scenario, the proposed caching scheme shows up to 8.3%, 8.1%, 9.5%, 5.6%, and 4.9% reductions in the network traffic compared to the competing LCE, DC-based, FGPC, PDC, and CPNDD caching schemes, respectively. It has also been observed that a direct correlation exists between the average traffic and the average network delay metrics. A smaller average network delay implies that the requested contents are found near the end-user devices and thus that fewer hops are traversed to retrieve the content. This leads to decreased network traffic and a more efficient use of network resources. As |CS(R_i)| increases to 100, the average network traffic decreases for all the caching schemes, because more contents are cached in the intermediate routers. In this scenario as well, the proposed caching scheme outperforms the existing strategies, achieving up to an 11.2% reduction in the average network traffic compared to LCE and the peer caching strategies, as shown in Figure 11.

Conclusions

This paper starts by presenting the various existing content placement schemes for the CCN environment in the literature. Then, a novel network-clustering-based content caching scheme is proposed, in which the intra-cluster routers cooperate with each other during content placement decisions. The proposed scheme considers the cluster information, content popularity, and hop count parameters to effectively use the available cache resources. In the proposed strategy, the network routers are clustered based on the joint consideration of the hop count and bandwidth parameters. Using the network clustering mechanism, the excessive cache replacement operations and the computational latency are reduced significantly, without additional communication overhead. Using the proposed caching heuristics, the scheme increases the probability of caching the popular contents close to the end-user devices. Finally, extensive simulations are performed with realistic network configurations, and the performance of the proposed caching scheme is examined on the cache hit ratio, average network hop count, network delay, and traffic metrics. The results show that the proposed scheme outperforms the traditional CCN caching scheme along with the peer heuristic-based DC-based, FGPC, PDC, and CPNDD caching strategies.
In future work, the performance of the proposed strategy will be analyzed in mobility-based networks and on recent network topologies such as Geant, Tiger2, DTelekom, and Internet2. Additionally, more parameters can be integrated into the existing solution for further improvement in network performance.
The Semantics of Grammatical Elements: A New Solution

This article is an extremely brief introduction to a new theory in the philosophy of language, called Operational Linguistics (OL). OL deals mainly with the semantics of grammatical elements (adpositions/cases, conjunctions, verbs such as to be and to have, modal verbs, numerals, quantity-related, demonstrative, and interrogative-relative pronouns/adjectives, main adverbs, negative and interrogative elements, etc.) and grammatical terms ("subject", "object", "noun", "verb", etc.), and is based on the fundamental presupposition that their meaning is mainly given by operations within cognitive functions, amongst which those of attention play a key role. Therefore, the meaning of grammatical elements and terms is defined in extra-linguistic terms, i.e., based on something other than language. The theory is unitary, in that it accounts for all the grammatical elements and terms on the basis of the same (few) theoretical presuppositions.

Introduction

Semantics is a fundamental aspect of the study of language, and a fundamental part of semantics is surely that of grammatical elements, since these are essential for the very existence of language. This article deals mainly with the semantics of grammatical elements, i.e., adpositions/cases, conjunctions, verbs like to be and to have, modal verbs, numerals, quantity-related, demonstrative, and interrogative-relative pronouns/adjectives, main adverbs, negative and interrogative elements, etc. It must be stressed that this subject is dealt with here with reference to language in general, not to single languages. We assume that the fundamental grammatical elements in the various languages indicate abstract grammatical meanings (such as the genitive, the negative, and the interrogative, for example) that are common to all or almost all languages (in our opinion, the existence of shared meanings is demonstrated by the fact that translation from any language into any other language is almost always substantially possible). In this article, when referring to a particular grammatical meaning (for example, the genitive), we do not intend to refer to the meaning of a particular linguistic element in a language (for example, the morphological marks of the Latin, Greek, or Russian genitive, the English preposition "of", or the French preposition "de"), but to an abstract meaning, which is probably present in all languages. Therefore, the problem of the meaning of grammatical elements is dealt with here from the standpoint of the philosophy of language. Obviously, both traditional and modern linguistics have tackled the problem of the meaning of grammatical elements. What can be said about the results that have been achieved? In some cases, such as certain prepositions that are strictly related to space, the definitions seem, or may seem, rather satisfactory (see, for example, [1], [2], [3], [4], [5], [6], [7], [8], [9]). But in many other cases, such as the genitive, the negative, and verbs such as "to have" and "to be", things seem to be very different. Actually, the results that traditional linguistics achieved in attempting to account for these meanings seem to be unsatisfactory. These results are essentially of the two following kinds: 1) The attempt to account for a meaning leads to tautological or circular definitions: for example, "not" is defined as "negative", "all" is defined as "totality". Clearly, definitions of this kind are totally unsatisfactory.
2) The linguistic element being considered is said to have different meanings according to the context, and the supposed meanings can be many. An emblematic example is the genitive, which would indicate various kinds of possession and association, the relationship indicated by the noun being modified, belonging to a group, composition, containing, participation in an action (as an agent or as a patient), origin, cause, purpose, etc. Other typical examples are verbs such as "to have", "to get", and "to make", which are commonly defined by means of synonyms for each supposed meaning (e.g., to have: to possess, to own, to keep, to get, to obtain, etc.). Such are, essentially, the results of traditional linguistics, which can be found in dictionaries and grammar books. Modern linguistics, as we will see, does not seem to have led to a radical change. This article introduces a new theory that provides a unitary solution to the problem of the meaning of the fundamental grammatical elements. This theory is called Operational Linguistics (OL) (in his former works, the author used the name Operational Semantics, which is probably too restrictive; furthermore, there was a problem of homonymy with a concept in computer science, which has nothing to do with OL). OL is based on a conception of the human mind that can be considered a moderate form of constructivism. Indeed, although OL explicitly acknowledges the existence of a reality independent of the mind (unlike idealistic philosophy and radical forms of constructivism, e.g., Glasersfeld's constructivism [14], [15]), OL conceives of the mind as having a strongly active or constructive character (unlike the more passive conception of the mind as a "reflection" of reality, a conception that is rather widespread in the philosophical tradition). According to OL, language, which is a fundamental and distinctive feature of the human mind, is not a mere "labeling" of objects and their reciprocal relationships, but also has a constructive character. In order to account for grammatical meanings, it is therefore necessary to consider not only the objective situation, but also (or, in some cases, above all) what the subject actively does with his/her mind. According to OL, in fact, these meanings are mainly made up of sequences of mental operations, amongst which those of attention play a key role. Therefore, this theory accounts for grammatical meanings in extra-linguistic terms, i.e., based on something that is outside language, namely operations within cognitive functions (the name "Operational Linguistics" derives from this). Not only does OL deal with the meaning of grammatical elements; it is a general theory of language and linguistic thought that, as we will see, also offers solutions to other general problems in the philosophy of language (such as the reasons for the difference between human language and animal communication, and whether language has an innate or acquired origin). The exposition of this theory (which accounts for the meaning of all the fundamental grammatical elements) requires the space of a book. Therefore, in this article we will consider only a very few meanings, in order to give a quick idea of the theory and of its novelty and difference from existing theories. Interested readers can find a broader exposition in [16], [17], [18]. In this article, which aims at being as brief as possible, the comparison between OL and other theories has been kept to a minimum. After these general considerations, we can start to expound the theory.
The best way to go about this is not to first expound its principles and then provide concrete examples of their application, but to use a concrete example as the starting point.

The Most Emblematic Case of a Supposed Extensive Polysemy: The Genitive

The most emblematic case of a supposed extensive polysemy is surely the genitive, which can be expressed by means of a case mark, an adposition ("of", in English), word order (genitive-noun order, in English: e.g., "safety belt"), etc. Grammar books and dictionaries contain long lists of the following kind (Table 1). Whether explicitly stated or not, these would be the meanings of the genitive. This solution has probably often been considered unsatisfactory, since in the history of linguistics there have been various attempts to account for the meaning of the genitive (in one language or in general) in a monosemic, or at least less polysemic, way. We cannot examine these proposals in depth here. Therefore, we will only mention them, also because they have no analogy with OL's proposal. The Byzantine grammarian Maxime Planude (13th-14th century) was the first to develop a so-called "localistic" theory of (Greek) cases, i.e., a theory (also) based on "spatial" concepts, such as "movement to" and "movement from" (the term "spatial" is used in its most abstract sense, because it can refer both to real spatial relationships and to grammatical relationships, such as the fact that the genitive is said to indicate the origin of the action in relation to the verb) ([19], [20]). The so-called "Modists" or "speculative" scholastic grammarians (12th-14th century) founded grammar epistemologically on an Aristotelian basis, as a discipline that was abstract and valid for all languages, and described cases in semantic terms only (that is, without using the concept of grammatical relationship): Peter Helia, Simon of Dacia, and Martin of Dacia accounted for the Latin cases by using the concepts of "substance" and "action" and the localistic concepts of "origin" (principium) and "end" (terminus) ([19], [21], [22], [23]). In the rationalistic and universalistic approach that predominated in the 17th and 18th centuries, Sanctius and Scioppius defined cases syntactically, i.e., on the basis of the dependence relationships of nouns with the verb, noun, and preposition (the genitive was defined as the case that depends on an expressed or understood substantive) ([21]); Port-Royal grammatical theory ([20]) also considered cases (which it stated to be universal, even if each language expresses them in a specific formal way) as related to syntax, even if it often defined them semantically in a rather traditional way.
Structuralism accounted for cases in terms of relationships of opposition to each other: within this approach, Hjelmslev ([20]) defined cases (which he considered abstract and general universal entities, which are expressed in various ways in the various languages) on a semantic basis, by modifying the localistic theory of Maxime Planude; Jakobson ([25]) defined the Russian cases by using a combination of semantic features; de Groot ([26], [27]) and Kuryłowicz ([28], [29]) defined the Latin and the Indo-European cases respectively, both in semantic and in syntactic terms; Rubio ([30]) defined the Latin cases by using a distinction between the semantic and the functional character of the noun (the genitive is said to be semantically a noun, but functionally an adjective); Benveniste ([31]) accounted for the meaning of the Latin genitive in terms of a syntactic transposition of a verb phrase into a noun phrase. Fillmore ([32]) introduced the concept of "deep case", which is a syntactic-semantic relationship of the noun phrase with the verb, expressed at the surface level in various ways (morphological cases, adpositions and other means) in the various languages. The "abstract cases" of Chomsky ([33]) are instead pure syntactic relationships, with which any noun phrase is provided. Anderson ([30], [31], [32]) described cases (which he considered in a universalistic way, like Hjelmslev) semantically on a cognitive basis (by resorting to a combination of spatial concepts). Another attempt with a semantic basis was made by Perret ([33], p. 477), according to whom the genitive is the case of lax determination (as opposed to the accusative, which would be the case of strict determination). As a general consideration, none of the aforesaid theories has been so successful as to widely supplant the traditional idea that the genitive is very polysemous. Therefore, this solution continues to be substantially accepted in almost all the works where the problem of the meaning of the genitive is somehow involved (see, for example, [38], [39], [40], [41], [42], amongst the various quotable works).

Is it credible that the genitive has all these meanings, i.e., is the solution to the problem of the meaning of the genitive such an extensive polysemy? In order to give an answer to this question, a number of things should be considered.

1) In English, the preposition that expresses the genitive, i.e., "of", is the fourth most-used lexeme (Oxford English Dictionary). Moreover, the genitive is also expressed by means of the possessive case or word order.

2) The only well-ascertained polysemy is when a word has one meaning plus very few other meanings, namely the figurative, extended, etc. ones, which derive from the first meaning for easily understandable reasons (e.g., the term "nose" means a part of the face, but also a snout or muzzle, shrewdness, the opening of a tube, a spy, etc.). In the case of the genitive, its (supposed) polysemy is very different: there is not a main meaning plus some other meanings that derive from the first for easily understandable reasons; rather, there would be many different meanings that have nothing to do with each other.

3) The supposed meanings of the genitive are extremely heterogeneous. Why should relationships that are so different from each other be expressed by the same linguistic element? Homonymy definitely does not come into play here.

4) The relationships are so many that one could say that no relationship seems to be excluded. In fact, this seems exactly the case.
What relationship does not fall into any of these categories?

5) The supposed meanings of the genitive are substantially the same in many languages. This is a very strong argument against the thesis that the genitive is polysemous. Indeed, in commonly-found polysemy, the polysemy of a given word is generally not the same across the various languages. For example, in English, as mentioned, the word "nose" can also mean a spy, but this does not happen in Italian. If the answer to the problem of the meaning of the genitive were really the polysemy that is supposed, why should this (moreover, such extensive) polysemy be substantially the same in many languages?

In brief, the situation is the following. An extremely important element of language is supposed to have a huge number of meanings, which would be unrelated to and completely different from each other (unlike the kind of polysemy that is commonly found). The polysemy is extremely extensive (no relationship seems to be excluded) and substantially the same in many languages (while in commonly-found polysemy, the polysemy of a given word is generally different across the various languages). Well, bearing these considerations in mind, can the right solution to the problem of the meaning of the genitive really lie in this huge polysemy? Our answer is no, by no means. The traditionally proposed solution implies a situation that is really too paradoxical.

Let us examine the (completely different) solution suggested by OL to the problem of the meaning of the genitive. According to OL, the solution to this problem should not be searched for at the level of the particular relationships between the things that are related by the genitive, i.e., the relationships in Table 1. These are not the meanings of the genitive. These are the cases where the genitive can be used, which is a very different thing. The genitive can be used in all cases where there is a relationship (any relationship) between two things. Therefore, the relationships between things related by the genitive are all the possible relationships (hence this seemingly huge polysemy). Yet the function of the genitive is not to designate all these relationships. Designating such a big variety of relationships by means of the same linguistic element makes no sense. The function of the genitive (i.e., its meaning) is to induce the listener's attention to focus on something, A, by means of the relationship that A has with something else, B, and to bear in mind the existence of this relationship. In other words, the genitive indicates the attentional focalization of something, A, while bearing in mind that A has been previously focused on together with something else, B. Examine the examples in Table 1. One can probably sense that the meaning of the genitive lies entirely in this focusing of the attention on something while keeping present that this something has some relationship with something else. For example, the phrase "John's car" does not simply and specifically express the relationship of possession. If we want to do this, we say "John has a car". If we say "John's car", we want the addressee to focus his/her attention on a certain car (while keeping present the fact that the car is possessed by John), in order to say something about this car (for example, that it "is red"). The same can be said of the phrases "marble statue", "glass of water", "Bob's wife", etc.
With the genitive we are not simply and specifically designating the relationship of composition, containing, the conjugal relationship, etc., respectively. These things are indicated by the expression as a whole or by the context, not by the genitive. The best proof of this is that an expression such as "my friend's picture", if it is isolated, is ambiguous as regards these relationships, because it can indicate a picture possessed by, or painted by, or that shows, a friend of the speaker (moreover, one should note that, in particular contexts, this expression may indicate other kinds of relationships too: for example, amongst pictures that are chosen, indicated, sold, restored, etc. by different persons, the expression "my friend's picture" may indicate these relationships). But it is not at all ambiguous that we want to talk about a "picture", while bearing in mind that it is in some way associated with "my friend"; that is, we want to talk about something, while bearing in mind that that something is in some way associated with something else. This is the meaning of the genitive. Only and simply this. Therefore, a phrase such as "my friend's picture" does not mean "the picture possessed by my friend" or "the picture painted by my friend" or "the picture that shows my friend". It means "the picture that has some relationship (a relationship that is known on the basis of general knowledge or the context) with my friend". The same can be said of all the phrases with the genitive. The reason for the existence of the genitive is its huge practical usefulness. Indeed, indicating something, A, while bearing in mind the relationship that A has with something else, B, serves at least two very important purposes: a) identifying A amongst the various possible items of the same class ("John's car"); b) speaking about A together with something else we are interested in, such as a quality of it ("marble statue"), its function ("safety belt"), its cause ("to die of tuberculosis"), or its agent or patient, if A is an activity ("John's arrival", "the discovery of America"), etc.

As we can see, OL changes the traditional approach radically, since OL investigates the meaning of the genitive at a completely different level from the other approaches. The other approaches either a) have sought to account for this meaning by providing a list of the possible relationships between the things that are related by the genitive, or by looking for something so general as to include all these relationships; or else b) have considered the genitive a mere syntactic relationship. In other words, the meaning has been searched for, so to say, "in the things", i.e., in the objective situations where the genitive is used. OL uses a completely different approach: it mainly investigates the meaning of the genitive at the level of the mental operations performed by the speaker, i.e., the subject. As a result, OL reduces the hardly believable wide polysemy of the genitive to absolute monosemy, in agreement with the fact that the linguistic element that expresses the genitive is unique (of course, the fact that some languages can express the basic meaning of the genitive in more than one way, as happens in English with the possessive case, the preposition "of" and word order, does not matter: here we are not interested in the possible secondary differences between these forms, but in their common basic meaning).
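Before moving on, the analysis just given can be restated schematically. The sketch below is purely illustrative, not OL notation; the class name, fields and example values are invented here. It encodes the claim that the genitive contributes only a head A focused on while an anchor B is kept present, with the specific relationship left to be filled in by context or general knowledge.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Genitive:
    """'A of B' / "B's A": attention focuses on the head A while
    bearing in mind its association with the anchor B; the specific
    relationship is not part of the genitive's meaning."""
    head: str                  # A: what we finally talk about
    anchor: str                # B: kept present together with A
    rel: Optional[str] = None  # supplied by context, not by the genitive

# "my friend's picture" in three different contexts: the genitive
# itself is identical; only the context-supplied relation differs.
ownership  = Genitive("picture", "my friend", rel="possessed by")
authorship = Genitive("picture", "my friend", rel="painted by")
depiction  = Genitive("picture", "my friend", rel="showing")
```

On this reading, the apparent polysemy of Table 1 lives entirely in the context-supplied relation, never in the genitive itself.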
Operational Linguistics in Brief

I have introduced my analysis of the genitive before outlining its underlying theory. At this point, however, the most general outlines of the theory should be presented.

The Origins of OL

OL derives from the thought of Silvio Ceccato (1914-1997), of which it preserves several theses. Nevertheless, OL is a broad and innovative development of Ceccato's thought and is in part noticeably different from it. Ceccato's thought started developing in the 1950s and reached its full maturity in the 60s and 70s ([43], [44], [45], [46], [47], [48], [49], [50]). Ceccato used various names for his theory. The name Operational Methodology (OM) is the one that has prevailed in his school, the Scuola Operativa Italiana (SOI) [Italian Operational School]. Ceccato was well known in Italian philosophical circles from the 1940s and directed important projects involving the application of his theories, namely: a) one of the very few machine translation projects in Europe, and the only one in Italy, in the first phase of research in this field (funded by the U.S. Air Force, 1959-66; described in [42]); b) the so-called "mechanical reporter" project, i.e., a machine that had to be able to observe and describe a scene made up of seven objects arranged in various ways on a stage (Italian National Research Council, 1958-66; also described in [42]). Nevertheless, his thought has not received much attention. This may be due to various reasons, which cannot be examined here. Yet we believe that the work of Ceccato, while requiring an in-depth critical revision, includes many original and valuable ideas and intuitions, which deserve to be taken into consideration again and developed. This is precisely where the author has focused his work ever since the second half of the 1990s ([51], [52], [53], [54], [55], [56], [57], [58], [59], [60]). In this article, there is the problem of distinguishing Ceccato's original theses from those of the author. Therefore, the text indicates which main theses are Ceccato's own and which are the author's. Where no such indication is given, the ideas presented are the author's own, with influences from Ceccato. The analysis of the genitive presented above is entirely the author's own, as is the way of expounding the subject, which differs entirely from Ceccato's.

The Fundamental Theses of OL

As mentioned, the fundamental thesis of OL is that grammatical elements designate sequences of mental operations amongst which the operations of attention play a key role (this thesis is Ceccato's own). Therefore, we may say that grammatical elements are "tools to pilot attention" ([61], [62], [63]) and the other cognitive functions of the listener. Ceccato called these sequences of mental operations "mental categories", because they have some analogies with the categories of Kant's philosophy. OL has adopted this name as well (see note 2). We call the mental operations that make up the mental categories elemental mental operations (EOMC). Therefore, defining the meaning of a linguistic element that designates a mental category means, according to OL, identifying the structure of that mental category, i.e., the sequence of elemental mental operations that make it up. We call this task "analysis of a mental category". The system of EOMC we propose, which is very different from and much more complex than Ceccato's, is the following.
1) Operation of attentional focalization (AF) - This operation has the fundamental property of "selecting", or "highlighting", its object with respect to all the rest ([68]). Inside AF we can distinguish various suboperations. a) AF can widely vary in extension (AFext): it may concern an object, or a part of it, or several objects. b) The focus of attention can move (AFmov) from one object to another, or from one part of the field to which it is applied to another. c) AF can last for variable, though limited, amounts of time (AFdur [dur = duration]). d) The extension, movement and duration of attentional focalization can be estimated in quantitative terms (AFext-estim, AFmov-estim and AFdur-estim, respectively). e) AF can vary in intensity (AFint-var), that is, we can pay more attention to one object rather than to another.

2) Presence keeping (PK) - This is the term we will use for the fundamental operation of "bearing in mind" something that has been focused on by attention, A, while the attention focuses on something else, B. If, for example, we hear the expression "bottle and glass", we keep the meaning "bottle" present when we add the meaning "glass", which we would not do if these two words were isolated, i.e., not related by the conjunction "and". The operation of presence keeping is surely strictly related to the well-known concept, developed by cognitive psychology, of "working (or active) memory", whether in the classic Baddeley-Hitch model ([69], [70]) or in more recent models, such as Cowan's or Oberauer's ([71], [72], [73], [74]), which highlight the tight interaction between working memory and attention.

3) Operation of attentional discarding (AD) - If we say "glass or bottle", we can sense that both objects are focused on by attention and kept present, but when our attention focuses on the bottle, we must exclude, i.e., discard, the glass. This operation is completely different from simply ceasing to focus our attention on an object in order to pass on and focus on another object. In our case, we must bear an object in mind while somehow excluding it. We call this operation "attentional discarding".

4) Operation of representation (R) - The operation of representation is the act of thinking about something that is not present at the moment. This is what we do when, for example, hearing a word, we pass on to its meaning, which was previously memorized. Obviously, attention is also involved in the operation of representation (which is proven by the fact that when we imagine something it is difficult to pay attention to something else), but in representation the attention focuses on what this operation produces (that is, attention is not alone, but accompanies the other operation).

5) Operation of comparison (C) - Our mind performs comparisons very frequently.
Every time we use typically relative words, which concern properties of an object (like "high/low", "strong/weak", "heavy/light", etc.) or express a judgement (like "good/bad", "normal/abnormal", etc.), we make comparisons. Obviously, when we perform this operation, we focus our attention on the objects compared and we bear them in mind. Even though comparison implies operations of attentional focalization and presence keeping, we believe that it has to be considered a separate function.

6) Operations of memory (MO) - Memory surely plays a key role in our mental life: by means of it, we fix and recall memories continuously. Apart from all of this, we think that memory operations are part of the structure of some mental categories ([54], [56]). Therefore, we list memory operations amongst the basic mental operations that make up mental categories.

Almost all of the operations that we consider EOMC have been repeatedly described in cognitive psychology (note 3). The new idea we are putting forward is that by means of these operations we can account for the meaning of grammatical elements.

Note 2: We must point out that the meaning OL gives to the term "category" is completely different from the meaning that cognitive psychology and linguistics give to the same term. Typically, cognitive psychology and linguistics use the term "category" to highlight the fact that, since many objects of the physical world share common features, but are not identical, we create classes (that is, categories) by means of a mental process of abstraction ([64], [65], [66], [67]). On the contrary, OL uses the expression "mental categories" to indicate the meanings of the linguistic elements that do not designate physical (or psychical) things.

Another Case of Supposed Extensive Polysemy: Preposition "with"/Verbs "to Have" and "to Get"

The preposition "with" and the verbs "to have" and "to get" (these three meanings are based on the same core of operations, as we will soon see) are other examples of words that are traditionally believed to be polysemous. Indeed, grammar books and dictionaries state that the preposition "with" "indicates several relationships" (or similar expressions), and provide lists similar to that in Table 2. Things are not very different in modern linguistics. Prepositions are generally said to be polysemous (see, for example: [88], [89], [90], [91], [92], [93], [94]) and, whether explicitly stated or not, these would be the meanings of the preposition "with". Can such a frequently used and essential word have so many different meanings? Is it not much more convincing to think that this word has only one, more general meaning (which is why it is so difficult to determine) and as such lends itself to expressing the many relationships grammar speaks about? This meaning is so general because it does not lie at the level of the aforesaid more particular relationships grammar speaks about, but at a much more abstract level, i.e., the level of operations within cognitive functions that the described situation induces or allows to be performed. According to OL, the preposition "with" means that we focus our attention (AF) on something, A; then, keeping it present (PK), our attention is also extended (AFext) to something else, B, because B is related to A in such a way that our attention tends to include A and B in a single focalization (for example, we say "bottle with cork" if the cork is in the neck of the bottle; we cannot use this expression if the cork is far from the bottle; this analysis is the author's own). This analysis clearly explains why in many languages this preposition is used to express two very different relationships, i.e., the relationship of company or union between two things and the relationship of means or instrument between an activity and an object. Indeed, whether we say, for example, "cup with handle" or "to write with a pen", what appears to our attention are two things that are related in such a way that our attention, when focused on A, tends to include B in the same focalization as well.
In fact, the handle is joined to the cup, and therefore as long as we look at the cup we also see the handle; and as long as we watch the action of writing we see the pen. The analysis also clearly explains why the preposition "with" can be used in cases where the other aforesaid relationships (manner, cause, quality, time, opposition, etc.) are involved. In all the above-quoted examples the attention, while focusing on something, is also extended to something else (from an activity to the way this activity is performed, from an event to another one that happens at the same time, from the act of opposing someone/something to that someone/something, etc.). Therefore, the preposition does not designate the many relationships that are listed in grammar books and dictionaries; that is, these relationships are not its meanings (which would be too many). The preposition designates a much more general relationship, i.e., A is in such a relationship with B that attention, when focused on A, is also led to "embrace" B. This very general relationship can include various more specific relationships (company or union, means or instrument, manner, simultaneousness, cause, etc.), which depend on the two related things, but the meaning of the preposition is only the first relationship, not the second ones. Therefore, there is only one meaning for the preposition, in agreement with the fact that there is only one corresponding word.

Similarly to the preposition "with", the two verbs "to have" and "to get" are traditionally believed to be highly polysemous. In fact, dictionaries usually try to capture their meanings by defining each verb with a long list of other verbs (Table 3).

Table 3 (only partially preserved; the rows for "to have" are lost except the last example):
to have = … : she's having a baby in the autumn
to get = to obtain : she got a degree in economics
to get = to purchase : he used to get "The Times"
to get = to catch : the dog got the ball in his mouth
to get = to receive : he got a bicycle for his birthday
to get = to understand : he didn't get the joke
to get = to become : you'll get wet without an umbrella
to get = to arrive : how long does it take to get to Liverpool?

However, one can easily note that these lists are nothing else but collections of more "specialized" verbs, whose meanings are included in the more general meanings of "to have" and "to get". The meanings of "to have" and "to get" are so general because both these verbs designate the same relationship as the one designated by the preposition "with", i.e., that two distinct things, A and B, are related in such a way that our attention, when focusing on A, tends to include B in the same focalization as well. The difference from the preposition "with" is that, in the case of these two verbs, as in all verbs, we see the situation from the temporal point of view, which entails focusing our attention continuously or repeatedly on the same situation (according to OL, a meaning of a verbal kind is something that requires a prolonged or repeated attentional focalization to be acknowledged; see also further on). In the case of the verb "to have", the result is something static. For example, "that man has a moustache" means that when we focus our attention on his face we also see a moustache, and this remains constant throughout time. On the contrary, in the case of the verb "to get", the result is something dynamic. For example, "to get the pen" means that our hand enters into such a relationship with the pen that, if we look at the hand, we also see the pen (the pen is in the hand), while this relationship did not exist before.
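The three analyses just given share a common core, and stating them as explicit operation sequences makes this visible. The following sketch is again only illustrative: the enum, the dictionary and the flags are invented for this presentation, under the assumption that a meaning can be flattened to a list of operations over two items A and B.

```python
from enum import Enum

class Op(Enum):
    """The elemental mental operations (EOMC) listed above."""
    AF = "attentional focalization"
    AFEXT = "extension of the attentional focus (AFext)"
    PK = "presence keeping"
    AD = "attentional discarding"
    R = "representation"
    C = "comparison"
    MO = "memory operation"

# Hypothetical encodings of the analyses discussed in the text,
# as operation sequences over two items A and B.
MEANINGS = {
    # genitive: A is focused on while bearing in mind that it was
    # previously focused on together with B ("John's car": A = car, B = John)
    "genitive": [(Op.AF, "A+B"), (Op.PK, "B"), (Op.AF, "A")],
    # "with": focus on A, keep it present, extend the focus to B
    # ("cup with handle": A = cup, B = handle)
    "with": [(Op.AF, "A"), (Op.PK, "A"), (Op.AFEXT, "B")],
}

# "to have"/"to get" share the core of "with", but are viewed temporally
# (prolonged or repeated focalization); "have" is static, "get" dynamic.
MEANINGS["to have"] = {"core": MEANINGS["with"], "temporal": True, "result": "static"}
MEANINGS["to get"] = {"core": MEANINGS["with"], "temporal": True, "result": "dynamic"}

for name, analysis in MEANINGS.items():
    print(f"{name}: {analysis}")
```

However crude, the sketch makes the article's point concrete: one entry per word, not one entry per supposed "meaning".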
A Grammatical Concept Difficult to Define: "noun"

A grammatical concept that has proved difficult to define is the concept of "noun". OL offers a simple and clear definition of this concept. In order to give this definition, however, some other general outlines of the theory should first be introduced. According to OL, linguistic thought is made up of two fundamental kinds of elements: 1) correlators; 2) correlata. Correlators are elements whose specific function is to tie together the other elements of thought. They are the mental categories designated by adpositions (or the corresponding cases) and conjunctions. Correlata are the elements that are "tied" by a correlator: these are nouns, adjectives, pronouns, articles, verbs and adverbs. According to OL, even though the meanings of isolated words (such as "apple") are a kind of thought, actual linguistic thought occurs only when we "tie" or "correlate" more than one meaning to each other, i.e., when we say, for example, "apple and pear", "red apple", etc. The two correlata that are tied by a correlator are called "first correlatum" and "second correlatum", respectively, according to the temporal order in which attention focuses on them. We call the whole structure that is thus formed a correlation or correlational triad, and we represent it graphically as a triad of boxes, with the correlator in an upper box and the two correlata in two lower boxes (diagram not reproduced here), in order to visually suggest the idea that a correlation is a whole where two meanings (the correlata) are tied together by the mental operations that make up the correlator. In the case of the example "pear and apple", we will have a correlation with "and" as the correlator and "pear" and "apple" as the first and second correlatum (diagram not reproduced here).

Besides adpositions (or the corresponding cases) and conjunctions, there is another extremely important correlator. Its structure is the same as for the conjunction "and" (attention focuses on A, and A is borne in mind while attention focuses on B), with the difference that A and B do not remain separate, but are "combined" together. This is due to the fact that the attentional focalization does not stop in the passage from A to B, because A and B are in some way complementary. For example, A is an object that can exist on its own and B a possible feature of it (substantive-adjective correlation); or B is what may happen to A in time (subject-verb correlation); or A is a verb and B its object (verb-object correlation, note 4); etc. We call this correlator presence keeping, and we represent it graphically by means of a horizontal bar (diagram not reproduced here). Since this correlator is, as we can easily understand, the most used of correlators, it is convenient not to express it with a word and to indicate its presence either by simply putting the two words that it correlates one after the other (when this is possible) or by using marks on the words (English has very few marks of this kind, but many languages have several of them: for instance, in the Italian phrase "bottiglia di vino nuova", which means "new bottle of wine", the final "a" of bottiglia and of nuova are marks of the feminine gender, which indicate that the adjective nuova, "new", is related to bottiglia, "bottle", not to vino, "wine").

According to OL, the correlation is the minimal and basic unit of linguistic thought. "Minimal unit" means that a linguistic thought is made up of at least one correlational triad (this implies that even in a clause or phrase made up of two monomorphemic words, such as "I run" and "yellow flower", the elements are not two, but three, namely the two words and the "presence keeping" correlator, which is expressed by putting the two words one after the other; see note 5).
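The triad just described lends itself to a direct data-structure reading. The sketch below is invented for illustration (OL's own notation is the box diagram): a correlation ties two correlata through a correlator and, anticipating the notion of correlational network introduced in the next paragraph, a correlatum may itself be a whole correlation; this is how the two expressions "empty whisky bottle" and "Scotch whisky bottle", discussed below, come to share their word order while differing in structure.

```python
from dataclasses import dataclass
from typing import Union

Correlatum = Union[str, "Correlation"]

@dataclass
class Correlation:
    """A correlational triad: one correlator tying two correlata.

    "PK" stands here for the implicit presence-keeping correlator,
    expressed in English by mere word order.
    """
    correlator: str
    first: Correlatum   # first correlatum (focused on first)
    second: Correlatum  # second correlatum

# An explicit correlator: "pear and apple"
pear_and_apple = Correlation("and", "pear", "apple")

# Same word order, different networks (one plausible reading):
# "empty whisky bottle"  = empty + (whisky bottle)
empty_whisky = Correlation("PK", "empty", Correlation("PK", "whisky", "bottle"))
# "Scotch whisky bottle" = (Scotch whisky) + bottle
scotch_whisky = Correlation("PK", Correlation("PK", "Scotch", "whisky"), "bottle")

print(empty_whisky == scotch_whisky)  # False: identical surface order, distinct triads
```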
"Basic unit" means that linguistic thought is generally a "network" formed by various correlations (correlational network) in which a correlation acts as a correlatum of another correlation. Therefore, the sentence "John reads books and magazines", for instance, has the following structure of thought: (the dotted line starting from the line that separates the two lower boxes of a correlation and ending with the symbol "•" placed in one of the two lower boxes of another correlation indicates that the first correlation is one of the correlata of the second correlation). This graphic representation (in Ceccato's original form, where the correlational triads are not on the same line), when there are various correlations, resembles a network, hence the expression "correlational network". However, irrespective of the graphic representation, it must be very clear that the structure of linguistic thought is not a simple linear structure where the elements are added one after the other. The elements (that is, the meanings) that make up thought are surely loaded one after the other in working memory, and the previous elements are kept present while the next ones are added. The result, however, is a nonlinear structure, which can be different even when the words are spoken in the same order. For example, the two sentences "empty whisky bottle" and "Scotch whisky bottle" have the same word order (they are made up of a first word, which, albeit different, is in both cases an adjective, plus two identical words in the same order), but the two corresponding correlational networks are different: 5 The intuition that in such cases the elements are not two but three can be found in Tesnière ( [95]), who based his syntactic theory on the concept of "connection" (connexion). This concept is nevertheless very different from the concept of "correlator", because the "connection" referred to by Tesnière is: a) an implicit link, while OL's concept of "correlator" includes implicit links, links that are indicated by morphological marks, adpositions and conjunctions; b) something very hierarchical, unlike correlator (see further on). Moreover, in Tesnière an analysis in terms of cognitive operations is missing. The theory of the structure of linguistic thought that has just been outlined (which is Ceccato's own) is called correlational theory of thought. The fact that, despite the (necessarily) linear order of speech, all the elements of a sentence are kept mentally present was also pointed out by a 19th century scholar, Steinthal, even if not in the same cognitive terms as OL (he resorted to the concept of "vibrating representations" [schwingende], see [96], pp. [102][103][104][105][106][107][108][109][110][111][112]. The concept of difference between the linear order of speech and the non-linear order of thought was also proposed as early as the 1950's by Chomsky ([97]), Tesnière ([95]) and Guillaume ([98]). Ceccato formulated this same concept more or less in the same years, almost surely quite independently. Nevertheless, the conception of the structure of thought by Ceccato is noticeably different from the others, as we will see more clearly further on. At this point our definition of "noun" can be introduced. As mentioned, the definition of this concept has proved difficult. Nouns are traditionally defined in a semantic way by stating that nouns are the words that indicate "persons, animals, vegetables, unanimated objects". 
Some grammar books also add "qualities, quantities, ideas", or "places, events", and so on. The "verb" category (which is the main category in contrast with the "noun"; nevertheless, the non-finite forms of the verb, i.e., the infinitive, the participle and the gerund, are commonly called "nominal forms") is also generally defined in a semantic way: verbs are said to designate "processes or states". Modern linguistics is perfectly aware that these semantic definitions are unsatisfactory: for example, a word such as "birth" designates a process, but it is a noun, not a verb; words such as "to be born" and "outside" are a verb and an adverb respectively, but they designate an "event" and a "place" respectively, which are among the things that nouns are supposed to designate. Modern linguistics has therefore tried to go beyond these semantic definitions. Often, it has tried to give functional definitions and/or definitions based on the relationships among the parts of speech. The noun, for example, is said to be what occurs with articles and attributive adjectives and can be the head of a nominal phrase. Nevertheless, these definitions are partially not applicable in some languages (for example, Russian and Latin do not have articles), are partially tautological ("nominal phrase") and easily end up being circular (the noun is defined in terms of its relationships with the article and/or adjective, and the latter two are defined, either directly or indirectly, in terms of their relationship with the noun). Apart from this, even if a definition of this kind works (i.e., it identifies the words that are sensed as nouns), the two following objections are still valid: a) we can say that the definition works exactly because we already sense very well which words in a sentence are nouns; b) the fact that nouns occur with certain other parts of speech does not explain what nouns are, i.e., what their nature is. The fact is that the real problem is not giving a definition of "noun" that works, i.e., one that always identifies which words in a sentence are nouns. The real problem is understanding why we sense very well that in speech there are words that all belong to the same class, which is called the class of "nouns". If we understand this, the definition of "noun" comes automatically. OL provides a simple and natural solution to this problem. We have to note that: 1) conjunctions, adpositions and the verb in the personal form are never nouns; 2) the verb in the infinitive forms is instead a noun (for example, "reading books"); 3) in linguistics, adjectives are commonly considered "nominal forms", as substantives are. According to OL, the grammatical category of "noun" is based on the fundamental distinction between correlators and correlata, i.e., between elements of linguistic thought with the function of linking and elements that are linked by the former. Nouns are the mere correlata, i.e., the words that designate something that has no correlating function, unlike the linguistic elements that designate a correlator, or that designate also a correlator in addition to a correlatum (see below). Nouns are therefore the meanings that, in the graphic representation of the correlational triad we use, are exclusively placed in one of the two lower boxes, unlike the meanings that are placed, or are also placed, in the upper box (this definition is Ceccato's own). Therefore, according to OL the grammatical category of "noun" can be defined only by using the position the word has in the correlational network
(i.e., its function) as a criterion of classification, not on a semantic basis. For example, the words "John", "piece", "glass", "doors" and "windows", which are mere correlata in the corresponding correlations (diagrams not reproduced here), are nouns. The adjective (as a theme, i.e., apart from the marks of gender, number and case that some languages apply to it) also indicates a mere correlatum, as can be seen in a substantive-adjective correlation (Figure 7, not reproduced here: the adjective is a mere correlatum). Instead, the verb in the personal form is never a "noun", because it does not simply indicate a correlatum (thus it is not a "mere correlatum") but designates that this correlatum (the "bare" meaning of a verb, i.e., the meaning of its theme) is related (as a second correlatum) to what grammar calls a "person" (that is, the agent or the addressee of a linguistic act, or another person/thing ([92], p. 193)) by means of a correlator, presence keeping (therefore, the verb in the personal form indicates both a correlatum and a correlator). For example, the personal form "laughs" of the verb "to laugh" indicates that the (verbal) meaning "laugh" is related to a third person singular. Therefore, "laughs" is not a mere correlatum, but designates a whole correlation (diagram not reproduced here). Instead, the verb in the infinitive mood is a mere correlatum (examples were given in diagrams not reproduced here); therefore, in this case the verb is a noun. Thus, the noun/verb distinction does not have a semantic basis, but depends on the function that the meaning at stake has in the correlational network.

Now it is worthwhile to consider OL's definitions of "noun" and "personal verb" once again, and to compare them with each other and with some others.

noun: as just stated, the concept of "noun" cannot be defined on a semantic basis, but only with a functional criterion, that is, on the basis of the position that a word has in the correlational network: nouns are the mere correlata, that is, the words that designate something that has no correlating function, unlike the linguistic elements that designate a correlator, or that designate also a correlator in addition to a correlatum. Nouns include substantives, adjectives (for the definition of these two categories, see below) and the non-finite forms of the verb (the infinitive, the participle, the gerund), which are indeed also called "nominal forms" of the verb.

verb: what requires a prolonged or repeated attentional focalization to be acknowledged (i.e., is not instantaneously recognizable, as instead happens for the substantive, see below) is a meaning of a verbal kind. This is clear for "processes" (the first of the two things that the verb is traditionally said to designate), but it is also true for the second, i.e., "states" (it is not possible to say that something, for example, "is still", without looking at it for a certain time). A good example for clearly sensing the difference between a substantive and a meaning of a verbal kind (whether dynamic or static) is imagining a ship on the horizon: the ship (i.e., a substantive) is perceived instantaneously, while its moving or being still (i.e., verbs) only with a prolonged observation. Words with a meaning of a verbal kind are, for example, "(he/she/it) breathes", "breathing", "breath", "operation", "discussion", "development", "passage", "arrival", etc.
If a word designates a meaning of a verbal kind together with the fact that this meaning is related (as a second correlatum) to a grammatical person (so that the word designates both a correlatum and a correlator), it is a verb in a finite mood; if a meaning of a verbal kind is a mere correlatum, it is a nominal form of the verb (for example, "to breathe", "breathing") or a noun having a meaning of a verbal kind (for example, "breath") (we will not discuss what distinguishes the latter two, for example "breathing" and "breath", since this is a minor difference). It is not incorrect to say, as has traditionally been said, that verbs designate "processes" or "states", but this is not a satisfactory definition of "verb". The traditional definition, instead of really defining verbs, simply lists the two main categories into which verbs can be distinguished (i.e., verbs that designate a process and verbs that designate a state). OL's definition, instead, is an extralinguistic definition, based on cognitive operations. But the main flaw of the traditional definition is that it cannot explain why certain words (for example, "breath"), even if they designate processes, are nouns. The traditional definition cannot explain this fact because it does not clearly distinguish, as OL does, "verb" from "meaning of a verbal kind", and does not grasp the fact that the real opposition is not between "noun" and "verb", but between "meaning of a verbal kind" and "meaning of a substantival kind" (or, simply, "substantive"). OL defines the substantive in the following way.

substantive: the substantive designates something that is acknowledged in an instantaneous way (i.e., without any need to follow the situation over time, as instead happens for meanings of a verbal kind) and is acknowledged or considered independently from other things (unlike adjectives): for example, all of this applies to words such as "bird" and "flower" (i.e., substantives), but not to words such as "to fly" (i.e., a verb) and "red" (i.e., an adjective). In turn, OL defines the adjective in the following way.

adjective: the adjective designates something that is acknowledged in an instantaneous way (therefore, like substantives and unlike meanings of a verbal kind) by separating this something from something else (and therefore not independently, as the substantive is). For example, the word "red" designates something that is instantaneously acknowledged and that does not exist independently, but is necessarily tied to something else (something red), from which it is isolated by means of the selective ability of attention. The definitions that have just been proposed, except that of "noun", are the author's own.

Once we have given our definition of "noun", we can add some considerations about the correlational theory of thought and the concept of "correlation". 1) The fact that OL conceives the structure of linguistic thought as made up of elements having an equal structure, i.e., the correlational triads (where, moreover, the correlator is often the same, i.e., the simple "presence keeping"), should not make one think that the concept of correlation is too general or that OL does not accept traditional grammatical concepts such as predication, agreement, etc. On the contrary, OL, too, accepts these concepts (generally speaking, OL accepts all the traditional grammatical concepts, with only marginal modifications, and tries to account for them). Simply, OL maintains that many correlations are based on something common
(i.e., the operation of presence keeping, which we believe to be substantially the loading of a meaning in working memory), and that the difference amongst these correlations is determined not so much by the correlator as by the correlata. For example, the fact that "John reads" is a "subject-verbal predicate" correlation is determined not by a particular correlator that is different from, say, the correlator of a "substantive-adjective" correlation, but by the two correlata "John" (which is the first correlatum of a verb in the personal form, which, according to OL, makes it a grammatical subject) and "reads" (which is a verb in the personal form, which necessarily involves a subject).

2) It is instead true that the correlational theory of thought differs deeply from the other linguistic theories about sentence structure, for at least two reasons. a) First, according to the correlational theory of thought the fundamental concepts of language are "correlation" and "correlator", while in many other theories the concepts of subject/predicate or nominal phrase/verbal phrase are central. This does not mean that OL rejects the latter. OL simply considers them less central than the concepts of "correlation" and "correlator". According to the correlational theory of thought, what is absolutely necessary in any phrase or sentence are the correlators (which are expressed, as mentioned, by putting the words one after the other, by adpositions, conjunctions, morphological marks, or a particular word order (for example, the expression of the genitive by means of the inversion of the order of the two nouns), or are implicit). Therefore, according to OL, the analysis of a phrase or sentence consists of identifying the correlators and the structure that these form when linking the various correlata. Once we have identified this structure, we can also speak of "subject" and "predicate", "noun phrase" and "verb phrase", etc., but this is less important than identifying the correlational network. Indeed, in some languages a finite verb is not always necessary in order to form a sentence ([92], p. 176). Moreover, even in languages, such as English, where this is said to be necessary, linguistic expressions without a subject and a finite verb, such as certain exclamations, titles, labels and captions, are actually found. What instead cannot be missing in any phrase or sentence are the correlators, and because of this correlators are considered central by OL. A consequence of this conception is that adpositions (or the corresponding cases) and conjunctions, i.e., parts of speech that have traditionally received less attention than the noun and the verb, become the central parts of speech. b) Secondly, the correlational theory of thought conceives the structure of linguistic thought as much less hierarchical than many other theories do. For example, the expression "scent of roses" is not described, in our theory, as a noun that governs a prepositional phrase, but as a correlational triad made up of a correlator ("of") tying two correlata (the meanings of the two nouns), which are substantially in a condition of parity, except for the temporal order in which they are focused on by attention and loaded in working memory (therefore, the traditional tree structures or similar representations absolutely cannot be used to represent the structure of thought according to our theory; this subject cannot be addressed in depth here, but is addressed in [55], pp. 4-9, and [56], pp. 18-19).
3) The correlational theory of thought easily explains why a certain sequence of words in a given language is grammatical or not, which is a central problem in generative grammar. This subject requires a great deal of space to be addressed and is therefore completely out of the scope of a brief article such as this. Here, we can only say that the correlational theory of thought uses the distinction between correlators and correlata, and the fact that two correlata are necessarily tied by an (explicit or implicit) correlator, to decide whether a string of words is grammatical or not (nevertheless, the syntactic rules of the language must also be considered).

The Other Main Features of OL in Brief

We have introduced only a few analyses of mental categories, and we will not add others, because these are sufficient to present our theory. Here we will instead illustrate the other main features of the theory very briefly. The ideas introduced in this section are all the author's own, except point 3.

1) OL provides, in a very natural manner, a new solution to a central question in the philosophy of language and psycholinguistics, i.e., whether language is an evolutionary product of increased human intelligence over time and of social factors, or whether language exists because humans possess an innate ability, an access to what has been called a "universal grammar" (the first view is well represented by the mentalistic theories of [...]). On the contrary, for example, making a distinction, by means of two different demonstrative adjectives, between when something is far from both the speaker and the addressee and when something is far from the speaker but close to the addressee, is not that essential, so that there can be languages that make this distinction (such as Latin, with the ille/iste pair) and others that do not (such as English, which uses the demonstrative adjective "that" in both cases). The thesis of OL on the innate or acquired origin of language is simple and natural. In fact, the existence of only a small innate component (i.e., operations within cognitive functions) is a completely plausible hypothesis, and one that avoids the difficulties that derive from hypothesizing the existence of an innate "deep" universal grammar, namely: a) the little intrinsic plausibility of this hypothesis, and b) the need to reduce the differences found across the grammars of the various languages to a unique universal grammar. On the other hand, resorting to the cultural factor alone is probably insufficient to explain the analogies, which far exceed the differences, across languages, and the huge difference between human thought/language and animal thought/communication.

2) OL, with its description of linguistic thought in terms of operations of attention and other cognitive functions, makes it clearer what the essence of human thought/language is, and allows us to better account for the huge difference between human thought/language and animal thought/communication (which is another fundamental issue in psycholinguistics). In brief, according to OL human thought/language is based on two fundamental processes. The first process is a fragmentation of experience, a fragmentation that is allowed by the selective ability of attention.
This fragmentation leads to the formation of a large number of meanings (for example, the perception of an object with its color, say a green leaf or a red apple, is a unitary experience, but the human attentional ability allows humans to isolate the shapes of the leaf and the apple from the color green and the color red, thus creating the four meanings "leaf", "green", "apple" and "red"; the same happens in countless other situations, such as the isolation of the action of "flying" from the object "bird", or of the meaning of the adjective "hard" from the object "stone", etc.). The second process is a recombination of these many single different meanings, a recombination that is carried out thanks to the correlators and that leads to the formation of the correlational networks, i.e., the sentences. In this way humans, by means of a number of words that is limited (even if rather large: the words that designate the aforesaid many meanings that have been created, i.e., the lexicon of a language), can produce an unlimited number of utterances; that is, they can describe any experience. For instance, with the words of the aforesaid example, they can describe, besides a green leaf and a red apple, a green apple and a red leaf too. According to OL, the huge difference between human thought/language and animal thought/communication is due, among other things, to the very fact that: a) animals, even if some probably have perceptual abilities (hence, are able to have experiences) that are not very different from ours, probably have an attentional ability that is much less sophisticated than the human one, and that does not allow the aforesaid process of fragmentation; b) animals are probably not provided with the ability to produce the mental categories of relationship, i.e., the correlators (hence, the correlational networks), a task that definitely requires a large working-memory capacity. Therefore, OL ascribes the difference between human language and animal communication, among other factors that have been highlighted by previous literature (which OL does not reject at all), not to a substantial difference between the cognitive abilities of humans and those of animals, but to a different development of the same abilities, thereby recognizing that there is no fracture between human beings and animals, but only a different degree of evolution.

3) OL is an approach to the study of language, hence something strictly theoretical. Nevertheless, OL could also have at least one practical spin-off. In fact, the correlational theory of thought has led to the conception of a device for the implementation of an innovative machine translation program, which might allow us to achieve a better translation quality than that of the programs available today (for the history and the state of the art of machine translation, see: [101], [102], [103], [104], [105], [106], [107]). This device is described in detail in [55]; a first device of this kind was conceived by Ceccato and his collaborators within his machine translation project ([46]; [108]).

A Comparison Between OL and Other Approaches

Since an in-depth comparison between OL and other approaches is well beyond the scope of a brief article such as this, here we will just mention the main approaches that can be considered. One can easily understand that OL is substantially incompatible with generative grammar. First of all, because in generative grammar syntax is central, while according to OL what is central is semantics, and syntax is nothing else but an aspect of semantics.
The other major difference from generative linguistics is the fact that, as mentioned in the previous section, OL conceives no innate ability or device specific to language. OL maintains that the only (even if fundamental) innate components of language are cognitive functions (amongst which attention plays a key role), which therefore are not at all specific to language itself. As for the rest, language is essentially a cultural product, and the fundamental factor that determines the meanings present in it is the usefulness of these meanings in satisfying the communicative will of humans. Therefore, OL's conception is radically different from the conception of language as an innate and universal ability of humans, which is typical of the generative tradition. In a certain sense, one can say that the two conceptions are opposite: according to generative linguistics, language originated from something specific (i.e., the appearance, in an evolutionary sense, of a specific device), while according to OL language originated for something (i.e., a purpose, that of satisfying the communicative will of humans), on the basis of preexisting nonspecific functions. Besides the fact that both theories, like others, are mentalistic, the only analogy can perhaps be the fact that OL, too, conceives a structure of linguistic thought, i.e., a deep level, which: a) is always different, as regards a certain aspect, i.e., its non-linear structure, from the surface structure, which is necessarily linear; and b) can be different in some cases, for example from the superficial SVO order, as mentioned in note 4. OL is also incompatible with the logical-formal approaches originating from the work of Russell, Frege, Wittgenstein and Tarski, such as truth-conditional semantics, Montague grammar, etc. OL is also substantially different from the structuralist approach, where grammatical elements are often accounted for in terms of relationships of opposition to each other, or are sometimes considered to substantially lack a meaning and to take various meanings according to the context (see note 6). OL also substantially differs from distributional approaches, which account for linguistic elements in terms of relationships of occurrence with each other. On the contrary, OL has something in common with cognitive linguistics (that is, the theories proposed by authors such as Lakoff ([65]), Langacker ([109], [110], [111]), Talmy ([112], [113]), and others), such as the conception that language is not based on an ad hoc device but on pre-existing cognitive functions, and the recurrence of the concepts of "construction" and "operations", so that OL could even be considered a theory within cognitive linguistics, even if the two approaches originated in a completely independent way from each other. Nevertheless, there are also important differences. Cognitive linguistics, indeed, deals extensively with lexical meanings (while OL, on the contrary, deals mainly with grammatical meanings), and seems to focus more on the influence that cognitive operations have on the whole sentence or on the choice of a word inside the sentence, while OL provides an analysis in terms of "atomic" components of the meaning of the single grammatical elements. The idea that there is a close relationship between attention and meaning was already put forward about a century ago by Valéry ([114]) and Vygotskij ([115]), but just as a hint (that is, no systematic attempts at analyzing meaning in attentional terms were performed, as Ceccato and the SOI instead did).
Systematic attempts have instead been performed in the past few decades by various authors, such as Oakley, Carstensen, Talmy, and Lampert. It is important to note that these scholars came to put forward the hypothesis that attention plays a key role in the construction of meaning in a completely independent manner from the SOI. This basic presupposition is clearly present in Oakley's work ([116], [117]). Indeed, even if Oakley bases his semantic analyses on the "Mental Spaces and Blending Theory" originally developed by Fauconnier and Turner, he conceives the operations relevant to such spaces as attentional phenomena. However, Oakley too, like the other cognitive linguists, generally analyzes the whole sentence or text, not the single linguistic elements (as OL instead does), because he considers the context as decisive for the construction of meaning, and as prevailing over the basic meaning of each single word. Undoubted analogies with OL's approach can be found in the semantic analyses of locative expressions by Carstensen ([11], [12], [13]), who resorts to the concept of attentional operations performed by the subject (such as "shift" and "zooming"; see the concept of movement of the attentional focus included in the EOMC (section 3.2) and the author's analyses of locative terms in [56], [57], [58], [59], [60]). Talmy ([118], [119]) has investigated the role played by attention in meaning selection and construction by means of a specific research program (Linguistic Attention). This program has also been maintained by Lampert ([120], [121], [122], [123]). The relationship between attention and meaning has also been investigated and proved by experimental research (even if not related to the aforesaid specific research programs), such as, for example, the studies by Logan (which concerned spatial concepts, [131]), by Taube-Schiff and Segalowitz (which showed that grammatical elements act as an attention-directing mechanism, [132]), and by Tomlin (which concerned the relationships between the direction of attention and the choice of the grammatical subject of the sentence, [133]). The importance of some form of attention for some categories of words has also been partially acknowledged by linguists who have not developed theoretical frameworks for the analysis of meaning in attentional terms, such as, for example, Diessel (according to whom demonstratives function to coordinate the interlocutors' joint focus of attention, [134], [135]). For an in-depth survey of the status of the research on the relationships between attention and meaning, see [136].

Note 6: A comparison with logical-formal and structuralist approaches can be found in [61], pp. 191-192.

Something similar to the distinction between correlators and correlata can be found in the classification of the "grammatical concepts" made by Sapir, who divided them into two main categories depending on whether they concerned the "material content" or the "relationship", and who also stated that two subcategories of these two categories are essential to any form of language ([137]). In Sapir this distinction seems to refer more to the level of language than to that of thought, is not expressed in cognitive terms, and is less central. Nevertheless, Ceccato and Zonta ([50]) explicitly acknowledge that Sapir's approach to the classification of the parts of speech is "the closest" to OL's.
As a general consideration, while some similarities between OL and the other approaches can be found, they are very limited and sporadic: if OL is compared with the other theories as a whole, OL proves to be something deeply different.
Conclusion
This article aims to introduce a new theory that deals mainly with the problem of the meaning of the fundamental grammatical elements of language (not of single languages). We have introduced only the general outlines of the theory, in order to give a general idea within the space of an article. The theory is based on the fundamental presupposition that the meaning of grammatical elements has to be searched for not, or not only, at the level of the particular objective situations where these linguistic elements are used, but at the top level of abstractness, i. e., at the level of the cognitive operations performed by the subject, amongst which those of attention play a key role. The theory, albeit systematic and, from a certain point of view, complete, is nevertheless a first attempt in this direction and as such may contain mistakes or may need to be broadened or modified. Nevertheless, this theory seems to take us, in a simple and natural way, towards a unitary solution of the problem as a whole. This leads me to suppose that this is at least the right direction to follow in order to solve this problem in the philosophy of language.
15,827
2016-01-13T00:00:00.000
[ "Linguistics", "Philosophy" ]
In situ XAFS of acid-resilient iridate pyrochlore oxygen evolution electrocatalysts under operating conditions
Acid-stable electrocatalysts are sought after for application in polymer electrolyte membrane (PEM) devices, such as electrolysers for water splitting [1] and in fuel cells, where they can provide a buffer to counter the corrosion of the carbon support at extremes of potential. [2] The acid electrolyte in these PEM devices provides advantages over alkali systems, with high charge density and no detrimental effects of carbonate contamination. [3] The dioxides of ruthenium and of iridium, both with the rutile structure, have been proven to be both active and robust catalysts for the oxygen evolution reaction (OER) in these situations. [4] The combination of the two precious metals in a ternary mixed oxide has been used to temper the high reactivity of ruthenium, which can be dissolved under operating conditions, with the greater stability of iridium. [5] In the past few years a number of other ruthenium and iridium oxides have been studied as acid-resilient electrocatalysts, with the purpose of discovering new, more active catalysts as well as including partner base metals to lower the concentration of the precious metals, making more economically viable materials. This includes mixed rutile phases, such as Cr0.6Ru0.4O2, [6] perovskites, such as SrRuO3, [7] Sr1-xNaxRuO3, [8] Ba2MIrO6 (M = Y, La, Ce, Pr, Nd, Tb), [9] and Sr2MIrO6 (M = Fe, Co), [10] and pyrochlores, such as Y2Ru2O7, [11] Y1.85Zn0.15Ru2O7−δ, [12] and A2Ru2O7 (A = Yb, Gd, Nd). [13] In these materials the non-precious-metal cation stabilises the crystal structures of the multinary compositions, allowing access to higher oxidation states of Ru and Ir than seen in binary oxides. In our own work we studied the pyrochlores Bi2Ir2O7 [14] and (Na,Ce)2(Ru1-xIrx)O7 [15] and showed them to be robust electrocatalysts, with the latter showing activity and stability modulated by the Ru:Ir ratio. The mechanism of action of the precious-metal oxide electrocatalysts is still under consideration, with many of the conclusions indirectly inferred from electrochemical data, [16] rather than from direct probes of atomic structure. Hillman et al. used X-ray absorption fine structure (XAFS) spectroscopy to study the local iridium environment in deposited iridium oxide films upon redox cycling in neutral and alkaline aqueous conditions. [17] That work, carried out over a rather limited potential range short of OER conditions, proposed a scenario in which the iridium atoms respond by a two-site reaction, where two types of active sites, which have distinct local structure and electrochemical response, have distinct redox potentials.
For (Na,Ce)2(Ru1-xIrx)O7 we used in situ XAFS to monitor the change of Ir and Ru oxidation state upon application of potential into the OER regime, and used the X-ray absorption near edge structure (XANES) to track metal oxidation state, revealing a cooperative response of the two metals under electrocatalytic conditions. [15] In this communication we report application of this methodology to the pyrochlore system (Na,Ca)2−xIr2O6·H2O [18] and consider the extended X-ray absorption fine structure (EXAFS) to quantify local atomic structure in situ during electrocatalysis at potentials associated with OER. We were interested to study a pure iridate that may offer long-term stability, since the loss of ruthenium remains an issue even in these multinary oxides. In addition, with only a single precious metal to consider, we aimed to obtain definitive structural information as a function of applied potential. The pyrochlore material (Na,Ca)2−xIr2O6·H2O was prepared by hydrothermal crystallisation directly from CaO2, Na2O2 and IrCl3·5H2O in 10 M NaOH solution at 240 °C (ESI†), and was characterised using powder X-ray diffraction (XRD) and scanning transmission electron microscopy (STEM), Fig. 1. Scherrer analysis of the diffraction profile gave an average crystallite domain size of 36.2 ± 2.4 nm, consistent with the STEM imaging, while lowering the synthesis temperature to 170 °C gave a second sample that consisted of smaller crystallites (11.2 ± 1.9 nm by Scherrer analysis, ESI†). Surface areas, measured by nitrogen adsorption isotherms and the BET method, were 7.2 m² g⁻¹ and 62.7 m² g⁻¹ for the two samples, respectively, consistent with the relative dimensions of the crystallites assuming no significant agglomeration (as confirmed by STEM images). The Rietveld method was used to refine the chemical composition, giving the empirical formula (Ca0.70Na0.24)2Ir2O6·H2O for the most crystalline sample, with the crystal water content verified using thermogravimetric analysis (ESI†). Using the refined crystal structure model, the bond valence sum method gave an Ir oxidation state of +4.5, not inconsistent with the value of +4.4 expected from charge balance. Prior to in situ EXAFS measurements, ex situ XANES spectra were recorded at the Ir LIII-edge to determine the average oxidation state of Ir in the pyrochlore, by comparison to the reference materials IrCl3, IrO2 and BaNa0.5Ir0.5O3−δ, which contain iridium in oxidation states +3, +4 and +4.9, respectively, Fig. 2. This shows the iridium to be in an average oxidation state of +4.5, consistent with the results from crystallography and also with earlier work on a material of similar composition prepared by a different method. [18] The XANES measurements made in situ were analysed similarly (see below), and an important, immediate conclusion from these data is that there is negligible loss of iridium into solution with applied potential in 1 M H2SO4 solution, since the white-line intensity of the raw data was not seen to diminish during the course of the experiment (see ESI†). The in situ EXAFS data were analysed to obtain quantitative information about the Ir local environment. A single-shell fit was performed to focus on the Ir-O contribution to the spectrum; typical fits to the spectra, and their associated Fourier transforms, are shown in Fig. 3 for data measured prior to application of potential and at the highest potential, 1.78 V vs. the standard hydrogen electrode (SHE).
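Returning briefly to the crystallite sizes quoted above, the Scherrer estimate is a one-line calculation; the sketch below shows it in Python, with the peak width, peak position and shape factor as illustrative assumptions rather than the fitted values behind the 36.2 nm result.

```python
import numpy as np

def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, K=0.9):
    """Volume-averaged crystallite size D = K*lambda / (beta * cos(theta)).

    fwhm_deg      -- peak full width at half maximum in degrees 2theta
                     (instrumental broadening already subtracted)
    two_theta_deg -- peak position in degrees 2theta
    wavelength_nm -- X-ray wavelength (default: Cu K-alpha1)
    K             -- shape factor, ~0.9 for roughly equiaxed crystallites
    """
    beta = np.radians(fwhm_deg)            # peak breadth in radians
    theta = np.radians(two_theta_deg / 2.0)
    return K * wavelength_nm / (beta * np.cos(theta))

# Illustrative numbers only -- not the measured peak widths from this paper.
print(f"D = {scherrer_size(fwhm_deg=0.24, two_theta_deg=30.0):.1f} nm")  # ~34 nm
```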
Fitted EXAFS parameters from these spectra are presented in the ESI.† For the highly crystalline sample of the pyrochlore, EXAFS spectra were successfully recorded and analysed in a sequence of applied voltage steps, up to 1.78 V, and then upon reversal of the potential to a value lower than the initial starting value. Fig. 4 summarises the structural information obtained from fitting the spectra, and full details are provided in the ESI.† These results highlight the gradual shortening of the Ir-O distance up to 1.48 V, the onset of OER, reaching a constant value during OER and then lengthening upon reversal of the potential, Fig. 4a. A shorter average Ir-O distance implies oxidation of the iridium. Meanwhile, the Debye-Waller factor shows essentially no change in magnitude over the whole process, Fig. 4b. The iridium oxidation state can be quantified by the shift in edge position, as shown in Fig. 4c, and this measurement was carried out in a separate experiment on materials of two different particle sizes. We can thus conclude that the iridium, on average, increases in oxidation state by 0.5 units under OER conditions and that this is independent of the particle size (and surface area) of the material, despite the greater proportion of accessible iridium expected for the higher-surface-area sample. We also compared the oxidation-state shift from XANES with that derived from the Ir-O bond distance using the bond valence sum method, and this showed essentially the same behaviour with applied potential, thus giving independent verification of the changes in average oxidation state seen in situ (ESI†). To understand the changing local structure of iridium in the pyrochlore during operation as an electrocatalyst, we consider the models proposed by Hillman et al. in their study of iridium oxide films. [17] In that work it was observed that the Debye-Waller factor of the Ir-O shell increased upon iridium oxidation (albeit below the potentials needed to reach OER conditions), which was reconciled as being due to a two-site model for redox, in which two iridium species responded differently to the applied potential to give a greater static-disorder contribution to the Debye-Waller factor. An alternative model is one in which all iridium atoms in the sample respond simultaneously to the applied potential, such that the Ir-O bond distances of all are shortened by the same amount with no increase in static disorder. This can be explained by a single-site band model whereby electrons are removed not just from surface sites, but from the conduction band of the metallic particles. The observed behaviour of the pyrochlore is entirely consistent with this idea. Independent verification of this model comes from study of the sample with higher surface area and smaller crystallite domain size, which shows no greater extent of iridium oxidation, despite the ten-fold increase in specific surface area. Thus we propose that the iridate pyrochlores show redox behaviour different to that of the previously studied iridium oxide films: this could be due to the different local structure of the materials, since the films were structurally disordered, being hydrated and low-density forms of iridium oxide, [17] rather than the well-defined, crystalline particles that we have studied.
Conclusions
We have shown that in situ XAFS (XANES and EXAFS) can be recorded from electrocatalysts operating under realistic conditions to extract quantitative structural information that can be used to probe the mechanism of their operation.
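The bond-valence-sum cross-check used above reduces to a simple formula, V = sum over bonds of exp((R0 − R)/b). A minimal sketch follows; it assumes a single shell of six equal Ir-O bonds, and both the bond length and the R0 parameter (here the commonly tabulated Ir5+-O value) are assumptions to be matched to the parameter set appropriate for the oxidation state under test, not the paper's fitted EXAFS values.

```python
import math

def bond_valence_sum(bond_lengths_angstrom, R0=1.916, b=0.37):
    """Bond valence sum V = sum_i exp((R0 - R_i)/b) over a coordination shell.

    R0 -- tabulated bond-valence parameter (1.916 A is the commonly quoted
          Ir5+-O value; an assumption here, choose per the ion being tested)
    b  -- the usual universal constant, 0.37 A
    """
    return sum(math.exp((R0 - R) / b) for R in bond_lengths_angstrom)

# Six equal Ir-O bonds of an illustrative length (not the fitted value):
print(f"V(Ir) = {bond_valence_sum([2.00] * 6):.2f}")  # ~4.8
```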
This method could be applied to other families of oxide materials currently emerging as acid-stable catalysts in important contemporary energy-related applications. For the pyrochlore studied here, we have shown how the bulk metallic character of the oxide particles plays an important role in their mode of operation, with electrons from the conduction band of the oxide being extracted to bring about OER. Complementary cyclic voltammetry (see Fig. S3, ESI†) shows only one redox feature in the potential window of the in situ XAFS experiment, which would correspond to oxidation/reduction of Ir4+/Ir5+. The surface reactivity must clearly also hold the key to the catalytic properties, since this is where the water interacts with the catalyst surface, and where the splitting of water and release of oxygen take place, but the transport of charge from the bulk particle is an important aspect of the overall mechanism. In multinary oxides, such as the pyrochlore studied here, the role of the partner non-precious-metal cations may also be important in providing charge balance under electrocatalysis conditions, which may explain their different mode of operation compared to binary precious-metal oxides. More mechanistic studies on other members of the pyrochlore family would give greater insight into this idea. Further work is also needed to understand the long-term stability of the pyrochlores under operating conditions, and their application in real devices, which will be the topic of forthcoming publications.
Conflicts of interest
There are no conflicts to declare.
3,449.6
2020-05-18T00:00:00.000
[ "Physics" ]
Worldwide attention to environmental issues, combined with the energy crisis, forces us to reduce greenhouse emissions and increase the usage of renewable energy sources as a solution for providing an efficient environment. This book addresses the current issues of sustainable growth and applications in renewable energy sources. The fifteen chapters of the book have been divided into two sections to organize the information accessible to readers. The book provides a variety of material, for instance on policies aiming at the promotion of sustainable development and implementation aspects of RES.
Salix viminalis - Conventional applications
2.1 Energetic utilization
In recent decades, among all Salix genotypes, Salix viminalis in particular has gained the most attention as an agriculturally cultivated plant due to its application as a green fuel. Several positive and negative aspects of such an application are summarized in (Zabrocki & Ignacek, 2008). Many of the above-mentioned arguments in fact stem from the economic, social and political background. Thus, the cultivation of Salix viminalis may definitely find numerous supporters but also opponents. However, the positive features of Salix viminalis cultivation must be kept in mind, since they may help to solve some critical environmental and social issues. In some countries, such as Poland, the cultivation and usage of energetic willow remains at a very limited level. Some sources claim (Bio-energia, 2008) that the overall Salix viminalis cultivation area does not exceed 6000 hectares. The share of Salix viminalis wood burned as a fuel in the total mass of burnable fuels is only ca. 2%. The situation has not changed significantly in recent years despite competitive energetic characteristics (Gradziuk, 1999).
Environment protection - Phytoremediation
Phytoremediation of soil and waters is the task of numerous research projects and technological undertakings. Such attempts are based on a unique feature of Salix viminalis, i. e. the ability to effectively take up, deactivate and accumulate relatively high amounts of heavy metals without losing its vitality. The efficiency of metal ion accumulation is extraordinarily high compared to other plants and microorganisms. Therefore Salix viminalis is often called a "hyper-accumulator". This allows one to state that Salix viminalis is unique among energetic plants, which mainly offer only a high growth rate and mass production but are poor metal ion accumulators. Memon et al., 2001, citing other authors, stated that retention of heavy metals may be attributed to one of the below-mentioned technologies (Salt et al., 1995; Pilon-Smits & Pilon, 2000): 1. Phytoextraction, in which metal-accumulating plants are used to transport and concentrate metals from soil into the harvestable parts of roots and above-ground shoots (Brown et al., 1994; Kumar et al., 1995). 2. Rhizofiltration, in which plant roots absorb, precipitate and concentrate toxic metals from polluted effluents (Smith & Bradshaw, 1979; Dushenkov et al., 1995). 3. Phytostabilization, in which heavy-metal-tolerant plants are used to reduce the mobility of heavy metals, thereby reducing the risk of further environmental degradation by leaching into the ground water or by airborne spread (Smith & Bradshaw, 1979; Kumar et al., 1995). 4. Plant-assisted bioremediation, in which plant roots in conjunction with their rhizospheric microorganisms are used to remediate soils contaminated with organics (Walton & Anderson, 1992; Anderson et al., 1993).
In the case of Salix viminalis the process of metal ion accumulation proceeds through the root system and ion transport involving vascular tissues in stems, with differentiated distribution in the whole plant body. Permeation of ions into the roots is the typical route of efficient metal ion collection by Salix viminalis. This is the basis for the practical utilization of Salix viminalis for the purification of various matrices (soil, water, etc.) in contact with the roots of the plant. Planting Salix viminalis on metal-contaminated soils and/or bringing the plant into contact with contaminated waters leads to slow but constant removal of the metal impurities and finally to remediation of soil and waters. According to Baker & Walker, 1990, plants may follow three pathways when they grow on metal-contaminated soils. 1. Metal excluders: the aerial parts of these plants are free from metal contamination despite high concentrations in the soil and in the roots. 2. Metal indicators: such plants accumulate metals in their aerial parts and the concentration of metals depends on the metal content in the soil. 3. Accumulators and hyperaccumulators: these plants concentrate metals in their aerial parts, and the metal content in the tissues exceeds the metal content in the soil. A plant capable of accumulating more than 0.1% of Ni, Co, Cu, Cr or Pb, or 1% of Zn (regardless of differences in metal content in the soil), in its leaves (dry mass) is called a hyperaccumulator. Salix viminalis, according to our earlier studies, may be assigned to the accumulator/hyperaccumulator category. Figs 1 and 2 present some of our results (Łukaszewicz et al., 2009) on the concentration of selected metal ions (Zn2+, Cu2+, Cr3+) in different parts of Salix viminalis rods after a certain time of contact with water solutions of the ions. Table 5 shows that example heavy metal ions (Cu2+) penetrate all important parts of Salix viminalis. The ion penetration and the resulting copper accumulation increase with increasing concentration of Cu2+ in an artificial soil. Plants were incubated in complete Knop's medium (Reski & Abel, 1985) containing copper salt at 0, 0.5, 1.0, 1.5, 2.0, 2.5 and 3.0 mM, stabilized with quartz sand in hydroponic pots. It is also visible that roots and rods (stems), i. e. the plant parts responsible for metal ion transportation, accumulate copper ions more intensively than new shoots and leaves. The latter parts are rather a final location of metal ions and do not participate significantly in the ion transportation. Table 5 also informs about biometric changes of the plants exposed to Cu2+ infiltration. The plants were still living, but shoots, leaves and roots underwent a gradual degradation consisting in a reduction of mass and/or dimensions. Table 6 (Mleczek et al., 2009) considers the dependence between the kind of metal ion and its accumulation in different tissues. No strict correlation is visible except for a general tendency towards intensive accumulation of cadmium and chromium. During the experiment, young shoots of Salix viminalis after defoliation were put into vessels with water and left until fresh leaves and roots sprouted. Selected plants were moved to glass vessels filled with Cu, Cr and Zn salt solutions (0.01 M each). Additionally, a chelating agent, EDTA, in water solution was added in an amount calculated on the assumption that EDTA was capable of forming bichelates. According to some authors (Blaylock et al., 1997), plants should be more tolerant to chelated metal ions, since after complexation their toxicity is lower.
However, there is no common agreement about the positive influence of chelation on the metal ion uptake by Salix viminalis. After 7 days the plants were taken from the vessels and appropriate parts of the stems (wood samples) were cut and subjected to elemental analysis (Fig. 2). It is visible that metal ions enter the aerial part of Salix viminalis, but the concentration of metals depends on the height above the ground level. Some studies point out a differentiated distribution of metal ions in roots, stem, leaves, etc. Tables 5 and 6 present such data (Gąsecka et al., 2010). Table 7. Concentration of heavy metals in young shoots of 12 Salix genotypes before and after the experiment (hydroponic estimation of heavy metal accumulation). Table cited after with no changes. The mechanism of heavy metal intrusion, transportation, deactivation and accumulation has been investigated intensively over many years (Shah & Nongkynrih, 2007; Memon et al., 2001; Lasat, 2001; Clemens, 2006). Fig. 3 illustrates the complex nature of the processes. Shah & Nongkynrih, 2007, recall several basic mechanisms of metal ion assimilation, among which chelation plays a crucial role. Many substances (chelators) occurring in plant cells contain typical chelating (ligand) atoms such as oxygen, nitrogen and sulfur. Chelators contribute to metal ion detoxification. Other functional compounds, called chaperones, specifically deliver metal ions to organelles and metal-requiring proteins. The principal metal chelators in plants are phytochelatins, metallothioneins, organic acids and amino acids. Shah & Nongkynrih, 2007, after some other authors, state that phytochelatins are small metal-binding peptides whose formation involves glutathione, homoglutathione, hydroxymethyl-glutathione or gamma-glutamylcysteine. Metallothioneins are low-molecular-mass cysteine (Cys)-rich proteins that bind metal ions in metal-thiolate clusters. Over 50 metallothioneins have been identified so far in plants. Organic acids and amino acids, because of their N and O content, may intensively chelate various metal ions. Shah & Nongkynrih, 2007, claim that "citrate, malate, and oxalate have been implicated in a range of processes, including differential metal tolerance, metal transport through xylem and vacuolar metal sequestration". Salicylic acid and its derivatives, which are definitely present in Salix viminalis tissues, have also been identified as chelating agents in some plants. For Salix viminalis, the naturally high concentration of the latter species is probably the key factor providing the hyperaccumulating properties of the plant. Fig. 3. A model of the mechanisms that occur in the plant cell upon exposure to metals: metal ion uptake, chelation, transport, sequestration, signalling and signal transduction. The diagram shows the uptake of metal ions by K+ efflux and transporter proteins, their sequestration through the formation of PCs by the enzyme PC synthase and GSH in vacuoles, the subsequent degradation of PC peptides by peptidases to release GSH, the generation of ROI species, and the contribution of Ca2+ towards activation of the Ca2+/calmodulin kinase(s) and MAP kinase(s) cascade leading to defense gene activation in the nucleus; the effect of ROI on natural plant defense pathways like the octadecanoid pathway (JA) and the phenylpropanoid pathway (SA) biosynthesis that lead to defense and cell-protectant gene activation is also included, to correlate the induced metal stress defense with natural plant defense mechanisms.
AOS - allene oxide synthase; APX - ascorbate peroxidase; BA - benzoic acid; BA-2H - benzoic acid 2-hydroxylase; CAT - catalase; GSH - glutathione; JA - jasmonic acid; M2+ - metal ions; MAPK - mitogen-activated protein kinase; 12-oxo PDA reductase - 12-oxo-cis-10,15-phytodienoic acid reductase; PC - phytochelatin; PL - phospholipase; POX - peroxidase; SA - salicylic acid; SOD - superoxide dismutase. The figure and the caption are cited with no changes after Shah & Nongkynrih, 2007. Memon et al. claim that the application of biological metal accumulators and metal hyperaccumulators for the purification of soils and waters has several positive features, such as "low cost, generation of a recyclable metal-rich plant residue, applicability to a range of toxic metals and radionuclides, minimal environmental disturbance, elimination of secondary air or water-borne wastes, and public acceptance". The latter statement applies in full to Salix viminalis, too. Table 8 proves that Cd removal from soil is extraordinarily high (217 g/ha) compared to the other phytoaccumulators tested in the study (Porębska & Ostrowska, 1999). The concentration of the metal in dry Salix viminalis wood was also very high.
Fabrication of adsorbents and catalysts
The proven efficiency of metal ion accumulation by Salix viminalis described above led to a novel concept for the non-energetic use of the plant. In some earlier studies (Łukaszewicz & Wesołowski, 2008) the authors discovered that thermal treatment (under oxygen-free conditions) of dry Salix viminalis wood yields charcoals with a very original and potentially useful pore structure. Usually a two-step procedure was applied: a 1-hour preliminary carbonization in an inert gas atmosphere at 600 °C, followed by a 1-hour secondary carbonization in an inert gas atmosphere at a desired temperature ranging from 600 to 900 °C. The pore structure of the charcoals obtained in this way is characteristic because of a very narrow pore size distribution (PSD) function, i. e. only pores of nearly uniform dimensions contribute to the total pore volume (Figs 4, 5 and 6). The calculated pore dimensions allow such charcoals to be called "nanoporous Carbon Molecular Sieves" (CMS) (Horvath & Kawazoe, 1983). The Salix viminalis-originated CMSs proved their sieving properties in the separation of gas mixtures into single components under chromatographic conditions (Gorska, 2009). For example, Table 9 contains separation coefficients determined for the N2/CH4 binary gas mixture over two example Salix viminalis-originated CMSs of similar surface area. The separation is of industrial importance, since natural gas resources are often contaminated by nitrogen, whose high content may reduce the commercial value of methane. The values are dramatically bigger than 1 at all investigated temperatures, i. e. 30, 40, 50, 60 and 70 °C. It should be emphasized that the separation is very efficient even at the highest temperature of 70 °C. This is particularly important regarding a potential application of such carbons as an adsorbing bed in a PSA (Pressure Swing Adsorption) installation. In the PSA method, the first step consists in the compression of the gas mixture to be separated in the adsorbing chamber (filled with CMS). Gas compression is an exothermal process leading to warming of the gases and the carbon adsorbent, which is an undesired phenomenon, since separation at high temperature is generally much worse; for example, PSA separation of air is very temperature sensitive (Japan EnviroChemicals Ltd., 2011).
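For concreteness, a binary separation coefficient of the kind reported in Table 9 can be computed from the adsorbed- and gas-phase compositions as sketched below. The text does not state which convention Table 9 uses, so this standard selectivity definition, and the example compositions, are assumptions.

```python
def separation_coefficient(x_ads_n2, x_ads_ch4, y_gas_n2, y_gas_ch4):
    """Separation coefficient alpha(N2/CH4) =
    (x_N2 / x_CH4)_adsorbed / (y_N2 / y_CH4)_gas.
    alpha > 1 means the adsorbent preferentially takes up N2, enriching
    the remaining gas phase in CH4 (the desired direction here)."""
    return (x_ads_n2 / x_ads_ch4) / (y_gas_n2 / y_gas_ch4)

# Illustrative: adsorbed phase enriched to 70% N2 from an equimolar feed
print(f"alpha = {separation_coefficient(0.70, 0.30, 0.50, 0.50):.2f}")  # 2.33
```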
The described fabrication of CMSs does not exploit both unique features of Salix viminalis, i. e. the unique ability of Salix viminalis biomass to be transformed into a CMS and the ability of Salix viminalis to accumulate heavy metal ions. Both features were exploited in the case of a series of hybrid carbon-metal oxide catalysts obtained according to the fabrication procedure proposed recently by Łukaszewicz et al., 2007. The novelty of the method consists in the exploitation of the natural phenomenon of metal ion transportation in living plants for the introduction of a metal-based catalytic phase. Metal ions, after introduction into the transport-responsible tissues of a living plant (Salix viminalis), are transported to the plant cells. The process was efficient, since Salix viminalis was highly tolerant to the presence of heavy metal ions in its body. Freshly cut, ca. 20 cm long sections of a stem (rootless) of Salix viminalis were immersed (in vertical alignment) in a water solution containing equimolar quantities of La(NO3)3 and Mn(NO3)2 (example concentrations: 0.001 M, 0.01 M, 0.1 M). The stems were fresh enough to preserve intensive metal ion transport, resulting in a gradual rise of the solution along the treated stems. In some experiments a contrast dye was added to the solutions to allow visual observation of the capillary rise of the solutions along the treated stems. On the other hand, the length of the stems was short enough to avoid a differentiated distribution of metal ions in the stem, which might be expected in view of some former tests (see Figs 1 and 2). After the contact with the La3+ and Mn2+ ion solutions, the metal-saturated stems were dried, comminuted and carbonized (600-800 °C, a two-step procedure) in an inert gas atmosphere (N2). The first carbonization expelled volatile species and transformed the wood (lignin-cellulose matrix) into a CMS-resembling carbon matrix consisting mainly of C, O, N and H atoms (Gorska, 2009). The next heat treatment (1 h, N2 flow) at a temperature of 800 °C did not destroy the pore structure already developed during the preliminary carbonization and, most importantly, enabled the transformation of the introduced metal ions into the corresponding metal clusters. XPS and XRD analysis (Cyganiuk et al., 2010) proved that the complex oxide LaMnO3 was synthesized from the introduced ions. SEM and HRTEM investigations proved that the perovskite-type oxide is present in the obtained samples in the form of inorganic nano-crystallites suspended in the carbon matrix, which in general was an amorphous material with a few graphite nano-crystallites (Figs 7 and 8). The hybrid materials obtained in this way were tested as catalysts for the conversion of n-butanol to 4-heptanone. The catalysts exhibited very good catalytic performance despite the very low concentration of the active component, i. e. the perovskite-type oxide LaMnO3 (atomic content below 1%). The noticed high activity, i. e. yield and selectivity (Cyganiuk et al., 2010), resulted from the very high dispersion of the active phase, understood as the reduced size of the LaMnO3 crystallites (10-100 nm) and the uniform distribution of both metals in the carbon matrix (Fig. 9). Similarly, titanium- and cerium-based hybrid materials were obtained by exploiting metal ion transportation in living parts of Salix viminalis (ca. 20 cm long stem sections). Figs 10 and 11 depict the uniform distribution of Ce and Ti atoms in the carbon matrices.
Their occurrence is accompanied by oxygen atoms; however, the latter are a usual constituent of carbon matrices and cannot be exclusively associated with Ce and Ti in the form of metal oxides. Elemental analysis data definitely prove (Figs 10 and 11) that Ce and Ti are present in the investigated hybrid samples and that their presence results only from the performed fabrication procedure. The elements are relatively rare and have not been found in samples of non-impregnated but carbonized Salix viminalis wood. Also in this case the atomic content of the metals is very low, i. e. definitely below 1%, regardless of the concentration of the impregnating solution. Thus, the proposed exploitation of metal ion transport in living parts of Salix viminalis ensures a rather low level of impregnation but very high dispersion. The Ti- and Ce-containing hybrid materials were tested as catalysts, too. Both materials, despite the identical properties of their carbon components, exhibited dramatically different catalytic activity: Ti/C hybrids towards dehydration of n-alcohols (n-butanol conversion to butene, ca. 55% selectivity at 460 °C), and Ce/C hybrids towards ketonization of n-alcohols (n-butanol conversion to 4-heptanone, ca. 75% selectivity at 460 °C). The differences must be attributed to the different catalytic properties of the active components of the hybrid materials, i. e. to the Ce and Ti derivatives (mixed oxides), whose presence was proved by XRD, XPS and HRTEM measurements. In summary, the proposed hybrid catalyst fabrication method is based on two important and exclusive features of Salix viminalis: high vitality, preserving some living functions such as metal ion transportation in fragments of a complete plant (single rods cut into 20 cm long pieces), and high tolerance of the still-living parts of Salix viminalis to the heavy metal ions that enter the plant structure. We assume that the toxic influence of the heavy metal ions is considerably reduced in the plant cells; otherwise, the transportation of metal ions could be severely disrupted and finally terminated. During impregnation, no visible morphological changes were observed in most Salix viminalis samples (sections of rod), and the 20 cm long sections retained their original olive-green color characteristic of the bark. Visible bulging and shrinkage did not occur. The originality of the concept presented above led to the submission of patent applications (Łukaszewicz et al., 2006; Łukaszewicz et al., 2007).
Dry distillation of Salix viminalis wood
The fabrication of charcoals from Salix viminalis consists in a heat treatment of the biomass under oxygen-free conditions. In fact this process can also be called the dry distillation of wood. However, distillation is usually run with the aim of collecting the volatile products which evolve during the heat treatment. Looking at charcoal fabrication (described above) from such a point of view, the authors decided to cool down (liquefy) the volatiles leaving the heating zone of the stove along with the stream of inert gas (nitrogen) passing through it. The condensate, in the form of a dark brown viscous liquid, was collected in a glass beaker and subsequently subjected to several analyses. We assumed that the condensate is a mixture of numerous organic compounds, as in the case of wood tar obtained by the dry distillation of other sorts of wood, e.g. pine (Egenberg et al., 2002). From the beginning we assumed that the collected tar must contain phenols and polyphenols, which are created during the thermolysis of lignin (de Wild et al., 2010).
Salix viminalis wood contains ca. 20-24% (by weight) of lignin in the dry mass of wood. The distillate, called biooil, was subjected to separation procedures such as extraction, to isolate several fractions containing polyphenols. Polyphenols are a precious group of compounds, mainly because of their antioxidant properties.
Polyphenols and other antioxidants
Free radicals play an important role in the functioning of the human organism (Grajek, 2007). However, their presence may be the reason for oxidative stress. The stress often results from a disrupted balance between pro-oxidants and antioxidants in an organism. It is proven that high activity of free radicals and prolonged influence of oxidative stress are responsible for the pathogenesis of nearly 100 diseases (Wolski, 2007), including Alzheimer's and Parkinson's diseases (Bartosz, 2008; Fitak & Grzegorczyk-Jaźwińska, 1999). During ageing, oxidative damage in cells becomes more frequent, with a parallel reduction in the activity of antioxidative enzymes. The situation becomes worse due to UV irradiation, environmental pollution, permanent mental stress and bad nutrition habits. Oxygen, being the basis of human existence, is mainly available in its triplet form O2. This electron configuration results in moderate chemical activity, in contrast to other forms like (O2•)−, HO2• and OH•. The latter form is considered the most reactive. Proper enzymes ensure control over 98-99% of all oxygen in a human body. However, the remaining amount of oxygen may undergo transformation (Fenton reaction, Haber-Weiss reaction) into the most reactive forms, i. e. oxygen derivatives which are free radicals. Daily, up to 10 thousand oxygen-related DNA damage events occur in a human body. The damage may be repaired by some specific enzymes, but the introduction of antioxidants should reduce the threat. Therefore the everyday diet has to be supplemented with natural antioxidants. The antioxidant properties of polyphenols may involve three general mechanisms: (i) direct scavenging of reactive forms of oxygen and nitrogen by two possible pathways, Single Electron Transfer (SET) or Hydrogen Atom Transfer (HAT); in such processes a polyphenol molecule transforms into a phenoxyl radical, which after reaction with a further oxygen radical stabilizes as a quinone-like structure (Fig. 12); (ii) chelation of transition metal ions (particularly copper and iron) which participate in the reactions leading to the formation of reactive radicals, like the Fenton reaction involving Fe2+ ions and yielding the dangerous hydroxyl radical OH•; and (iii) increasing the concentration of endogenous antioxidants and/or inhibition of enzymes stimulating the formation of free radicals. Such positive chemical features of polyphenols turn attention towards an intensive search for their sources and the development of methods of polyphenol separation from their natural matrices for the further enrichment of products like pharmaceuticals, food, cosmetics, etc. This way of thinking involves investigations of appropriate plants, i. e. candidates for subsequent chemical treatment such as polyphenol extraction. According to some extended studies (Makowska-Wąs & Janeczko, 2004), polyphenols occur in many plants and plant-originated products like herbs, needles of coniferous plants, algae, green tea leaves, eucalyptus wood, and byproducts of olive, wine and yeast production.
It is obvious that the chemical exploitation of one source plant yields a limited number of polyphenols, and the search for other polyphenols requires the selection and proper treatment of other source plants. It has to be stated that the polyphenol content in source plants varies widely but is also generally very low. Table 10 informs about the antioxidant activity determined for 100 g of example fruits and vegetables. The highest activity is noticed for pure vitamins and synthetic antioxidants. However, the mentioned products owe their antioxidative activity not only to the presence of polyphenols, since other types of antioxidants may be present, too. Obviously the above list is not closed; other natural and synthetic products may be added, and therefore the search for other effective products is fully justified. The authors' attention has turned towards the chemical processing of some easily accessible and renewable resources. Our primary idea was to involve chemical processing not limited to the separation of already existing polyphenols (a passive approach) but extending to treatments that transform original matter of low polyphenol content into a new product of high polyphenol concentration (an active approach). Such a concept focused our attention on Salix viminalis again, due to its inexpensiveness, renewable cultivation and high content of lignin, whose thermal treatment releases polyphenols. As a matter of fact, Salix viminalis as a living plant contains some amounts of different polyphenols, like flavonoids (flavanols, flavones, flavanones, flavanone dimers, chalcones), phenolic acids, lignans, catechin and its derivatives, as well as tannins (procyanidins, prodelphinidins) being derivatives of flavan-3-ols. Particular Salix species differ much regarding the total content of polyphenols (Nyman & Julkunen-Tiitto, 2005) and their type (Landucci et al., 2003). For example, Salix caprea contains a variety of flavonoids and lacks lignans (Pohjamo et al., 2002). By contrast, characteristic of Salix viminalis are relatively low concentrations of flavonoids (Harborne & Baxter, 1999), moderate concentrations of lignans (Pohjamo et al., 2003) and high concentrations of tannins (Nikitina & Orazov, 2001).
Table 10. Antioxidant activity of selected food products, vitamins and synthetic antioxidants (per 100 g). Selected points cited after (Prakash et al., 2010):
Red Grapes - 1350
Red Cabbage - 1000
Broccoli Flowers - 500
Spinach - 500
Green Grapes - 400
The concentration of polyphenols in Salix viminalis also depends on the season of the year. The maximal concentration of flavonoids is reached during blossom, while the tannin concentration is highest in autumn (Nikitina & Orazov, 2001). Long exposure of Salix viminalis to sunshine (UV radiation) additionally increases the content of compounds capable of neutralizing free radicals (flavonoids, phenolic acids, proanthocyanidins) and reduces the content of salicylic acid and its derivatives (Tegelberg & Julkunen-Tiitto, 2001). Thus, proper cultivation of Salix viminalis and well-planned collection of polyphenols by extractive methods may result in a better efficiency of the whole attempt. However, as mentioned earlier, the total content of polyphenols is relatively low, and therefore the mass of isolated antioxidants in relation to the mass of raw material is dramatically low. Thus, contemporary chemical technology should not only rely on Nature's productivity but also search for more effective methods of polyphenol fabrication instead of collection.
The heat treatment of Salix viminalis wood yields three basic products (charcoal, biooil, biogas), but the yield of each depends on the heating rate, as depicted in Fig. 13. As mentioned, biooil formation is a result of lignin pyrolysis; lignin is a biopolymer. The fraction collected in this way was considered as raw biooil and was subsequently subjected to separation (extraction, chromatography) procedures and chemical characterization. Several instrumental methods were applied: gas chromatography GC-MS (Autosystem XL - MS Turbomass), nuclear magnetic resonance 1H and 13C NMR (700 MHz Bruker Avance) and infrared spectroscopy FT-IR (Perkin Elmer Spectrum 2000). Additionally, the isolated fractions were tested as antioxidants according to the ASTM 4871 standard (www.astm.org/Standards/D5770.htm, 2011). The latter procedure consists in the oxidation of a standard substance, DBS (dibutyl sebacate), in the liquid phase under relatively severe conditions (150 °C, constant air flow of 100 cm3/min). The oxidation may proceed in the presence (1000 ppm) of different protective antioxidants, including separated fractions of the raw biooil of Salix viminalis origin (dry distillation) and some commercially distributed antioxidants like BHT (2,6-di-tert-butyl-p-cresol, butylated hydroxytoluene). BHT is widely used for the protection and stabilization of cosmetics and food products. The most promising results so far were achieved for two extracts, called A and B (ethyl ether and dichloromethane extracts, respectively). The results of a complex analysis (GC-MS, NMR, FTIR, UV-VIS) of both extracts confirm that: each extract contains ca. 50 different compounds which may exhibit antioxidative properties; most of the potential antioxidants are in fact derivatives of three organic structures (coumaryl alcohol, coniferyl alcohol, sinapyl alcohol) which are claimed to be the units of the biopolymer occurring in Salix viminalis wood, i. e. lignin (see text above), the compounds being released from the wood sample due to thermolysis of the biopolymer lignin; and extract B contains more furan derivatives, while extract A contains more oxygen-heterocyclic compounds. The determination of the composition of the two preliminary extracts A and B has a certain chemical value, but it is more important to confirm whether an extract theoretically consisting of antioxidant species can exhibit efficient antioxidant activity, which is the main motivation for this research. The absence of such activity could call into question the whole research attempt, which from the very beginning was focused on practical aspects, i. e. on the applicability of all products of the dry pyrolysis of Salix viminalis wood. The preliminary hypothesis was confirmed by the performed controlled oxidation tests (Fig. 17). It is visible that the addition of 1000 ppm of a commercial antioxidant, i. e. BHT, protects the test substance DBS for ca. 50 hours. After this time, one observes an increasing concentration of some oxidation products in the reaction chamber. Under the same experimental conditions, pure DBS undergoes instant oxidation without any significant protection time. The addition of the biooil extract B in the same proportion of 1000 ppm extends the protection time threefold. Thus, DBS was protected for nearly one week despite the severe experimental conditions. It has to be stated that the protection times at much lower temperatures, such as room temperature, must be very long.
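The protection (induction) times read off Fig. 17 amount to finding when the oxidation-product concentration first crosses a threshold. The sketch below shows one way to compute this; the threshold criterion and the toy data are illustrative assumptions, not the endpoint definition of the ASTM standard cited above.

```python
import numpy as np

def induction_time(t_hours, conc, threshold):
    """First time at which the oxidation-product concentration reaches a
    chosen threshold, with linear interpolation between sampling points.
    Returns None if the threshold is never reached in the test window."""
    conc = np.asarray(conc, dtype=float)
    above = np.where(conc >= threshold)[0]
    if above.size == 0:
        return None
    i = above[0]
    if i == 0:
        return t_hours[0]
    t0, t1, c0, c1 = t_hours[i - 1], t_hours[i], conc[i - 1], conc[i]
    return t0 + (threshold - c0) * (t1 - t0) / (c1 - c0)  # linear interpolation

# Toy curve: near-flat induction period, then rapid oxidation after ~150 h
t = np.arange(0, 200, 10)
c = np.where(t < 150, 0.01 * t / 150, 0.01 + 0.05 * (t - 150))
print(f"induction time ~ {induction_time(t, c, threshold=0.05):.0f} h")  # ~151 h
```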
Summary
The performed research program proves that Salix viminalis is a precious raw material for chemical treatment and should not be seen only as a fuel for energetic utilization. Its practical value is increased by the fact that it is easily accessible as an agricultural product. It grows fast with a good yield. Salix viminalis cultivation has a positive influence on the environment, since its high mass productivity per hectare is definitely associated with CO2 absorption from the atmosphere. The proposed elaboration method is nearly complete, since all major products of the wood thermolysis, i. e. the solid (active carbon with CMS properties) and the liquid (biooil containing antioxidants), may find wide application in practice.
6,830.6
0001-01-01T00:00:00.000
[ "Biology" ]
Creating small circular, elliptical, and triangular droplets of quark-gluon plasma
The experimental study of the collisions of heavy nuclei at relativistic energies has established the properties of the quark-gluon plasma (QGP), a state of hot, dense nuclear matter in which quarks and gluons are not bound into hadrons [1][2][3][4]. In this state, matter behaves as a nearly inviscid fluid [5] that efficiently translates initial spatial anisotropies into correlated momentum anisotropies among the produced particles, creating a common velocity field pattern known as collective flow. In recent years, comparable momentum anisotropies have been measured in small-system proton-proton (p+p) and proton-nucleus (p+A) collisions, despite expectations that the volume and lifetime of the medium produced would be too small to form a QGP. Here, we report on the observation of elliptic and triangular flow patterns of charged particles produced in proton-gold (p+Au), deuteron-gold (d+Au), and helium-gold (3He+Au) collisions at a nucleon-nucleon center-of-mass energy $\sqrt{s_{NN}}$ = 200 GeV. The unique combination of three distinct initial geometries and two flow patterns provides unprecedented model discrimination. Hydrodynamical models, which include the formation of a short-lived QGP droplet, provide a simultaneous description of these measurements. Experiments at the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC) explore emergent phenomena in quantum chromodynamics, most notably the near-perfect fluidity of the QGP. To quantify this behavior, the azimuthal distribution of each event's final-state particles, dN/dφ, is decomposed into a Fourier series as follows:

$$\frac{dN}{d\phi} \propto 1 + 2\sum_{n} v_n(p_T)\cos\left[n\left(\phi - \psi_n\right)\right], \quad (1)$$

where $p_T$ and φ are the transverse momentum and the azimuthal angle of a particle relative to the beam direction, respectively, and $\psi_n$ is the orientation of the n-th order symmetry plane of the produced particles. The second ($v_2$) and third ($v_3$) Fourier coefficients represent the amplitude of elliptic and triangular flow, respectively. A multitude of measurements of the Fourier coefficients, utilizing a variety of techniques, have been well described by hydrodynamical models, thereby establishing the fluid nature of the QGP in large-ion collisions [5].
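To make Eq. (1) concrete, the toy sketch below samples particle angles from a distribution with known $v_2$ and $v_3$ and recovers them as mean values of cos[n(φ − ψn)]. The symmetry-plane angles are taken as known here, which sidesteps the event-plane reconstruction and resolution correction that real measurements require.

```python
import numpy as np

def flow_coefficients(phis, psi_n, n_max=3):
    """Estimate v_n = <cos[n(phi - psi_n)]> from azimuthal angles, assuming
    the symmetry-plane angles psi_n are known (a toy-model simplification)."""
    phis = np.asarray(phis)
    return {n: np.mean(np.cos(n * (phis - psi_n[n]))) for n in range(2, n_max + 1)}

# Toy event sample: dN/dphi ~ 1 + 2*0.05*cos(2*phi) + 2*0.02*cos(3*phi)
rng = np.random.default_rng(0)
phi = rng.uniform(-np.pi, np.pi, 200_000)
w = 1 + 2 * 0.05 * np.cos(2 * phi) + 2 * 0.02 * np.cos(3 * phi)
keep = rng.uniform(0, w.max(), phi.size) < w      # accept-reject sampling
print(flow_coefficients(phi[keep], psi_n={2: 0.0, 3: 0.0}))  # ~ {2: 0.05, 3: 0.02}
```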
The LHC experiments were first to observe similar features in small-system collisions [6][7][8][9], followed closely by reanalysis of previously recorded d+Au data from RHIC [10,11]. These unexpected results highlighted the need to explore whether these smallest hadronic systems still form QGP. Alternatively, a number of physics mechanisms that do not involve QGP formation have been proposed, including those which attribute final-state momentum anisotropy to momentum correlations generated at the earliest stages of the collision, hence referred to as initial-state momentum correlation models (see Refs. [12] and [13] for recent reviews). A projectile geometry scan utilizing the unique capabilities of RHIC was proposed in Ref. [14] in order to discriminate between hydrodynamical models that couple to the initial geometry and initial-state momentum correlation models that do not. Varying the collision system from p+Au, to d+Au, to 3He+Au changes the initial geometry from dominantly circular, to elliptical, and to triangular configurations, respectively, as characterized by the 2nd and 3rd order spatial eccentricities, which correspond to ellipticity and triangularity, respectively. The n-th order spatial eccentricity of the system, $\varepsilon_n$, typically determined from a Monte Carlo (MC) Glauber model of nucleon-nucleon interactions (see e.g. Ref. [15]), can be defined as

$$\varepsilon_n = \frac{\sqrt{\langle r^n \cos n\phi \rangle^2 + \langle r^n \sin n\phi \rangle^2}}{\langle r^n \rangle}, \quad (2)$$

where r and φ are the polar coordinates of participating nucleons [16]. The eccentricity fluctuates event-by-event and is generally dependent on the impact parameter of the collision and the number of participating nucleons. The mean $\varepsilon_2$ and $\varepsilon_3$ values for small-impact-parameter p/d/3He+Au collisions are shown in Fig. 1a. The $\varepsilon_2$ and $\varepsilon_3$ values in d+Au and 3He+Au are driven almost entirely by the intrinsic geometry of the deuteron and 3He, while the values in p+Au collisions are driven by fluctuations in the configuration of struck nucleons in the Au nucleus, as the proton itself is on average circular. Hydrodynamical models begin with an initial spatial energy-density distribution with a given temperature that evolves in time following the laws of relativistic viscous hydrodynamics, using an equation of state determined from lattice QCD [17]. Examples of this evolution are shown for p/d/3He+Au collisions in Fig. 1b using the hydrodynamical model sonic [18]. The first panel of each row shows the temperature profile at time t = 1.0 fm/c for typical p+Au, d+Au, and 3He+Au collisions. The following three panels show snapshots of the temperature evolution at three different time points. The initial spatial distribution also sets the pressure gradient field, which translates into a velocity field, which in turn determines the azimuthal momentum distribution of produced particles. The relative magnitude and direction of the velocity is represented in the figure by arrows. At the final time point, t = 4.5 fm/c, the mostly circular (top), elliptical (middle), and triangular (bottom) initial spatial eccentricities have been translated into dominantly radial, elliptic, and triangular flow, respectively. Given these different initial geometries, as characterized by the $\varepsilon_2$ and $\varepsilon_3$ values shown in Fig. 1a, hydrodynamical models provide a clear prediction for the ordering of the experimentally accessible $v_2$ and $v_3$ signals, following that of the $\varepsilon_n$, namely

$$v_2^{p+\mathrm{Au}} < v_2^{d+\mathrm{Au}} \approx v_2^{^3\mathrm{He}+\mathrm{Au}}, \qquad v_3^{p+\mathrm{Au}} \approx v_3^{d+\mathrm{Au}} < v_3^{^3\mathrm{He}+\mathrm{Au}}. \quad (3)$$

This ordering assumes that hydrodynamics can efficiently translate the initial geometric $\varepsilon_n$ into dynamical $v_n$, which in turn requires a small value for the specific shear viscosity.
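Eq. (2) is straightforward to evaluate for any set of participant coordinates. The sketch below uses a crude two-hot-spot configuration in place of a full MC Glauber model, so the resulting numbers are illustrative only; the point is that a well-separated pair of hot spots yields a large $\varepsilon_2$ and a small $\varepsilon_3$, as for the deuteron.

```python
import numpy as np

def eccentricity(x, y, n):
    """Spatial eccentricity eps_n per Eq. (2), evaluated in the
    centre-of-mass frame of the participant positions (x, y)."""
    x = np.asarray(x) - np.mean(x)
    y = np.asarray(y) - np.mean(y)
    r = np.hypot(x, y)
    phi = np.arctan2(y, x)
    num = np.hypot(np.mean(r**n * np.cos(n * phi)),
                   np.mean(r**n * np.sin(n * phi)))
    return num / np.mean(r**n)

# Toy d+Au-like configuration: two well-separated hot spots (arbitrary units)
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 0.5, 50), rng.normal(+2, 0.5, 50)])
y = rng.normal(0, 0.5, 100)
print(f"eps_2 = {eccentricity(x, y, 2):.2f}, eps_3 = {eccentricity(x, y, 3):.2f}")
```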
There exists a class of alternative explanations in which $v_n$ is not generated via flow, but rather is created at the earliest time in the collision process, as described by so-called initial-state momentum correlation models. They produce a mimic flow signal where the initial collision generates color flux tubes that have a preference to emit particles back-to-back in azimuth [19,20]. These color flux tubes, also referred to as domains, have a transverse size relative to the collision axis less than the color-correlation length, of order 0.1-0.2 fm. In the case where individual domains are resolved, a collision system with a larger overall area but the same characteristic domain size (for example d+Au and 3He+Au compared with p+Au and p+p) should have a weaker correlation, because the different domains are separated and do not communicate [21,22]. An instructive analogy is a ferromagnet with many domains: if the domains are separated and disconnected, the overall magnetic field is weakened by the cancellation of effects from the random orientation of the different domains. The RMS diameter of the deuteron is 4.2 fm, and so in d+Au collisions the two hot spots are much further apart than the characteristic domain size. A straightforward prediction is then that the $v_2$ and $v_3$ coefficients should be ordered in contradistinction to the hydrodynamic flow prediction, namely

$$v_n^{p+\mathrm{Au}} > v_n^{d+\mathrm{Au}} \approx v_n^{^3\mathrm{He}+\mathrm{Au}}. \quad (4)$$

An experimental realization of the proposed geometry scan has been under way since 2014 at RHIC. Collisions of 3He+Au, p+Au, and d+Au at $\sqrt{s_{NN}}$ = 200 GeV were recorded in 2014, 2015, and 2016, respectively. The PHENIX experiment observed elliptic anisotropies in the azimuthal distributions of the charged particles produced in all three systems [23][24][25], as well as triangular anisotropies in 3He+Au collisions [25]. This Letter completes this set of elliptic and triangular flow measurements from PHENIX in all three systems and explores the relation between the strength of the measured $v_n$ and the initial-state geometry. The $v_n$ measurements reported here are determined using the event plane method [26] for charged hadrons in the midrapidity region covering |η| < 0.35, where η = −ln[tan(θ/2)] is the particle pseudorapidity and θ is the polar angle of the particle relative to the beam direction. The 2nd order event plane is determined using detectors in the Au-going direction covering −3.0 < η < −1.0 in p/d+Au and −3.9 < η < −3.1 in 3He+Au. The 3rd order event plane is determined using detectors in the Au-going direction covering −3.9 < η < −3.1 in all cases. The pseudorapidity gap between the particle measurements and the event plane determination excludes auto-correlations and reduces short-range correlations arising from, for example, jets and particle decays, typically referred to as nonflow correlations. Estimates of possible remaining nonflow contributions are included in the systematic uncertainties. Additional uncertainties related to detector alignment, data selection, and event plane determination are also included in the systematic uncertainty estimation. In these small collision systems the event plane resolution is low, meaning that $v_n\{\mathrm{EP}\} = \sqrt{\langle v_n^2 \rangle}$ [27], and the results are therefore equivalent to measurements using two-particle correlation methods. Measurements of $v_n$ as a function of $p_T$ are shown for all three systems in Fig. 2.
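Before turning to the results, a stripped-down version of the event plane method of Ref. [26] is sketched below: per event, Ψn is estimated from a Q-vector built in one forward sub-event, midrapidity particles are correlated with it, and the raw signal is divided by a two-sub-event resolution averaged over events. Detector granularity, Q-vector weighting and event-plane flattening corrections used in the real analysis are omitted, so take this as an illustration of the method rather than the PHENIX implementation.

```python
import numpy as np

def psi_n(phis, n):
    """Event-plane angle from the Q-vector of one event's forward particles."""
    phis = np.asarray(phis)
    return np.arctan2(np.mean(np.sin(n * phis)), np.mean(np.cos(n * phis))) / n

def vn_event_plane(events, n):
    """Toy event-plane v_n: 'events' is a list of (phi_mid, phi_fwd_a, phi_fwd_b)
    tuples of azimuthal-angle arrays. Assumes <cos n(Psi_a - Psi_b)> > 0 over
    the sample, i.e. a usable (if low) resolution."""
    raw, res_arg = [], []
    for phi_mid, fwd_a, fwd_b in events:
        pa, pb = psi_n(fwd_a, n), psi_n(fwd_b, n)
        raw.append(np.mean(np.cos(n * (np.asarray(phi_mid) - pa))))
        res_arg.append(np.cos(n * (pa - pb)))
    resolution = np.sqrt(np.mean(res_arg))    # two-sub-event resolution estimate
    return np.mean(raw) / resolution
```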
The measurements are performed in the 0-5% most central events, an experimentally determined criterion which selects the 5% of events with the largest number of produced particles (hereafter referred to simply as "multiplicity") in the region −3.9 < η < −3.1. A detailed description of the centrality determination in small systems is given in Ref. [28]. The vertical bars on each point represent the statistical uncertainties, while the shaded boxes represent the systematic uncertainties. The flow coefficients follow the prediction of hydrodynamical models given in equations (2) and (3). These relationships suggest that the primary driver of azimuthal momentum anisotropies in particle emission is the initial spatial anisotropy. While Fig. 2 offers qualitative support for the hydrodynamic theory, Fig. 3 directly compares these data to predictions from two hydrodynamical models, sonic [18] (used in Fig. 1) and iEBE-VISHNU [29]. The core structure of the two models is similar: the initial conditions are evolved using viscous hydrodynamics, the fluid hadronizes, hadronic scattering occurs, and the v_n coefficients of the final-state hadron distributions are determined using two-particle correlation methods. However, the detailed implementations differ, including the use of different fluctuations in the initial energy deposited, as well as different hadronic rescattering packages. Both calculations in Fig. 3 use a ratio of the shear viscosity η to entropy density s of η/s = 0.08 ≈ 1/(4π), the conjectured lower limit in strongly-coupled field theories [30]. Figure 3 shows that the hydrodynamical models are consistent with the v_2 data in all three systems. Both models capture the smaller magnitude of v_3 compared to v_2, the collision-system dependence, and the general p_T dependence of v_3. The models tend to diverge at higher p_T in the case of v_3, which may be more sensitive to the hadronic rescattering. To quantify the agreement, we calculate p-values following the procedure, laid out in Ref. [31], of incorporating data systematic uncertainties and their correlations into a modified χ² analysis (see Methods for details). We find that sonic and iEBE-VISHNU yield combined p-values across the six measurements of 0.96 and 0.061, respectively. The large difference in p-values is driven by the effect of the dominant nonflow uncertainty, which is asymmetric and anti-correlated between v_2 and v_3. sonic tends to underestimate the v_2 and overestimate the v_3, particularly in p+Au and d+Au, a pattern which is in line with the uncertainty correlations; iEBE-VISHNU instead yields a poorer description of the p_T slope. Overall, the simultaneous description of these two observables in three different systems, using a common initial geometry model and the same specific η/s, strongly supports the hydrodynamic picture. The hydrodynamic calculations shown in Fig. 3 use initial conditions generated from a nucleon Glauber model. However, initial geometries with quark substructure do not significantly change the ε_2 and ε_3 values for high multiplicity p/d/3He+Au collisions [32,33], and thus the hydrodynamic results should be relatively insensitive to these variations.
While we have focused on hydrodynamical models here, there is an alternative class of models that also translates initial spatial eccentricity into final-state azimuthal momentum anisotropy. Instead of hydrodynamic evolution, the translation occurs via parton-parton scattering with a modest interaction cross section. These parton transport models, for example A Multi-Phase Transport (ampt) model [34], are able to capture the system ordering of v_n at low p_T in small systems [35], but fail to describe the p_T dependence and overall magnitude of the coefficients for all systems, resulting in a p-value consistent with zero when compared to the data shown here. We have additionally analyzed ampt following the identical PHENIX event plane method and find even worse agreement with the experimental data. While the initial geometry models for d+Au and 3He+Au are largely constrained by our detailed understanding of the two- and three-body nucleon correlations in the deuteron and 3He nuclei, respectively, the distribution of deposited energy around each nucleon-nucleon collision site could result in an ambiguity between the allowed ranges of η/s and the broadening of the initial distribution, as pointed out in Ref. [13]. However, a broader distribution of deposited energy results in a significant reduction of the ε_2 values and an even greater reduction of ε_3, with by far the largest reduction in the p+Au system. Here again, the simultaneous constraints of the elliptic and triangular flow ordering eliminate this ambiguity. Our experimental data also rule out the initial-state correlations scenario, in which color domains are individually resolved, as the dominant mechanism for creating v_2 and v_3 in p/d/3He+Au collisions. After our results became publicly available, a new calculation was presented in Ref. [37], hereafter referred to as MSTV, in which the ordering of the calculated v_n values matches the experimental data. This calculation posits that gluons from the Au target do not resolve individual color domains in the projectile p/d/3He and interact with them coherently, and thus the ordering does not follow Eq. (4). The calculations are shown in Fig. 3 and yield a p-value for the MSTV calculations of v_2 and v_3 for the three collision systems of effectively zero, in contradistinction to the robust values found for the hydrodynamic models. Another key statement made by MSTV, that in the dilute-dense limit the saturation scale Q²_s is proportional to the number of produced charged particles, is questionable [38], but also leads the MSTV authors to make a clear prediction that the v_2 will be identical between systems when selecting on the same event multiplicity. Shown in Fig. 4 are the previously published d+Au (20-40%) and p+Au (0-5%) v_2 measurements, for which the measured mean charged-particle multiplicities (dN_ch/dη) match [36]. The results do not support the MSTV prediction of an identical v_2 for these two systems at the same multiplicity, while the differences in v_2 between the systems follow the expectations from hydrodynamic calculations matched to the same dN_ch/dη.
In summary, we have shown azimuthal particle correlations in three different small-system collisions with different intrinsic initial geometries. The simultaneous constraints of v_2 and v_3 in p/d/3He+Au collisions definitively demonstrate that the v_n are correlated with the initial geometry, removing any ambiguity related to event multiplicity or initial geometry models. We find that initial-state momentum correlation models in which color domains are individually resolved are ruled out as the dominant mechanism behind the observed collectivity. New calculations in which the domains are not resolved are unable to simultaneously explain the v_2 and v_3 in high multiplicity collisions, and are further unable to explain the difference in v_2 between p+Au and d+Au when the multiplicity selections are matched. Further, we find that hydrodynamical models which include QGP formation provide a simultaneous and quantitative description of the data in all three systems.

The forward silicon vertex detector (FVTX), covering 1.0 < |η| < 3.0 and composed of high-efficiency silicon mini-strips [40], provides an independent event-plane angle determination. A description of the PHENIX detector can be found in Ref. [41].

Event Selection: A minimum bias (MB) interaction trigger is provided by the BBC, which requires at least one hit tube in both the south (η < 0, Au-going direction) and north (η > 0, p/d-going direction) arms, along with an online vertex within |z_vertex| < 10 cm of the nominal interaction region. In addition to the MB trigger, a high multiplicity trigger requiring > 35 (> 40) hit tubes in the BBCS provided a factor of 25 (188) enhancement of high multiplicity events in p+Au (d+Au) collisions. A more precise offline collision vertex is determined using timing information in the BBC and is constrained to |z_vertex| < 10 cm in order to be sufficiently inside the acceptance of the detector. Events containing more than one nucleus-nucleus collision, referred to as double-interaction events, are rejected using an algorithm based on BBC charge and timing information, described in Ref. [24]. Event centrality is determined using the total charge collected in the south BBC, as described in Ref. [28]. We require an event centrality of 0-5% to select events with the highest multiplicity, where the signal of interest is strongest. In total, 322 (636) million p+Au (d+Au) events are analyzed.

Track Selection: Quality cuts are applied to reconstructed particle tracks, requiring hits in both the DC and the outermost PC layer with a 3σ level of track-hit agreement. This removes the majority of tracks that do not originate from the primary collision. Further details can be found in Refs. [23-25].

Event Plane Determination: The third-order symmetry plane angle, ψ_3, is measured using the south BBC via the standard method [42], namely

ψ_3 = (1/3) tan⁻¹[ Σ_i sin(3φ_i) / Σ_i cos(3φ_i) ],

where the sums run over the N detected particles and φ_i is the azimuthal angle of each particle. The ψ_3 resolution, R(ψ_3), is calculated using the three-subevent method, which correlates measurements in the south BBC, south FVTX, and central arms. The calculated resolutions are 6.7% and 5.7% in p+Au and d+Au collisions, respectively.

Determination of v_3: The v_3 values are measured using the event plane method [26,42] as

v_3 = ⟨cos(3(φ − ψ_3))⟩ / R(ψ_3),

where φ is the azimuthal angle of particles emitted at midrapidity, |η| < 0.35.

Systematic uncertainties: The systematic uncertainties reported are estimated according to the following methods for the measurements of v_3 in both p+Au and d+Au collisions.
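The event-plane machinery just described (the standard-method ψ_n, the three-subevent resolution, and the resolution-corrected v_3) can be sketched compactly as follows. This is our own illustration under simplified assumptions; the real analysis additionally applies detector weighting and event-plane flattening corrections.

```python
import numpy as np

def psi_n(phi, n):
    """Standard-method event-plane angle from azimuthal angles in one event."""
    return np.arctan2(np.sum(np.sin(n * phi)), np.sum(np.cos(n * phi))) / n

def ep_resolution_3sub(psi_a, psi_b, psi_c, n):
    """Resolution of subevent A via the three-subevent method.
    Arguments are arrays of per-event plane angles from three detectors."""
    ab = np.mean(np.cos(n * (psi_a - psi_b)))
    ac = np.mean(np.cos(n * (psi_a - psi_c)))
    bc = np.mean(np.cos(n * (psi_b - psi_c)))
    return np.sqrt(ab * ac / bc)

def v3_event_plane(phi_mid, psi3, resolution):
    """v3{EP}: midrapidity track angles correlated with the backward event plane."""
    return np.mean(np.cos(3.0 * (phi_mid - psi3))) / resolution
```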
The effect of remaining background tracks, due primarily to photon conversions and weak decays, is estimated by comparing the v_3 values when requiring a tighter match between the track projection and hits in PC3. We find that this increases the v_3 by < 1% and 7% in p+Au and d+Au collisions, respectively, independent of p_T.

The effect of the double-interaction event selection is estimated by comparing the v_3 values when requiring a tighter rejection cut. This yields a change in the v_3 of 3% and 2% in p+Au and d+Au collisions, respectively, independent of p_T.

Uncertainty in the event plane resolution comes from two sources. The first is the statistical uncertainty inherent in the resolution calculation, which yields a ±13% and ±17% uncertainty in p+Au and d+Au collisions, respectively. Additionally, the resolution is calculated using central arm tracks over two different p_T regions. This leads to an uncertainty of 7% and 34% in p+Au and d+Au collisions, respectively.

We also include an uncertainty due to the choice of event plane detector. In p+Au collisions, this is determined by comparing the v_3 calculated using event planes determined by the south BBC and the FVTX. We find that the results are consistent within uncertainties, as expected. In d+Au collisions, v_3 is also calculated using an alternative method utilizing two-particle correlations. Based on the ratio of the v_3 values calculated with the two-particle correlation and event plane methods, we assign a 16% systematic uncertainty.

In v_3, nonflow decreases the amplitude of the measured signal [25], and its contribution increases with increasing p_T. To estimate the nonflow contribution we calculate a normalized correlation function between midrapidity tracks and BBC photomultiplier (PMT) tubes:

C(Δφ, p_T) = S(Δφ, p_T) / M(Δφ, p_T),  with  S(Δφ, p_T) = Σ_pairs Q_PMT / N^{track(p_T)-PMT}_{same event},

where Q_PMT is the charge on the PMT in the pair and N^{track(p_T)-PMT}_{same event} is the number of track-PMT pairs from the same event. M(Δφ, p_T) is determined in the same way as S(Δφ, p_T), but with one particle taken from one event and the other particle from a different event (the so-called mixed-event technique). This normalization procedure accounts for acceptance effects and produces a correlation function of order unity. Next, we fit C(Δφ, p_T) with a Fourier expansion:

C(Δφ, p_T) = c_0 [1 + Σ_n 2 c_n(p_T) cos(nΔφ)].

We carry out this procedure both for the systems in which we want to estimate the nonflow (p+Au or d+Au) and for p+p at the same collision energy. We use the Fourier coefficients c_n to estimate the nonflow contribution to the v_n values in a given system via the ratio

(c_n^{p+p} ⟨Q⟩^{p+p}) / (c_n^{sys} ⟨Q⟩^{sys}),

where ⟨Q⟩ is the average BBC charge for each system; the ratio of average charges normalizes the c_n by multiplicity. The assumption is that c_n^{p+p} is entirely due to nonflow, such that this nonflow ratio is taken as an estimate of the nonflow and is included as a p_T-dependent systematic uncertainty.

A summary of the systematic uncertainties on v_3 in p/d+Au is given in Table I, along with the 3He+Au uncertainties taken from Ref. [25].

Comparison of theory to data: The level of agreement between the different theoretical calculations and the data presented in this work is quantified by performing a least squares fit incorporating a careful treatment of various types of systematic uncertainties, following Ref. [31].
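The nonflow estimate above amounts to building charge-weighted Δφ histograms, correcting them by mixed events, extracting Fourier coefficients, and scaling the p+p coefficients by the ratio of average BBC charges. A minimal sketch under those assumptions (binning choices and all names are ours):

```python
import numpy as np

def correlation_cn(dphi_same, q_same, dphi_mix, q_mix, nmax=4, nbins=24):
    """Fourier coefficients c_n of C(dphi) = S(dphi)/M(dphi).
    dphi_*: track-PMT angle differences; q_*: PMT charges used as pair weights."""
    bins = np.linspace(-np.pi, np.pi, nbins + 1)
    s, _ = np.histogram(dphi_same, bins=bins, weights=q_same)
    m, _ = np.histogram(dphi_mix, bins=bins, weights=q_mix)
    c = (s / s.sum()) / (m / m.sum())   # acceptance-corrected, of order unity
    c = c / c.mean()                    # so that C = 1 + sum_n 2 c_n cos(n dphi)
    centers = 0.5 * (bins[1:] + bins[:-1])
    # For a uniform grid, mean(C * cos(n dphi)) recovers c_n
    return np.array([np.mean(c * np.cos(n * centers)) for n in range(1, nmax + 1)])

def nonflow_ratio(cn_pp, q_avg_pp, cn_sys, q_avg_sys):
    """Estimated nonflow fraction of c_n in p+Au or d+Au, assuming the p+p
    coefficients are pure nonflow and dilute with multiplicity (~ BBC charge)."""
    return (cn_pp * q_avg_pp) / (cn_sys * q_avg_sys)
```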
The nonflow uncertainty is the dominant source of systematic uncertainty in all six measurements. It is known to be point-to-point correlated as a function of p_T, to contribute asymmetrically, and to be anti-correlated between v_2 and v_3: the nonflow can only reduce the measured v_2 while simultaneously only increasing the v_3.

All remaining measurement uncertainties are assumed to be uncorrelated between v_2 and v_3. The remaining uncertainties are assumed to contribute in the following ways:
1. as point-to-point uncorrelated uncertainties,
2. as point-to-point anti-correlated uncertainties (e.g., a tilt in the p_T dependence),
3. as point-to-point correlated uncertainties.

The total systematic uncertainty (excluding the nonflow) is taken to contribute a fraction of its value to each of the above types. A conservative approach is taken, and these fractions are allowed to vary independently for each measurement within reasonable limits. The bands around the theoretical calculations shown in Fig. 3 indicate a subset of theoretical uncertainties which differs between the models. We make the assumption that the dominant contribution is a point-to-point correlated uncertainty which is additionally correlated between v_2 and v_3. Given their small uncertainties, the inclusion of this treatment has little effect on the results for either sonic or MSTV. It has the largest effect for iEBE-VISHNU; however, its inclusion does not affect the relative ordering of the agreement discussed below.

We calculate a p-value from the least squares minimization in the standard way, where the number of degrees of freedom is simply the total number of data points, as there are no free parameters in the comparison. The total p-values, along with the p-values for each collision system, are given in Table II for sonic, iEBE-VISHNU, MSTV, and ampt. The ampt calculations are taken from Ref. [35]; they compute v_2 and v_3 relative to the initial participant nucleon plane, utilize the so-called string melting mechanism, and use a parton interaction cross section of σ = 1.5 mb. sonic provides a very good description of the data, with a p-value of 0.96, rather close to unity, which may indicate a modest overestimate of the statistical or systematic uncertainties. iEBE-VISHNU yields a worse p-value of 0.061. The larger p-value for sonic compared to iEBE-VISHNU is driven by the nonflow uncertainty: the fact that sonic tends to under-predict the v_2 while over-predicting the v_3 is mitigated by the nonflow uncertainty, while iEBE-VISHNU's poorer description of the p_T dependence in p+Au and d+Au is not compensated for by the relatively small remaining uncertainty. Both MSTV and ampt yield a very poor description of the data, with p-values of 8.83 × 10⁻¹⁷ and 1.71 × 10⁻⁴⁶, respectively.
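The p-value computation can be sketched as follows. This is a simplified stand-in for the full Ref. [31] procedure: it treats the correlated part of the systematic uncertainty with a single nuisance parameter that coherently shifts the model, adds the standard quadratic penalty for that shift, and converts the minimized χ² to a p-value with the number of degrees of freedom equal to the number of data points. The correlated-fraction assignment is an illustrative assumption.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def p_value(data, model, stat_err, sys_err, frac_corr=0.5):
    """Modified least squares with one nuisance parameter for the correlated
    part of the systematic uncertainty; a loose sketch of the Ref. [31] method."""
    def chi2(eps):
        shifted = model + eps[0] * frac_corr * sys_err  # coherent model shift
        resid = (data - shifted) / stat_err
        return np.sum(resid**2) + eps[0]**2             # penalty for the shift
    eps_best = minimize(chi2, x0=[0.0]).x
    ndf = len(data)                                     # no free model parameters
    return stats.chi2.sf(chi2(eps_best), ndf)
```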
FIG. 1 | Average system eccentricities from a Monte Carlo Glauber model and hydrodynamic evolution of small systems. a, Average second (third) order spatial eccentricities, ε_2 (ε_3), shown as columns for small impact parameter p+Au (red), d+Au (blue), and 3He+Au (black) collisions as calculated from a MC Glauber model. The second and third order spatial eccentricities correspond to ellipticity and triangularity, respectively, as depicted by the shapes inset in the bars. b, Hydrodynamic evolution of a characteristic head-on p+Au (top), d+Au (middle), and 3He+Au (bottom) collision at √s_NN = 200 GeV as calculated by sonic, where the p/d/3He completely overlap with the Au nucleus. From left to right, each row gives the temperature distribution of the nuclear matter at four time points following the initial collision at t = 0. The arrows depict the velocity field, with the length of the longest arrow plotted corresponding to β = 0.82.

FIG. 2 | Measured v_n(p_T) in three collision systems. a, Measurements of v_2(p_T) in the 0-5% most central p+Au, d+Au, and 3He+Au collisions at √s_NN = 200 GeV. A d+Au event from a MC Glauber model is inset, with the elliptic symmetry plane angle, ψ_2, depicted. b, Measurements of v_3(p_T) in the 0-5% most central p+Au, d+Au, and 3He+Au collisions at √s_NN = 200 GeV. A 3He+Au event from a MC Glauber model is inset, with the triangular symmetry plane angle, ψ_3, depicted. Each point in a,b represents an average over p_T bins of width 0.2 GeV/c to 0.5 GeV/c; black diamonds are 3He+Au, blue squares are d+Au, red circles are p+Au. Line error bars are statistical and box error bars are systematic (Methods).

FIG. 3 | Measured v_n(p_T) in three collision systems compared to models. a, Measured v_n(p_T) in the 0-5% most central p+Au collisions compared to models. b, Measured v_n(p_T) in the 0-5% most central d+Au collisions compared to models. c, Measured v_n(p_T) in the 0-5% most central 3He+Au collisions compared to models. Each point in a-c represents an average over p_T bins of width 0.2 GeV/c to 0.5 GeV/c; black circles are v_2, black diamonds are v_3. The solid red (dashed blue) curves in a-c represent hydrodynamic predictions of v_n from sonic (iEBE-VISHNU). The solid green curves in a-c represent initial-state momentum correlation postdictions of v_n from MSTV.

FIG. 4 | Measured v_2(p_T) in p+Au and d+Au collisions at the same event multiplicity. Measured v_2(p_T) in the 0-5% most central p+Au collisions and 20-40% central d+Au collisions compared to sonic predictions and MSTV postdictions. Each point represents an average over p_T bins of width 0.2 GeV/c to 0.5 GeV/c; blue circles are d+Au, red circles are p+Au. Line error bars are statistical and box error bars are systematic (Methods). The quoted dN_ch/dη values are taken from Ref. [36]. Blue and red curves correspond to sonic predictions for d+Au and p+Au, respectively. The green curve corresponds to MSTV calculations for 0-5% central p+Au collisions, which the authors state are also applicable to d+Au collisions at the same multiplicity.

TABLE I. Systematic uncertainties in the v_3 measurements as a function of p_T in 0-5% central p+Au, d+Au, and 3He+Au [25] collisions at √s_NN = 200 GeV.

TABLE II. Calculated p-values between model calculations and data.
Aurora: a fluorescent deoxyribozyme for high-throughput screening

Abstract
Fluorescence facilitates the detection, visualization, and tracking of molecules with high sensitivity and specificity. A functional DNA molecule that generates a robust fluorescent signal would offer significant advantages for many applications compared to intrinsically fluorescent proteins, which are expensive and labor intensive to synthesize, and fluorescent RNA aptamers, which are unstable under most conditions. Here, we describe a novel deoxyribozyme that rapidly and efficiently generates a stable fluorescent product using a readily available coumarin substrate. An engineered version can detect picomolar concentrations of ribonucleases in a simple homogeneous assay, and was used to rapidly identify novel inhibitors of the SARS-CoV-2 ribonuclease Nsp15 in a high-throughput screen. Our work adds an important new component to the toolkit of functional DNA parts, and also demonstrates how catalytic DNA motifs can be used to solve real-world problems.

Introduction
Fluorescence makes it possible to detect, visualize, and track molecules with high sensitivity and specificity. It also facilitates analysis of the dynamic interactions important for molecular function. Fluorescence-based techniques are widely used in microscopy, immunology, cell sorting, DNA sequencing, diagnostics, and microarrays, and new applications continue to be developed. Such techniques offer a number of advantages relative to those with other types of readouts: they are typically more sensitive than colorimetric assays, offer greater flexibility and control over experimental readouts than chemiluminescent ones, and are safer than radioactive assays. One of the most powerful fluorescent tools is the fluorescent protein GFP (1,2). Originally identified in the jellyfish Aequorea victoria, fluorescent proteins have now been discovered in a wide range of organisms. The properties of these proteins have been enhanced by engineering, and variants have been developed that fold more efficiently, function over a wide range of conditions, and generate fluorescent signals with different colors. Such proteins have greatly facilitated studies of protein expression and localization. More recently, SELEX has been used to identify fluorescent RNA aptamers with a wide range of functional properties (3,4). These aptamers provide a way to investigate the functions of cellular RNA molecules, and engineered variants can also be used to monitor metabolite concentrations in real time. Although extremely useful for studies of biological systems, such motifs are less suitable for in vitro applications such as high-throughput screening. In the case of proteins this is related to both time and money: proteins are generally expensive and time-consuming to produce, and more difficult to evolve than nucleic acids. A significant limitation of fluorescent RNA aptamers, on the other hand, is that they are unstable under many conditions due to the ubiquitous presence of ribonucleases (5).
DNA is another type of polymer capable of sophisticated functions, and functional DNA molecules such as aptamers and deoxyribozymes can be useful alternatives to functional protein and RNA motifs (5-8). DNA can be chemically synthesized at low cost, is stable over a wide range of conditions (including in the presence of ribonucleases, which are often present in samples), can typically be denatured and refolded without losing activity, and can be readily engineered using artificial evolution. Motifs with new functions (such as deoxyribozymes allosterically regulated by ligands) can be constructed by rational design or selection (9). Moreover, powerful enzymatic methods such as the polymerase chain reaction (PCR) (10) and rolling circle amplification (RCA) (11) can be used to copy, and therefore amplify, the signals generated by functional DNA. Despite these advantages, few methods to generate fluorescent signals using functional DNA motifs have been developed (12-17). One approach uses DNA aptamers that bind and enhance the fluorescence of ligands (12-14). The signal to noise ratios of these aptamers rarely exceed 100-fold and tend to be significantly lower than those of their RNA counterparts (14). In addition, because the fluorophore must remain associated with the aptamer to generate a signal, this approach provides a less permanent and robust readout than a signal generated by a catalyst or enzyme. Another method utilizes a nonspecific peroxidase reaction catalyzed by DNA G-quadruplexes in the presence of hydrogen peroxide and a hemin cofactor (15,16). Although typically used to generate a colorimetric product, a fluorogenic signal can also be generated when this reaction is performed using phenolic substrates such as tyramine (17). Because the peroxidase reaction is also promoted by hemin itself, this method suffers from high background. Hydrogen peroxide is also incompatible with some types of assays, and high concentrations can inactivate the hemin cofactor. These limitations highlight the need for new and complementary methods to generate fluorescent signals using DNA.

In this study we used in vitro selection to identify a deoxyribozyme that generates a fluorescent signal by converting the coumarin substrate 4-MUP into the fluorescent product 4-MU (18). In a complementary study, a similar approach was employed to identify a deoxyribozyme that generates a colorimetric signal by converting the colorless substrate pNPP into the yellow product pNP (19). Our deoxyribozyme, which we named Aurora, offers a number of advantages relative to existing methods. Aurora works under mild conditions and uses an inexpensive and commercially available substrate. It is small, label free, and can be rapidly synthesized at low cost. Aurora is a potent enhancer of fluorescence, and generates a signal in minutes with a signal to noise ratio of > 700. It is highly specific for its substrate and orthogonal to a chemiluminescent deoxyribozyme previously discovered in our group (20). This means that it could potentially be useful for multiplex applications (e.g.,
by first analyzing light production in the absence of excitation and then, after the signal has decayed, exciting the sample and analyzing fluorescence). Aurora can be modified to generate fluorescence only in the presence of an input of interest (such as a target molecule in a sample). It is also useful for real-world applications: an engineered variant can detect ribonuclease activity with a limit of detection of ∼100 pM, and was used to identify small molecule inhibitors of the SARS-CoV-2 ribonuclease Nsp15 in a high-throughput screen. Our results provide a new and improved way to construct fluorescent sensors using DNA. They also show how such sensors can be used to solve real-world problems.

Oligonucleotides
Oligonucleotides were chemically synthesized by GENERI BIOTECH s.r.o., Sigma-Aldrich, or IDT and purified by PAGE or HPLC. See Supplementary Table S1 for the sequences of all oligonucleotides used in this study.

Pool design
The library used in our initial selection (Pool 1 in Supplementary Table S1) was generated by randomly mutagenizing the H1 variant of Supernova (a chemiluminescent deoxyribozyme recently discovered in our group (20)) at a rate of 21% per position. A 20-nucleotide primer-binding site was also added to the 3′ end. The library used for the reselection (Pool 2 in Supplementary Table S1) was based on the sequence of Hit10, which generated fluorescence with the highest signal to noise ratio of any of the deoxyribozymes we tested from the initial selection. The 85 positions in Hit10 were randomly mutagenized at a rate of 21% per position and a new 20-nucleotide primer-binding site was added to the 3′ end.

Initial selection
The single-stranded DNA pool (Pool1) and blocking oligonucleotide (REV1) were mixed in Milli-Q water. After heating at 65 °C for 2 min and cooling at room temperature for 5 min, 5× selection buffer and then the disodium salt of the 4-methylumbelliferyl phosphate substrate (4-MUP) were added. Final concentrations were 1 μM Pool1, 1.5 μM REV1, 1× selection buffer (200 mM KCl, 1 mM ZnCl2, 1 μM Ce(NO3)4, 0.1 μM PbCl2, 50 mM HEPES pH 7.4), and 1 mM 4-MUP. After incubating for 2.4 h, DNA was concentrated by ethanol precipitation. A short oligonucleotide (FWD1) was then ligated to library members containing a 5′ phosphate. To increase the efficiency of the ligation, the reaction was performed in the presence of a splint oligonucleotide (Splint1) complementary to both FWD1 and the 5′ end of Pool1. The ligation reaction was incubated for 5 min at 37 °C.
Final concentrations were 2.5 μM Pool1, 2.5 μM FWD1, 2.5 μM Splint1, 1× T4 DNA ligase buffer, and 0.5 Weiss units of T4 DNA ligase per 1.0 μg of Pool1. DNA molecules were then separated by 6% urea-PAGE, and molecules that co-migrated with a 125-nucleotide marker were cut from the gel, eluted, and ethanol precipitated. They were then amplified by PCR using Q5 HotStart DNA Polymerase and the FWD1r and REV1p primers. Final concentrations were 500×-diluted Pool1, 0.5 μM FWD1r, 0.5 μM REV1p, 1× Q5 reaction buffer, 1× Q5 high GC enhancer, 0.2 mM dNTPs, and 0.02 U of Q5 HotStart DNA polymerase per 1 μl of the PCR reaction mixture. Double-stranded PCR products were isolated using a Macherey-Nagel PCR Clean-up kit. The reverse primer REV1p contained a 5′ phosphate, and the strand synthesized using this primer (which was complementary to Pool1) was digested using λ-exonuclease. Final concentrations were 5 μg of the double-stranded PCR product, 1× Lambda Exonuclease reaction buffer, and 1 μl (5 U) of Lambda Exonuclease in a volume of 50 μl. The Lambda Exonuclease mixture was incubated at 37 °C for 1 h. The resulting single-stranded DNA molecules (125 nucleotides in length) were purified using a Macherey-Nagel PCR Clean-up kit. The FWD1r primer used in the PCR contained a single RNA linkage at its 3′ end, which made it possible to regenerate the 5′ end of Pool1 by base hydrolysis. To do this, DNA was heated at 65 °C for 2 min, cooled at room temperature for 5 min, and mixed with 10× hydrolysis buffer (1× hydrolysis buffer: 20 mM Trizma base, 400 mM KOH, 4 mM EDTA). The RNA linkage was then hydrolyzed at 90 °C for 10 min. The resulting 105-nucleotide DNA molecules (corresponding to single-stranded Pool1 molecules with a 5′ hydroxyl group) were then isolated by 6% urea-PAGE and ethanol precipitation. After the fifth round of selection the library was amplified by PCR, purified using a Macherey-Nagel PCR Clean-up kit, and sequenced by Eurofins Genomics using an amplicon paired-end sequencing run.

Reselection
Reselection conditions were the same as those used in the initial selection except for the following differences. First, Pool2 was used instead of Pool1. Second, the library was incubated with 4-MUP for 14.4 min rather than 2.4 h. Third, a new blocking oligonucleotide and reverse primer (REV2/REV2p) were used (Supplementary Table S1). This library was sequenced after the sixth round by Eurofins Genomics using an amplicon paired-end sequencing run.
Analysis of fluorescence production
Oligonucleotides corresponding to individual sequences from evolved libraries were ordered from GENERI BIOTECH s.r.o. Fluorescence production was measured as follows: oligonucleotides were resuspended in Milli-Q water, heated at 65 °C for 2 min, and cooled at room temperature for 5 min. After adding 5× selection buffer or 5× Aurora buffer, samples were transferred to a white half-area 96-well plate (Corning). 4-MUP was then added. In continuous assays, fluorescence was measured for 4 h using a Tecan Spark plate reader (Tecan Group). In discontinuous assays, after incubating for a specific time, samples were quenched with 20 μl of 1 M KOH and fluorescence was measured using a plate reader. In a typical experiment, final concentrations were 15 μM of the tested oligonucleotide, either 1× selection buffer (200 mM KCl, 1 mM ZnCl2, 1 μM Ce(NO3)4, 0.1 μM PbCl2, 50 mM HEPES pH 7.4) or 1× Aurora buffer (200 mM KCl, 1 mM ZnCl2, 50 mM HEPES pH 7.4, 5% (v/v) DMSO), and 30 μM 4-MUP. Fluorescence was measured in a white half-area 96-well plate (Corning) using a Tecan Spark plate reader with the following settings: excitation 358 (±5) nm, emission 455 (±5) nm, 97 nm wavelength gap, optimal gain, 30 flashes, Z position calculated from one well in the plate.

Analysis of phosphorylation
Oligonucleotides corresponding to individual sequences from evolved libraries were ordered from GENERI BIOTECH s.r.o., purified by 6% urea-PAGE or HPLC, and resuspended in Milli-Q water. Self-phosphorylation reactions were performed by first heating deoxyribozymes at 65 °C for 2 min and cooling at room temperature for 5 min. After mixing with 5× selection buffer or 5× Aurora buffer, the 4-MUP substrate was added. Final concentrations in a typical reaction were 1 μM deoxyribozyme, 1× selection buffer (200 mM KCl, 1 mM ZnCl2, 1 μM Ce(NO3)4, 0.1 μM PbCl2, 50 mM HEPES pH 7.4) or 1× Aurora buffer (200 mM KCl, 1 mM ZnCl2, 50 mM HEPES pH 7.4, 5% (v/v) DMSO), and 1 mM 4-MUP unless stated otherwise. Reactions were incubated for specific times at room temperature and stopped by the addition of EDTA to a final concentration of 25 mM. Reactions were then concentrated by ethanol precipitation, and reacted deoxyribozymes (now containing a 5′ phosphate) were ligated to a short oligonucleotide as described in the section 'Initial selection'. Reacted and unreacted molecules were separated by 6% urea-PAGE. DNA was visualized by staining with GelRed using the protocol recommended by the manufacturer. Gels were scanned using a Typhoon laser scanner and the percentages of reacted and unreacted molecules were quantified using ImageQuant TL software.

Calculation of signal to noise ratios
Signal to noise ratios were defined as the fluorescence of a sample in the presence of deoxyribozyme divided by the fluorescence of the sample in the absence of the deoxyribozyme. The background signal was defined as the fluorescence of 1× Aurora buffer (200 mM KCl, 50 mM HEPES pH 7.4, 1 mM ZnCl2, and 5% (v/v) DMSO) and was subtracted before calculating signal to noise ratios.
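The signal to noise definition above is a simple background-subtracted ratio; a minimal sketch (function and argument names are ours):

```python
import numpy as np

def signal_to_noise(f_sample, f_no_dz, f_buffer):
    """S/N as defined in the text: buffer-background-subtracted fluorescence
    with deoxyribozyme divided by that without deoxyribozyme."""
    return (np.asarray(f_sample) - f_buffer) / (np.asarray(f_no_dz) - f_buffer)
```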
Optimization of reaction conditions
To maximize fluorescence we searched for optimal reaction conditions. The optimal DNA, 4-MUP, KCl, ZnCl2, and HEPES concentrations were determined by titration experiments. We also tested the effects of different monovalent and divalent metal ions, an organic solvent (DMSO), and a molecular crowding agent (PEG 200) on activity. Titration experiments to determine the optimal pH during and after the reaction were also performed. Aurora 2 (Supplementary Table S1) was used for these experiments unless stated otherwise. Activity was measured by analysis of fluorescence production (using a plate reader assay) and self-phosphorylation (using a ligation assay).

Kinetic measurements and analysis
Kinetic measurements were performed using a ligation assay. Deoxyribozyme (either Aurora 1 or Aurora 2; Supplementary Table S1) was mixed with Milli-Q water, heated at 65 °C for 2 min, and cooled at room temperature for 5 min. 5× Aurora buffer and 4-MUP were then added. Final concentrations were 1 μM deoxyribozyme, 1× Aurora buffer (200 mM KCl, 1 mM ZnCl2, 50 mM HEPES pH 7.4, 5% (v/v) DMSO), and 1 μM to 300 μM 4-MUP. Reactions were incubated for specific times at room temperature and stopped by the addition of EDTA to a final concentration of 25 mM. Reactions were stopped at time points that corresponded to the linear phase of the reaction. After ethanol precipitation, reacted deoxyribozyme (containing a 5′ phosphate) was ligated to a short oligonucleotide as described in the section 'Initial selection'. Reacted and unreacted molecules were separated by 6% urea-PAGE. DNA was visualized by staining with GelRed using the protocol recommended by the manufacturer, and gels were scanned using a Typhoon laser scanner. The percentages of reacted and unreacted deoxyribozyme were quantified using ImageQuant TL software. k_cat and K_m values were obtained by fitting the resulting substrate-saturation curves using Prism 9 software (a fitting sketch is given after this section).

Oligonucleotide detection using an engineered version of Aurora
The oligonucleotide sensor was mixed with the target oligonucleotide in water, heated at 98 °C for 2 min, and immediately cooled on ice for 5 min. 5× Aurora buffer and DMSO were then added. Samples were transferred to a white half-area 96-well plate (Corning), 4-MUP was added, and the reaction mixture was incubated for 4 h at room temperature. Final concentrations were 5 μM of the oligonucleotide sensor, 10 μM of the target oligonucleotide, 1× Aurora buffer (200 mM KCl, 50 mM HEPES pH 7.4, 1 mM ZnCl2, and 5% (v/v) DMSO), and 30 μM 4-MUP. After 4 h the reaction was stopped by adding 20 μl of 1 M KOH, and fluorescence was then measured using a Tecan Spark plate reader. Analysis of fluorescence production was performed as described in the section 'Calculation of signal to noise ratios'.
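As an illustration of the kinetic analysis referenced above, a Hill-type saturation fit (which reduces to Michaelis-Menten for h = 1 and can accommodate the mild cooperativity noted in the Results) can be performed as follows. The starting guesses echo values reported in the Results (k_cat ≈ 0.18 min⁻¹, half-saturation ≈ 30 μM); the function names are ours, and the paper itself used Prism 9.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(s, kcat, km, h):
    """Substrate-saturation curve; h = 1 recovers Michaelis-Menten."""
    return kcat * s**h / (km**h + s**h)

def fit_kinetics(conc_4mup, k_obs):
    """Fit observed self-phosphorylation rates (1/min) vs [4-MUP] (uM)."""
    popt, pcov = curve_fit(hill, conc_4mup, k_obs, p0=[0.2, 30.0, 1.0])
    return dict(zip(["kcat", "Km", "h"], popt)), np.sqrt(np.diag(pcov))
```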
RNase A sensor based on Aurora
The RNase A sensor was heated at 65 °C for 2 min and cooled at room temperature for 5 min. Then 5× Aurora buffer and DMSO were added. Samples were transferred to a white half-area 96-well plate (Corning), and 4-MUP and either RNase A (Thermo Fisher Scientific) alone or RNase A plus RiboLock (Thermo Fisher Scientific) were added. The reaction mixture was incubated for 4 h at room temperature. Final concentrations were 5 μM of the RNase A sensor, 500 nM RNase A or 500 nM RNase A plus 500 nM RiboLock, 1× Aurora buffer (200 mM KCl, 50 mM HEPES pH 7.4, 1 mM ZnCl2, and 5% (v/v) DMSO), and 30 μM 4-MUP unless stated otherwise. After 4 h the reaction was stopped by adding 20 μl of 1 M KOH to the reaction mixture. Fluorescence was then measured using a Tecan Spark plate reader. Analysis of fluorescence production was performed as described in the section 'Calculation of signal to noise ratios'.

Plasmid construction, expression, and purification of Nsp15 from SARS-CoV-2
Nsp15 cloning, expression, and purification were performed as described in Kim et al. (21) with minor modifications. A synthetic DNA sequence encoding an Escherichia coli codon-optimized version of Nsp15 was cloned into a pMCSG7 vector using Gibson assembly. Cloning was confirmed by Sanger sequencing. The final pSARS-CoV-2-Nsp15_6×His vector encoded the full-length Nsp15 protein fused to an N-terminal hexahistidine tag via a TEV protease cleavage site. E. coli NiCo21(DE3) cells (New England Biolabs) were transformed with this plasmid. For large-scale expression and purification, a 3 l culture of LB medium was grown at 37 °C in a LEX bioreactor (Epiphyte3) in the presence of 100 μg/ml ampicillin. Once the culture reached OD600 ∼ 1.0, flasks were moved to an 18 °C bioreactor bath and supplemented with 0.1% glucose and 40 mM K2HPO4 (final concentrations). Protein expression was induced by the addition of 0.2 mM IPTG for 16 h at 18 °C. Bacterial cells were harvested by centrifugation at 7000g and cell pellets were resuspended in 40 ml lysis buffer (50 mM HEPES, 500 mM NaCl, 5% [v/v] glycerol, 20 mM imidazole, 10 mM β-mercaptoethanol, pH 8.0) per liter of culture and lysed using a CF1 high-pressure homogenizer. Cellular debris was removed by centrifugation at 25 000g for 40 min at 4 °C. The supernatant was filtered through a 0.45 μm filter and mixed with 2 ml of Ni2+ Sepharose equilibrated with lysis buffer, and the suspension was added to a gravity-flow column. Unbound proteins were removed by washing with 40 ml of lysis buffer. Bound proteins were eluted with 10 ml of lysis buffer supplemented with 500 mM imidazole pH 8.0. A final purification was performed using a Superdex 200 column equilibrated in lysis buffer in which the 10 mM β-mercaptoethanol was replaced by 1 mM TCEP. Fractions containing Nsp15 were collected. Lysis buffer was replaced with storage buffer (150 mM NaCl, 20 mM HEPES pH 7.5, 1 mM TCEP) via repeated concentration and dilution using a 30 kDa MWCO filter (Amicon-Millipore). The final protein sample was concentrated to 1 mg/ml, aliquoted, snap frozen in liquid nitrogen, and stored at −80 °C until further use.
High-throughput screen for Nsp15 inhibitors using an Aurora Nsp15 sensor
Small molecules from a 1000-member fragment screen library (Maybridge) were transferred to the wells of 384-well plates using an Echo 550 liquid handler. Nsp15 protein in 1× Nsp15 buffer (50 mM KCl, 20 mM HEPES pH 7.4, 5 mM MnCl2, 0.003% (v/v) Tween20) was then added using a CERTUS Flex liquid handler. After mixing, the Aurora Nsp15 sensor (in 1× Nsp15 buffer) was added using a CERTUS Flex liquid handler. Reactions were mixed again. Final concentrations were 25 μM Aurora Nsp15 sensor, 400 nM Nsp15 protein, 1× Nsp15 buffer (50 mM KCl, 20 mM HEPES pH 7.4, and 5 mM MnCl2), 0.003% (v/v) Tween20, and 200 μM small molecule from the fragment screen library in a volume of 20 μl. After incubating at room temperature for 1 h to allow Nsp15 to cleave and activate the Aurora Nsp15 sensor, 80 μl of 1× Aurora reaction mixture (50 mM KCl, 20 mM HEPES pH 7.4, 1.25 mM ZnCl2, 6.25% (v/v) DMSO, and 18.75 μM 4-MUP) was added using a CERTUS Flex liquid handler. The ZnCl2 in this buffer inhibited the Nsp15 protein while activating Aurora for catalysis. Final concentrations were 5 μM Aurora Nsp15 sensor, 80 nM Nsp15 protein, 40 μM small molecule from the fragment screen library, 1× Aurora/Nsp15 buffer (50 mM KCl, 20 mM HEPES pH 7.4, 1 mM ZnCl2, 1 mM MnCl2), 0.0006% Tween20, 5% (v/v) DMSO, and 15 μM 4-MUP in a volume of 100 μl. The reaction mixture was incubated at room temperature for 4 h, and fluorescence was measured in a black flat-bottom 384-well plate (Corning) using a Tecan Spark plate reader. Analysis of fluorescence production was performed as described in the section 'Calculation of signal to noise ratios'. A counter screen was also performed to confirm that the inhibitors identified in the initial screen inhibit the Nsp15 protein rather than Aurora. The counter screen was performed as described above, but Aurora 2 was used instead of the Aurora Nsp15 sensor.

Analysis of data from high-throughput screens
Wells containing 1 mM ZnCl2 (which inhibited Nsp15 at this concentration) served as negative controls and were used to determine background levels of fluorescence. The average value of this background was subtracted from the fluorescence values obtained from all other wells. Wells containing aliquots of DMSO alone rather than DMSO plus small molecule were used as positive controls. After subtraction of the background, the average value of these positive controls was defined as 100% Nsp15 activity. The activity of Nsp15 in the presence of small molecules from the fragment screen library was calculated relative to this positive control value. Z-factors (22) were calculated for each 384-well plate to determine the quality of the screen, using the equation

Z-factor = 1 − [3(σ_p + σ_n)] / (μ_p − μ_n),

where μ_p is the mean fluorescence of the positive control, μ_n is the mean fluorescence of the negative control, σ_p is the standard deviation of the fluorescence of the positive control, and σ_n is the standard deviation of the fluorescence of the negative control.
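A minimal sketch of the plate-level statistics just described, together with the standard four-parameter dose-response model of the kind used for the IC50 fits in the next section (the concentration and activity values in the example are invented for illustration; the paper itself used Prism 9):

```python
import numpy as np
from scipy.optimize import curve_fit

def z_factor(pos, neg):
    """Z-factor = 1 - 3(sigma_p + sigma_n)/|mu_p - mu_n| (Ref. 22)."""
    return 1.0 - 3.0 * (np.std(pos, ddof=1) + np.std(neg, ddof=1)) \
        / abs(np.mean(pos) - np.mean(neg))

def percent_activity(signal, pos, neg):
    """Relative Nsp15 activity after background (negative-control) subtraction."""
    return 100.0 * (signal - np.mean(neg)) / (np.mean(pos) - np.mean(neg))

def dose_response(c, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (c / ic50) ** hill)

# Illustrative (invented) titration data: % activity vs inhibitor conc. in uM
conc = np.array([200.0, 100, 50, 25, 12.5, 6.25, 3.125])
act = np.array([10.0, 18, 32, 55, 74, 87, 94])
popt, _ = curve_fit(dose_response, conc, act, p0=[0.0, 100.0, 25.0, 1.0])
print(f"IC50 ~ {popt[2]:.1f} uM")
```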
Calculation of IC50 values
IC50 values were measured for small molecules that strongly inhibited Nsp15 in the high-throughput screens. Solutions containing different concentrations of these inhibitors were transferred to the wells of 384-well plates using an Echo 550 liquid handler. To obtain the same volume in each well, the drops containing small molecules were backfilled with DMSO to 200 nl. For each inhibitor characterized, IC50 values were measured using both the Aurora Nsp15 sensor and the FRET assay. After determining the relative activity of Nsp15 at each concentration of inhibitor, IC50 values were calculated using Prism 9 software (GraphPad).

NMR experiments
HPLC-purified DNA was purchased from GENERI BIOTECH s.r.o. DNA was resuspended in Milli-Q water, heated at 65 °C for 2 min, and cooled at room temperature for 5 min, and 5× Aurora buffer was then added. Concentrations at this point were 15 μM DNA, 200 mM KCl, 50 mM HEPES pH 7.4, and 1 mM ZnCl2. Samples were concentrated to 500 μM DNA using Amicon Ultra centrifugal filter units (cutoff 3 kDa), and a 1.5 molar excess of 4-MUP, D2O, and DSS were added. Final concentrations were 500 μM DNA, 200 mM KCl, 50 mM HEPES pH 7.4, 1 mM ZnCl2, 750 μM 4-MUP, 10% (v/v) D2O, and a trace amount of DSS. NMR experiments were performed on a Bruker Avance III HD 850 MHz system equipped with an inverse triple resonance cryo-probe. Spectral analyses were performed using TOPSPIN (Bruker) software (23).

Discovery of the fluorescent deoxyribozyme Aurora
Supernova is a deoxyribozyme recently discovered in our laboratory (Figure 1A) (20). It transfers the phosphate group from the 1,2-dioxetane substrate CDP-Star (Figure 1B) to its own 5′ hydroxyl group, which triggers a chemically initiated electron exchange luminescence reaction and a flash of blue light (24-26). Deoxyribozymes that use substrates which generate orthogonal signals when they are dephosphorylated would bring new functionality to the toolkit of functional DNA parts. An example of such a substrate is the coumarin 4-MUP (Figure 1C) (18). Dephosphorylation of 4-MUP yields the fluorescent compound 4-MU (Table 1) (27,28), and a deoxyribozyme that promotes this reaction could in principle be used to generate a fluorescent signal. To search for such a deoxyribozyme, a library was generated by randomly mutagenizing the sequence of Supernova at a rate of 21% per position (Figure 1D). We used Supernova as the starting point for our library because this deoxyribozyme catalyzes a phosphoryl transfer reaction using a substrate with some similarities to 4-MUP (compare Figure 1B and C). After incubating with 4-MUP, library members containing a 5′ phosphate group were tagged by ligation, purified by PAGE, and amplified by PCR (Figure 1E). After four rounds of selection, activity was detected (Figure 1F), and after one more round the library was characterized by high-throughput sequencing. Sequences from the evolved library with high read numbers could phosphorylate themselves in the presence of 4-MUP and also generate fluorescence (Supplementary Figure S1 and Supplementary Table S1). However, most appeared to be structurally and functionally distinct from Supernova (see also reference (29)). We initially appreciated this point by comparing the mutational distances from Supernova of sequences in a library separately challenged with two different substrates (Figure 1D).
When a selection was previously performed to identify variants in this library that used CDP-Star (i.e., the original substrate) with improved efficiency (20), the average mutational distance of sequences in the evolved pool from Supernova was 18.42 (Figure 1G, blue peak). In contrast, the average mutational distance of variants in this library that used 4-MUP was 30.68 (Figure 1G, orange peak; compare also to Figure 2a of reference (29)), suggesting that deoxyribozymes that use CDP-Star and 4-MUP form different structures (a sketch of this read-weighted distance calculation is given after this section). Analysis of individual sequences from the evolved library provided additional support for this idea: most contained mutations that were not consistent with the sequence requirements of Supernova (Supplementary Figure S2). The substrate specificities of these new deoxyribozymes also differed from that of Supernova. For example, the deoxyribozyme Aurora 1 could use 4-MUP but not CDP-Star as a substrate, whereas Supernova reacted efficiently with CDP-Star but not 4-MUP (Figure 1H). Similar results were obtained in a complementary study in which we selected for library members that react with the colorimetric substrate pNPP (19). These results demonstrate that our method can be used to identify new fluorescent deoxyribozymes. They also suggest that, despite being isolated from a library based on Supernova, most variants in this library that use 4-MUP as a substrate form distinct structures.

The catalytic core of Aurora is a 47-nucleotide bulged hairpin
We chose the fluorescent deoxyribozyme from the initial selection with the highest activity for further characterization (Figure 2A). We named this sequence Aurora 1 full-length, and the catalytic motif encoded by this sequence Aurora. To map its catalytic core, a randomly mutagenized library based on this sequence was generated; because of the high mutagenesis rate, essentially no unmutated copies of the starting sequence were expected to be present in the library (Supplementary Table S2) (32). Catalytically active variants were then identified by artificial evolution and characterized by high-throughput sequencing (Supplementary Figure S3 and Supplementary Tables S3-S5). Initial analysis of these sequences revealed two highly conserved regions (corresponding to nucleotides 1-10 and 43-79) separated by 32 less conserved positions (Figure 2B). The catalytic activity of a 47-nucleotide deoxyribozyme in which the nucleotides at positions 11-42, at positions 80-85, and in the 3′ primer binding site were deleted (called Aurora 1) was similar to that of the full-length sequence (Supplementary Figure S4). The proton NMR spectrum of the 17C 40G mutant of Aurora 2 suggests that Aurora forms a structure containing multiple Watson-Crick base pairs (Figure 2C). Consistent with this observation, comparative sequence analysis (31,33,34) supports a base-paired secondary structure, and further experiments (Supplementary Figure S6) are consistent with the idea that the 5′ end of Aurora also faces the asymmetric bulge. If this is the case, the overall architecture of Aurora is likely a bent hairpin in which nucleotides distant in both the primary sequence and secondary structure converge on this bulge.
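The mutational-distance comparison referenced above (Figure 1G) reduces to Hamming distances weighted by sequencing read counts. A minimal sketch, assuming equal-length sequences and a dictionary of read counts (names are ours):

```python
def hamming(seq_a, seq_b):
    """Number of mismatched positions between two equal-length sequences."""
    return sum(a != b for a, b in zip(seq_a, seq_b))

def mean_distance(parent, reads):
    """Mean mutational distance of an evolved pool from the parent sequence,
    weighted by read counts; reads is a dict {sequence: read_count}."""
    total = sum(reads.values())
    return sum(hamming(parent, s) * n for s, n in reads.items()) / total
```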
Aurora generates a robust fluorescent signal
To evaluate the extent to which these artificial evolution experiments yielded improved variants of Aurora, we compared the catalytic activity of the initial isolate (Aurora 1; Figure 2E) with that of the variant with the highest read number from the randomly mutagenized library (Aurora 2; Figure 2F) (see also Figure 2A for more information about the evolutionary lineage of these deoxyribozymes). Each variant was characterized in the context of the 47-nucleotide minimized catalytic core, and measurements were performed over a range of 4-MUP concentrations using a ligation assay (which measures the extent of self-phosphorylation). These experiments revealed that the catalytic activity of Aurora 2 was more than 100-fold higher than that of Aurora 1 at some substrate concentrations (Figure 2G). Most mutations in Aurora 2 occurred in either the asymmetric bulge (positions 20-23 and 33-37) or the loop (positions 27-29; Figure 2F), highlighting the importance of these parts of the deoxyribozyme. Surprisingly, 4-MUP concentration affected activity in a slightly cooperative way, and evidence for cooperativity was also observed in proton NMR experiments in which Aurora folding was characterized as a function of 4-MUP concentration (Supplementary Figure S7). This could indicate that Aurora contains multiple binding sites for 4-MUP, or a single site that binds multiple 4-MUP molecules. At saturating substrate concentrations, the rate of Aurora 2 of 0.18 min⁻¹ (Figure 2G) was similar to the k_cat of Supernova (20) of 0.15 min⁻¹. The concentration of substrate at which activity was half-maximal (30 μM for Aurora and 150 μM for Supernova) was also comparable for these two deoxyribozymes.

In a complementary series of experiments, we investigated the extent to which Aurora enhances fluorescence. When using an experimental setup in which 4-MUP and buffer were mixed with Aurora 2 and fluorescence was continuously monitored using a plate reader (Figure 3A), signal to noise ratios of 10-fold were obtained in minutes and 100-fold in hours (Figure 3B). A stable signal was observed over the course of this experiment, indicating that the product is relatively stable under these conditions. When using a discontinuous setup in which reactions were quenched with base before measurement (which can enhance fluorescence and also increase the stability of the fluorescent product (35)), signal to noise ratios were about 6-fold higher, and values exceeding 700-fold could be achieved (Figure 3C). In both assays, the fluorescent signal generated by Aurora 2 was at least 10-fold higher than that of Aurora 1 (Supplementary Figure S8). Maximum signal to noise ratios in both the absence of base (continuous assay) and the presence of base (discontinuous assay) were similar to those obtained from samples containing synthetic 4-MU at the same concentration as that of the 4-MUP used in our assays (Supplementary Figure S9). This indicates that Aurora generates the maximum possible signal to noise ratio for 4-MUP in solution, although it is possible that higher signal to noise ratios could be achieved by deoxyribozymes that enhance the fluorescence of 4-MU when it is bound to the deoxyribozyme (3,4,12-14).
Aurora requires multiple zinc ions for structure and function
Although our selection experiments provided extensive information about the sequence requirements of Aurora, they revealed little about how external factors (such as components of the buffer) influence the reaction. Such factors can significantly affect signal to noise ratios, and can also provide clues about catalytic mechanisms. We were especially interested in the effects of metal ions on the reaction because they can play both structural and catalytic roles in ribozymes and deoxyribozymes (36,37). Our survey of reaction conditions revealed both differences and similarities between Aurora and Supernova (Supplementary Figures S10-S20) (20,38), as well as between Aurora and the colorimetric deoxyribozyme Apollon (19). An important difference was that Aurora appears to require monovalent ions for activity (Supplementary Figures S10-S12), while Supernova (20,38) and Apollon (19) do not. On the other hand, Aurora (Supplementary Figures S13-S14), Supernova (20,38), and Apollon (19) each require zinc. The dependence of catalytic rate on zinc concentration is also highly cooperative for these three deoxyribozymes (Supplementary Figure S14) (19,38), suggesting that multiple zinc ions are needed for function. This is intriguing because zinc ions play catalytic roles in some protein enzymes (such as alkaline phosphatase (39,40)) that catalyze reactions similar to that promoted by Aurora. To determine whether these metal ion requirements in part reflect structural roles, proton NMR was used to directly monitor the effects of different metal ions on deoxyribozyme folding. The results of these experiments were similar to those that used catalytic activity as a readout. For example, chemical shifts consistent with canonical base pairs were observed in a buffer that contained zinc and potassium, but not in buffers that lacked either zinc or potassium (Supplementary Figure S10). NMR experiments also provided additional evidence that zinc affects Aurora folding in a highly cooperative way (Supplementary Figure S20). Zinc also plays an important role in deoxyribozymes that cleave DNA (41-43), suggesting a more general role for this ion in the context of nucleic acid enzymes that promote phosphoryl transfer reactions (44). Taken together, these experiments indicate that both zinc and a monovalent metal ion (not necessarily potassium) are needed for Aurora folding and function. They also highlight possible mechanistic similarities among the chemiluminescent, fluorescent, and colorimetric deoxyribozymes recently identified in our group.

[Figure 3 caption fragment: ... In contrast, the catalytic activity of Aurora itself is not affected by either RNase A or this ribonuclease inhibitor. (G) The Aurora sensor detects ribonuclease A with a limit of detection of 100 pM. Reactions were incubated for 4 h in the presence of the indicated concentration of RNase A, and after quenching with base, fluorescence was measured using a plate reader. The green box indicates the average plus or minus three standard deviations of the background signal to noise ratio measured in the absence of RNase A. See Supplementary Figure S23 for more information about the detection limit of the sensor. Experiments shown in panels B and C were performed using Aurora 2, while those in panels F and G were performed using the sensor shown in Supplementary Figure S22.]
Engineered forms of Aurora can detect ligands and enzymes in solution
Variants of Aurora that only generate fluorescence in the presence of an input of interest could be useful for applications such as high-throughput screening and diagnostics. This is especially true for variants that can be activated in solution, without the need for wash steps or biochemical purifications. To determine whether the catalytic activity of Aurora can be modulated by ligands, we used rational design to construct a programmable sensor that only produces fluorescence in the presence of specific oligonucleotide sequences (Supplementary Figure S21). This sensor produced significantly more fluorescence in the presence of the target than in its absence, could be programmed to detect a range of targets, and was only activated by oligonucleotides that it was designed to detect (Supplementary Figure S21). However, its sensitivity was low, with a limit of detection of approximately 1 μM of target (Supplementary Figure S21). This is likely related to catalytic turnover: unlike classical enzymes, a single molecule of Aurora can only generate one molecule of fluorescent product.

To improve sensitivity, we investigated whether it was possible to link the single-turnover signal generated by Aurora to the catalytic activity of an enzyme that itself catalyzes a multiple-turnover reaction. Because Aurora is made of DNA, we expected that this type of coupling would be easiest to achieve using enzymes that modify nucleic acids, and set out to develop a variant of Aurora that is activated by enzymes that cleave RNA. Our sensor was constructed by fusing a short DNA oligonucleotide containing a ribonucleotide at its 3′ end to the 5′ end of Aurora (Figure 3D and Supplementary Figure S22). Because Aurora uses its 5′ hydroxyl group as the nucleophile in the reaction, this modification was expected to abolish catalytic activity and eliminate the production of fluorescence. In the presence of a ribonuclease that cleaves RNA at internal sites to generate 3′ phosphate (or 2′-3′ cyclic phosphate) and 5′ hydroxyl termini, however, the RNA linkage should be cleaved, which will regenerate the 5′ end of Aurora and restore catalytic activity (Figure 3D and Supplementary Figure S22). Because protein ribonucleases are generally capable of multiple-turnover catalysis, this architecture was also expected to amplify the single-turnover signal generated by Aurora (Figure 3E). We tested our sensor using ribonuclease A. This enzyme activated the sensor and enhanced fluorescence more than 10-fold (Figure 3F). Furthermore, the detection limit of the sensor under these conditions (∼100 pM, defined here as the minimum concentration of RNase A that gives a signal ≥ 3 standard deviations higher than the average background value measured in the absence of RNase A) was approximately 10,000-fold lower than that of our oligonucleotide sensor (compare Figure 3G and Supplementary Figure S21, and see also Supplementary Figure S23). This dramatic increase in sensitivity is likely due to the high turnover number of RNase A. To further probe the mechanism of this sensor, we investigated whether activation was affected by RiboLock, a commercially available inhibitor of RNase A.
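The detection limit criterion just defined (blank mean plus three standard deviations) can be sketched as follows (function and argument names are ours):

```python
import numpy as np

def limit_of_detection(conc, sn, blank_sn):
    """Lowest tested concentration whose mean S/N exceeds the blank mean
    by >= 3 standard deviations, as defined in the text.
    conc: tested concentrations; sn: list of replicate S/N arrays per
    concentration; blank_sn: replicate S/N values measured without enzyme."""
    cutoff = np.mean(blank_sn) + 3.0 * np.std(blank_sn, ddof=1)
    detected = [c for c, s in zip(conc, sn) if np.mean(s) >= cutoff]
    return min(detected) if detected else None
```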
RiboLock had no effect on Aurora itself (Figure 3F, right), but prevented the Aurora sensor from being activated by RNase A (Figure 3F, left). This provided additional evidence that the sensor is activated by RNA cleavage. Taken together, these experiments show that assays which use a covalently blocked form of Aurora to detect a multiple-turnover enzyme can be orders of magnitude more sensitive than those that use unmodified Aurora. They also indicate that such a sensor can be used to detect the presence of ribonuclease inhibitors in a sample.

Using Aurora to identify Nsp15 inhibitors in a high-throughput screen

Because assays using Aurora sensors can be performed rapidly and inexpensively, they appear to be well-suited for applications such as high-throughput screens. To further test this idea, we investigated whether our Aurora sensor could be used to identify inhibitors of the SARS-CoV-2 endoribonuclease Nsp15. This ribonuclease cleaves 3′ of pyrimidines (preferentially uridines) to generate 2′-3′ cyclic phosphate and 5′ hydroxyl termini (45). It helps to prevent host recognition by degrading double-stranded viral intermediates (45), and inhibitors could potentially be useful as antiviral agents (46). Pilot experiments showed that, as was the case for RNase A, it was possible to construct a version of Aurora that was activated by Nsp15 (Supplementary Figures S22 and S24). To perform a screen using this sensor, a master mix containing Nsp15 and buffer was aliquoted into the wells of 384-well plates, each of which contained a different compound from a 1000-member fragment-based small-molecule library (Figure 4A). A second master mix containing the Aurora sensor was then added to each well. After a short incubation to allow Nsp15 to cleave the RNA linkage and activate the sensor, zinc and 4-MUP were added to initiate deoxyribozyme catalysis. After another incubation, fluorescence was measured in a plate reader (Figure 4B). In wells containing compounds that do not inhibit Nsp15, cleavage of the RNA linkage in the Aurora sensor by Nsp15 was expected to activate the sensor and lead to production of a fluorescent signal (Figure 4B, black points). In contrast, RNA cleavage should not occur and fluorescence should not be produced in wells containing compounds that inhibit either Nsp15 or Aurora itself (Figure 4B, orange points). To distinguish compounds that inhibit Nsp15 from those that inhibit Aurora, a counterscreen was performed using Aurora rather than the Aurora sensor. A graph comparing these two screens revealed that none of the hits identified in the initial screen inhibited Aurora in the counterscreen (Figure 4C). This indicates that these hits are Nsp15 rather than deoxyribozyme inhibitors, and also that they do not quench the fluorescence of 4-MU itself. To compare these results to those obtained using standard methods, the screen was repeated using a FRET assay in which Nsp15 was incubated with library members and a DNA substrate containing a fluorophore at one end and a quencher at the other (Figure 4D). Cleavage by Nsp15 was expected to result in an increase in fluorescence, while the fluorescence in wells containing Nsp15 inhibitors was expected to remain at background levels. The results of this FRET screen were virtually identical to those obtained using the Aurora sensor (Figure 4E). Several hits were further characterized as a function of concentration using both the Aurora sensor and the FRET assay. The most potent of these compounds inhibited Nsp15 with an IC50 of 12 μM in
an assay that used the Aurora sensor and 11 μM in an assay that used the FRET readout (Figure 4F). Other hits inhibited Nsp15 with IC50 values ranging from 7.9 to 121 μM (Supplementary Figure S25). These experiments indicate that Aurora sensors can be used in combination with small-molecule libraries to rapidly identify inhibitors in high-throughput screens.

Conclusions

In this study, we developed a new way to generate fluorescence using a deoxyribozyme called Aurora and a coumarin substrate called 4-MUP. Our approach offers a number of advantages when compared with other methods of generating fluorescent signals. Both Aurora and 4-MUP are stable, inexpensive and widely available. The workflow is simple, and formation of the fluorescent product can be monitored in solution and in real time without the need for wash steps or biochemical purifications. The signal to noise ratio of the fluorescent signal is also higher than that produced using widely used methods like molecular probes. A second goal of this study was to establish that these deoxyribozymes can be used for real-world applications. As an initial proof of concept for this idea, we showed that Aurora can be readily converted by rational design into a sensor that only generates fluorescence in the presence of an input. Our most sensitive sensor could detect ribonucleases with a limit of detection of approximately 100 pM, which compares favorably with the detection limits of many homogeneous assays that use aptamers in combination with more expensive signaling elements such as fluorophores, dyes, quantum dots, or gold nanoparticles (47-50). Although selection was not used to optimize this sensor, it could in principle be utilized to improve its performance or to develop sensors that detect other target molecules (9,51-53). After verifying that this sensor worked, it was used to identify inhibitors of the Nsp15 ribonuclease from SARS-CoV-2 in a high-throughput screen. Our assay could readily distinguish between reactions that contained active ribonuclease and those that did not (Z-factor = 0.91). It did not produce false positives from compounds that inhibit Aurora rather than Nsp15, although we note that the frequency of such false positives will depend on the properties of the library. It also yielded results that were similar to those obtained when the library was screened in parallel using a more standard FRET assay (54,55). While our assay is comparable to those which use FRET in terms of both simplicity and workflow, reagent costs are severalfold lower and signal to noise ratios are considerably higher. We anticipate that further optimization of Aurora using methods such as recombination (56) and secondary structure libraries (57) in combination with single-step (58) and conventional selections will continue to decrease costs and increase signal to noise ratios, which will in turn increase the utility of Aurora for applications such as high-throughput screening and diagnostics. In a more general sense, our work highlights the potential of functional DNA molecules as widely applicable fluorescent tools.
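The two assay-quality statistics quoted above follow standard definitions: the Z-factor (often written Z′) compares control separation with control noise, and the detection threshold is the mean background plus three standard deviations. A minimal sketch with hypothetical plate-reader values (not data from this study):

```python
import numpy as np

# Hypothetical plate-reader fluorescence values (arbitrary units).
pos = np.array([9800.0, 10150.0, 9920.0, 10040.0, 9870.0])  # active ribonuclease
neg = np.array([510.0, 480.0, 530.0, 495.0, 505.0])         # no ribonuclease

# Z' = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|
# Values above ~0.5 are conventionally taken to indicate an excellent assay.
z_prime = 1 - 3 * (pos.std(ddof=1) + neg.std(ddof=1)) / abs(pos.mean() - neg.mean())
print(f"Z' = {z_prime:.2f}")

# Detection threshold used for the limit of detection: mean background + 3 SD.
threshold = neg.mean() + 3 * neg.std(ddof=1)
print(f"signal counts as detection above {threshold:.0f} a.u.")
```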
Figure 1. Identification of deoxyribozymes that generate fluorescence. (A) Secondary structure of Supernova, a chemiluminescent deoxyribozyme previously isolated in our group. (B) Chemical structure of CDP-Star, the substrate used by Supernova. (C) Chemical structure of 4-MUP, the substrate used in this study. (D) Workflow of a previous selection (in which deoxyribozymes that react with the original CDP-Star substrate were isolated from a library of variants of Supernova) and the selection performed here (in which deoxyribozymes that react with the substrate 4-MUP were isolated from the same library). (E) Artificial evolution protocol to identify deoxyribozymes that phosphorylate themselves in the presence of 4-MUP. (F) Progress of the selection for deoxyribozymes that can react with 4-MUP. (G) Distribution of mutational distances of sequences in a library of Supernova variants relative to Supernova itself after selection for the ability to react with CDP-Star (blue) or 4-MUP (orange). (H) The substrate specificities of Supernova and Aurora are orthogonal. Supernova (labeled 'SN') and Aurora 1 (labeled 'Aur') were each incubated separately with CDP-Star (left) or 4-MUP (right). Time points were analyzed using the ligation assay. See Supplementary Table S1 for the sequence of Aurora 1.

Figure 2. Sequence requirements and secondary structure of the fluorescent deoxyribozyme Aurora. (A) Evolutionary lineage of Aurora 1 (the minimized catalytic core of the initial isolate of Aurora) and Aurora 2 (an optimized variant isolated from a randomly mutagenized library based on full-length Aurora 1). (B) Sequence logo generated from analysis of variants of Aurora using high-throughput sequencing. (C) Proton NMR spectrum of the 17C 40G variant of Aurora (Supplementary Table S1) showing chemical shifts consistent with base pairs. (D) Double-mutant cycle showing that positions 17 and 40 interact in a way that is consistent with base pairing. (E) Secondary structure model of Aurora 1. Base pairs are shown using solid black lines, interactions supported by mutual information analysis are shown in maroon, and the degree of conservation at each position is indicated by blue shading. (F) Secondary structure model of Aurora 2. Positions that differ from Aurora 1 are shown in orange. (G) Catalytic activity of Aurora 1 and 2 over a range of 4-MUP concentrations as measured using a ligation assay.

Figure 3.
Aurora generates a robust fluorescent signal. (A) Workflow of continuous and discontinuous assays using Aurora. (B) Example of a continuous assay in which the reaction is continually monitored in a plate reader. (C) Example of a discontinuous assay in which time points are quenched with base before measuring fluorescence. (D) Design of a ribonuclease sensor based on Aurora that is activated by RNA cleavage. (E) Amplification of the single-turnover signal generated by Aurora in the presence of a ribonuclease that promotes a multiple-turnover reaction. (F) An Aurora sensor with the architecture shown in panel D is activated by RNase A, but not when a ribonuclease inhibitor is present. In contrast, the catalytic activity of Aurora itself is not affected by either RNase A or this ribonuclease inhibitor. (G) The Aurora sensor detects ribonuclease A with a limit of detection of 100 pM. Reactions were incubated for 4 h in the presence of the indicated concentration of RNase A, and after quenching with base, fluorescence was measured using a plate reader. The green box indicates the average plus or minus three standard deviations of the background signal to noise ratio measured in the absence of RNase A. See Supplementary Figure S23 for more information about the detection limit of the sensor. Experiments shown in panels B and C were performed using Aurora 2, while those in panels F and G were performed using the sensor shown in Supplementary Figure S22.

Figure 4. Identification of small-molecule inhibitors of the SARS-CoV-2 ribonuclease Nsp15 using a fluorescent Aurora sensor. (A) Workflow of the high-throughput screen to identify Nsp15 inhibitors. (B) Effect of each compound in the 1000-member library on the fluorescence of the Aurora sensor. Potential inhibitors are shown in orange. (C) Identification of inhibitors and false positives. The x-axis of the graph shows the fluorescent signal generated by the Aurora sensor in the presence of Nsp15 and different compounds in the library, while the y-axis shows the fluorescent signal generated by Aurora itself in the presence of Nsp15 and the same compounds. Points with high fluorescence values on both the x-axis and the y-axis (shown in black) correspond to wells containing compounds that inhibit neither Nsp15 nor Aurora. Points with low fluorescence values on the x-axis and high fluorescence values on the y-axis (shown in orange) correspond to wells containing compounds that inhibit Nsp15 but not Aurora. (D) Workflow of a FRET assay for ribonuclease activity. (E) Comparison of the results of a high-throughput screen for Nsp15 inhibitors using the Aurora sensor (x-axis) with a screen of the same library using a FRET assay (y-axis). (F) Example of an Nsp15 inhibitor identified in the screen. This compound inhibits Nsp15 with an IC50 value of 12 μM when measured using the Aurora sensor and 11 μM when measured using the FRET assay.

Table 1.
Fluorescent properties of 4-MU, the fluorescent product generated by Aurora. Values were measured at pH 10 (27,28).

One goal was to characterize the secondary structure, sequence requirements, and minimized catalytic core of Aurora. Another was to identify variants with improved catalytic efficiencies. To address both of these goals, we synthesized a second library by randomly mutating the sequence of full-length Aurora 1 at a rate of 21% per position (29-31). At this rate of mutagenesis, all possible variants within about four mutations […]

[…] revealed four pairs of covarying positions in the deoxyribozyme (positions 11 and 47, 12 and 46, 17 and 40, and 26 and 30) with mutational patterns consistent […] guanine. The hairpin is capped by a three-nucleotide loop formed by positions 27, 28 and 29. One of the most highly enriched mutations identified in the selection (29G to A) occurred in this loop (Supplementary Figure S3). In addition, several correlations identified by mutual information analysis (including 22-29, 22-27, 23-26, 26-29 and 22-30; Figure 2E-F and Supplementary Figure S6) suggest that this loop interacts with the conserved asymmetric bulge formed by positions 20-23 and 33-37 rather than extending into solution. Covariation analysis suggests that nucleotides at the 5′ end of Aurora (which include the phosphorylation site) do not form canonical base pairs with one another or with the rest of the deoxyribozyme. However, a network of correlations among positions 4, 6 and 45 (including an AT to GA covariation between positions 4 and 45, which is one of the strongest in the dataset) is consistent with a tertiary interaction that anchors the 5′ end of Aurora to the rest of the catalytic core.
12,013.4
2024-06-11T00:00:00.000
[ "Chemistry", "Biology" ]
Correction to: Cycloartane triterpenoid (23R, 24E)-23-acetoxymangiferonic acid inhibited proliferation and migration in B16-F10 melanoma via MITF downregulation caused by inhibition of both β-catenin and c-Raf-MEK1-ERK signaling axis.

The article "Cycloartane triterpenoid (23R, 24E)-23-acetoxymangiferonic acid inhibited proliferation and migration in B16-F10 melanoma via MITF downregulation caused by inhibition of both β-catenin and c-Raf-MEK1-ERK signaling axis" […]

Introduction

Melanocytes are cells that produce melanin pigments in the basal layer under the epidermis. Melanin formation starts from the hydroxylation of L-tyrosine to L-DOPA, which is the rate-limiting step in melanin synthesis and is catalyzed by tyrosinase (TYR) located in melanosomes. Melanin pigments prevent cell damage from ultraviolet rays by covering the nucleus. Since excessive accumulation of melanin causes blemishes and freckles, many studies have sought components with whitening effects, such as inhibition of melanin generation and/or promotion of melanin decomposition. During melanin synthesis in melanocytes stimulated, for example, by ultraviolet rays or friction, microphthalmia-associated transcription factor (MITF) acts as the master transcription factor, promoting gene expression of TYR, tyrosinase-related protein-1 (TRP-1), and TRP-2. Mature melanosomes are transported to the keratinocytes by dendrites and are moved to the skin surface with the differentiation of the keratinocytes [1]. The melanocytes produce melanin to protect somatic cells from ultraviolet rays, but these cells may be transformed into malignant melanoma by oncogenesis. Melanoma is a type of skin cancer whose worldwide incidence has steadily increased over the last several decades. Annual incidence has risen as rapidly as 4-6% in many fair-skinned populations that predominate in regions such as North America, Northern Europe, Australia, and New Zealand [2]. Melanoma is the most malignant skin cancer, with a high fatality rate, since advanced disease is resistant to various treatments. It has been reported that MITF is overexpressed or mutated in melanoma [3]. Recently, MITF-M, one of the isoforms of MITF, was reported to be specifically expressed in melanoma cells [4]. Furthermore, forced expression of MITF caused tumorigenesis of immortalized melanocytes, and apoptosis of malignant melanoma was induced by functional inhibition of MITF [5]. These reports indicate that MITF is a pathogenic factor in melanoma and a potential target molecule for therapy. We recently reported that (23R, 24E)-23-acetoxymangiferonic acid [(23R, 24E)-23-acetoxy-3-oxocycloart-24-en-26-oic acid] (23R-AMA), a cycloartane triterpenoid isolated from a methanol extract of Garcinia sp. bark, has inhibitory activity against melanin production via inhibition of TYR expression in the B16-F10 melanoma cell line [6]. Plants of the genus Garcinia are evergreen trees of the Clusiaceae family. Xanthones, such as α-mangostin and gambogic acid, and (−)-hydroxycitric acid have been isolated from plants of the genus Garcinia, and these compounds have been reported to have anti-inflammatory, antioxidant, and antitumor activity [7-10]. Cycloartane triterpenoids, such as euphonerin D [11] isolated from Euphorbia neriifolia, and combretic acid B and combretanone G [12] isolated from Combretum quadrangulare, have been reported to show antitumor effects through apoptosis induction by increasing DR5 promoter activity.
In addition, it has been reported that seven kinds of cycloartane-type triterpenoids, including cycloartenol, isolated from Amberboa ramosa have TYR inhibitory activity [13]. Thus, numerous compounds having a cycloartane skeleton with various bioactivities have been reported. However, no detailed mechanism of action has been investigated for cycloartane-type triterpenoids isolated from plants of the genus Garcinia. In this study, we investigated the detailed mode of action of the 23R-AMA-induced inhibitory effects on cell proliferation and migration in B16-F10 melanoma, and found that these activities were caused by inhibitory regulation of both MITF expression and its transcriptional activity, elicited by inhibition of β-catenin and the c-Raf-MEK1-ERK signaling axis including FAK and c-Src.

Materials

The barks of Garcinia sp. were collected in Johor, Malaysia in August 2003. The botanical identification was made by Mr. Teo Leong Eng, Faculty of Science, University of Malaya. Voucher specimens (Herbarium No. 5044) are deposited in the Herbarium of the Chemistry Department, University of Malaya. Details of the structure elucidation of 23R-AMA extracted from this plant were described in a recent report [6].

Evaluation of cytotoxic effects by lactate dehydrogenase (LDH) activity and cell proliferation

B16-F10 cells were seeded in a 96-well plate at a density of 1.0 × 10⁴ cells/well, and various concentrations of 23R-AMA (3.13-50 μg/mL; 5.8-93.4 μM) were added. After cultivation for 24 h, LDH activity of the culture medium was evaluated with a Cytotoxicity LDH Assay Kit-WST (Dojindo, Kumamoto) according to the instruction manual. B16-F10 cells cultured under the same conditions as in the LDH assay were collected by trypsin treatment and counted with a hemocytometer.

Migration assay

B16-F10 cells were seeded in a chemotaxis chamber (pore size 3 μm; BD Biosciences, Franklin Lakes, NJ, USA) set on 24-well plates. After 12 h cultivation with/without 23R-AMA (12.5 μg/mL; 23.4 μM), basic FGF (bFGF; Wako Pure Chemical Industries, Ltd.) was added to the lower compartment, and cells were cultivated for 24 h. The cells that moved through to the underside of the membrane were washed with ice-cold PBS, fixed with 10% formalin, and stained with 3% Giemsa solution (MERCK KGaA, Darmstadt, Germany). For quantification of migrated cells, the cells remaining on the membrane of the upper layer were all removed with a cotton swab, and an MTT assay was performed on the migrating cells remaining under the membrane. Formazan produced by adding MTT solution to the lower layer was dissolved with DMSO and its absorbance at 570 nm was measured [14].

Flow cytometry analysis

Cells subjected to various treatments were washed with ice-cold PBS, then washed with assay buffer [10 mM Hepes (pH 7.4), 137 mM NaCl, 1 mg/mL glucose, 0.5 mM EDTA, 0.001% NaN₃, 0.3% BSA] and collected. After cells were stained with various antibodies, the fluorescence of FITC and PE on cells was detected with a flow cytometer (FACSVerse; BD Biosciences). When a biotinylated antibody was used, cells were stained with streptavidin APC-Cy™7 (BD Pharmingen, Franklin Lakes, NJ, USA) after primary antibody binding, and its fluorescence was analyzed.

Reverse transcription-polymerase chain reaction (RT-PCR)

Total RNA (2 μg) extracted from cells in culture was used as a template for cDNA synthesis. cDNA was prepared by use of ReverTra Ace (TOYOBO Co., Ltd., Osaka).
Primers were synthesized on the basis of the reported mouse mRNA sequences for GAPDH, MITF, TYR, c-Met, TCF1, integrin αV, integrin α4, integrin α5, integrin β1, and integrin β3.

23R-AMA-induced inhibition of melanin production

The addition of α-MSH and IBMX promoted melanin production over 3 days of cultivation. Addition of 23R-AMA (Fig. 1a) completely suppressed melanin accumulation in B16-F10 cells, with an accompanying morphological change (Fig. 1b). 23R-AMA strongly reduced melanin content, to 15% at 12.5 μg/mL and to 12% at 25 μg/mL, as compared with the induced group stimulated by α-MSH and IBMX (Fig. 1c, 100%). Melanin content in the 23R-AMA-treated group was lower than in the non-treated group without α-MSH and IBMX. The positive control arbutin (750 μM), a widely used whitening agent, suppressed melanin content to 43% (Fig. 1c). The inhibition by 23R-AMA shown in Fig. 1b, c was also observed when melanin was induced by IBMX alone. Therefore, only IBMX was used as the melanin inducer in the following experiments to simplify the analyses. The protein expression of TYR and MITF, a key enzyme and a transcription factor, respectively, in B16-F10 cells treated with 23R-AMA was examined by western blotting (WB). Protein expression of TYR and MITF was increased by 24 h after IBMX addition, but was completely suppressed by the addition of 23R-AMA (Fig. 1d). Furthermore, the 23R-AMA-induced reduction of TYR and MITF protein expression was also observed in samples without IBMX. Under these experimental conditions, although 23R-AMA-induced inhibition of cell proliferation with morphological change (Fig. 1b, c) was obvious, noticeable cytotoxic effects were not observed (Fig. 1b).

Inhibition of cell proliferation by 23R-AMA and influence on apoptosis- and autophagy-related protein expression

23R-AMA suppressed the production of melanin and reduced the total amount of protein (Fig. 1c). The LDH assay was performed to evaluate the cytotoxic action of 23R-AMA. As a result, high-dose 23R-AMA (50 μg/mL) increased LDH activity to the same degree as doxorubicin (positive control, 1 μg/mL), indicating cytotoxic effects. An intermediate concentration (25 μg/mL) of 23R-AMA also slightly elevated LDH activity, but concentrations from 3.13 to 12.5 μg/mL did not. 23R-AMA (12.5 μg/mL) induced the same morphological change of B16-F10 cells as in Fig. 1b (data not shown), and the cell number per well was suppressed to 60% compared with the non-treated control group (insert in Fig. 2a). Furthermore, the expression of proteins related to apoptosis and autophagy was investigated by WB. No change in the expression of the anti-apoptotic proteins Bcl-2, Bcl-XL, and Mcl-1 was observed in 23R-AMA-treated cells. The expression of Bax, Bid, and Bad, which are factors promoting apoptosis, was also not altered by the addition of 23R-AMA (Fig. 2b). Furthermore, 23R-AMA did not influence LC3-II expression or mTOR phosphorylation, which are autophagy indicators (Fig. 2c). Therefore, the total protein reduction elicited by 23R-AMA was not a consequence of cell death but was related to its growth-inhibitory activity.

Suppression of cell migration in B16-F10 cells by 23R-AMA

Considering the possibility that the morphological change of B16-F10 cells induced by 23R-AMA affects adhesion, invasion, and metastasis of cancer cells, the effect of 23R-AMA on cell migration was investigated. bFGF-dependent cell migration was inhibited by 23R-AMA according to microscopic observations after Giemsa staining (Fig. 3a).
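The MTT-based readout described in the Methods reduces to a simple blank-corrected percent-of-control calculation. A minimal sketch with hypothetical A570 readings (illustrative values only, not data from this study):

```python
import numpy as np

# Hypothetical A570 absorbance readings from the MTT migration assay.
blank = np.array([0.051, 0.048, 0.050])      # medium only
control = np.array([0.412, 0.398, 0.405])    # bFGF-induced, untreated
treated = np.array([0.148, 0.139, 0.152])    # bFGF-induced + 23R-AMA

def percent_of_control(sample, control, blank):
    """Blank-corrected signal expressed as a percentage of the induced control."""
    s = sample.mean() - blank.mean()
    c = control.mean() - blank.mean()
    return 100.0 * s / c

print(f"migration = {percent_of_control(treated, control, blank):.0f}% of control")
```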
In the MTT assay used to quantitatively measure the migrating cells, 23R-AMA (12.5 μg/mL) significantly suppressed bFGF-induced cell migration to a level comparable to that of the non-induced control group (Fig. 3b).

23R-AMA-induced phosphorylation of β-catenin and its inhibition of intranuclear accumulation

The expression of MITF protein was suppressed by the addition of 23R-AMA (Fig. 1d). Since the expression of MITF is known to be regulated by CREB and β-catenin [15,16], CREB protein expression and its phosphorylation after 23R-AMA treatment were analyzed by WB. As a result, no change was evident in the phosphorylation of CREB (data not shown). The activity of β-catenin as a transcription factor is adjusted according to the site that is phosphorylated. Phosphorylation of β-catenin at Ser45/Thr41 at 24 h after 23R-AMA addition was enhanced compared with the control in the presence or absence of IBMX, while total β-catenin content was not altered. On the other hand, phosphorylation at Ser33/Ser37/Thr41, Ser552, and Ser675 was not affected by 23R-AMA treatment (Fig. 4a). Since phosphorylation of β-catenin at Ser45/Thr41 was reported to suppress its nuclear accumulation [16], β-catenin content in nuclear protein extracts was studied by WB under the same conditions. As a result, the content of β-catenin in the nucleus was clearly suppressed at 24 h after the addition of 23R-AMA (Fig. 4b; 12.5 and 25 μg/mL).

Inhibition of β-catenin downstream gene expression by 23R-AMA treatments

Since the suppression of MITF expression was suggested to be caused by the 23R-AMA-induced suppression of β-catenin accumulation in the nucleus, changes in downstream genes regulated by β-catenin [17] were examined by RT-PCR. As a result, mRNA expression of MITF, TCF1, and c-Met was reduced from 3 to 12 h after the addition of 23R-AMA (Fig. 5).

Inhibition of phosphorylation of c-Raf, MEK, and ERK by 23R-AMA treatments

Phosphorylation of the factors that regulate the transcriptional activity of MITF was examined after 23R-AMA treatment by WB. Activated ERK is reported to phosphorylate MITF and stimulate its activity [18]. As a result, 23R-AMA suppressed phosphorylation of c-Raf, MEK1/2, and ERK1/2 (Fig. 6a). The same analysis was carried out for B-Raf, a c-Raf isoform reported to be frequently activated in malignant melanoma [19,20], but 23R-AMA did not affect B-Raf phosphorylation (Fig. 6b).

Inhibition of phosphorylation of c-Src/Fyn and FAK by 23R-AMA

c-Src/Fyn and FAK, which regulate the phosphorylation of c-Raf via Ras [21,22], were examined as upstream factors of the c-Raf-MEK1-ERK-MITF signaling axis. Consequently, 23R-AMA continuously suppressed c-Src/Fyn and FAK phosphorylation at 3-24 h after addition (Fig. 7a). Subsequently, in order to confirm whether c-Src/Fyn phosphorylates c-Raf in B16-F10 cells, phosphorylation of c-Raf was examined using PP2 as a c-Src family inhibitor. As a result, phosphorylation of c-Raf was attenuated by inhibition of the upstream Src family kinases with PP2 pretreatment. Therefore, it was suggested that the phosphorylation of c-Raf is regulated by Src family kinases in B16-F10 cells (Fig. 7b).

Influence of 23R-AMA on integrin expression

FAK phosphorylation is mainly controlled by integrins, which are known as cell adhesion factors. Thus, the effects of 23R-AMA on integrin expression were investigated by RT-PCR and flow cytometry. Integrins αV, α4, α5, β1, and β3, which are reported to be expressed in B16-F10 cells, were examined [23,24].
After 23R-AMA treatment, the gene expression of integrin αV was reduced at 9-24 h, but that of β3 was increased at 3-24 h (Fig. 8). Significant changes in the expression of integrins α4, α5, and β1 were not observed. From these results, since 23R-AMA was expected to have some effect on the expression of integrins, the expression of integrins on the cell surface was examined by flow cytometry. However, contrary to the results for gene expression, the major integrins expressed on the cell surface of B16-F10 cells were not influenced by 23R-AMA (data not shown).

Discussion

We isolated 23R-AMA, a cycloartane triterpenoid, from a methanol extract of Garcinia sp. bark by activity-guided separation [6]. 23R-AMA completely suppressed α-MSH/IBMX-induced intracellular melanin accumulation in the B16-F10 melanoma cell line at concentrations over 12.5 μg/mL (Fig. 1b); 23R-AMA was more potent than arbutin, used as a positive control (Fig. 1c). In examining the mechanism of action, protein expression of TYR and MITF, which play a central role in melanin production, was suppressed by 23R-AMA treatment (Fig. 1d). Inhibition of the expression of both proteins was also observed in B16-F10 cells without IBMX. Since many whitening cosmetics, such as arbutin, target TYR, 23R-AMA was thought to be a possible candidate for a new whitening agent. However, 23R-AMA also induced morphological change and a reduction of total protein content in B16-F10 cells (Fig. 1b, c), features that make it unsuitable as a cosmetic. Since recent studies reported that MITF is a crucial target in melanoma therapy [3-5], this study focused on the antitumor activity of 23R-AMA. Many cytotoxic effects of current anticancer drugs have been explained by the induction of apoptosis and cell death accompanied by autophagy. However, 23R-AMA-induced growth inhibition appeared different from the apoptotic morphology generally seen in cell death. Therefore, apoptosis- and autophagy-related factors were investigated after 23R-AMA treatment by WB. No significant changes were observed in the apoptosis-related Bcl-2 family proteins or in indicators of autophagy such as LC3-II and mTOR. These results suggested that the total protein reduction caused by 23R-AMA is related to inhibition of proliferation and is not caused by induction of apoptotic cell death or autophagy. Many compounds with a cycloartane skeleton have been reported to show cytotoxic action attributable to the induction of apoptosis [25-27]. Previous studies have also reported cycloartane triterpenoids with inhibitory action on the proliferation of cancer cells through effects on cell adhesion and migration [26,27]. Therefore, the effect of 23R-AMA on cell migration, which is involved in the invasion and metastasis of cancer cells including melanoma, was investigated. Consequently, 23R-AMA significantly suppressed bFGF-induced cell migration compared with the control group (Fig. 3). Although MITF regulates melanin production as a transcription factor, it is also known to control the cell cycle, proliferation, survival, and migration [28,29]. MITF activity is regulated by phosphorylation, and its expression is mainly controlled by CREB and β-catenin [15]. 23R-AMA did not affect the phosphorylation of CREB. However, 23R-AMA increased phosphorylation
of β-catenin at Ser45/Thr41 (Fig. 4a) and downregulated the accumulation of β-catenin in the nucleus (Fig. 4b). MITF, c-Met, and TCF1 are transcribed as downstream genes of β-catenin [16,17], and their expression was inhibited by 23R-AMA (Fig. 5). TCF1 binds to β-catenin and acts as a transcription factor, controlling not only target genes including MITF but also the expression of TCF1 itself [15]. Since 23R-AMA suppressed gene expression of TCF1 from 3 h after addition, these results correspond to the inhibition of β-catenin accumulation in the nucleus shown in Fig. 4b. We demonstrated that 23R-AMA influenced β-catenin and TCF, which regulate the expression of MITF. However, the regulation of MITF activity by phosphorylation is mainly performed by the MAPK/ERK signaling pathway [18]. The MAPK/ERK signaling pathway is functionally enhanced in various cancers including melanoma, and many melanoma therapeutic drugs target this pathway. Therefore, the influence of 23R-AMA on the phosphorylation of c-Raf, MEK1/2, and ERK1/2, which play a central role in the MAPK/ERK signaling pathway, was examined by WB. The results showed that 23R-AMA suppressed phosphorylation of c-Raf, MEK1/2, and ERK1/2 (Fig. 6a). On the other hand, B-Raf, an isoform of c-Raf, was examined under the same conditions as c-Raf, but an effect on B-Raf phosphorylation was not observed (Fig. 6b). From these results, it was suggested that 23R-AMA inhibits the signaling axis from c-Raf to ERK and thereby suppresses MITF activation. Phosphorylation of c-Raf is known to be regulated by Ras, c-Src, and other factors [30]. Subsequent analysis revealed that 23R-AMA suppressed phosphorylation of c-Src/Fyn (Fig. 7a). In addition, 23R-AMA suppressed phosphorylation of focal adhesion kinase (FAK), which controls the phosphorylation of c-Src/Fyn (Fig. 7a). Pretreatment with the Src family kinase inhibitor PP2, used to elucidate the relationship between c-Raf and Src family kinases, inhibited not only phosphorylation of c-Src/Fyn but also that of c-Raf (Fig. 7b). This result suggested that c-Src/Fyn is located upstream of c-Raf and was speculated to control c-Raf phosphorylation via Ras in B16-F10 cells. Taken together, 23R-AMA was thought to suppress the c-Raf-MEK1-ERK signaling axis via inhibition of the phosphorylation of FAK-c-Src/Fyn. In about 40-60% of melanomas, a mutation in B-Raf has been reported, 90% of which is the V600E mutation (valine (V) substituted with glutamic acid (E) at amino acid 600) [20,31].

Fig. 7 Inhibition of phosphorylation of c-Src/Fyn and FAK by 23R-AMA. B16-F10 cells were seeded in a 60-mm dish at 6.0 × 10⁵ cells/dish, and 23R-AMA (25 μg/mL) was added. a Cells were collected over 24 h after sample addition, and WB was performed using specific antibodies against p-FAK, p-c-Src/Fyn, or c-Src. b B16-F10 cells were seeded in a 60-mm dish at 6.0 × 10⁵ cells/dish, and 23R-AMA and PP2 (25 μM), a Src family kinase inhibitor, were added. Cells were collected over 24 h after sample addition and WB was performed using specific antibodies against p-c-Src/Fyn and p-c-Raf.

Fig. 8 Influence of 23R-AMA on integrin expression. B16-F10 cells were seeded in a 60-mm dish at 6.0 × 10⁵ cells/dish with 23R-AMA (12.5 μg/mL). Total RNA was collected over time from sample addition and cDNAs were prepared by RT reaction. Using these as templates, RT-PCR was performed with specific primers for the various integrins and the mRNA expression level was examined semi-quantitatively. The numbers of PCR cycles were 28 for integrins αV, α4, α5, and β1, 32 for integrin β3, and 22 for GAPDH.
Currently, therapeutic agents targeting the V600E mutation have been studied and developed. Although their antitumor effect is high at initial administration, drug resistance is reported to occur [32]. Various research on resistance formation has been conducted, but the detailed mechanism has yet to be elucidated. The most promising theory about the mechanism of reactivation is transactivation of c-Raf [32]. In melanoma cells, B-Raf is responsible for much of the signaling of the MAPK/ERK pathway, and inhibition of B-Raf is compensated for by activation of c-Raf and of downstream MEK and ERK [21,30]. Considering such reports, the effect of 23R-AMA, which suppresses only the activity of c-Raf without affecting B-Raf activity, seems to be interesting and significant. FAK and c-Src/Fyn are defined as mediators of the MAPK/ERK signaling pathway [33,34], but they are also deeply involved in cell adhesion and migration. In particular, FAK plays a central role in signal transduction mediated by integrins [34]. We preliminarily examined the gene expression of integrins αV, α4, α5, β1, and β3, which are reported to be expressed in B16-F10 cells, and confirmed their expression (Fig. 8). The expression of integrins α4, α5, and β1 was not altered by the addition of 23R-AMA. The expression of integrin αV was downregulated after the addition of 23R-AMA, whereas that of β3 was upregulated (Fig. 8). Integrins form heterodimers on the cell surface, and it has been reported that expression of integrin αVβ3, a vitronectin receptor, is enhanced in melanoma and contributes largely to cell adhesion, migration, and invasion [35]. However, contrary to the results in Fig. 8, flow cytometric analysis revealed that 23R-AMA did not affect the expression of the major integrins on the cell surface of B16-F10 cells, including αV and β3 (data not shown). The reasons for this discrepancy are unknown at present and should be elucidated in the future. When cells bind to the extracellular matrix via integrins, phosphorylation of proteins involved in cell adhesion, such as FAK, paxillin, and talin, is observed, and Src family kinases are subsequently phosphorylated. These phosphorylation events transmit signals to the formation of cell adhesion plaques and to the Rho family, the main regulator of cell morphology, thereby regulating cell proliferation and movement together with cytoskeletal reconstruction. Previous studies reported that proteins associated with cell adhesion were not phosphorylated in cells lacking FAK or Src family kinases, and that cell spreading and motility were markedly reduced [36]. Saracatinib, an inhibitor of c-Src used clinically, has been reported to inhibit cell migration and invasion of melanoma cells without inhibitory effects on proliferation [37]. These reports suggest that the 23R-AMA-mediated suppression of bFGF-dependent cell migration observed in Fig. 3 might be a consequence of its inhibitory effects on FAK and c-Src phosphorylation. In conclusion, it is inferred that 23R-AMA inhibited the growth and migration of B16-F10 melanoma by regulating both MITF expression and its activity via regulation of both β-catenin accumulation in the nucleus and the signaling pathway from FAK to ERK (Fig. 9). What should be emphasized in this study is that there has been no report that compounds with a cycloartane skeleton regulate β-catenin and the c-Raf-MEK1-ERK signaling axis including FAK and c-Src. Although we could not identify the target molecule of 23R-AMA in this study, if further examination reveals […]
Fig. 9 Schematic diagram of the action of 23R-AMA in B16-F10 melanoma. Proteins/genes painted black are factors whose downregulation was observed upon the addition of 23R-AMA. FZD, frizzled; MC1R, melanocortin 1 receptor; RTKs, receptor tyrosine kinases; LEF-1, lymphoid enhancer-binding factor 1
5,269.6
2018-08-06T00:00:00.000
[ "Chemistry" ]
gen3sis: the general engine for eco-evolutionary simulations on the origins of biodiversity

Understanding the origins of biodiversity has been an aspiration since the days of early naturalists. The immense complexity of ecological, evolutionary and spatial processes, however, has made this goal elusive to this day. Computer models serve progress in many scientific fields, but in the fields of macroecology and macroevolution, eco-evolutionary models are comparatively less developed. We present a general, spatially-explicit, eco-evolutionary engine with a modular implementation that enables the modelling of multiple macroecological and macroevolutionary processes and feedbacks across representative spatio-temporally dynamic landscapes. Modelled processes can include environmental filtering, biotic interactions, dispersal, speciation and evolution of ecological traits. Commonly observed biodiversity patterns, such as α, β and γ diversity, species ranges, ecological traits and phylogenies, emerge as simulations proceed. As a case study, we examined alternative hypotheses expected to have shaped the latitudinal diversity gradient (LDG) during the Earth's Cenozoic era. We found that a carrying capacity linked with energy was the only model variant that could simultaneously produce a realistic LDG, species range size frequencies, and phylogenetic tree balance. The model engine is open source and available as an R-package, enabling future exploration of various landscapes and biological processes, while outputs can be linked with a variety of empirical biodiversity patterns. This work represents a step towards a numeric and mechanistic understanding of the physical and biological processes that shape Earth's biodiversity.

Introduction

Divergence between geographically isolated clusters of populations increases over time, while that between (re-)connected clusters decreases down to zero; speciation happens when the divergence between two clusters is above the speciation threshold, but can also consider trait differences. […] geographic occupancy (species range), and determines when geographic isolation between population clusters is sufficient to trigger a lineage-splitting event of cladogenesis. A species' range can be segregated into spatially discontinuous geographic clusters of sites and is determined by multiple other processes. The clustering of occupied sites is based on the species' dispersal capacity and the landscape connection costs. Over time, disconnected clusters gradually accumulate incompatibility (divergence), analogous to genetic differentiation. Disconnected species population clusters that maintain geographic isolation for a prolonged period of time will result in different species once the differentiation threshold Ϟ is reached. […] The computer model delivers a wide range of outputs that can be compared with empirical data (Figure 1, Table 2). Gen3sis is therefore suitable for analysing the links between interacting processes and their multidimensional emergent patterns. By recording the time and origin of all speciation events, as well as trait distributions and abundance throughout evolutionary history, the simulation model records the information required to track the dynamics of diversity and the shaping of phylogenetic relationships. The most common patterns observed and studied by ecologists and evolutionary biologists, including species ranges, abundances and richness, are emergent properties of the modelled processes (Table 2).
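The divergence bookkeeping described above is simple enough to sketch directly. The fragment below is a minimal Python re-expression of that rule (the actual engine is an R package, so this is illustrative only); the isolation history and the threshold value of 12 are hypothetical inputs.

```python
THRESHOLD = 12  # differentiation threshold Ϟ (here, 12 time steps of isolation)

def step_divergence(div, isolated):
    """Advance the pairwise divergence of two population clusters by one step."""
    if isolated:
        return div + 1          # isolated clusters accumulate divergence
    return max(div - 1, 0)      # (re-)connected clusters converge back to zero

# Hypothetical isolation history: isolated, briefly reconnected, isolated again.
history = [True] * 8 + [False] * 3 + [True] * 10
div, speciation_events = 0, []
for t, isolated in enumerate(history):
    div = step_divergence(div, isolated)
    if div >= THRESHOLD:
        speciation_events.append(t)  # lineage-splitting (cladogenesis) event
        div = 0                      # daughter lineages start undiverged
print(speciation_events)             # -> [17]
```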
All internal objects are accessible to the observer function, which is configurable and executed during simulation runs. This provides direct simulation outputs in a format ready to be stored, analysed and compared with empirical data. Given the flexibility of gen3sis, it is possible to explore not only parameter ranges guided by prior knowledge available for a given taxonomic group, but also variations in landscape scenarios and mechanisms (Figure 3). […] We implemented one model for each of these hypotheses and simulated the spread, speciation, dispersal and extinction of terrestrial organisms over the Cenozoic. We evaluated whether the patterns emerging from these simulated mechanisms correspond to the empirical LDG, phylogenetic tree imbalance and range size frequencies computed from data of major tetrapod groups, including mammals, birds, amphibians and reptiles (Figure 3). […] (Table S1), and model evaluation and testing based on multiple patterns, including the LDG, range size distributions and phylogenetic balance. Selection criteria were based on empirical data from major tetrapod groups, i.e. mammals, birds, amphibians and reptiles (Table 3).

The Cenozoic (i.e. 65 Ma until the present) is considered key for the diversification of the current biota [113] and is the period during which the modern LDG is expected to have been formed [114]. In the Cenozoic, the continents assumed their modern geographic configuration [24]. Climatically, this period was characterized by a general cooling, especially in the Miocene, and ended with the climatic oscillations of the Quaternary [115]. We compiled two global paleoenvironmental landscapes (i.e. L1 and L2) for the Cenozoic at 1° and ~170 kyr of spatial and temporal resolution, respectively (Note S1, Animations S1 and S2). To account for the effect of uncertainties in paleo-reconstructions on the emerging large-scale biodiversity patterns, we used two paleo-elevation reconstructions [116,117].

Hypothesis implementation

We implemented three hypotheses explaining the emergence of the LDG as different gen3sis models. The models (i.e. M0, M1 and M2) had distinct speciation and ecological processes (Figure 3, Note S1, Table S1). All simulations were initiated with a single ancestor species spread over the entire terrestrial surface of the Earth at 65 Ma, with the temperature optimum of each population matching local site conditions. Since we focused on terrestrial organisms, aquatic sites were considered uninhabitable and twice as difficult to cross as terrestrial sites. This approximates the different dispersal limitations imposed by aquatic and terrestrial sites. The spherical shape of the Earth was accounted for in distance calculations by using haversine geodesic distances. Species disperse following a Weibull distribution with shape 2 or 5 and a scale of 550, 650, 750 or 850, resulting in most values lying around 500-1500 km, with rare large dispersal events above 2000 km. The evolution function defines the temperature niche optimum to evolve following Brownian motion. Temperature niche optima are homogenized per geographic cluster by an abundance-weighted mean after the ecological processes take place. We explored three rates of niche evolution, with standard deviations equivalent to ±0.1°C, ±0.5°C and ±1°C.
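As a sanity check on the dispersal kernel just described, drawing from a Weibull distribution with one of the stated shape/scale pairs reproduces the quoted behaviour (most distances around 500-1500 km, rare events beyond 2000 km). A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(42)

# Weibull dispersal kernel with one of the parameter pairs explored in the text.
shape, scale = 2.0, 750.0
distances_km = scale * rng.weibull(shape, size=100_000)

print(f"mean distance  = {distances_km.mean():.0f} km")              # ~665 km
print(f"95% quantile   = {np.quantile(distances_km, 0.95):.0f} km")  # ~1300 km
print(f"P(d > 2000 km) = {(distances_km > 2000).mean():.4f}")        # rare events
```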
[…] the species population abundance, which scales with the distance between the population's temperature niche optimum and the site temperature (Note S1). Clusters of populations that accumulated differentiation above Ϟ = 12, 24, 36, 48 and 60 will speciate, corresponding to events occurring after 2, 4, 6, 8 and 10 myr of isolation, respectively. The divergence rate between isolated clusters was kept constant (i.e. +1 for every 170 kyr of isolation). Model M0, assuming time for species accumulation, acted as the baseline model; all mechanisms present in this model were the same for M1 and M2 unless specified otherwise.

M1. In the implementation of the diversification-rate hypothesis, the speciation function applies a temperature-dependent divergence between population clusters [61,62]. Species in warmer environments accumulate divergence between disconnected clusters of populations at a higher rate (Note S1). The rate of differentiation increase was the average site temperature of the species clusters raised to the power of 2, 4 or 6, plus a constant. This created a differentiation increase of +1.5 for isolated clusters of a species at the warmest range and +0.5 at the coldest range for every 170 kyr of isolation (Note S1, Figure S1). Using Ϟ = 12, 24, 36, 48 and 60, this corresponds to a speciation event after 1.3, 2.7, 4.0, 5.3 and 6.7 myr for the warmest species and after 4, 8, 12, 16 and 20 myr for the coldest species, respectively. […] (Table 3). In addition, we explored dispersal distributions and parameters whose realized means and 95% quantiles ranged between less than a single cell, i.e. ~50 km for a landscape at 4°, and more than the Earth's diameter, i.e. ~12'742 km (Figure S2). Trait evolution frequency and intensity ranged from zero to one. We ran a full factorial exploration of these parameter ranges at a coarse resolution of 4° (i.e. M0 n=480, M1 n=720, M2 n=480) and compared these with empirical data. Simulations were considered further only if they: (i) had at least one speciation event; (ii) did not have all species becoming extinct; (iii) had fewer than 50'000 species; and (iv) had fewer than 10'000 species cohabiting the same site at any point in time (Note S1). After the parameter range exploration, we identified realistic parameters and ran a subset at 1° for high-resolution outputs (Figure 4).

Correspondence with empirical data

In order to explore the parameters of all three models and compare their ability to produce the observed biodiversity patterns, we used a pattern-oriented modelling (POM) approach [23,86]. POM compares the predictions of each model and parameter combination with a number of diagnostic patterns from empirical observations. In our case, we used the LDG slope, tree imbalance and range size frequencies as diagnostic patterns (Figure 3, Note S1).

Simulation results and synthesis

We found that model M2 was the best match for all the empirical patterns individually, and the only model able to pass all acceptance criteria (Table 3). Although all three models were able to reproduce the LDG, M2 was superior in explaining the LDG, phylogenetic tree imbalance and species range size frequencies simultaneously (Table 3). Most simulations of model M2 (67%) resulted in a decrease in species richness at higher latitudes, indicating that the LDG emerged systematically under M2 mechanisms (Figure S3, Tables S2, S3 and S4).
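The speciation waiting times quoted for M0 and M1 follow directly from the stated per-step divergence rates (+1.0 uniform for M0; +1.5 for the warmest and +0.5 for the coldest species in M1, per ~170-kyr step). A small sketch verifying those numbers:

```python
STEP_MYR = 0.17  # one simulation time step is ~170 kyr

def waiting_time_myr(threshold, rate_per_step):
    """Duration (myr) of continuous isolation needed to reach the threshold."""
    return threshold / rate_per_step * STEP_MYR

for theta in (12, 24, 36, 48, 60):
    m0 = waiting_time_myr(theta, 1.0)    # M0: uniform divergence rate
    warm = waiting_time_myr(theta, 1.5)  # M1: warmest species
    cold = waiting_time_myr(theta, 0.5)  # M1: coldest species
    print(f"Ϟ={theta:2d}: M0 {m0:4.1f} myr, M1 warm {warm:4.1f} myr, "
          f"M1 cold {cold:4.1f} myr")
# Reproduces the quoted values, e.g. Ϟ=12 -> ~1.3 myr (warm) and ~4 myr (cold).
```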
Increasing the spatial resolution of the simulations (n=12) resulted in an increase in α richness and computation time and a slight decrease of the LDG (Figure S5), which was associated with a disproportionally larger number of sites towards higher latitudes; this also affects population connectivity and therefore speciation rates [137]. We then selected the best matching simulation of M2 in L1 at 1° (n=12) that predicted realistic biodiversity patterns (Figure 4, Animation S4). The emerging LDG (i.e. 4.6% of species loss per latitudinal degree) closely matched empirical curves, with good agreement for mammals (Pearson r=0.6), birds (r=0.57), amphibians (r=0.57) and reptiles (r=0.38) (Note S1, Figure 4C, Figure S6). Finally, we found that the support for M2 over M0 and M1 was consistent across the two alternative landscapes L1 and L2 (Figure S3, Table S4). Our sensitivity analyses of parameters further provided information about the role of dispersal and ecological processes in shaping the LDG (Note S1). […]

Understanding the emergence of biodiversity patterns requires the consideration of multiple biological processes and abiotic forces that potentially underpin them [20,26,35,36]. We have introduced gen3sis, a modular, spatially-explicit, eco-evolutionary simulation engine implemented as an R-package, which offers the possibility to explore ecological and macroevolutionary dynamics over changing landscapes. Gen3sis generates commonly observed diversity patterns and, thanks to its flexibility, enables the testing of a broad range of hypotheses (Table 4). […] Using a case study, we have illustrated the flexibility and utility of gen3sis in modelling multiple eco-evolutionary hypotheses in global paleo-environmental reconstructions (Figures 3 and 4). Our findings suggest that global biodiversity patterns can be modelled realistically by combining paleo-environmental reconstructions with eco-evolutionary processes, thus moving beyond pattern description to pattern reproduction [35]. Nevertheless, in our case study we implemented only a few of the standing LDG hypotheses [20,34]. Multiple macroecological and macroevolutionary hypotheses still have to be tested, including the role of stronger biotic interactions in the tropics than in other regions [142], and compared with more biodiversity patterns [20]. Considering multiple additional biodiversity patterns will allow a more robust selection of models. Apart from the global LDG case study, we propose an additional case study (Note S2, Figure S7) illustrating how gen3sis can be used for regional and theoretical studies, such as investigations of the effect of island ontogeny on the temporal dynamics of biodiversity [39,143]. Further, illustrations associated with the programming code are offered as a vignette of the R-package, which will support broad application of gen3sis. Altogether, our examples illustrate the great potential for exploration provided by gen3sis, promising future advances in our understanding of empirical biodiversity patterns.

Verbal explanations of the main principles underlying the emergence of biodiversity are frequently proposed but are rarely quantified or readily generalized across study systems [20]. We anticipate that gen3sis will be particularly useful for exploring the consequences of mechanisms that so far have mostly been verbally defined.
For example, the origins of biodiversity gradients have been associated with a variety of mechanisms [7], but these represent verbal abstractions of biological processes that are difficult to evaluate [20]. Whereas simulation models can always be improved, their formulation implies formalizing process-based abstractions via mechanisms expected to shape the emergent properties of a system [144]. Specifically, when conveying models with gen3sis, decisions regarding the biological processes and landscapes must be formalized in a reproducible fashion. By introducing gen3sis, we encourage a standardization of configuration and landscape objects, which will facilitate future model comparisons. This standardization offers a robust framework for developing, testing, comparing, and applying the mechanisms relevant to biodiversity research.

Studying multiple patterns is a promising approach for disentangling competing hypotheses [20,86]. A wide range of biodiversity dimensions can be simulated with gen3sis (Table 2). […] to explore the formation of the LDG and account for uncertainties and limitations. For instance, we represented the Quaternary climatic oscillations using ~170 kyr time-steps, which correspond to a coarser temporal scale than the frequency of the oscillations, and thus do not account for the effects of shorter climatic variation on diversity patterns [25,26,44]. We also did not consider ice cover, which can mask species' habitable sites; this probably explains the mismatch between simulated and empirical LDG patterns below 50° (Figure 4C). Moreover, paleo indicators of climate from Köppen bands have major limitations, and the temperature estimation derived in our case study can suffer from large inaccuracies. Lastly, extrapolation of the current temperature lapse rate along elevation might lead to erroneous estimates, especially in terms of the interaction with air moisture [155], which was not further investigated here. Hence, the presented case study represents a preliminary attempt for illustrative purposes. Further research is required to generate more accurate paleolandscapes, and research in biology should improve empirical evidence and our understanding of mechanisms.

Table 4. A non-exhaustive list of expected applications of gen3sis. Given the flexibility and the range of outputs produced by the engine, we expect that gen3sis will serve a large range of purposes, from testing a variety of theories and hypotheses to evaluating phylogenetic diversification methods.

Use / Example (from Figure 1):
- Testing phylogenetic inference methods, including diversification rates in phylogeographic reconstructions: infer diversification rates in gen3sis-simulated phylogenies (E) and compare with a known diversification in gen3sis (A, B & G).
- Providing biotic scenarios for past responses to geodynamics: based on model outputs (C-F) and comparisons with empirical data (H), select plausible models (B).
- Testing paleo-climatic and paleo-topographic reconstructions using biodiversity data: based on model outputs (C-F) and comparisons with empirical data (H), select plausible landscape(s) (A).
- Comparing expectations of different processes relating to the origin of biodiversity; generating and testing hypotheses: compare models (A, B & G) with outputs (C-F) and possibly how well outputs match empirical data (H).
- Comparing simulated intra-specific population structure with empirical genetic data:
compare simulated divergence matrices with population genetic data.
- Forecasting the response of biodiversity to global changes (e.g. climate or fragmentation): extrapolate plausible and validated models (A, B & G) on landscapes under climate change scenarios (A).
- Investigating trait evolution through space and time: combine past and present simulated species traits (F) and distributions (C, D) with fossil and trait data (H).
- Modelling complex systems in space and time in unconventional biological contexts, in order to investigate eco-evolutionary processes in fields traditionally not relying on biological principles: model eco-evolutionary mechanisms (A, B & G) in an unconventional eco-evolutionary context.

Here we have introduced gen3sis, a modular simulation engine that enables exploration of the consequences of ecological and evolutionary processes and feedbacks on the emergence of […]
3,829
2021-03-25T00:00:00.000
[ "Environmental Science", "Computer Science", "Biology" ]
Optoelectronic Properties and Structural Characterization of GaN Thick Films on Different Substrates through Pulsed Laser Deposition Approximately 4-µm-thick GaN epitaxial films were directly grown onto a GaN/sapphire template, sapphire, Si(111), and Si(100) substrates by high-temperature pulsed laser deposition (PLD). The influence of the substrate type on the crystalline quality, surface morphology, microstructure, and stress states was investigated by X-ray diffraction (XRD), photoluminescence (PL), atomic force microscopy (AFM), transmission electron microscopy (TEM), and Raman spectroscopy. Raman scattering spectral analysis showed a compressive film stress of −0.468 GPa for the GaN/sapphire template, whereas the GaN films on sapphire, Si(111), and Si(100) exhibited tensile stresses of 0.21, 0.177, and 0.081 GPa, respectively. Comparative analysis indicated the growth of very nearly stress-free GaN on the Si(100) substrate, owing to the highly directional migration of energetic precursors on the substrate's surface and the release of stress during the nucleation of the GaN films enabled by the high-temperature (1000 °C) operation of PLD. Moreover, TEM images revealed no significant GaN meltback (Ga–Si) etching at the GaN/Si sample surface. These results indicate that PLD has great potential for developing stress-free GaN templates on different substrates for further application in optoelectronic devices. Introduction Gallium nitride (GaN) and its related III-nitride materials are excellent wide direct-band-gap (3.4 eV) semiconductors owing to their high saturation velocity in an electric field, high breakdown electric field, and high electron mobility, all of which are necessary for the development of next-generation devices and applications that are high-frequency, highly efficient, and capable of effective power switching [1][2][3]. However, due to the lack of suitable native or lattice-matched substrates, GaN epilayers are usually grown on sapphire, SiC, and Si substrates. This presents a serious problem, as a high defect density and a large biaxial stress are generated in the heteroepitaxial GaN epilayers by mismatches in the lattice structure and thermal expansion coefficients between the epilayers and the substrate. These growth-induced defects (such as threading dislocations, stacking faults, voids, and point defects) limit the performance and reliability of GaN-based devices [4][5][6]. ZnO-related materials may be closely lattice-matched with GaN, but the drawback of the ZnO single-crystalline wafer is that it is still expensive [7]. Substrates that produce a low density of defects present the most effective approach for reducing defects in epitaxial films. The most widely used methods for growing GaN with a low defect density are hydride vapor phase epitaxy (HVPE) and metalorganic chemical vapor deposition (MOCVD) [8,9]. GaN thin films of high quality with a low density of defects can also be grown by ion-beam-assisted MBE [10,11]. The reaction chamber in an HVPE system is often made of quartz, which cannot operate at high temperatures. An MOCVD system requires a high-temperature growth process, which consumes considerable electric power, producing high running costs as well as the possibility of air pollution due to the toxicity of the metal-organic precursor chemicals. Pulsed laser deposition (PLD) is a promising technique that can address these problems [12][13][14]. PLD is attractive, as it allows
for in situ processing of multilayer structures via multiple targets, stoichiometric transfer of material from the target to the substrate, flexible doping options for complex compositions, and a highly directionally distributed energetic precursor flux produced by the laser ablation of a target. Most discussions of PLD focus on the influence of growth conditions on the properties of GaN films [15][16][17][18][19]. Several previous studies have reported how PLD enables the growth of high-quality III-nitrides on other substrates [20][21][22][23][24]. Since the scale and production cost of native GaN substrates remain prohibitive, GaN templates on foreign substrates are good choices for the heteroepitaxial deposition of GaN-based devices. In this study, the crystalline quality, surface morphology, and optoelectronic and structural properties of GaN thick films grown on different substrates as GaN templates through high-temperature PLD are characterized and compared. Experimental All GaN film samples were deposited on different substrates by PLD at 1000 °C in a nitrogen plasma ambient atmosphere. The chamber was pumped down to 10⁻⁶ Torr before the deposition process began, and N₂ gas (with a purity of 99.999%) was introduced. The working pressure once the N₂ plasma was injected was 1.13 × 10⁻⁴ Torr. A KrF excimer laser (λ = 248 nm, Lambda Physik, Fort Lauderdale, FL, USA) was employed as the ablation source and operated at a repetition rate of 1 Hz and a pulse energy of 60 mJ. The average growth rate of the GaN films was approximately 1 µm/h. The laser beam was incident on a rotating target at an angle of 45°. The GaN target was fabricated by HVPE and set at a fixed distance of 9 cm from the substrate, being rotated at 30 rpm during film deposition. In this manner, ~4-µm-thick GaN films were grown on a GaN/sapphire template (sample A), sapphire (sample B), Si(111) (sample C), and Si(100) (sample D). For sample A, a 2-µm GaN layer was first deposited on the sapphire substrate by MOCVD. Scanning electron microscopy (SEM, S-3000H, Hitachi, Tokyo, Japan), transmission electron microscopy (TEM, H-600, Hitachi, Tokyo, Japan), atomic force microscopy (AFM, DI-3100, Veeco, New York, NY, USA), double-crystal X-ray diffraction (XRD, X'Pert PRO MRD, PANalytical, Almelo, The Netherlands), low-temperature photoluminescence (PL, Fluoromax-3, Horiba, Tokyo, Japan), and Raman spectroscopy (Jobin Yvon, Horiba, Tokyo, Japan) were employed to explore the microstructure and optical properties of the GaN templates deposited on the different substrates. The electrical properties of the GaN films were determined by Van der Pauw-Hall measurements under liquid nitrogen cooling at 77 K.
Results and Discussion Figure 1 shows the low-temperature PL spectra (at 77 K) of GaN films grown on the different substrates. The PL spectra of GaN grown on the different substrates are dominated by the near-band-edge emission at around 360 nm. The full width at half maximum (FWHM) of the GaN films produced on samples A (4 nm) and B (8 nm) is narrower than that of the films grown on samples C (10 nm) and D (13 nm), indicating the low defect density and high crystalline quality of these GaN films due to their lower lattice mismatch, which is consistent with the XRD results. Similar trends of the yellow band-emission peak on these samples were also observed (data not shown here). The yellow luminescence is related to deep-level defects in GaN [25]. Figure 2 shows a comparison of the typical XRD patterns of GaN (0002) films grown on the different substrates. There is a variation in the FWHM value of the (0002) diffraction peak, and the GaN diffraction peaks on the different substrates were obtained at around 34.5 degrees. The intensity of GaN (0002) in sample A is the strongest among all samples, which indicates that the GaN films on the GaN/sapphire template are highly c-oriented and have better crystalline quality. The FWHM values of GaN (0002) for samples A, B, C, and D were measured as 0.19°, 0.51°, 0.79°, and 1.09°, respectively. The XRD peak intensity increases as the FWHM decreases; this is attributed to an increase in the crystallite size due to either the aggregation of small grains or grain boundary movement during the growth process. Since the FWHM of an XRD diffraction peak is related to the average crystallite grain size in the film [26], the grain size of the GaN grown on the different substrates is calculated using the Debye-Scherrer equation [27]: D = Kλ/(β cos θ), where D is the crystallite size, K is the shape factor (typically ~0.9), λ is the X-ray wavelength, β is the FWHM of the diffraction peak in radians, and θ is the diffraction angle. The crystallite sizes of samples A, B, C, and D are estimated to be 57, 20, 13, and 9 nm, respectively. These results indicate that the crystalline quality of the GaN films grown on samples A and B is better than that of the films grown on samples C and D.
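As a quick numerical check on the reported crystallite sizes, the Scherrer estimate can be reproduced as below. The sketch assumes Cu Kα radiation (λ ≈ 0.15406 nm) and a shape factor K = 0.9, neither of which is stated in the text, and it neglects instrumental broadening, so its output need not reproduce the quoted values exactly.

```python
import math

WAVELENGTH_NM = 0.15406  # assumed Cu K-alpha wavelength; not stated in the paper
SHAPE_FACTOR = 0.9       # assumed Scherrer shape factor K

def scherrer_size_nm(fwhm_deg: float, two_theta_deg: float) -> float:
    """Crystallite size D = K*lambda/(beta*cos(theta)), with beta in radians."""
    beta = math.radians(fwhm_deg)            # FWHM of the (0002) peak
    theta = math.radians(two_theta_deg / 2)  # Bragg angle (half of 2-theta)
    return SHAPE_FACTOR * WAVELENGTH_NM / (beta * math.cos(theta))

# Reported FWHM values for samples A-D; all (0002) peaks sit near 2theta ~ 34.5 deg
for sample, fwhm in [("A", 0.19), ("B", 0.51), ("C", 0.79), ("D", 1.09)]:
    print(f"sample {sample}: D ~ {scherrer_size_nm(fwhm, 34.5):.0f} nm")
```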
Figure 3 shows plane-view SEM pictures of the GaN films grown on the various substrates. The surface morphologies show different features, as they are strongly dependent on the type of substrate used. The surface of the GaN films in samples A and B was mirror-like, indicating less of a lattice mismatch between GaN and sapphire (Figure 3a,b). The smooth surface might be due to the high kinetic energy imparted by PLD to the GaN precursors for migration and diffusion on the substrates' surface [28]. A rough GaN film surface, meanwhile, was observed in sample C (Figure 3c). Sample D presented an incomplete island coalescence process with a hexagonal structure, as shown in Figure 3d. This result indicates that the GaN films on Si(100) have a hexagonal phase. The different GaN film structure of the grains can be attributed to the different lattice structure of the Si substrate [29]. The surface morphology and roughness of the GaN films grown on the different substrates were examined by AFM measurements with a scanning area of 10 × 10 µm², as shown in Figure 4. In Figure 4, the root-mean-square (RMS) roughness values for samples A, B, C, and D are 2.1, 3.4, 14.3, and 17.7 nm, respectively. The films grown in samples A and B exhibited quite a smooth surface, with RMS roughness of 2.1 and 3.4 nm, respectively, whereas the RMS surface roughness of samples C and D was estimated as 14.3 and 17.7 nm, respectively. The large surface roughness of the GaN films in samples C and D might be due to the large lattice mismatch between the film and the substrates. A decrease in surface roughness occurs with an increase in grain size, as mentioned in the XRD results. The electrical resistivity of the GaN films grown on the different substrates is shown in Figure 5a. The electrical resistivity of the four samples was found to be in the range of 16.2-32.8 Ω·cm. The electrical resistivity of sample D was the largest, while that of sample A was the smallest. The electrical resistivity correlates with defect density, and a high defect density in the films may cause a decrease in the electrical resistivity [30]. The values of electrical resistivity of samples C and D were very close, which is consistent with the structural features of the films grown on these substrates, as discussed above. As the electrical resistivity is inversely proportional to the product of the carrier concentration and the carrier mobility, the electrical resistivity of the films grown on the different substrates can be related to these measurements. Low-temperature Hall measurement data from the GaN films grown on the different substrates are shown in Figure 5b,c. Sample A showed the lowest carrier concentration and highest carrier mobility, thereby resulting in an increased number of conductive paths. The carrier concentration in sample D was higher than that in the others, whereas its carrier mobility was the lowest. This can be attributed to the existence of a high density of intrinsic defects and numerous grain boundaries in the film. These defects trap and scatter moving electrons, thus decreasing their mobility in the GaN films [31,32].
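The single-carrier relation behind that statement is ρ = 1/(q·n·μ). The sketch below simply evaluates it; the carrier concentrations and mobilities are hypothetical placeholders (the actual values are only plotted in Figure 5, not tabulated), chosen so the results fall inside the reported 16.2-32.8 Ω·cm range.

```python
Q_E = 1.602e-19  # elementary charge in coulombs

def resistivity_ohm_cm(n_cm3: float, mu_cm2_vs: float) -> float:
    """Single-carrier resistivity rho = 1/(q*n*mu), in ohm*cm."""
    return 1.0 / (Q_E * n_cm3 * mu_cm2_vs)

# Hypothetical Hall values: carrier concentration (cm^-3), mobility (cm^2/V/s)
for sample, n, mu in [("A", 1.0e16, 38.0), ("D", 3.0e16, 7.0)]:
    print(f"sample {sample}: rho ~ {resistivity_ohm_cm(n, mu):.1f} ohm*cm")
```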
To further clarify the stress behaviors among the four samples, Raman scattering spectroscopy was performed, and the results are shown in Figure 6. The E₂-high phonon mode is very sensitive to biaxial strain and is extensively used to characterize the in-plane stress state of the GaN epilayer [33]. The relationship between the biaxial stress and the Raman shift can be expressed by the formula σ = Δω/k, where σ is the biaxial stress, Δω is the Raman shift of the E₂-high peak relative to its stress-free position, and k is the Raman stress coefficient of 6.2 cm⁻¹·GPa⁻¹ for GaN [34]. Generally, a blue shift of the E₂-high phonon peak indicates compressive stress, while a red shift indicates tensile stress [35]. It has been found that the E₂-high peak position is substrate dependent, which implies that there are different stress states in those samples. In the present case, the GaN E₂-high peaks of samples MGS (MOCVD-grown GaN on sapphire), A, B, C, and D were evaluated as 570.2, 569.7, 565.5, 565.7, and 566.3 cm⁻¹, respectively. Compared with the intrinsic value of 566.8 cm⁻¹ for stress-free GaN, samples B, C, and D were under tensile stress, while sample A was under compressive stress [36]. This can be attributed to the rapid release of stress in the nucleation of the GaN films during the initial growth by high-temperature (1000 °C) PLD. This observation is also consistent with the results reported by Wang et al. [37]. Sample D had the minimum stress, likely caused by the growth of polygonal island structures and defects generated in the films, which is consistent with the SEM results [38]. There is a large difference in the lattice mismatch and thermal expansion between GaN and Si when compared with the GaN/sapphire template and sapphire. The calculated values of stress for GaN grown on the different substrates are shown in Figure 7. The Raman spectrum of the MGS sample is displayed in Figure 7 for comparison. The GaN E₂ peak of MGS was evaluated at 570.2 cm⁻¹ with a compressive stress value of −0.548 GPa, which is larger in magnitude than the compressive stress value of −0.468 GPa for sample A. It can be concluded that the PLD growth method is beneficial for the release of stress in the films.
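The stress values quoted above follow directly from σ = Δω/k with the stress-free E₂-high reference at 566.8 cm⁻¹; the short sketch below merely reproduces that arithmetic as a consistency check.

```python
STRESS_FREE_CM1 = 566.8  # E2-high position of stress-free GaN (cm^-1)
K_CM1_PER_GPA = 6.2      # Raman stress coefficient for GaN (cm^-1 per GPa)

def biaxial_stress_gpa(peak_cm1: float) -> float:
    """sigma = delta_omega/k; negative = compressive (blue shift), positive = tensile."""
    return -(peak_cm1 - STRESS_FREE_CM1) / K_CM1_PER_GPA

# Measured E2-high peak positions (cm^-1) for each sample
for sample, peak in [("MGS", 570.2), ("A", 569.7), ("B", 565.5),
                     ("C", 565.7), ("D", 566.3)]:
    print(f"sample {sample}: {biaxial_stress_gpa(peak):+.3f} GPa")
# -> MGS: -0.548, A: -0.468, B: +0.210, C: +0.177, D: +0.081
```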
Cross-sectional TEM images were used to investigate the GaN-on-Si meltback-etching reaction with PLD operating at a high temperature of 1000 °C. Previously, it was reported that the meltback-etching process caused by the alloying reaction of Ga with Si leads to a rough GaN surface and deep hollows in the Si substrate [39,40]. Figure 8a,b shows the TEM images of the GaN films grown on Si(111) and Si(100), respectively. From Figure 8a,b, it can clearly be observed that no significant Ga-Si meltback occurred at the GaN/Si interface; this is likely because of the suppressed interaction between the GaN epitaxial films and the Si substrates during PLD growth.
Conclusions We investigated GaN thick films grown on a GaN/sapphire template, sapphire, Si(111), and Si(100) by high-temperature PLD. The effects of the substrate on the GaN crystalline growth quality, surface morphology, stress behavior, and interface properties were studied. This paper demonstrates the potential of using high-temperature PLD as a growth method for preparing GaN templates that exhibit improved device performance.
Figure 1. Low-temperature photoluminescence (PL) spectra (at 77 K) of GaN films grown on different substrates. FWHM: full width at half maximum.
Figure 2. X-ray diffraction (XRD) measurement results of GaN films grown on different substrates.
Figure 5. Variation in (a) resistivity; (b) carrier concentration; and (c) mobility of GaN films with different substrates.
Figure 6. Raman spectra of GaN films for samples MGS (metalorganic chemical vapor deposition (MOCVD)-grown GaN on sapphire), A, B, C, and D.
Figure 7. Residual stress and its corresponding E₂ Raman shift for samples MGS, A, B, C, and D.
Figure 8. Cross-sectional TEM pictures of GaN films on samples (a) C and (b) D.
6,501.6
2017-01-17T00:00:00.000
[ "Materials Science" ]
MOOC Teaching Model of Basic Education Based on Fuzzy Decision Tree Algorithm In recent years, the development of science and technology in China has greatly affected people's ways of entertainment. Alongside the traditional industrial model, new industries represented by the Internet have emerged, and the Internet video business is an emerging business that has gradually risen within the Internet industry in recent years. Moreover, this new teaching method has gradually attracted attention in basic education: MOOC, the "I want to self-study" network, Smart Tree, and other online learning websites have sprung up. At present, the epidemic environment makes people pay more attention to this convenient and wide-reaching form of online video education. Therefore, we need to evaluate this kind of online video teaching model in terms of both its effectiveness and the quality of the user experience. This paper takes this as its starting point and chooses the earliest online video platform, MOOC, as the model with which to establish a complete set of user experience quality evaluation methods suitable for the domestic online video education mode. Considering the data source, the accuracy of the results, and other factors, we chose the industry-leading platform, the MOOC network, as an example. Through the exploration of the MOOC teaching mode in basic education, a member experience evaluation model is established based on the fuzzy decision tree algorithm. The experimental results show that the model has high accuracy and high reliability. Introduction With the popularity of the online video software of Tencent, Youku, and iQiyi, people gradually pay attention to the field of online video. The sudden epidemic has also made more and more people pay attention to the teaching model of online video education (Li et al.) [1]. Under the current epidemic situation, more and more schools choose to upload courses to online teaching platforms, which students use to watch, study, and download. In addition, for some students who want to acquire professional knowledge beyond school, course videos taught by university teachers are the best way to learn, and video learning has become one of the ways and methods of national learning (Gu and He) [2]. Therefore, in recent years, the Internet video business has gradually become an emerging business. More and more Internet companies have begun to focus on the development of their video business. Development is bound to be accompanied by competition, and the competition in online video education is becoming more and more intense. The richness of video content and video quality determine the retention and experience of members (Farnaz et al.) [3]. However, although live video education evolved from the traditional video industry, it is quite different from the traditional new media business in terms of video content, video architecture, user needs, member audience, and so on (Hu et al.) [4]. Therefore, we cannot apply traditional video model evaluation methods to the modern video education model, and we thus lack evaluation of and deep understanding of the current video education model. Although there are mature platforms abroad, such as Coursera, we need to consider the differences between China and foreign countries in terms of international background, industrial development environment, and user behavior. We can only learn from foreign research results; we cannot take them as a direct guide (Vijay et al.) [5].
Therefore, how to establish a set of user experience quality evaluation methods for China's modern online video education model is very important and necessary. In view of this problem, we found through research that the MOOC platform is one of the earliest platforms for online video courses and video education in China. Compared with other platforms, there are hundreds of 985/211/double first-class universities on the MOOC platform, and its authority and richness of courses are far higher than those of other online video websites. Coupled with its years of influence, the MOOC platform can be regarded as the earliest representative model of online video education in China [6]. Therefore, taking the MOOC as an example, this paper discusses the user experience quality evaluation method of the Internet video education model. The data cited below are from the real access data of the platform. Based on the collected data, we establish the user experience quality evaluation model of online Internet video learning through a fuzzy decision tree algorithm. For the collected data, we first clean the data and sort the collected initial text data into quantitative visual data, which is convenient for the next step of data analysis. After preprocessing the original data, the login IP address and user-agent field information can be used to infer the type of access device. Finally, we obtain a series of data items such as member video access information and member login information. On this basis, we use a mathematical algorithm, namely the fuzzy decision tree algorithm, to statistically analyze the data set and mine its characteristic laws. For example, we can explore the distribution of video quality and the correlation between video quality and member retention rate, and lay a foundation for the next modeling step according to the mathematical characteristic laws we mine. Thus, an online video user experience quality evaluation model suitable for the current situation of Internet development in China is established, and the model is built on a fuzzy decision tree algorithm to evaluate and verify its accuracy and effectiveness. Finally, the online video learning mode represented by the MOOC is evaluated by the fuzzy decision tree algorithm. This paper establishes a complete set of user experience quality evaluation methods suitable for the domestic online video education mode. The research and innovation contributions include the following: through the exploration of the MOOC teaching mode of basic education, a member experience evaluation model based on a fuzzy decision tree algorithm is established; the experimental results show that the model has high accuracy and high reliability; and this evaluation teaching model is suited to China's industrial development, is combined with China's national conditions, and provides a reference value. Related Work The evaluation of the online video learning mode is mainly to model and test the user experience under this mode. Therefore, this paper mainly explores the video viewing laws and user experience of members using the MOOC online video learning mode based on a fuzzy decision tree algorithm. Among them, the fuzzy data measurement algorithm is mainly used in video traffic measurement. Data analysis and sorting play an important role in the cleaning process (Sun) [7]. Firstly, video measurement methods are divided into active and passive measurements.
As the name suggests, in active measurement the measuring party actively obtains the specific browsing data of the measured party by sending data packets, so that the browsing records and the video content of interest of the measured party can be obtained quickly, conveniently, and directly. Therefore, the active measurement method has strong operability and is simple, flexible, and direct (Teekaraman et al.) [8]. Passive measurement does not need to actively send data packets to the user but directly pulls the required data from the network server through the browsing records of the network and the network characteristics on the statistical link of the background data packets. This alternative method does not need to send data; therefore, it does not occupy network transmission capacity and has little impact on the stability of the network. And because the data are pulled directly in the background, its accuracy is also higher than that of the former [9]. In the process of specific experimental design, we often choose different measurement methods according to the needs of specific conditions. After determining the means of data acquisition, we need to sort and count the data to explore the laws of member behavior behind the data. These laws mainly concern the whole process from the moment a member starts the viewing behavior until the process ends with a closing operation, including the relationship between the frequency of member video switching, viewing time, and the distribution of member resolution characteristics. Foreign research teams have studied the frequency behavior of members' video conversions. This research analyzes the frequency distribution of member visits; that is, the collected frequencies of member visits to an online video teaching platform are drawn as zero, one, and more times (Raharja) [10]. According to the proportion of each frequency, it was found that the retention rate of members who use a PC terminal to initiate online video access requests is higher than that of members on a mobile or iPad terminal, and the video conversion rate is lower in the viewing process; that is, less than 5% of users switched videos while viewing (Jan et al.) [11]. The authors of that article further clustered the video conversion times initiated by members and found that most video conversions occurred when the video began to play (Mu et al.) [12]. Finally, according to the above collected data and the laws derived from the data, we can construct the user experience research and evaluation model of the online video education mode. The MOOC teaching evaluation model for basic education that we propose is an evaluation method in line with our current industrial development status that can effectively evaluate the online video teaching model, taking MOOC as an example. This method draws lessons from existing evaluation models abroad to a certain extent. Florin Dobrian and others were the first researchers to analyze access data from real online video platforms. They mainly explored the correlation between the member service scene and user participation (Shi and Huang) [13]. This was the first experiment using real scene data analysis, so the experimental results have a certain reliability. However, the disadvantage is that the analysis conclusion of this study only depends on the actual data, and the specific reliability and accuracy have not been verified by relevant experiments. Subsequently, S.
Shunmuga Krishnan et al. proposed, for the first time, to conduct this research through quasi-experiments. They explored the relationship between video quality and member access behavior through experiments and finally found that there was a causal relationship between video quality and member access behavior (Khazali et al.) [14]. The authenticity and reliability of such experimental results are more accurate and have higher credibility. As for the specific experimental methods, we can see the research on the relationship between video quality indicators and member participation in the visit process. Although S. Shunmuga Krishnan et al. creatively improved the reliability of the experiment by using the experimental method in the research of the online video teaching model, the experiment mainly adopted qualitative measurement; the overall impact on the correlation between the two in the experimental process, the heterogeneity of scale, the rationality of the method, and the avoidance of subjectivity in the experimental design are not perfect and need to be discussed further (Zheng et al.) [15]. Materials and Methods The main purpose of this paper is to find a user experience evaluation scheme suitable for the MOOC teaching mode of basic education. Firstly, the system needs to fit well the existing data on the MOOC teaching platform of basic education, so that the evaluation system can become an effective proof of the credibility of the evaluation of member interview experience in the future. Therefore, the essence of the system is to reflect the correlation between video quality and member experience quality in the MOOC teaching mode of basic education and the optimal solution of a linear relationship fit. There is a certain balance between ordinary users and member users: ordinary videos can neither be made too simple, nor can member videos monopolize. In the process of finding the optimal solution, it is inevitable to use data mining algorithms. Therefore, we first investigated several commonly used classification algorithms in data mining, such as the naive Bayes algorithm, the decision tree algorithm, the support vector machine, and the fuzzy decision tree algorithm. The naive Bayes algorithm is suitable for scenes where the data attributes are independent of each other; however, the premise of this paper is that there is a correlation between video quality and member access quality experience, so the classification effect of a model established by the naive Bayes algorithm is not ideal. The decision tree algorithm is suitable for data sources that are discrete-value sample sets, but the data sources and subsequent classifications we deal with treat samples as continuous-value sets, so the plain decision tree algorithm is not suitable for this paper. However, because the decision tree is a stepwise algorithm based on dichotomy, the algorithm is comprehensive and theoretically simple. For the deep mining of the optimal solution in this paper, the decision tree algorithm can better fit the data results once the differences of the data sources are removed. Therefore, this paper adopts the fuzzy decision tree algorithm to solve this problem, which is, in theory, more in line with the cognition of data attribute characteristics and uncertainty. In the following practical experiments, we also take the decision tree algorithm as the comparison algorithm to evaluate the accuracy of the model.
Because the remaining support vector machine algorithms are suitable for small-scale sample data mining problems, they would incur long modeling times and high algorithmic complexity in this paper, resulting in weak practical operability, and they were eventually eliminated. Therefore, this paper finally improves on the basis of the decision tree and adopts the fuzzy decision tree algorithm, whose results are more reliable and accurate (Figure 1). Next, we collect and preprocess the data. The data we use all come from the actual access data of the MOOC network, and their content consists only of the log files generated during member access, which do not involve the specific privacy of members. MOOC is an online video learning platform for basic education jointly developed by iCourse and NetEase Cloud Classroom. It can be regarded as one of the first batch of online video teaching platforms developed in China. Hundreds of universities have settled on the platform, including 985/211/double first-class universities such as Tsinghua University, Peking University, Nankai University, and Fudan University. During the epidemic period, more and more schools chose to upload teaching videos to the MOOC platform, where students attend classes at home and complete homework and exams. In addition, more and more people choose the MOOC platform for direct learning when they want extracurricular knowledge outside their major, and non-students want to study courses offered on university campuses. Therefore, the MOOC platform has the amount of data that meets the needs of experimental research. In addition, the MOOC has the technical support of a major Internet company and a huge database, and its video supports network TV, IP mobile phone, iPad, PC, and other playback platforms, which also provides basic data support for us to study the behavior of members accessing from different terminals. We extract the key information from the collected original data, clean the duplicate information and fill in the missing information, identify the access behavior, determine the key video quality, and calculate the address and location of member access. Finally, the data we sorted out are the collection of behavior records of the MOOC online video education platform (Figure 2). Each record includes three parts: the video information accessed by the member, the identity information of the member, and the specific behavior process information. Next, we rely on the fuzzy decision tree algorithm to establish an evaluation model that reflects the correlation between the video quality visited by members and the actual experience quality of members. The following is the theoretical basis of the fuzzy decision tree algorithm based on classification fuzziness. First, we define the size of fuzzy set A as M(A) = Σ_{u∈U} μ_A(u), where A represents a fuzzy set on the full data set U characterized by the membership function μ_A. When U is a discrete set, the fuzziness of the fuzzy set is defined in formula (2). For the discrete set X, according to the fuzzy decision tree algorithm, if there is a normalized possibility distribution of variable Y on this set, the fuzziness of variable Y is defined in formula (3). It can be deduced from these definitions that when the variable Y can take only one value, the value of the fuzziness measure is 0, which means that the variable Y has no fuzziness at this time. When the possibility that variable Y can take any value in set X is 1, Y has the greatest fuzziness.
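The extraction dropped the paper's numbered formulas, so purely as an illustration, the sketch below implements the standard sigma-count cardinality and a possibilistic ambiguity measure of the kind used in classic fuzzy decision tree formulations (Yuan and Shaw); it matches the behavior described above (zero fuzziness for a single possible value, maximal fuzziness for a uniform possibility distribution), but the paper's exact definitions may differ in detail.

```python
import math

def fuzzy_cardinality(memberships):
    """Sigma-count size of a fuzzy set: M(A) = sum of its membership degrees."""
    return sum(memberships)

def ambiguity(possibilities):
    """Nonspecificity g(pi) = sum_i (pi_i - pi_{i+1}) * ln(i), with the
    possibility distribution sorted in descending order and normalized so
    the largest possibility equals 1 (requires at least one positive value)."""
    pi = sorted(possibilities, reverse=True)
    pi = [p / pi[0] for p in pi]
    pi.append(0.0)  # sentinel pi_{n+1} = 0
    return sum((pi[i] - pi[i + 1]) * math.log(i + 1) for i in range(len(pi) - 1))

print(ambiguity([1.0, 0.0, 0.0]))  # one possible value  -> 0.0 (no fuzziness)
print(ambiguity([1.0, 1.0, 1.0]))  # uniform possibility -> ln(3) (maximal)
```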
When a numerical attribute is continuous, the variable A can be fuzzified into a set containing S discrete semantic items, each of which is a fuzzy set. In order to measure the fuzziness of A, the membership function values of the variable over the fuzzy sets are collected into a set u, and the possibility distribution of the continuous attribute A can be obtained. First, the possibility distribution over u is normalized, as shown in formula (4). From the above, the fuzziness measurement formula of the continuous variable A can be obtained, and the result can also be applied to measuring the fuzziness of classification results. Next, we discuss the fuzziness rules and their confidence level. In a fuzziness rule, two conditional fuzzy sets A and B are defined, and it is assumed that there is a corresponding relationship between them. Therefore, we define the truth of the rule by using the concept of a confidence level, expressed as S(A, B); see formula (5) for the specific calculation. The category possibility judgment formula of the attribute variables is then given by formula (6). Combining the fuzziness measurement formula with the confidence level, we can redefine the classification fuzziness measurement formula, as given in formula (7). Assuming that the value of variable A on set u is F and the fuzzy semantic item set corresponding to variable B is p, the fuzzy classification division of the correlation between variables A and B in the fuzzy decision tree rules is calculated as in formula (8), where w(B_t|F) represents the size of fuzzy set F; see formula (9) for the specific calculation method and formula (10) for the fuzzy evidence. Finally, we feed the preprocessed data into the semantic item membership functions to complete the data fuzzification, obtain the parameters by using the Kohonen feature mapping algorithm, and realize the transformation from the fuzzy decision tree to fuzzy rules. The induction process of a fuzzy decision tree consists of the following steps: (1) data preprocessing; (2) induction and establishment of the decision tree; (3) transformation of the obtained fuzzy decision tree into a set of fuzzy rules; and (4) application of the obtained fuzzy rules to classification. The membership functions of the semantic items are given in formulas (11)-(13). In order to verify the member experience quality evaluation of the MOOC teaching model of basic education based on the fuzzy decision tree algorithm, this study designs the following experiments to evaluate the model. Firstly, the data mining classification accuracy of the fuzzy decision tree is calculated by the tenfold cross-validation method. In order to prove the accuracy of the fuzzy decision tree algorithm, in addition to using the fuzzy decision tree algorithm to process the data, we also use the plain decision tree algorithm as a baseline. We randomly selected three different data sets: session V, session M, and session A, where session V represents all access records under a video, session M represents the access records of different device types, and session A represents the video access records in different regions.
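The fuzzification step above depends on the semantic-item membership functions of formulas (11)-(13), which did not survive extraction. Purely as an illustration, the sketch below uses triangular membership functions over a normalized continuous attribute; the three semantic items and their breakpoints are hypothetical placeholders, where a real run would fit them with the Kohonen feature-mapping step mentioned above.

```python
def triangular(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical breakpoints for a normalized attribute (e.g. video quality) in [0, 1]
SEMANTIC_ITEMS = {
    "low":    (-0.5, 0.0, 0.5),
    "medium": (0.0, 0.5, 1.0),
    "high":   (0.5, 1.0, 1.5),
}

def fuzzify(x: float) -> dict:
    """Map a crisp attribute value to membership degrees over the semantic items."""
    return {name: triangular(x, *abc) for name, abc in SEMANTIC_ITEMS.items()}

print(fuzzify(0.7))  # -> {'low': 0.0, 'medium': 0.6, 'high': 0.4}
```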
By changing the parameters, we observe the changing trend and fitting accuracy of the fuzzy decision tree model on the three different data sets session V, session M, and session A, and compare the dependent variables under different conditions, so as to judge the impact of the quality of the video, the region of the members, and the equipment of the members on the accuracy of the prediction results of the model (Figure 3). After obtaining the preliminary verification results, we need to consider their reliability. Therefore, we continue the comparison at different levels: significance level a and confidence level B. When a and B take different values, the prediction accuracy of the fuzzy decision tree models on session A, session M, and session V changes; the results are shown in Figure 4. Then, taking session A as an example, we show the accuracy changes under different significance levels a and confidence levels B. We can see that when a is 0.5, for any value of B from 0.2 to 0.8, the prediction accuracy on the session set remains at about 43% without obvious fluctuations. When the value of a is 0.8, the prediction accuracy begins to differ across the levels of B, and the maximum prediction accuracy of session A is 44%; however, the prediction accuracies under the different levels still show no significant difference. When the value of a is 0.6, the prediction accuracy is higher for B values of 0.2-0.6 but lower when the B value is 0.8. When the value of a is 0.9 and the value of B is 0.2, the prediction accuracy of session A is the highest, at 67%; when the value of a is 0.1 and B is 0.2, the prediction accuracy is the lowest, at 42%. The predicted change law of session A is basically the same as that of session C. Result Analysis and Discussion In exploring the MOOC teaching model of basic education and the experience quality evaluation system of members for online video teaching, if only the technical needs of data mining are considered, the decision tree algorithm would suffice among the commonly used data mining techniques. However, because the decision tree algorithm is in practice only used for processing discrete-attribute data sources, it has some limitations in processing continuous variables, such as access time. Therefore, based on the existing data mining technology, this paper proposes a fuzzy decision tree algorithm. The specific advantages and disadvantages of this algorithm relative to the decision tree algorithm were compared in detail at the beginning of the third part, so they are not repeated here; only the comparison results of the prediction accuracy of the decision tree algorithm and the fuzzy decision tree algorithm on the three sets session A, session M, and session T are presented. The specific results are shown in Figure 5. From the figure, we can see that for the different data sets session A, session M, and session T, the model prediction accuracy based on the fuzzy decision tree algorithm is always higher than that based on the decision tree algorithm, which further verifies the rationality of our algorithm.
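The tenfold protocol behind these comparisons can be reproduced generically with scikit-learn, as sketched below on synthetic stand-in data; the real session records are not public, and the fuzzy decision tree itself is not a scikit-learn estimator, so the sketch only sets up the plain decision tree baseline side of the comparison.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for one session data set (features could be video quality
# metrics, device type, region, etc.; the actual records are not available).
X, y = make_classification(n_samples=1000, n_features=8, n_informative=5,
                           random_state=0)

baseline = DecisionTreeClassifier(max_depth=5, random_state=0)
scores = cross_val_score(baseline, X, y, cv=10)  # tenfold cross-validation
print(f"decision tree baseline: {scores.mean():.3f} +/- {scores.std():.3f}")
# A fuzzy decision tree implementation would be scored under the same cv=10
# protocol and its mean accuracy compared against this baseline.
```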
In addition, among the different sets, the model prediction accuracy of session M based on the fuzzy decision tree algorithm is the highest, reaching about 81%, while that of session T is the lowest, about 63%. It is preliminarily speculated that this is due to the differing analysis granularity of the captured video quality of members' accesses and the capture accuracy of members' access locations; this problem can be explored further in subsequent research. In addition, we notice that the classification accuracy of the fuzzy decision tree also differs between the subsets and the complete set. It can be seen from Figure 6 that the prediction accuracy of the model on the subsets session A, session M, and session T far exceeds that on the full set S. This result shows that the experience of members accessing the online video teaching platform is strongly related to the video content, region, and equipment of the members' access; the gaps between the factors are large, and there are interactions. We speculate the reasons are as follows. Firstly, it is related to members' own interests: interest in a video affects a member's tolerance of that video. Secondly, it is affected by the background, cultural habits, and local policies of the member's location. For example, there are great differences in learning methods between first-tier cities and second- and third-tier cities; due to development constraints, second- and third-tier cities may be less familiar with online platform channels, so access will also be affected. Or, if a locality requires students to be taught uniformly, the local access volume will also be greatly affected. In addition, using different devices to access the MOOC platform also reflects, to a certain extent, the current state of members, and so affects the accuracy of the evaluation model. Starting from practical causes and exploring at a finer granularity, one could still identify other objective influencing factors; due to the length of this paper, we do not explore them in detail. Conclusion The main goal of this paper is to establish a set of evaluation teaching models suitable for the development of China's industry and combined with China's national conditions. Considering the data source, the accuracy of the results, and other factors, we chose the industry-leading platform, the MOOC network, as an example. Through the exploration of the MOOC teaching model of basic education, and based on the fuzzy decision tree algorithm, we establish the member experience evaluation model. Finally, experiments show that the accuracy of the model is good and the reliability of the prediction results is high. However, there are many online Internet platforms in China, and with the further development of the online video teaching industry, the characteristics of each platform also differ. Therefore, whether the fuzzy decision tree model proposed in this paper can be applied to other online basic education models remains to be discussed. The research scope can be further expanded in future work. Data Availability The data used to support the findings of this study are available from the corresponding authors upon request. Conflicts of Interest The authors declare that they have no conflicts of interest.
6,204.8
2022-06-08T00:00:00.000
[ "Computer Science" ]
Sessile multidroplets and salt droplets under high tangential electric fields Understanding the interaction behaviors between sessile droplets under imposed high voltages is very important in many practical situations, e.g., microfluidic devices and the degradation/aging problems of outdoor high-power applications. In the present work, the droplet coalescence, the discharge activity and the surface thermal distribution response between sessile multidroplets and chloride salt droplets under high tangential electric fields have been investigated with infrared thermography, high-speed photography and pulse current measurement. Obvious polarity effects on the discharge path direction and the temperature change in the droplets in the initial stage after discharge initiation were observed, owing to the anodic dissolution of metal ions from the electrode. In the case of sessile aligned multidroplets, the discharge path direction could affect the location of the initial droplet coalescence. The smaller unmerged droplet would be drained into the merged large droplet as a result of the pressure difference inside the droplets, rather than the asymmetric temperature change due to discharge. The discharge inception voltages and the temperature variations for the two salt droplets correlated closely with the ionization degree of the salt, as well as with the interfacial electrochemical reactions near the electrodes. Mechanisms of these observed phenomena are discussed. The dynamics and stability of liquid droplets subjected to a high electric field are of immense scientific interest and of key importance to many applications, e.g., oil recovery technologies 1 , lab-on-a-chip devices 2 , and applications relevant to wetting/spreading [3][4][5] . In recent years, a significant body of research has been conducted on droplet dynamics on a solid surface (i.e., sessile droplets) under the influence of an imposed electric field. The wetting and spreading behaviors of droplets can be tuned by applying electric fields perpendicular to the solid surface, namely electrowetting/spreading behavior [3][4][5][6][7][8][9][10][11] . On the other hand, the behaviors of sessile water droplets subjected to horizontal electric fields have attracted great attention. Water droplets on insulator surfaces due to rain or fog condensation are a main contributor to the local intensification of the electric field, as well as to the resulting occurrence of coalescence, discharge and flashover between droplets 12,13 . Similar to a droplet suspended in another fluid, a sessile droplet first experiences deformation or vibration before coalescence and discharge/arcing occur after an electric field is applied 14 . A number of factors could affect the electric field enhancement as well as the resultant droplet deformation and discharge behavior [14][15][16][17][18][19][20][21] . The magnitude of the discharge current, the surface moisture resistivity and the surface hydrophobicity played the dominant roles in the low-current discharge process 13,22 . Different experimental techniques have been employed to study the deformation and discharge behaviors between water droplets on the insulator surface, including high-speed photography, pulse current measurement and their combination 13,20,21,23 , as well as RF radiation sensing 24 , the ultrahigh frequency (UHF) technique 12,25 , and atomic emission spectroscopy 26 . The discharge temperatures between two water droplets have been measured with optical emission spectroscopy by Xiao et al.
Despite this progress, the interaction processes of sessile multidroplets and salt droplets subjected to high tangential electric fields have rarely been discussed. Understanding the dynamics of these droplets and the resulting discharge behaviors is not only of great scientific significance, but is also relevant to the practical degradation issues of high-voltage insulators. In the present work, on the basis of infrared thermography, high-speed photography and pulse current measurement, the droplet coalescence and discharge activity between sessile multidroplets as well as salt droplets under high tangential electric fields will be investigated.
The schematic diagram of the experimental setup is shown in Fig. 1. Briefly, aqueous droplets were placed on flat horizontal hydrophobic insulator surfaces, and metal electrodes were dipped into the center point of the droplets. DC voltages between the electrodes were applied and increased gradually until the discharge inception between droplets. The side views of the temperature variation and the movement of the droplets were obtained by using an infrared thermographic camera. Meanwhile, the top views of the droplet behavior were also visualized with an optical microscope.
Results
Two-droplet configuration. Thermographic pictures of the side view of two water droplets on silicone rubber (SR) surfaces under high positive voltages are shown in Fig. 2. The left electrode was energized, and the right one was grounded. A positive case/discharge was defined as that when a positive voltage was applied onto the energized electrode, and vice versa. Moreover, Region A indicated with a dashed frame in Fig. 2 is the region where the maximum and average temperatures were measured. These definitions also apply in the following parts. In Fig. 2(a), the semi-spherical dark areas represent the water droplets (temperature: 31 °C), visible due to the slight temperature contrast with the surroundings (ΔT = −3 °C). The corresponding top-view bright-field optical image is shown in Fig. 2(m).
Fig. 2. (a–l) Side-view thermographic pictures of two deionized water droplets (droplet volume: 2 μL, droplet separation: 7 mm) on the SR surface under a high positive voltage (the left electrode was energized, and the right one grounded); (m–r) top-view bright-field optical images: (m) corresponds to (a); (n) corresponds to (c); (o) corresponds to (f); (p) corresponds to (g); (q) corresponds to (i); (r) corresponds to (l). The arrows indicate the discharge paths. Region A with a dashed frame in (b) is the region where the maximum and average temperatures were measured. The droplet diameter (d) and height (h) were 1.79 mm and 0.98 mm, and the contact radius (r) between the droplet and the solid surface (i.e., the droplet footprint radius) was 0.89 mm.
Droplet deformation at the energized side, extending along the electric field direction towards the facing (grounded) droplet, could be seen when the voltage was increased to 5.5 kV [Fig. 2]. When the voltage was increased further to 5.7 kV, a non-fixed discharge path could be seen with bright traces (indicated by Arrow 1, temperature: 35 °C) moving across the faces of the droplets, as shown in Fig. 2(c). The corresponding top-view bright-field optical image at this instant is shown in Fig. 2(n).
Moreover, it can be observed that continuous discharge was initiated from the energized droplet to the grounded one. A temperature rise of the left part of the grounded droplet took place (temperature: 36 °C), as shown in Fig. 2(d). The thermal field rapidly diffused to the right part of the grounded droplet, as shown in Fig. 2(e). After that, the temperature around the discharge root of the grounded droplet continued to increase, and meanwhile the temperature of the energized droplet started to increase. At 150 ms, the temperature distribution of the grounded droplet (maximum: 46 °C) was more uniform than that of the energized one (maximum: 42 °C), as shown in Fig. 2(f). Moreover, the temperature also increased in the region between the droplets (maximum: 45 °C), suggesting that a thin layer of water from the droplets was established on the solid surface, as evidenced by the top-view bright-field optical image in Fig. 2(o), whereas the discharge path (average temperature: 36 °C) still moved across the droplet faces. Subsequently, the temperatures of both droplets increased dramatically, and generally asymmetric temperature distributions of the droplets (particularly the grounded droplet) could be seen, as shown in Fig. 2(g). Meanwhile, spreading of both droplets with a reduction of the contact angle was also apparent, and a liquid bridge was then established between the droplets, as evidenced by the top-view bright-field optical image in Fig. 2(p). Afterwards, the grounded droplet was drained gradually into the energized droplet, as shown in Fig. 2(h), where the temperatures of the liquid bridge and of the constricted part near the grounded electrode were obviously higher (> 60 °C). At this moment, the moving discharge path disappeared. As time progressed, the grounded droplet detached from the electrode, resulting in intermittent discharge between the detached droplet and the grounded electrode. Afterwards, intensive discharge between the two electrodes and the intermediate merged droplet started. Finally, the liquid droplet between the electrodes tended to dry out, as shown in Fig. 2(l) as well as Fig. 2(r). After a negative voltage was applied onto the energized electrode and increased to the critical value (Fig. S1, Supplementary File), discharge started from the grounded droplet to the opposite droplet, and the direction of the discharge path was just opposite to that under positive discharge in Fig. 2.
Figure 3 shows the variations of the electric currents and the corresponding average temperatures of the area indicated by the square frame (A) in Fig. 2 over time after discharge initiation between the two droplets. As shown in Fig. 3(a), compared with negative discharge, the electric current for positive discharge was slightly higher in the same time period (1.2–4.0 s), consistent with the literature result 29, where it was further suggested that the higher electric current leads to more intensive surface heating, as verified by our work [Fig. 3].
Three-droplet configuration. Thermographic pictures of the side view of three aligned water droplets under high voltages are shown in Fig. 4. They indicate that droplet deformation started first at the energized droplet under high voltages of both polarities [Fig. 4(a,b,h,i)]. When the electric field was sufficiently large, discharge occurred instantly between the three droplets. Similar to the two-droplet configuration, the deformation of the grounded droplet was also obvious under a negative voltage at the instant of discharge inception.
Occasionally, the discharge between the droplets was too fast to be captured (< 10 ms). However, it was certain that the inserted droplet coalesced with the grounded droplet in the very short period after discharge initiation, both for positive and for negative discharges [Fig. 4]. Afterwards, discharge between the energized droplet and the bigger grounded droplet persisted. Similar to the two-droplet case, the discharge direction was from the high-potential droplet to the low-potential one, and the temperature of the latter increased first [Fig. 4(d,k)]. The grounded droplet consequently became more voluminous than the energized one. Afterwards, the two droplets had the tendency to form a liquid bridge [Fig. 4(e,l)], followed by the drainage of liquid from the energized droplet to the grounded one [Fig. 4(f,m)].
The position of the inserted droplet was changed to investigate its influence on the droplet coalescence and discharge activity. The results for three aligned water droplets with the inserted droplet near the energized droplet under high voltages are shown in Figs S3 and S4 (Supplementary File). Generally, the discharge path direction could affect the location of initial droplet coalescence, and the smaller unmerged droplet would be drained into the merged large droplet.
Salt droplets. In order to investigate the effect of the addition of salt on the droplet coalescence and discharge characteristics of sessile droplets, the salt droplets were divided into three groups. Specifically, droplets of NaCl solutions with salt concentrations ranging from 0 to 1 M, droplets of salt solutions of different monovalent cations (Li+, Na+ and K+), and droplets with cations of different valences (Ca2+ and Al3+) were investigated. The discharge inception voltages of these salt droplets are shown in Fig. 5, and thermographic pictures of the discharge processes are shown in Figs 6 and 7. It could be seen in the left part of Fig. 6 that the discharge inception voltage decreased with the NaCl concentration. The thermographic pictures in Fig. 6 show the detailed morphological and temperature changes; the key trends during the discharge process are very similar to those in Fig. 2. Figure 6 also shows that the discharge process was relatively violent at low salt concentration and became mild at high concentration [Fig. 6(e1–e4)], giving rise to a more obvious temperature change in the droplets with the lower-concentration salt. As the salt concentration increased, it could also be observed that the discharge process lasted increasingly long during the electrolysis stage, and the droplets hardly evaporated completely. As shown in Fig. 7(e4–f4, e5–f5), the temperature rises of the merged droplets of CaCl2 and AlCl3 were more obvious near the grounded electrode than near the energized electrode, while in contrast the temperature rise was higher near the energized electrode in the salt droplets with monovalent cations. In this work, a KCl droplet was also placed as the inserted droplet, with the energized and grounded droplets of deionized water, as shown in Figs S5 and S6 (Supplementary File). Morphologically, relatively stable liquid bridges formed between these droplets due to the small temperature changes in the droplets, especially at the later stages.
Discussion
From the previous parts, it can be seen that the interaction of water droplets on a solid surface under a strong horizontal electric field generally experiences the following processes:
1. Droplet deformation due to the electric field enhancement (E_e) tangential to the surface 30. Normally, the maximum E_e occurs in the horizontal direction near the three-phase point. In equilibrium, a force balance exists at the three-phase point of the static droplet without the application of an electric field 13, γ_SG = γ_SL + γ cos Θ, where γ_SG, γ_SL and γ are the solid-gas, solid-liquid and liquid-gas interfacial tensions, and Θ is Young's angle. When an electric field is applied, a polarization stress emerges on the energized droplet before discharge initiation 31, resulting in deformation of the droplet contact line.
2. When E_e exceeds a critical value, the release of charges from the high-potential droplet to the low-potential droplet occurs. Presumably, positive charges near the tip of the deformed droplet, which also transfer momentum to neutral gas molecules by collisions under the high electric field, drift to the facing grounded droplet under the electrostatic force 32. An obvious polarity effect on the discharge path direction, from the high-potential droplet to the low-potential droplet, together with the larger electric current of positive discharge and the resultant higher temperature rise, has been observed. The reasons for these observations can be roughly understood as follows 33: the dissolution of metal ions from the electrode at the higher potential could occur for positive discharge, and these ions would result in higher surface conduction; in the case of negative discharge, positive ions would migrate from the grounded electrode, resulting in charge release from the grounded droplet to the energized one.
3. Droplet deformation was obvious before discharge inception and temperature rise, while the contact angle, particularly the advancing angle, did not decrease dramatically and instead increased slightly during the droplet deformation. However, the contact angles decreased apparently after discharge initiated and the temperature increased. This may suggest that the temperature change, rather than the accumulated charges on the droplet surface, played the dominant role in the surface tension variation of the droplet. Theoretically, an asymmetric temperature rise could induce a surface tension gradient in the droplet, the higher temperature at the facing edge of the water droplet resulting in a lower surface tension. The relationship between the surface tension γ and the temperature T can be described as 34 γ = γ_0[1 + k(T − T_0)], where γ_0 is the surface tension at the reference temperature T_0 and k is the temperature coefficient of surface tension (k = −0.0015 °C⁻¹ for water). In Fig. 2, the maximum temperature at the discharge root of the grounded droplet (T_n1) immediately after discharge initiation [Fig. 2(d)] is ca. 36 °C (real temperature after calibration, T_r1: 41.6 °C), and the corresponding water surface tension is γ_1 ≈ 71.3 mN/m. Besides the polarization force at the contact line, the capillary line force F_c in the horizontal direction, due to the temperature change in the droplet near the discharge root and the change of the advancing angle Θ_a, plays a very important role in the process of droplet elongation. The capillary line force can be written as 35 F_c = γ cos Θ_a. F_c acts on the contact line, perpendicularly to the contact line, and a positive value indicates that the force points away from the droplet.
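As a rough numerical illustration of the two relations above, the following sketch evaluates the linear γ(T) law and the resulting capillary line force per unit contact-line length. The linear form γ = γ_0[1 + k(T − T_0)], the reference values γ_0 and T_0, and the advancing angles used below are assumptions for illustration; only k = −0.0015 °C⁻¹ and the calibrated temperature 41.6 °C are taken from the text.

```python
import numpy as np

# Assumed linear surface-tension law: gamma(T) = gamma0 * (1 + k*(T - T0))
GAMMA0 = 72.8e-3   # N/m, surface tension of water near T0 = 20 C (textbook value)
T0 = 20.0          # C, reference temperature (assumption)
K = -0.0015        # 1/C, temperature coefficient quoted in the text

def surface_tension(T):
    """Surface tension of water (N/m) at temperature T (C), linear model."""
    return GAMMA0 * (1.0 + K * (T - T0))

def capillary_line_force(T, theta_a_deg):
    """Capillary line force per unit contact-line length (N/m): F_c = gamma*cos(theta_a)."""
    return surface_tension(T) * np.cos(np.radians(theta_a_deg))

print(f"gamma(41.6 C) = {surface_tension(41.6)*1e3:.1f} mN/m (text quotes ~71.3 mN/m)")

# Change of F_c between the field-free state and the elongating state;
# the advancing angles ~100 deg and ~95 deg are illustrative placeholders.
dFc = capillary_line_force(41.6, 95.0) - capillary_line_force(25.0, 100.0)
print(f"Delta F_c per unit length ~ {dFc*1e3:.2f} mN/m")
```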
We focus on the difference ΔF_c between the capillary line force of the droplet near the discharge root at the instant without an electric field and that at the instant when droplet elongation and movement occur after the electric field is applied. The droplets in Fig. 2 are taken as the example, and the temperature and the contact angle of the droplets have been measured. We then obtain ΔF_c,energized = 0.112 mN for the energized droplet and ΔF_c,grounded = 0.117 mN for the grounded droplet, suggesting that both droplets could protrude almost equally. This is confirmed by the results in Fig. 2(g), where protrusions of similar lengths can be seen. The role of the hydrostatic pressure variation in distorting the droplet shape is neglected because the Bond number Bo = ρgL²/γ ≈ 0.33 < 1, where g and ρ are the acceleration of gravity and the liquid density, and L is the characteristic droplet size.
The forces resisting droplet elongation and translation include the viscous force F_v, the contact angle hysteresis force F_f, the contact line friction F_CLF, and the drag force from the electrode F_d 9,36. The former three forces are related to the droplet elongation and translation; the fourth force corresponds to the frictional retention of the contact line on the electrode, which is relevant to droplet detachment. The viscous force is 37 F_v = τA, where the shear stress is τ ≈ ηv/h, η is the liquid viscosity, v is the motion velocity of the protrusion, A is the sheared footprint area and h is the droplet height. The contact angle hysteresis force is 37 F_f = 2rγ(cos Θ_r0 − cos Θ_a0), where Θ_a0 and Θ_r0 are the advancing and the receding contact angles, respectively. The variations of the contact angles (advancing and receding) with and without an electric field are shown in Fig. S7 (Supplementary File). The contact line friction is roughly approximated as F_CLF ≈ λv 9, where λ is a frictional factor (determined empirically). The drag force from the electrode is 37 F_d = πd_electrode γ. Basically, a protrusion from the droplet can be seen when the sum of the polarization force and the capillary force is larger than the contact angle hysteresis force. When droplet movement starts before the droplet detaches from the electrode, the resistive forces include both the viscous force and the contact line friction. When the driving forces for the droplet movement are larger than the viscous force, the contact line friction and the drag force from the electrode F_d, the droplet detaches from the electrode. For the grounded droplet in Fig. 2, F_v ≈ 1.5 × 10⁻⁸ N (v ≈ 6 mm/s in Fig. 2(f,g)), F_CLF ≈ 1.08 μN (λ ≈ 0.1), and F_d ≈ 44 μN (γ is assumed to be constant). It can be inferred that F_CLF might be the main resistive force during the droplet elongation process, while F_d might be the main force resisting droplet detachment from the electrode. Due to the changes of the electrocapillary line force, both droplets gradually spread accordingly. As time passed, the protrusions joined together to form a liquid bridge, and ultimately the droplet detached from the electrode. Previously, it has been shown that the discharge was quite transient, followed by the formation of a liquid bridge, between two droplets of a larger volume (3 μL) and a smaller separation (4 mm) 28. More results on the relationship between the droplet coalescence/discharge and the droplet separation, as well as the droplet size, are shown in Fig. S8 (Supplementary File). For the case of three aligned droplets, the above results showed that the direction of the initial discharge path between the three droplets was similar to that in the two-droplet cases.
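To make the order-of-magnitude comparison above reproducible, a minimal sketch follows. The expressions F_v = η(v/h)πr² and F_d = πd_electrode·γ are the reconstructed estimates used in this rewrite and should be read as assumptions; the geometric and material values are those quoted in the text, and F_CLF is simply quoted because the units of the empirical factor λ are not fully specified in the source.

```python
import numpy as np

eta = 1.0e-3       # Pa*s, viscosity of water
gamma = 72.3e-3    # N/m, surface tension of water (assumed constant, as in the text)
v = 6.0e-3         # m/s, protrusion velocity from Fig. 2(f,g)
h = 0.98e-3        # m, droplet height
r = 0.89e-3        # m, droplet footprint radius
d_el = 0.2e-3      # m, electrode wire diameter

F_v = eta * (v / h) * np.pi * r**2   # viscous force: shear stress eta*v/h over the footprint
F_d = np.pi * d_el * gamma           # surface-tension retention on the electrode perimeter
F_CLF = 1.08e-6                      # N, contact-line friction ~ lambda*v as quoted in the text

print(f"F_v   ~ {F_v:.2e} N  (text: ~1.5e-8 N)")
print(f"F_CLF ~ {F_CLF:.2e} N")
print(f"F_d   ~ {F_d:.2e} N  (text: ~44 uN)")
# F_CLF >> F_v suggests contact-line friction dominates during elongation,
# while the much larger F_d resists detachment from the electrode.
```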
In our work, the dynamic behaviors of four aligned droplets with an unchanged separation between the two extreme droplets exposed to a high voltage were also investigated. It was, however, difficult to obtain a reproducible result for the location where droplet coalescence first started, because these droplets were quite close. Recent work also suggested that the difference in evaporation rate could result in an asymmetric shape of droplets in very close proximity 38, and this would add complexity to the droplet coalescence and discharge behaviors, especially for the three-droplet and four-droplet systems. Nevertheless, wherever droplet coalescence first started and whichever droplet had the higher temperature, the unmerged droplet would be drained into the merged large droplet, primarily because the pressure inside the smaller droplet was larger 39.
The effects of salt concentration and salt type have been demonstrated to be important in the electro-coalescence behavior between droplets [40][41][42]. The discharge inception voltages in Fig. 5 suggested that discharge could be initiated between droplets more easily with increasing salt concentration and with increasing ionic radius of the monovalent cations (Li+, Na+ and K+). As mentioned above, when the amount of ions is sufficiently large, discharge could occur near the deformed tip of the droplet due to the released charges colliding with neutral gas molecules and then acting onto the facing droplet. It could be inferred that the inception of discharge is closely related to the number of accumulated charges near the deformed tip. Apparently, the amount of charges increases with increasing salt concentration. In the case of droplets with different monovalent cations (Li+, Na+ and K+), the amount of charges at the deformed tip of the droplet correlated with the different degrees of ionization of the salts in the droplet, depending on the electronegativity difference between the cation and the anion of the salt compound. The electronegativity difference between K+ (0.82) and Cl− (3.16) is larger than those between Li+ (0.98)/Na+ (0.93) and Cl− (3.16) 42, as verified by the electric conductivities of the salt solutions in Table 1. It could be expected that more ions dissolve in such a droplet and accumulate near its deformed tip under the influence of the electric field. Consequently, the discharge inception voltage would be lowest for the KCl droplets and highest for the LiCl droplets, as shown in Fig. 5. In the case of the CaCl2 and AlCl3 droplets, the electronegativity differences between Ca2+ (1.0)/Al3+ (1.61) and Cl− (3.16) are slightly smaller than those between K+ (0.82)/Na+ (0.93) and Cl− (3.16) 43. As a result, discharge initiation for the CaCl2 and AlCl3 droplets was generally more difficult than for the NaCl and KCl droplets at the same salt concentration. Furthermore, as mentioned above, for the CaCl2 and AlCl3 droplets the temperature rises of the merged droplets were more obvious near the grounded electrode than near the energized electrode, contrary to the cases of the salt droplets with monovalent cations.
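The qualitative ordering argument above can be summarized in a few lines; the Pauling electronegativities are the values quoted in the text, and the rule "larger cation-anion electronegativity difference, stronger ionization, lower inception voltage" is the paper's qualitative reasoning rather than a quantitative model.

```python
# Pauling electronegativities quoted in the text; chlorine is 3.16
chi = {"Li": 0.98, "Na": 0.93, "K": 0.82, "Ca": 1.00, "Al": 1.61}
chi_Cl = 3.16

# Larger difference -> higher degree of ionization -> lower discharge inception voltage
diffs = {cation: chi_Cl - x for cation, x in chi.items()}
for cation, d in sorted(diffs.items(), key=lambda kv: -kv[1]):
    print(f"{cation}Cl: delta_chi = {d:.2f}")
# Expected inception-voltage ordering among monovalent salts: KCl < NaCl < LiCl
```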
For this phenomenon, recalling the previous discussion of the temperature rise in deionized water, the dissolution of metal ions from the electrode at the higher potential makes a non-negligible contribution to the higher surface conduction near the electrode at the lower potential, where a smaller temperature rise is expected. Following this approach, the observed phenomenon can be understood as follows. Cations (Li+, Na+, K+, Ca2+ and Al3+, as well as H+ and Cu2+ dissolved from the anode) in the salt droplet would migrate from the energized droplet (anode) to the grounded droplet after a bridge was established between the two droplets, and in the same way anions (Cl− and OH−) would migrate towards the region adjacent to the energized electrode. Near the anode (energized electrode), chloride ions would donate electrons to the anode to form chlorine gas, i.e., 2Cl−(aq) → Cl2(g) + 2e−. Meanwhile, hydroxyl ions would also donate electrons, forming oxygen and water (4OH− → O2 + 2H2O + 4e−). In contrast, the electrochemical reactions were slightly different for the cations near the cathode (grounded electrode). In the cases of the LiCl, NaCl and KCl droplets, hydrogen ions from water picked up electrons to form hydrogen gas, i.e., 2H+(aq) + 2e− → H2(g). At the same time, the copper ions (Cu2+) migrating from the anode would also pick up electrons to form metallic copper deposited on the cathode. Under these circumstances, the part near the energized electrode (anode) would be more resistive than that near the grounded electrode in the merged LiCl, NaCl and KCl droplets, due to the depletion of anions there and the accumulation of residual cations (e.g., Li+, Na+ and K+) near the grounded electrode. In the cases of the CaCl2 and AlCl3 droplets, the generation of Ca(OH)2 and Al(OH)3 (or AlO2−) would be possible, and the deposition of these chemical substances could increase the resistance of the merged droplet near the grounded electrode (cathode). Consequently, the temperature rises could be asymmetrically higher near the grounded electrode, in contrast to the cases of the LiCl, NaCl and KCl droplets, as evidenced by the results in Fig. 7. In addition, the formation of morphologically stable liquid bridges between water droplets when the inserted droplet was a conductive salt droplet is attributed to the liquid bridge and the salt droplet being less resistive, giving rise to a small temperature rise and consequently to the morphologically stable features.
Conclusions
In the present work, infrared thermography, high-speed photography and pulse current measurement were used to investigate the droplet coalescence and discharge characteristics of sessile multidroplets and salt droplets under high DC tangential electric fields. For both the two-droplet and the three-droplet configurations, due to the anodic dissolution of metal ions from the electrode, the discharge path direction was from the high-potential droplet to the low-potential droplet in the initial stage after discharge initiation, and the temperature rise of the low-potential droplet was more obvious. For the three-droplet configuration, the initial droplet coalescence between the inserted droplet and the extreme droplets was largely affected by the discharge path direction, and the small droplet would be drained into the merged large droplet due to the pressure difference in the droplets.
The investigations of the effect of chloride salts on the droplet coalescence and discharge behaviors suggested that the discharge inception voltages between two salt droplets closely correlate with the electronegativity difference between the cation and the anion of the salt. The electrochemical reactions near the electrodes were important for the morphological and temperature variations in the salt droplets during the droplet coalescence and discharge processes.
Methods
Experimental setup. Most experiments were conducted with water droplets on flat horizontal insulator surfaces; the test insulator materials were hydrophobic silicone rubber (SR) and polytetrafluoroethylene (PTFE), typical insulating materials in high-power applications. As shown in Fig. 1, metal electrodes consisting of copper wire (diameter d_electrode = 0.2 mm) were dipped into the center point of the droplets, and the gap distance between the center points was defined as the droplet separation. DC voltages between the electrodes were applied and increased gradually until the inception of droplet deformation or of discharge between the droplets. The potential drop across a resistance (500 Ω) was fed directly to a digital storage oscilloscope to measure the discharge current.
Preparation of droplets. Aqueous droplets of the same volume were placed on top of the solid surface with a micropipette. Droplets of deionized water and of various chloride salt solutions (sodium chloride NaCl, lithium chloride LiCl, potassium chloride KCl, calcium chloride CaCl2 and aluminium chloride AlCl3) were used. The electric conductivities of the salt solutions used in the present work are summarized in Table 1. The Young's angles (Θ) of the deionized water and salt solution droplets on the SR and PTFE surfaces were around 100°.
Visualization and temperature measurement of droplets. The side views of the temperature variation and the movement of the droplets were obtained by using an infrared thermographic camera with a microscopic lens (InfraTec GmbH, resolution: 10 μm/pixel, speed: 100 frames/s). Meanwhile, the top views of the droplet behavior were also obtained with an optical microscope and recorded with a high-speed charge-coupled device (CCD) camera at 400 frames/s. Thermodynamic equilibrium of the droplets between the three (solid/liquid/gas) phases was reached prior to applying a voltage, when the alterations of the droplets' shape and temperature became negligible. For calibration purposes, water at different temperatures was measured with the infrared thermocamera and a thermal meter, and a relationship between the real temperature T_r (°C) of the water and the nominal value T_m (°C) measured with the infrared thermocamera was fitted, i.e., T_r = 1.267 T_m − 4. However, the nominal value is mostly quoted for convenience in the preceding parts.
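The fitted infrared calibration can be wrapped in a small helper; the linear coefficients below reconstruct the garbled formula in the source (T_r = 1.267 T_m − 4), a reconstruction supported by the pair of values quoted in the Discussion (nominal 36 °C, calibrated 41.6 °C).

```python
def real_temperature(t_nominal_c: float) -> float:
    """Convert the nominal infrared reading T_m (C) to the calibrated real
    temperature T_r (C), using the linear fit quoted in the text: T_r = 1.267*T_m - 4."""
    return 1.267 * t_nominal_c - 4.0

# Example from the Discussion: a nominal 36 C at the discharge root
print(real_temperature(36.0))  # ~41.6 C, matching the value quoted in the text
```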
Pairing Symmetry and Multiple Energy Gaps in Multi-Orbital Iron-Pnictide Superconductors
Since the new high-T_c superconducting family based on iron pnictides was discovered Kamihara et al. (2008); Chen et al. (2008a;b), and the critical temperature was lifted to 56 K under high pressure Wu et al. (2009), considerably above the McMillan limit McMillan (1968), the superconducting pairing mechanism and properties have attracted great interest both experimentally and theoretically. These newly found superconductors also offer potential applications in two respects: on the one hand, the simple composition and abundant raw materials of the FeAs-based compounds make large-scale applications plausible; on the other hand, the extremely large upper critical fields H_c2 Wen et al. (2008) of FeAs superconductors imply realistic applications in the near future. To find FeAs superconductors with higher T_c in further experiments and to improve their critical current density, it is essentially important to understand theoretically the various normal and superconducting properties of iron pnictides, especially the superconducting pairing symmetry and its microscopic pairing mechanism. Once the pairing symmetry is known, many superconducting properties can be qualitatively understood. Proposed pairing symmetries range from a mixture of S_{x²y²}-wave and d_{x²−y²}-wave Seo et al. (2008) to spin-triplet p-wave Dai et al. (2008); Lee et al. (2008). These suggestions on the pairing symmetry raised critical and hot debates in the literature. A few authors focused on the microscopic origin of the superconducting pairing according to the antiferromagnetic spin fluctuation and the Fermi surface nesting topology through the characteristic wavevector Q = (π, 0) of the antiferromagnetic spin fluctuations. They proposed the s+−-wave pairing symmetry Mazin et al. (2008); Kuroki et al. (2008), i.e., the phase of the superconducting order parameter on the inner Fermi surface around the Γ point is antiphase to that on the Fermi surface around the M point. The s+−-wave symmetry of the superconducting order parameters seems to receive sufficient support in theory and experiment Mazin et al. (2008), and is consistent with the nesting picture of electron-type and hole-type Fermi surfaces in FeAs-based superconductors. However, the most recently found K_xFe_{2−y}Se_2 compounds clearly rule out the presence of a hole-type Fermi surface around the Γ point Xiang et al. (2011), suggesting that an alternative pairing symmetry is possible. Actually, from the research history of the high-T_c cuprates, it is known that a pairing mechanism based on Fermi surface nesting is rather delicate, since any finite electron-electron interaction, which usually occurs in high-T_c cuprates and iron pnictides, will destroy the perfect nesting of the Fermi surfaces. These disagreements and debates in the experimental data and theoretical results on the superconducting pairing symmetry of iron pnictides call for more effort to unveil the mysterious nature of the superconducting iron pnictides.
On the other hand, the effect of electron correlation in iron pnictides should be taken into account, since the bad metallic behavior and the existence of antiferromagnetic spin moments suggest that the iron pnictides are close to a metal-insulator transition Haule et al. (2008). In this Chapter, starting with the minimal two-orbital t-t′-J-J′ model Manousakis et al. (2008); Raghu et al. (2008), we develop a mean-field theory of multi-orbital superconductors for the weak, intermediate, and strong correlation regimes, respectively.
Taking a concrete t-t′-J-J′ model which reproduces the Fermi surface topology and the band structure of LaFeAsO, we obtain the superconducting phase diagram, the quasiparticle spectra in the normal state and the superconducting phase, and the ARPES manifestation of the superconducting energy gaps. Our theory is applicable not only to FeAs superconductors, but also to ruthenate, heavy-fermion, and other spin-fluctuation-mediated multi-orbital superconductors. For realistic iron pnictides, we show that the d_{x²−ηy²}+S_{x²y²}-wave pairing symmetry is stable in the reasonable parameter region; the two superconducting gaps, their weak anisotropy and their nodeless character qualitatively agree with the observations in ARPES experiments. However, a quantitative comparison between theory and experiment shows that a more elaborate theoretical model is necessary.
The rest of this Chapter is arranged as follows: in Sec. II we present the theory and methods for multi-orbital superconductivity; in Secs. III and IV we show the numerical results on the pairing symmetry of multi-orbital iron-pnictide superconductors and the orbital dependence of the superconducting energy gaps; Sec. V is devoted to the comparison between our theory and experimental observations; and finally we make concluding remarks in Sec. VI.
Theory and methods of multi-orbital superconductivity
For the iron-pnictide compounds, the electron-phonon coupling seems to be irrelevant to the origin of the superconducting pairing Boeri et al. (2008), so the antiferromagnetic spin fluctuation is naturally thought to provide the pairing glue of the superconducting electron pairs, given the antiferromagnetic ground state of undoped FeAs compounds. Considering the multi-band and electron-correlation characters, a minimal model describing the low-energy physics of the FeAs-based superconductors is the two-orbital t-J model and its extensions. Based on band structure results and theoretical analysis, the twofold-degenerate d_xz/d_yz orbits are essential for the iron-pnictide superconductors. We first describe such physical processes with the two-orbital Hubbard model, where c†_{iασ} creates a d_xz (α = 1) or d_yz (α = 2) electron with orbit α and spin σ at site R_i; t and t′ denote the hopping integrals between nearest-neighbor (NN) and next-nearest-neighbor (NNN) sites, respectively, and U, U′ and J_H are the intra-orbital and inter-orbital Coulomb interactions and the Hund's coupling.
In the strongly correlated regime, it is well known that the t-t′-J-J′ model can be derived from Eq. (1); meanwhile, even in the weak correlation regime in the atomic limit Manousakis et al. (2008), the two-orbital Hubbard model in Eq. (1) can be reduced to the t-t′-J-J′ model, although not strictly. Thus, we can describe the low-energy processes in iron pnictides with the two-orbital t-t′-J-J′ model on a quasi-two-dimensional square lattice. This Hamiltonian consists of the tight-binding kinetic energy H_{t−t′} and the interaction part H_{J−J′}. In the kinetic energy term, the intra-orbital components of the NN hopping integrals t^{αβ}_{ij} are t^{11}_x = t_1 = −1 and t^{22}_x = t_2 = 1.3, and the components of the NNN hopping integrals t′^{αβ}_{ij} are t_3 = t_4 = −0.85 Raghu et al. (2008).
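A minimal sketch of the kinetic part follows. The explicit dispersions ε_x, ε_y and ε_xy below follow the standard two-orbital model of Raghu et al. (2008) and are an assumption of this rewrite, with t_1 = −1, t_2 = 1.3 and t_3 = t_4 = −0.85 as quoted above.

```python
import numpy as np

t1, t2, t3, t4 = -1.0, 1.3, -0.85, -0.85  # hoppings in units of |t1|

def h_kinetic(kx, ky):
    """2x2 Bloch Hamiltonian of the two-orbital (dxz, dyz) model, Raghu et al. (2008) form."""
    ex = -2*t1*np.cos(kx) - 2*t2*np.cos(ky) - 4*t3*np.cos(kx)*np.cos(ky)  # dxz
    ey = -2*t2*np.cos(kx) - 2*t1*np.cos(ky) - 4*t3*np.cos(kx)*np.cos(ky)  # dyz
    exy = -4*t4*np.sin(kx)*np.sin(ky)                                     # inter-orbital
    return np.array([[ex, exy], [exy, ey]])

# Band energies on a k-grid; the chemical potential would then be fixed by the filling
ks = np.linspace(-np.pi, np.pi, 201)
bands = np.array([[np.linalg.eigvalsh(h_kinetic(kx, ky)) for ky in ks] for kx in ks])
print("bandwidth:", bands.min(), "to", bands.max())
```

Note that swapping kx and ky exchanges the two orbital dispersions, which is exactly the (x, y, z) to (y, x, z) symmetry invoked later to relate Δ_1(k) and Δ_2(k).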
Throughout this paper, all energies are measured in units of |t_1|. The carrier concentration is equal to 0.18, a typical doping concentration in the iron-based superconductors Dubroka et al. (2008).
The interaction term H_{J−J′} contains NN and NNN antiferromagnetic spin couplings. Here J and J′ are the NN and NNN spin coupling strengths, respectively, S_{iα} is the spin operator of the electron in the α-orbit at R_i, n_{iα} is the particle number operator, and α, β (= 1, 2) are orbital indices.
To explore the essence of the iron-pnictide superconductors and other multi-orbital superconductors, we discuss the t-t′-J-J′ model in three different correlation regimes:
Weak correlation regime. When the kinetic energy of the d_xz- and d_yz-electrons is much larger than the Coulomb interaction, we adopt the conventional mean-field decoupling approach to study the superconducting pairing symmetry and its orbital dependence. This ansatz is applicable to many FeAs-based and other multi-orbital superconductors with metallic ground states.
Notice that the d_xz and d_yz orbits are spatially anisotropic; in other words, for each orbital the intra-orbital hopping integral along the x-direction is not equal to that along the y-direction, as one can see from the hopping integrals above. Due to this asymmetry between different directions in different orbits, the amplitude of the superconducting gap of the local pairing along the x-direction may not equal that along the y-direction in each orbit. Thus, a single-orbital d-wave or s-wave superconducting order parameter, whose energy gap has four-fold rotational symmetry in the xy plane, is not suitable for describing the pairing symmetry of the intra-orbital superconducting order parameters in this multi-orbital system. Considering all the possible kinetic correlations and superconducting pairings on the NN and NNN sites along different directions, we introduce the following order parameters. Here P^α_{x/y} and P^{1/2}_{xy} (P^α_3 and P^{3/4}_{xy}) are the kinetic averages of the NN (NNN) intra-orbital and inter-orbital hopping terms; these terms can be decoupled within the framework of the mean-field approximation Seo et al. (2008). Δ^{1α}_{x/y} (Δ^{2α}_{x±y}) is the mean-field amplitude of the local NN (NNN) pairing order parameter in the α-orbit. The inter-orbital pairing parameter ⟨c†_{i1↑}c†_{j2↓}⟩ is very small and hence neglected Seo et al. (2008). With these parameters, one can decouple the interaction terms in Eq. (3) within the framework of the self-consistent mean-field approximation and obtain the mean-field Hamiltonian, in which const collects all the constant energy terms from the mean-field decoupling and the intra-orbital and inter-orbital kinetic energies are correspondingly modified.
The superconducting order parameter Δ_α(k) of each orbital channel in momentum space is then determined by the cos k_x, cos k_y and cos(k_x ± k_y) form factors of the local pairing amplitudes. To characterize the complicated superconducting order parameters in different parameter regions, we define the S_{x²+ηy²}-wave or d_{x²−ηy²}-wave as the pairing symmetry when Δ_α(k) ∝ cos k_x ± η cos k_y; it reduces to the conventional S_{x²+y²}-wave or d_{x²−y²}-wave symmetry at η = 1. We also define the S_{ηx²y²}-wave or d_{ηxy}-wave as the superconducting pairing symmetry when Δ_α(k) ∝ cos(k_x + k_y) ± η cos(k_x − k_y); in this situation, it reduces to the familiar S_{x²y²}-wave or d_{xy}-wave symmetry at η = 1.
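The symmetry labels just defined can be made concrete with a short sketch; the amplitudes are illustrative placeholders, and only the functional forms cos k_x ± η cos k_y and cos(k_x + k_y) ± η cos(k_x − k_y) come from the definitions above.

```python
import numpy as np

def gap_S_x2_eta_y2(kx, ky, eta, d0=1.0):
    """S_{x^2+eta*y^2}-wave: cos(kx) + eta*cos(ky); eta = 1 gives S_{x^2+y^2}."""
    return d0 * (np.cos(kx) + eta * np.cos(ky))

def gap_d_x2_eta_y2(kx, ky, eta, d0=1.0):
    """d_{x^2-eta*y^2}-wave: cos(kx) - eta*cos(ky); eta = 1 gives d_{x^2-y^2}."""
    return d0 * (np.cos(kx) - eta * np.cos(ky))

def gap_S_x2y2(kx, ky, d0=1.0):
    """S_{x^2y^2}-wave: cos(kx+ky) + cos(kx-ky) = 2*cos(kx)*cos(ky)."""
    return d0 * (np.cos(kx + ky) + np.cos(kx - ky))

# The mixed d_{x^2-eta*y^2} + S_{x^2y^2} gap discussed below:
def gap_mixed(kx, ky, eta, dd=1.0, ds=1.0):
    return gap_d_x2_eta_y2(kx, ky, eta, dd) + gap_S_x2y2(kx, ky, ds)

# At eta != 1 the d-component no longer vanishes on the |kx| = |ky| diagonals,
# which is how the mixed gap can stay nodeless on the Fermi sheets:
print(gap_mixed(np.pi/4, np.pi/4, eta=0.8))
```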
Diagonalizing the matrix A(k) by a unitary transformation U(k), U(k)†A(k)U(k), and minimizing the free energy of the system with respect to the parameters in Eqs. (5)-(7), one obtains the self-consistent equations. With these self-consistent equations, we can obtain not only the ground-state phase diagram, but also the temperature dependence of the Fermi surfaces in the normal state and the quasiparticle spectra in the normal and superconducting states. In fact, the intra-orbital hopping integral of the d_xz orbit maps onto that of the d_yz orbit under the coordinate transformation (x, y, z) → (y, x, z). Due to this symmetry, the superconducting order parameter Δ_2(k) can be obtained from Δ_1(k) under the coordinate transformation. Therefore, we mainly focus on the properties of the superconducting order parameter Δ_1(k) of the first orbit, d_xz. Nevertheless, the global superconducting pairing order parameter of the two-orbital t-t′-J-J′ model should be rotationally symmetric in the xy-plane, as one can see from the Hamiltonian Eq. (2).
Within the present scenario, we can obtain not only the ground-state phase diagram, but also the quasiparticle spectra in the normal and superconducting states; the temperature dependence of the Fermi surface in the normal state and that of the spin-lattice relaxation rate in the superconducting state can also be obtained. Among these quantities, the spin-lattice relaxation rate in the NMR experiment is expressed as in Matano (2008). Provided that 1/T_1N in the normal state satisfies the Korringa law (1/T_1N ∝ k_B T), the spin-lattice relaxation rate in the superconducting state, 1/T_1s, follows as in Xiang (2007).
Intermediate correlation regime. When the kinetic energy of the conduction bands becomes small and comparable with the Coulomb interaction, we need to consider the electron correlation effect, as one sees in FeTe_{1−x}Se_x and other superconductors. We utilize the Kotliar-Ruckenstein slave-boson approach for some FeAs-based and ruthenate superconductors with intermediate magnetic moments. To reflect the multi-orbital character of iron pnictides, we extend the single-orbital Kotliar-Ruckenstein slave-boson approach Kotliar et al. (1988) to the two-orbital Hubbard model with its various occupation configurations. In the multi-orbital Hubbard model, a set of auxiliary boson field operators representing the various electron occupations is introduced, e, p, d, b, t, q, denoting the possibilities of empty, single, double, triple and quadruple occupations. With these auxiliary boson fields, an original fermion operator can be expressed through the new slaved fermion operator f†_{iασ} and an auxiliary particle-number operator Q_{iασ} Kotliar et al. (1988). Projecting the original fermion operators onto these boson and fermion field operators, one can not only obtain an effective Hamiltonian, but also get the ground-state energy in the saddle-point approximation with the normalization condition and the fermion number constraints Kotliar et al. (1988).
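The structure of the self-consistent loop (diagonalize, recompute the pairing averages, iterate to convergence) can be illustrated with a schematic single-band analogue; this is not the full multi-orbital A(k) of the text, and the coupling J, chemical potential and grid size below are placeholders.

```python
import numpy as np

# Schematic BCS-like self-consistency for a NN gap Delta_x*cos(kx) + Delta_y*cos(ky);
# it stands in for the full multi-orbital A(k) diagonalization described in the text.
J, t, mu, beta = 2.0, 1.0, -0.5, 80.0
N = 64
k = np.linspace(-np.pi, np.pi, N, endpoint=False)
KX, KY = np.meshgrid(k, k)
xi = -2*t*(np.cos(KX) + np.cos(KY)) - mu            # normal-state dispersion

dx, dy = 0.3, -0.3                                  # initial guesses (d-wave channel)
for _ in range(500):
    gap = dx*np.cos(KX) + dy*np.cos(KY)
    E = np.sqrt(xi**2 + gap**2) + 1e-12             # Bogoliubov quasiparticle energy
    pair = gap/(2*E) * np.tanh(beta*E/2)            # <c_{-k,dn} c_{k,up}>
    dx_new = J * np.mean(np.cos(KX) * pair)         # one gap equation per form factor
    dy_new = J * np.mean(np.cos(KY) * pair)
    if abs(dx_new - dx) + abs(dy_new - dy) < 1e-10:
        break
    dx, dy = dx_new, dy_new

print(f"converged: Delta_x = {dx:.4f}, Delta_y = {dy:.4f}")
```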
Here we employ a generalized Lagrange multiplier method to enforce these constraint conditions, so that the inter-orbital hoppings and the crystal field splitting can be treated on the same footing. The fermion occupation number is constrained with the penalty function method, and the normalization condition is enforced by requiring that the auxiliary boson occupations on each site sum to unity. With these projections onto the boson states, one can easily obtain an effective t-t′-J-J′ model subject to the normalization and fermion number constraints. Following steps similar to those in the weak correlation regime to decouple the spin exchange terms, one obtains self-consistent equations similar to Eq. (8) and thus the various superconducting ground-state properties of intermediately correlated iron pnictides, such as FeSe/FeTe, etc.
Strong correlation regime. Once the Coulomb interaction is so large that double occupation is excluded on each site, we use the Barnes-Coleman slave-boson approach to discuss the pairing symmetry and the orbital-dependent superconducting energy gaps; this is applicable to some FeAs-based superconductors and to heavy-fermion superconductors with significantly large magnetic moments in the parent phases.
Within the slave-boson representation Barnes (1976); Coleman (1984), Eq. (4) is rewritten in terms of the projected fermion operators f†_{imσ} and f_{imσ}, as well as the slave-boson operators at each site, which rule out double and multiple fermion occupancies. The constrained Hilbert space (S) of each site i includes the singly occupied spin-up and spin-down states in orbit 1 and those in orbit 2, together with the vacancy state. The present constrained spin-orbital formulation resembles the four-fold degenerate state of pseudo-angular momentum j = 3/2 proposed by Barnes Barnes (1976) and Coleman Coleman (1984), if we define a boson operator that creates an empty occupation state at the ith site, while the fermion operator f†_{imσ} (f_{imσ}) creates (annihilates) a slaved electron at site i with orbit m and spin σ (= ↑, ↓). After projecting the original fermion representation onto the present boson representation, one obtains an effective t-t′-J-J′ model subject to these constraints. Following steps similar to those in Sec. 2.1 to decouple the spin exchange terms, one can readily solve for the superconducting ground-state properties of the strongly correlated K_xFe_{2−y}Se_2 compound.
Pairing symmetry of multi-orbital iron-pnictide superconductors
We present the pairing symmetry of the two-orbital t-t′-J-J′ model for weakly correlated FeAs-based superconductors. The main numerical results in the weak correlation situation are addressed as follows; the numerical results for the intermediate and strong correlation situations can be obtained in the same manner.
Stability of unusual superconducting pairing symmetry. First of all, we determine the stable ground state of the present two-orbital t-t′-J-J′ model with electron filling n = 1 − δ on a square lattice by comparing the ground-state energies of various pairing-symmetric superconducting states: the isotropic s-wave, the anisotropic s_{x²−y²}-wave, s_{x²y²}-wave, d_{x²−y²}-wave, and d_{x²y²}-wave states, etc. By minimizing the ground-state energies of the various candidates and finding the most stable state, we obtain phase diagrams of the system for various parameters, such as the hopping integrals t_{ab}, the doping concentration n, the exchange parameters J and J′, and so on. Our numerical results show that in the superconducting phase of iron pnictides, the energy of the weakly anisotropic and nodeless d_{x²−y²}+s_{x²y²}-wave-like superconducting state is lower than those of the s-wave and d-wave states in most of the situations we investigated.
Phase diagram of superconducting pairing symmetry. In this subsection, we first obtain the phase diagrams, mark the pairing symmetry of each stable superconducting phase in the J-J′ and t-J planes, and locate the most probable position of the pairing symmetry of iron-pnictide superconductors.
The J′-J phase diagram of the t-t′-J-J′ model at carrier concentration x = 0.18 is shown in Fig. 1a. Different from Seo et al.'s phase diagram Seo et al. (2008), we obtain five stable phases in the present model. The first is a normal phase in the small-J and small-J′ region, denoted by N in Fig. 1a. Obviously, when the superexchange couplings J and J′ are too small to provide the pairing glue, the kinetic energy is dominant and the electrons stay in the normal state, analogous to the single-orbital t-J model Kotliar et al. (1988). Among the four superconducting phases mediated by the spin fluctuations, a large NN spin coupling J and a small NNN spin coupling J′, or J ≫ J′, favor the S_{x²+ηy²} (here and below η_1 = η) superconducting phase with the gap Δ_1(k) ∝ cos(k_x) + η cos(k_y), where the pairing symmetry is a combination of the S_{x²+y²}-wave and d_{x²−y²}-wave components, as seen in the pink region of Fig. 1a. The S_{x²+ηy²} symmetry arises from the major contribution of the NN spin coupling J term; the NNN spin coupling contributes very little to Δ_1(k) because J ≫ J′. On the other hand, a small NN spin coupling J and a large NNN spin coupling J′ favor the S_{x²y²} superconducting phase with the symmetry Δ_1(k) ∝ cos(k_x + k_y) + cos(k_x − k_y), as seen in the blue region of Fig. 1a, which is mainly attributed to the NNN spin coupling. In this situation, Δ_1(k) is almost isotropic in the xy-plane due to the isotropy of the dominant NNN hopping integrals in the xy-plane. The superconducting order parameter becomes complicated when J and J′ compete with each other. As seen in Fig. 1a, the pairing symmetry of the superconducting phase in the green region of Fig. 1a is a combination of the S_{x²+ηy²} and S_{x²y²} components, and the symmetry of the superconducting phase in the yellow region of Fig. 1a is a combination of the d_{x²−ηy²} and S_{x²y²} components.
It is interesting to ask in which region the realistic parameters of the iron pnictides fall.
Fig. 1b shows the J′ dependence of the ground-state energy difference ΔE = E_d − E_η between the two superconducting phases at different J; here E_d and E_η are the energies of the δ_s S_{x²y²} ± δ_d d_{x²−y²} symmetric phase and of the d_{x²−ηy²}+S_{x²y²} symmetric phase of Fig. 1a, respectively. It is clear that in a wide J-J′ range, the d_{x²−ηy²}+S_{x²y²} phase is always more stable than the δ_s S_{x²y²} ± δ_d d_{x²−y²} phase. Thus the d_{x²−ηy²}+S_{x²y²}-wave is most likely the superconducting pairing symmetry in iron-pnictide superconductors.
T-dependence of Fermi surface and superconducting energy gaps. In the present situation with a weakly broken orbital symmetry, we find that the two superconducting energy gaps synchronously approach zero as T is raised to T_c. To discuss concretely the properties of the superconducting state and the normal state, and to compare the theory with the ARPES experimental results, in what follows we focus on two sets of typical superexchange coupling parameters: Case I, J = 0.3 and J′ = 0.7, i.e., the NN spin coupling is weaker than the NNN coupling; and Case II, J = 0.7 and J′ = 0.3, i.e., the NN spin coupling is stronger than the NNN coupling. In both situations the parameters fall in the yellow region of Fig. 1a, so the superconducting pairing symmetries are d_{x²−ηy²}+S_{x²y²}-wave. We present the temperature evolution of the Fermi surfaces in the normal state in Fig. 2 for Case I with J = 0.3 and J′ = 0.7; the Fermi surface topology for Case II with J = 0.7 and J′ = 0.3 is almost identical to Fig. 2 and hence is not plotted. From Fig. 2, one sees that in the large Brillouin zone (BZ) associated with the present t-t′-J-J′ model with one Fe atom per unit cell, there exist two hole-like Fermi sheets (α_1 and α_2) around the Γ point and two electron-like Fermi sheets (β_1 and β_2) around the M point. This is in agreement with the ARPES experiment Ding et al. (2008) and consistent with the first-principles electronic structure calculations Mazin et al. (2008); Boeri et al. (2008); Cao et al. (2008); Singh et al. (2008); Lebègue et al. (2007); Ma et al. (2008). Interestingly, the hole-like Fermi sheets expand a little with increasing temperature; in contrast, the electron-like Fermi sheets shrink considerably. This indicates that the electron-like Fermi sheet may play a more important role in the low-energy processes at finite temperatures. This behavior arises because the electronic thermal excitations increase as the temperature is raised, causing the chemical potential to decrease with increasing T; thus the electron-like Fermi surface shrinks and the hole-like Fermi surface grows with increasing T.
Fig. 3 shows the temperature dependence of the superconducting energy gaps on the two hole-like Fermi sheets along the θ = 0° direction in the polar coordinate system for the two sets of parameters. With increasing temperature, the two energy gaps decrease monotonically and vanish simultaneously, as observed in the ARPES experiments Ding et al. (2008). Obviously, the superconducting-state to normal-state transition is a second-order phase transition. For Case I, the magnitude of the energy gap on the small Fermi surface (α_1) is larger than that on the large Fermi surface (α_2) around the Γ point, in agreement with the ARPES results Ding et al. (2008); Zhao et al. (2008). In contrast, for Case II, the magnitude of the gap on the small Fermi surface (α_1) is smaller than that on the large Fermi surface (α_2) around the Γ point, which disagrees with experiment Ding et al. (2008); Zhao et al. (2008). This indicates that the first set of parameters, Case I, is more suitable for describing the FeAs superconductors.
From the present theoretical results in Fig. 3, we find that in Case I the ratios of the energy gaps to the transition temperature are 2Δ_1/k_BT_c = 3.6 for the large gap and 2Δ_2/k_BT_c = 2.9 for the small gap, respectively. The ratio of the large energy gap around the small Fermi sheet to the small one around the large Fermi sheet gives Δ_1/Δ_2 = 1.25. These theoretical results, however, significantly deviate from the ARPES experimental data Ding et al. (2008): the ratios of the two gaps with respect to T_c, 2Δ_α/k_BT_c, strongly disagree with Ref. Ding et al. (2008). These facts demonstrate that there exist some essential shortcomings in the present t-t′-J-J′ model or in the self-consistent field method. One also notices that for Case II the decline of the two superconducting energy gaps with increasing temperature is not smooth, which comes from the fact that the different local pairing order parameters, Δ^{1α}_{x/y} and Δ^{2α}_{x±y}, interplay with each other, reflecting the anisotropic pairing symmetry of Case II.
Angle dependence of superconducting energy gaps. In this section we present the dependence of each superconducting energy gap with d+s-wave pairing on the orientational angle, and show that the anisotropy of the superconducting energy gaps crucially depends on the inter-orbital hopping and on the ratio J′/J. The ARPES experiment provides direct information about the quasiparticle spectra in the normal state and the pairing symmetry of the superconducting energy gaps in the superconducting state. Here we present our theoretical results for the angle-resolved energy gaps of the two orbits and compare them with the experimental observations. Fig. 4 shows the superconducting energy gap characteristics of the t-t′-J-J′ model for the two sets of parameters, Cases I and II.
Fig. 4. The angle dependence of the superconducting energy gaps near the small hole Fermi surface (α_1) and the large hole Fermi surface (α_2) around the Γ point in the polar coordinate system. The theoretical parameters are the same as those in Fig. 3: Case I, J = 0.3, J′ = 0.7 (wine and green circles); Case II, J = 0.7, J′ = 0.3 (red and blue circles).
In both cases, two distinct gaps open on the hole-like Fermi sheets α_1 and α_2, as seen in Fig. 2. The presence of two different energy gaps demonstrates the multi-gap superconductor nature of the t-t′-J-J′ model. Our results show that for Case I the superconducting energy gap structure exhibits a nearly isotropic symmetry with invisible anisotropy, as seen in Fig. 4: a large energy gap opens on the small hole Fermi sheet (α_1), and a small energy gap opens on the large hole Fermi sheet (α_2). For Case II, the angular dependence of the energy gaps is visible, exhibiting weak spatial anisotropy; the oscillation amplitude is about 16%. However, the amplitudes of the superconducting energy gaps on the different Fermi surfaces α_1 and α_2 are opposite to those in Case I, i.e., a small energy gap opens on the small Fermi surface sheet (α_1) and a large energy gap opens on the large hole Fermi surface sheet (α_2). One finds that in Case I the anisotropy of the superconducting energy gaps is very weak, consistent with Zhao et al. (2008), and the gaps open without nodes, so the system exhibits weakly anisotropic and nodeless s-wave-like energy gaps on the Fermi surface sheets.
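A short sketch of the angular dependence on the hole sheets around Γ follows; the circular sheet radii, the gap amplitudes and η are illustrative placeholders, while the mixed d_{x²−ηy²}+S_{x²y²} form is the one defined earlier in this chapter. With such parameters the gap stays nodeless and oscillates weakly with the polar angle θ, qualitatively as in Fig. 4.

```python
import numpy as np

def gap_on_sheet(theta, kf, eta=0.8, dd=0.05, ds=0.4):
    """|Delta| along a circular hole Fermi sheet of radius kf around the Gamma point,
    for the mixed d_{x^2-eta*y^2} + S_{x^2y^2} gap."""
    kx, ky = kf*np.cos(theta), kf*np.sin(theta)
    return np.abs(dd*(np.cos(kx) - eta*np.cos(ky)) + ds*(np.cos(kx+ky) + np.cos(kx-ky)))

theta = np.linspace(0, 2*np.pi, 361)
for kf, label in [(0.30*np.pi, "alpha1 (small sheet)"), (0.35*np.pi, "alpha2 (large sheet)")]:
    g = gap_on_sheet(theta, kf)
    osc = (g.max() - g.min()) / g.mean() * 100
    print(f"{label}: mean gap {g.mean():.3f}, oscillation ~{osc:.0f}%, nodeless: {g.min() > 0}")
```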
Spin-lattice relaxation rate in NMR. In this subsection one can also obtain the theoretical spin-lattice relaxation data at different temperatures, and especially the temperature dependence of the Knight shift, in iron-pnictide superconductors. We attribute the unusual T-dependence of the Knight shift to the multi-gap character. While ARPES experiments observed a nodeless gap function in the superconducting phase of ReFeAsO_{1−x}F_x and Ba_{1−x}K_xFe_2As_2 compounds, line nodes in the superconducting energy gap were also suggested by the NMR experiment Matano (2008). Two features in the NMR experiment supported the line nodes: the lack of a coherence peak, and the T³ behavior of the nuclear spin-lattice relaxation rate 1/T_1. Using the gap function obtained in this paper, we calculate the spin-lattice relaxation rate 1/T_1s, and the numerical result is shown in Fig. 5; we also plot the T³ law (the red line) for comparison. It is found that over a wide temperature range, the spin-lattice relaxation rate in the present model can be fitted by the T³ law, in agreement with the observation of the NMR experiments Matano (2008).
A small coherence peak appears around the critical transition temperature, as clearly seen in the inset of Fig. 5. Experimentally, such a small coherence peak may easily be suppressed by the impurity effect or by the antiferromagnetic spin fluctuations, similar to the situation in cuprates. This explains the missing Hebel-Slichter coherence peak in the NMR experiments on iron-pnictide superconductors. With decreasing temperature, one finds a drop in the spin-lattice relaxation rate 1/T_1s, consistent with the observation of the NMR experiments Matano (2008). Such behavior, deviating from the T³ law, may be attributed to the multi-gap character of this system, the drop reflecting the different energy gaps in the different orbits. Surely, more meticulous studies are needed in the near future. We also notice that the extended s± energy gaps found by Parker et al. can give the same NMR relaxation rate in superconducting pnictides Parker et al. (2008). Parish et al. (2008) suggested that the deviation from the T³ law in the spin-lattice relaxation arises from the inter-band contribution.
Comparison with other theories and experimental observations. From the preceding discussions, we find that many unusual properties of the normal state and the superconducting phase of the newly discovered FeAs compounds can be qualitatively interpreted in the two-orbital t-t′-J-J′ model. It may seem strange that this intermediate-coupling theory, based upon the proximity to a Mott transition, has essentially the same pairing solutions as the Fermi-liquid analysis of Ref. Mazin et al. (2009). But it is not surprising at all, because the fermiology and the spin fluctuation wave vector (the structure of the magnetic excitations in reciprocal space) predetermine this symmetry, as is suggested by Mazin et al. (2009). There is, however, an important difference between our results and those of Chubukov et al. Chubukov et al. (2008): in their case, the pairing mechanism is due to the increase in the intra-band pair-hopping term, not necessarily to the spin fluctuations that constitute the pairing mechanism in our analysis.
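The T³ behavior quoted above can be checked numerically with the standard quasiparticle expression 1/T_1s ∝ ∫ N_s(E)² f(E)[1 − f(E)] dE; the nodal d-wave gap model, the BCS-like Δ(T) interpolation and all amplitudes below are illustrative assumptions (the anomalous coherence term is dropped, as appropriate for a sign-changing gap).

```python
import numpy as np

def dos_nodal(E, d0, nphi=400):
    """Angle-averaged quasiparticle DOS for a nodal gap Delta(phi) = d0*cos(2*phi)."""
    phi = np.linspace(0.0, 2*np.pi, nphi, endpoint=False)
    gap2 = (d0*np.cos(2*phi))**2
    ns = np.where(E**2 > gap2, E/np.sqrt(np.maximum(E**2 - gap2, 1e-12)), 0.0)
    return ns.mean()

def relaxation_rate(T, Tc=1.0, d0_max=2.0):
    """1/T1s up to a constant: integral of Ns(E)^2 * f(1-f) over E."""
    d0 = d0_max*np.tanh(1.74*np.sqrt(max(Tc/T - 1.0, 0.0)))  # BCS-like Delta(T)
    E = np.linspace(1e-4, 8.0, 1500)
    f = 1.0/(np.exp(np.minimum(E/T, 50.0)) + 1.0)
    Ns = np.array([dos_nodal(e, d0) for e in E])
    return float(np.sum(Ns**2 * f*(1.0 - f)) * (E[1] - E[0]))

Ts = np.array([0.10, 0.15, 0.20, 0.30])
rates = np.array([relaxation_rate(t) for t in Ts])
# Effective power law between successive temperatures; values near 3 mark the T^3 regime
p = np.log(rates[1:]/rates[:-1]) / np.log(Ts[1:]/Ts[:-1])
print("effective exponents:", np.round(p, 2))
```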
Also, one should keep in mind that a completely quantitative comparison between theory and experiment is still difficult, since the present two-orbital t-t′-J-J′ model only describes the topology of the Fermi surfaces of the FeAs superconductors and does not contain all the details of the Fermi surfaces and band structures of the iron-pnictide compounds. On the other hand, in the realistic materials the spin couplings J and J′ might be strongly asymmetric Yin et al. (2008), which is not taken into account in the present t-t′-J-J′ model. Hence, we expect that more elaborate tight-binding parameters and an anisotropic J-J′ coupling model will improve the present results in future studies. The present constrained mean-field approximation also needs to be further improved.
In some FeAs-based superconductors, the weakly anisotropic orbital symmetry makes it very difficult to distinguish which orbitals are involved in the formation of the superconducting state. To further uncover the orbital dependence of the superconducting energy gaps, we study the superconducting properties of a highly anisotropic two-orbital t-J model in the strong correlation regime. We study how the phase diagram evolves with the band asymmetry factor R = t_{22}/t_{11}; the numerical result in the strong correlation regime is shown in Fig. 6. Note that we consider only nearest-neighbor hopping on a square lattice. It is found that at n = 1.98 the difference between Δ_1 and Δ_2 increases as R deviates from unity. The superconducting order parameters exhibit different behaviors: Δ_2 monotonically increases and almost saturates for R < R_c ≈ 0.6, whereas Δ_1 monotonically decreases and vanishes at R_c, indicating the appearance of an orbital-dependent superconducting phase, in which the superconducting gap of one orbit exponentially approaches zero while the energy gap of the other orbit remains finite. As the doping concentration increases to n = 1.95 in Fig. 6b, the two superconducting order parameters behave similarly to Fig. 6a; finally, the transition from the two-gap superconducting (TGSC) phase to the intermediate, orbital-dependent superconducting (OSSC) phase occurs at R_c ≈ 0.7. Obviously, with the decrease of R, the bandwidth of orbit 2 considerably shrinks and the pairing coupling of the orbit-2 electrons significantly deviates from that of orbit 1. Thus, the orbital-dependent intermediate superconducting phase occurs easily when the symmetry of the orbital hopping is broken.
With the increase of the hopping-integral asymmetry, the bandwidth of orbit 2 becomes narrower and narrower, more and more orbit-2 electrons transfer to orbit 1, and hence the amplitude of the superconducting order parameter of orbit 2 gradually decreases to zero; at the same time, the superconducting order parameter of orbit 1 increases. The system enters the orbital-dependent superconducting regime, and this phase becomes more robust as R deviates from unity, as we see in Fig. 6b (here TGSC and OSSC denote the two-gap and orbital-dependent superconducting phases, respectively). As one expects, when the hopping-integral ratio R is larger than unity, the behavior of Δ_1 is interchanged with that of Δ_2. The properties of the system with R are analogous to those with 1/R in the absence of crystal-field splitting.
Remarks and conclusions

We notice a profound difference in the superconducting properties between the cuprates and the iron pnictides. Compared with the copper-based superconductors, which have a four-fold rotational symmetry, the inequivalence between the x- and y-directions of the d_xz/yz orbit in iron pnictides results in the anisotropy factor, η, and leads to a distinct pairing symmetry. In the present iron pnictide superconductors, the NNN spin coupling plays an important role in the S_x2y2 pairing symmetry. Further, the multi-orbital character also gives rise to two weakly anisotropic and nodeless energy gaps, significantly different from the single energy gap in the cuprate superconductors.

Strong next-nearest-neighbour coupling and inter-orbital hopping in iron-pnictide superconductors favor a weakly anisotropic and nodeless d+s wave symmetry. From Eq. (3), one can see that the NN interaction J favors the order parameters Δ^{1α}_{x/y} and the NNN interaction J′ favors Δ^{2α}_{x±y}. Thus, when the NN interaction J is dominant in the system, the local superconducting order parameters Δ^{1α}_{x/y} become the dominant term in Eq. (5); and when the NNN interaction J′ is considerably larger than J, the local superconducting order parameters Δ^{2α}_{x±y} become dominant in Eq. (5). In summary, our results have shown that many properties observed in iron-based superconductors can be comprehensively, if qualitatively, understood within the present model. In the reasonable physical-parameter region of LaFeAsO1-xFx, the pairing symmetry of the model is the nearly isotropic and nodeless d_{x^2-ηy^2}+S_{x^2y^2} wave, mainly originating from the Fermi-surface topology and the spin fluctuations in these systems, which is in agreement with the observations of the ARPES and NMR experiments in iron-pnictide superconductors.

Fig. 1. (a) Superconducting phase diagram of the t-t′-J-J′ model for the d_xz orbit at the carrier concentration x = 0.18. N denotes the normal state; the other four phases are superconducting with different pairing symmetries. (b) The energy difference ΔE between Seo et al.'s Seo et al. (2008) ground state and ours vs the NNN spin coupling J′ at different J (J = 1, 2, and 3, respectively).

… is almost identical to Fig. 2 and hence is not plotted. From Fig. 2, one sees that in the large Brillouin zone (BZ) associated with the present t-t′-J-J′ model with one Fe atom per unit cell, there exist two hole-like Fermi sheets (α1 and α2) around the Γ point and two electron-like Fermi sheets (β1 and β2) around the M point. This is in agreement with the ARPES experiment Ding et al. (2008) and consistent with the first-principles electronic-structure calculations Mazin et al. (2008); Boeri et al. (2008); Cao et al. (2008); Singh et al. (2008); Lebègue et al. (2007); Ma et al.

Fig. 2. The Fermi-surface topology in the large Brillouin zone at different temperatures: T = 0.15 (black), 0.2 (red), and 0.7 (green). The dashed square outlines the reduced Brillouin zone. Theoretical parameters: J = 0.3, J′ = 0.7; the other parameters are the same as those in Fig. 1.

Fig. 3. Temperature dependence of the superconducting energy gaps near the small hole Fermi surface (α1) and the large hole Fermi surface (α2) around the Γ point along the θ = 0° direction in the polar coordinate system. The doping concentration is x = 0.18. Theoretical parameters: Case I, J = 0.3, J′ = 0.7 (wine and green circles); and Case II, J = 0.7, J′ = 0.3 (red and blue circles). In Case I, the results significantly deviate from the ARPES experimental data Ding et al.
(2008). In Case II, Δ1/Δ2 = 2, in agreement with Ref. Ding et al. (2008); however, the ratios of these two gaps with respect to T_c, 2Δα/k_B T_c, also strongly disagree with Ref. Ding et al. (2008). These facts demonstrate that there exist some essential shortcomings in the present t-t′-J-J′ model or in the constrained mean-field approximation.

Fig. 5. Temperature dependence of the spin-lattice relaxation rate in the t-t′-J-J′ model. The red arrow indicates the superconducting critical temperature T_c. The red line is the T^3 law, shown for comparison. The inset shows the detail near T_c. The theoretical parameters are the same as Case II in Fig. 2.

Fig. 6. (Color online) Dependence of the superconducting order parameters Δm (m = 1, 2) on the level splitting E_Δ for (a) R = 1 and δ = 0.02, and (b) R = 0.8 and δ = 0.02, respectively. Here TGSC and OSSC denote the two-gap and orbital-dependent superconducting phases, respectively.

From the first-principles calculations, Ma et al. suggested that J ≈ J′ ≈ 0.05 eV/S² Ma et al. (2010), where S is the spin of each Fe ion. When the hopping integral |t1| ≈ 0.1–0.5 eV, such a set of parameters falls in the yellow region in Fig. 1a, implying that the FeAs superconductors should have the d_{x^2-ηy^2}+S_{x^2y^2} pairing symmetry, with the anisotropy factor η not equal to 1. Some other authors have suggested other parameters for the FeAs superconductors; for example, Seo et al. Seo et al. (2008) proposed J = 0.25 and J′ = 0.5, and Si et al. Si et al. proposed other values. We notice that Seo et al.'s J-J′ parameters also fall in the yellow region in Fig. 1a, i.e., the pairing symmetry is of the d_{x^2-ηy^2}+S_{x^2y^2} type, rather than the δ_s S_{x^2y^2} ± δ_d d_{x^2-y^2} type with η = 1 (here δ_s and δ_d are the weights of the S_{x^2y^2}-wave and the d_{x^2-y^2}-wave components, respectively). In Fig. 1b, we compare the ground-state energy difference between theirs and ours, and find that the ground-state energy of the present superconducting phase, E_η, is lower than the E_d of Seo et al.'s paper Seo et al. (2008).

In Case I, the calculated gaps deviate from Kondo et al.'s Kondo et al. (2008) and Ding et al.'s Ding et al. (2008) ARPES data. In Case II, the superconducting energy gaps, with about 16% anisotropy, are in agreement with the ARPES experiment by Kondo et al. Kondo et al. (2008). Noticing that in Case II such spatial anisotropy is still under the resolution of the ARPES experiment, it hence does not conflict with Zhao et al.'s Zhao et al. (2008) and Ding et al.'s Ding et al.
(2008) observation. It is the d_{x^2-ηy^2}+S_{x^2y^2}-wave pairing symmetry that leads to the weakly anisotropic and nodeless superconducting energy-gap structures. Although the d_{x^2-ηy^2}-wave pairing has nodes on the line cos k_x − η cos k_y = 0, and the S_{x^2y^2}-wave pairing has nodes on the lines k_x = ±π/2 or k_y = ±π/2, the mixed superconducting pairing symmetry, d_{x^2-ηy^2}+S_{x^2y^2}, diminishes these nodes on the Fermi surface, showing that, to some extent, this model is a good approximation for describing iron pnictide superconductors. Within this scenario, the mixed d_{x^2-ηy^2}+S_{x^2y^2}-wave pairing symmetry contributes to the weakly anisotropic and nodeless energy gaps. Such a pairing symmetry combines the characters of the usual d-wave and s-wave, and hence shares the properties of the usual d-wave superconductors, like the cuprates, and of s-wave superconductors, such as MgB2. Nevertheless, to compare the theoretical results quantitatively with the experimental observations, more subtle band structures than those of the t-t′-J-J′ model are required. It is of interest that for sufficiently large NNN spin coupling J′, S_{x^2y^2} is the dominating pairing state, which is the same pairing symmetry as that obtained by Chubukov et al. Chubukov et al. (2008).
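To make the nodal-structure argument concrete, the mixed gap can be written with the standard lattice harmonics for these two channels (an assumed schematic form; Δ_d and Δ_s are illustrative amplitudes, not fitted values):

```latex
% Schematic mixed gap function:
\Delta(\mathbf{k}) \;=\; \Delta_d \left( \cos k_x - \eta \cos k_y \right)
                  \;+\; \Delta_s \, \cos k_x \cos k_y .
% The d-part vanishes on the lines \cos k_x = \eta \cos k_y, and the s-part
% on k_x = \pm\pi/2 or k_y = \pm\pi/2; for \eta \neq 0 both vanish together
% only at the isolated points (\pm\pi/2, \pm\pi/2), which lie away from the
% hole sheets around \Gamma and the electron sheets around M, so the gap
% remains nodeless on the Fermi surface.
```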
Textile-reinforced mortar (TRM) versus fiber-reinforced polymers (FRP) in shear strengthening of concrete beams

This paper presents an experimental study on shear strengthening of rectangular reinforced concrete (RC) beams with advanced composite materials. Key parameters of this study include: (a) the strengthening system, namely textile-reinforced mortar (TRM) jacketing and fiber-reinforced polymer (FRP) jacketing, (b) the strengthening configuration, namely side-bonding, U-wrapping and full-wrapping, and (c) the number of strengthening layers. In total, 14 RC beams were constructed and tested under bending loading. One of the beams did not receive any strengthening and served as the control beam, eight received TRM jacketing, whereas the remaining five received FRP jacketing. It is concluded that TRM is generally less effective than FRP in increasing the shear capacity of concrete; however, the effectiveness depends on both the strengthening configuration and the number of layers. The U-wrapping strengthening configuration is much more effective than side-bonding in the case of TRM jackets, and the effectiveness of TRM jackets increases considerably with an increasing number of layers. © 2015 The Authors.

Introduction and background

Structural retrofitting of existing reinforced concrete (RC) structures is a constantly growing need due to their deterioration (ageing, environmentally induced degradation, lack of maintenance, and the need for upgrading to meet current design requirements). One of the most common structural deficiencies is the poor shear capacity of RC beams or bridge girders. The use of fiber-reinforced polymers (FRP) as externally bonded (EB) reinforcement in shear strengthening of RC members has become very popular over the last two decades. Following the studies of Triantafillou [1] and Khalifa et al. [2], a big effort was made by researchers worldwide to further investigate or even improve this technique [i.e. 3–9], with all the results showing the high effectiveness of using EB FRP in shear strengthening of RC beams. However, the FRP strengthening technique has a few drawbacks mainly associated with the use of epoxy resins, namely high cost, poor performance at high temperatures, and the inability to apply it on wet surfaces, as discussed in Ref. [10]. In an attempt to alleviate the problems arising from the use of epoxies, researchers have introduced a novel composite material, namely textile-reinforced mortar (TRM), which combines advanced fibers in the form of textiles (with an open-mesh configuration) with inorganic matrices, such as cement-based mortars. Over the last decade it has been reported in the literature that TRM is a very promising alternative to the FRP retrofitting solution. TRM has been used for the strengthening of RC members [i.e. 10–21], as well as for the seismic retrofitting of masonry-infilled RC frames [22]. Bousias et al. [23] applied TRM jackets for the seismic retrofitting of a large-scale RC 2-story building. Selected case studies of actual applications of TRM in the construction field can be found in the ACI guidelines [24]. Shear strengthening of RC beams with TRM has been investigated by few researchers [25–29]. In these studies various parameters were investigated, including the number of layers [25,27,29], the strengthening configuration [27] and the mechanical anchorage of the jackets [26,29].
A key parameter, namely the effectiveness of TRM versus FRP in shear strengthening of RC beams, has only been investigated on a limited number of specimens in Refs. [25] and [29]. In particular, in Ref. [25] it was concluded that TRM jackets are 45% less effective than their FRP counterparts, based on the results of two specimens retrofitted with closed jackets. Moreover, Tzoura and Triantafillou [29], on the basis of four specimens retrofitted with U-jackets, concluded that TRM jackets are nearly 50% less effective than their counterparts in the case of non-anchored jackets, whereas in the case of mechanically anchored jackets the TRM system is only marginally inferior to the FRP system. It is clear that the existing literature does not adequately cover the subject of comparing the two different strengthening systems (TRM versus FRP) when used in shear strengthening of concrete members. This paper presents the first systematic study on the effectiveness of TRM versus FRP jackets in shear strengthening of RC beams. The investigations address additional parameters including the number of layers and the strengthening configuration. Details are provided in the following sections.

Test specimens and investigated parameters

The main objective of this study was to compare the effectiveness of TRM and FRP jacketing in shear strengthening of RC beams. A total of fourteen rectangular half-scale RC beams (cross-section dimensions of 102 × 203 mm) were constructed and tested as simply supported in (non-symmetric) three-point bending, as shown in Fig. 1a. The total length of the beams was equal to 1677 mm, whereas the effective flexural span was equal to 1077 mm (Fig. 1b), providing adequate anchorage length to the longitudinal reinforcement. To emulate old detailing practices, the beams were designed to be deficient in shear in one of the two shear spans. To achieve this, the critical shorter shear span of 460 mm length did not include any transverse reinforcement, whereas the longer shear span included 8-mm diameter stirrups at a spacing of 75 mm (Fig. 2a). It should be noted that the effectiveness of FRP jackets is influenced by the presence and amount of stirrups [30]. However, the aim of the present study was to directly assess the contribution of TRM and FRP jackets to the shear capacity of the strengthened beams excluding such an influence. Strengthening was applied only at the critical shear span, aiming to increase its shear resistance. By design, the shear force demand needed to develop the full flexural capacity of the (unretrofitted) beams was targeted to be 3 times their shear capacity. As shown in Fig. 2b, two 16 mm-diameter and two 10 mm-diameter deformed bars were placed at the tension and compression zones of the rectangular beams, respectively. The geometrical ratio of the tensile rebars was 2.2%. The key investigated parameters of this study comprise: (a) the strengthening system (TRM or FRP), (b) the strengthening configuration, and (c) the number of layers. One beam was tested as-built without receiving strengthening and served as the control specimen (CON). The remaining 13 beams were divided into two main groups (Fig. 3a). The first group comprised 8 beams strengthened with TRM jackets, whereas the second group comprised 5 beams strengthened with FRP jackets. Three different strengthening configurations were applied to each group's specimens, namely Side-Bonded jackets (SB), U-Wrapped jackets (UW) and Fully-Wrapped jackets (FW).
For the SB and UW configurations the specimens of the first group received from 1 to 3 TRM layers, whereas the specimens of the second group received 1 and 2 FRP layers. For the FW configuration the first-group specimens received 1 and 2 TRM layers, while only one specimen of the second group received 1 FRP layer. The notation of the specimens is X_YN, where X refers to the strengthening configuration (SB, UW or FW), Y denotes the type of binding material (M for Mortar or R for Resin) and N denotes the number of layers (1, 2 or 3).

Materials and strengthening procedure

The specimens were cast in groups of four using the same concrete mix design. The compressive strength and the tensile splitting strength of the concrete were experimentally obtained on the day of testing by conducting standard tests on cylinders of 150 mm diameter and 300 mm height. The results are summarized in Table 1 (average values of 3 specimens). The 16 and 10 mm-diameter longitudinal bars had a yield stress of 547 MPa and 552 MPa, respectively (average of 3 specimens), with standard deviations of 6.24 MPa and 2.66 MPa, respectively. The corresponding values for the 8 mm-diameter bars used for stirrups were 568 MPa and 2.71 MPa, respectively. The same reinforcement was used in both strengthening systems; the only difference between the two systems was the binding material (epoxy resin in the case of FRP and cementitious mortar in the case of TRM). This reinforcement comprised a textile with equal quantities of high-strength carbon fibers in two orthogonal directions (Fig. 3b). The weight of the textile was 348 g/m², whereas its nominal thickness (based on the equivalent smeared distribution of fibers) was 0.095 mm. According to the manufacturer datasheets, the tensile strength and the modulus of elasticity of the carbon fibers were 3800 MPa and 225 GPa, respectively. For the specimens receiving mortar as the binding material, an inorganic dry binder was used, consisting of cement and polymers at a ratio of 8:1 by weight. The water-binder ratio in the mortar was 0.23:1 by weight, resulting in plastic consistency and good workability. Table 2 summarizes the strength properties of the mortar (average values of 3 specimens) obtained experimentally on the day of testing using prisms of 40 × 40 × 160 mm dimensions, according to EN 1015-11 [31]. For the specimens receiving epoxy adhesive as the binding material, a commercial adhesive (two-part epoxy resin with a mixing ratio of 4:1 by weight, Sikadur®-330) was used, with an elastic modulus of 3.8 GPa and a tensile strength of 30 MPa (according to the manufacturer datasheets). The glass transition temperature (T_g) of the epoxy resin is equal to 68 °C. The low viscosity of the adhesive allowed for the impregnation of the textile meshes using a plastic roll. Prior to strengthening, a thin layer of the concrete cover was removed and a grid of grooves (2–3 mm deep) was created, as shown in Fig. 4a, using a grinding machine. The corners of the specimens receiving UW or FW jackets were rounded to a radius of approximately 15 mm in order to avoid stress concentrations. For FRP-jacketed specimens the first textile layer was applied on top of the first resin layer and was then impregnated in situ with resin using a plastic roll (Fig. 4b). Special care was taken to ensure the full impregnation of the textile fibers with resin. If more than one textile layer was to be applied, the process was repeated until the application of all the layers was completed.
For TRM-jacketed specimens the mortar was applied in approximately 2 mm-thick layers with a smooth metal trowel. After application of the first mortar layer on the (dampened) concrete surface, the textile was applied and pressed slightly into the mortar, which protruded through all the perforations between the fiber rovings. The next mortar layer covered the textile completely, and the operation was repeated until all textile layers were applied and covered by mortar (Fig. 4c). Of crucial importance in this method, as in the case of epoxy resins, was the application of each mortar layer while the previous one was still in a fresh state.

Experimental setup and procedure

The beams were subjected to monotonic loading using a stiff steel reaction frame and a three-point bending set-up, as shown in Fig. 1a. A vertically positioned servo-hydraulic actuator was used for the application of the load at a displacement rate of 0.02 mm/s. As illustrated in Fig. 5a, the vertical displacement was measured at the position of load application using an external Linear Variable Differential Transducer (LVDT); the displacement measured by this sensor was used to plot the load–displacement curve for each specimen. Moreover, measurements from the potentiometers placed at the critical shear span on one side of the beam were utilized to monitor the average shear strain of the span (Fig. 5b). Additionally, the Digital Image Correlation (DIC) technique was employed to monitor relative displacements within the critical shear span, using two high-resolution cameras (on the side of the beam which was free of sensors). Finally, strain gauges were mounted on the longitudinal bars at the cross-section of maximum moment to monitor possible yielding of the steel reinforcement. It is noted that all data were synchronized and recorded using a fully computerized data acquisition system.

Experimental results

The response of all specimens tested is given in Fig. 6 in the form of load–displacement curves. Key results are also presented in Table 3. They include: (1) the peak load; (2) the observed failure mode; (3) the shear resistance of the critical shear span, V_R, which is the shear force in the critical span at peak load; (4) the contribution of the jacket to the total shear resistance, V_f, which is calculated as the shear resistance of the strengthened specimen minus the shear resistance of the control specimen; (5) the shear strengthening effectiveness, which is expressed by the ratio of the shear resistance of a strengthened specimen, V_R,str, to the shear resistance of the control beam, V_R,con; (6) the average shear strain of the critical shear span at peak load, γ_Pmax; and (7) the shear deformation capacity enhancement of the critical span, as expressed by the ratio of γ_Pmax,str of the strengthened specimen to γ_Pmax,con of the control beam. The average shear strain at the critical shear span, γ, was obtained from the readings of the two potentiometers placed in an X configuration (Fig. 5b) according to Eq. (1), as sketched below.

The control beam (CON) failed in shear at an ultimate load of 51.8 kN after the formation of a large shear crack in the critical span, as shown in Fig. 7. The strong dowel action provided by the longitudinal reinforcement prevented a sudden drop of the load and contributed to the residual shear resistance of the beam after the peak load.
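Eq. (1) itself does not appear above; for crossed diagonal transducers over a rectangular gauge area, the usual relation — given here as an assumed standard form, not necessarily the authors' exact expression — is:

```latex
% Average shear strain from two diagonal readings \delta_1, \delta_2
% over a gauge rectangle of width a and height b (assumed standard form):
\gamma \;=\; \frac{\left( |\delta_1| + |\delta_2| \right) \sqrt{a^2 + b^2}}{2\,a\,b}
```

Under pure shear, one diagonal lengthens while the other shortens by an equal amount, so combining the two readings in this way also helps cancel the contribution of flexural deformations.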
All beams strengthened with SB or UW FRP jackets failed in shear at an ultimate load substantially higher than that of the control beam, thus confirming the effectiveness of FRP jacketing in shear strengthening of RC members. The peak load attained by specimens SB_R1, UW_R1, SB_R2 and UW_R2 was 105, 113.4, 124.5 and 126.2 kN, respectively, which yields a 103%, 119%, 140% and 143% increase in the shear capacity, respectively. In all these specimens failure occurred due to FRP debonding; the excellent bond between the resin and the concrete substrate resulted in peeling off of the FRP jackets together with part of the concrete. It was observed that in specimens with SB jackets the part of the concrete that peeled off was thinner than in the specimens with UW jackets (Fig. 8a and Fig. 8b). Debonding of the FRP reinforcement initiated from the point of load application and propagated instantly to the support (Fig. 8c). One layer of the FW FRP jacket enhanced the shear capacity by a factor of at least 2.8. Specimen FW_R1 reached its ultimate moment capacity at a load of 150.3 kN and failed due to concrete crushing after yielding of the tensile longitudinal reinforcement (at approximately 140 kN — Fig. 8d). This confirmed the high effectiveness of closed FRP jackets. However, the use of closed jackets is not feasible in beams of typical RC buildings or bridge girders due to the presence of concrete slabs or decks, respectively. With the only exception of specimen FW_M2, which failed in flexure, all the TRM-strengthened specimens failed in shear and displayed considerably higher shear resistance (from approximately 10% up to approximately 150%) compared to the control specimen. The behaviour of the TRM-strengthened specimens is described below in groups depending on the number of strengthening layers. Specimens SB_M1, UW_M1 and FW_M1, which received one TRM layer, reached ultimate loads of 56.6, 78.2 and 111.2 kN, respectively. The corresponding increase in their shear capacity was equal to 9%, 51% and 115%. The failure of these specimens was associated with damage of the TRM jackets (Fig. 9a–c). The load drop in these specimens was attributed to the following local phenomena: (a) slippage of the vertical fiber rovings through the mortar and (b) partial rupture of the fibers crossing the shear crack. Going from the SB to the UW and finally to the FW configuration, the second phenomenon becomes more pronounced, whereas the first one tends to be eliminated. The nature of these local phenomena did not allow for a brittle failure mode. In fact, after the peak load was reached, relatively soft load degradation was recorded. As can be seen in Fig. 6(a, c, e), for specimens strengthened with one layer the descending branch is quite smooth for SB jackets and becomes less smooth for UW and FW jackets, respectively. Specimens SB_M2 and UW_M2 failed in shear at loads of 88.7 and 120.2 kN, respectively. Compared to the control specimen, the increase in shear resistance was equal to 71% and 132%, respectively. Failure in these specimens was attributed to debonding of the TRM jacket over a large part (approximately 2/3) of the shear span, which was accompanied by peeling off of the concrete cover (Fig. 9d and e). This type of failure, although brittle, was not as explosive as in the case of the FRP-strengthened beams.
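Because the shear force in the critical span scales linearly with the applied load in this setup, the quoted capacity increases can be cross-checked directly from the peak loads (a simple consistency check using the values reported above):

```latex
\frac{P_{\mathrm{SB\_R1}}}{P_{\mathrm{CON}}} = \frac{105}{51.8} \approx 2.03 \;\;(+103\%),
\qquad
\frac{P_{\mathrm{UW\_R1}}}{P_{\mathrm{CON}}} = \frac{113.4}{51.8} \approx 2.19 \;\;(+119\%)
```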
Finally, specimen FW_M2, after the formation of a shear crack at 70 kN, reached its ultimate moment capacity and (identically to FW_R1) failed in flexure due to concrete crushing in the compression zone (Fig. 9f). Specimens SB_M3 and UW_M3 failed in shear at even higher loads (108.9 and 131.1 kN, respectively) when compared to the corresponding specimens strengthened with two TRM layers (SB_M2 and UW_M2). The shear resistance of specimens SB_M3 and UW_M3 was increased by 110% and 153%, respectively, with respect to the control specimen. Specimen SB_M3 (Fig. 9g) failed in a similar way to specimen SB_M2 (Fig. 9d), whereas the failure mode of specimen UW_M3 was unique among all TRM-strengthened specimens. In the latter case debonding of the U-jacket occurred over the full length of the shear span (Fig. 9h) and was as explosive as in the case of all FRP-strengthened specimens. Fig. 10 presents the load versus average shear strain curves for all specimens, which are plotted up to the peak load (specimens FW_M2 and FW_R1, which failed in flexure, are plotted up to yielding). The values of shear strain at peak load (γ_Pmax) are presented in Table 3. Strengthening the beams with TRM or FRP jackets resulted in an increase in the average shear strain over the critical shear span, which at peak load varied from 1.32 to 3.38 times (compared to the control specimen). This increase is mainly attributed to the redistribution of shear stresses in the shear span, which ultimately led to a denser crack pattern and hence to an increased shear deformation capacity. It is also concluded that, in general, higher average strains developed in TRM-retrofitted specimens compared to their FRP counterparts.

Strengthening configuration and number of layers

The curves in Fig. 11a illustrate the effect of the strengthening configuration (SB, UW or FW) on the shear capacity enhancement (V_f/V_con × 100%). In FRP-strengthened specimens, the shear capacity was only marginally increased when UW jackets were applied instead of SB ones. On the other hand, in TRM-strengthened specimens, the effectiveness of the UW jackets (expressed as the shear capacity enhancement) was 5.5 and 1.85 times the effectiveness of the SB jackets for 1 and 2 layers, respectively. Therefore, the benefit of applying UW instead of SB jackets was more pronounced in the TRM system than in the FRP system, especially as the number of layers increased. Finally, FW jacketing was the most effective configuration for both strengthening systems. In particular, the effectiveness of the FW jacket was 2.2 times the UW jacket effectiveness in the case of one TRM layer, and at least 1.5 times in the case of two TRM layers. For one FRP layer, the FW jacket was at least 1.6 times more effective than the UW jacket. The effect of the number of layers on the shear capacity enhancement for the SB and UW strengthening configurations is illustrated in Fig. 11b. Doubling the amount of reinforcement (two layers instead of one) resulted in a dramatic increase in the effectiveness of the TRM jackets. In particular, this increase was equal to 7.8 and 2.6 times for the SB and UW jackets, respectively. The corresponding increase when resin was used as the binder was 1.35 and 1.2 times. The latter is consistent with the typical behaviour of FRP jackets, in which increasing the amount of EB reinforcement limits the effectiveness of FRP strengthening. To further investigate this effect on TRM jackets, beams with three layers were also tested for both the SB and UW strengthening configurations. As shown in Fig.
11b, applying a third TRM layer resulted in a strength increase of 1.55 and 1.15 times for the SB and UW jackets, respectively. In the case of UW TRM jackets, increasing the number of layers from two to three had approximately the same effect as the increase from one to two layers in FRP jackets. This trend is clearly illustrated in Fig. 11b by the slope of the V_f/V_con versus number-of-layers curves. A possible explanation for the difference between the two strengthening systems, regarding the effect of increasing the number of layers from one to two, can be found in the observed failure modes. As described in Section 3, all specimens retrofitted with FRP jackets exhibited the same failure mode, which is associated with failure of the concrete substrate with no damage in the composite jackets. However, in the case of TRM-retrofitted specimens a change in the failure mode was witnessed when the number of layers was increased from one to two or three. Specifically, when one layer was applied the failure was attributed to local damage of the TRM jacket; the vertical fiber rovings crossing the developed shear crack in the jacket experienced a combination of partial rupture and slippage through the mortar (Fig. 9a and b). The increase in the number of layers in that case prevented these local phenomena, and as a result the damage was shifted to the concrete substrate. When the local damage of the TRM jacket due to partial rupture and slippage through the mortar (Fig. 9a and b) is prevented, one of the following failure modes will occur: (a) debonding at the interface between the jacket and the concrete substrate, (b) interlaminar shear failure at the interface between two textile layers, or (c) peeling off of the concrete substrate. The first two, which are premature failure modes, are more likely to happen for low values of mortar tensile strength, which could lead to its bond failure. The benefit of using a relatively high-strength mortar (like the one used in this study) is that excellent bond conditions can be achieved, resulting in the development of the third failure mode and therefore yielding the best results in terms of shear capacity enhancement. The question arising at this point is why the relatively poor behaviour of the TRM jacket at the local level was significantly improved when a second layer of textile was provided. The authors believe that the key answer to this question can be found in the mechanism of transferring forces from the textile reinforcement to the matrix. It seems that by providing just a second layer of textile, the mechanical interlock, which is the main mechanism of transferring forces from the reinforcement to the matrix in TRM systems, is drastically improved. This improvement might be attributed to the fact that two (at least) overlapping textile layers create a denser mesh pattern than one, due to the possible offset between the two layers (Fig. 12). Provided that the mortar does not fail in shear, the denser mesh pattern in turn creates conditions for improved mechanical interlocking characteristics, which ultimately results in altering the failure mode.

Deformation aspects of the jackets based on DIC

The response of the TRM and FRP SB and UW jackets, in terms of the distribution of vertical deformations along the beam height, was captured using the DIC method and is presented in Fig. 13.
Each curve illustrates the relative displacement of each point along the beam height with respect to the bottom of the beam (at the middle cross-section of the critical shear span) at the instant of peak load. For the sake of comparison, the corresponding curve of the control beam is also plotted in all graphs. As illustrated in Fig. 13, the control beam exhibited a concentration of the vertical deformations at a specific level, which is related to the development of a single shear crack at that level (at around 130–140 mm from the bottom of the beam). The specimens with one TRM layer exhibited identical behaviour, with a concentration of the deformations at a single level. A better distribution of deformations was observed in the remaining specimens, indicating that the jackets were activated over a broader area due to a better redistribution of stresses. Fig. 13 provides evidence that in TRM jackets the force-transfer mechanism is modified by additional layers, thus resulting in a performance of the TRM jackets similar to that of the FRP jackets. However, the distribution of deformations in FRP jackets is consistently better than in TRM jackets for the same number of layers. Another interesting aspect of the behaviour of the TRM jackets is associated with their strengthening configuration. SB jackets deform in the central region along the beam height (with the two ends being almost inactive), whereas UW jackets deform from the bottom level up to a level in the central region, thanks to the anchorage of the jackets at the bottom corners of the beams. Photos of the vertical deformation field of the TRM jackets, obtained through DIC at peak load, are shown in Fig. 14. From these it is also evident that in TRM jackets with two or three layers the vertical deformations are distributed over a broader region of the shear span, when compared to TRM jackets with one layer.

Effective stress and TRM versus FRP effectiveness factor: design aspects

For calculating the FRP contribution to the shear capacity of RC members, most design models use the effective stress of the FRP (σ_eff), which can ideally be described as the average stress of the fibers crossing the shear crack. Given the effective stress, the shear force carried by the FRP, V_f, can be calculated using Eq. (2) [1] under the assumptions that: (a) the shear crack forms an angle θ = 45° with respect to the member axis, and (b) the fibers crossing the crack are perpendicular to the member axis. Here ρ_f is the geometrical reinforcement ratio of the composite material, expressed as ρ_f = 2t_f/b_w (this is valid for continuous FRP sheets and not for FRP strips), b_w is the width of the beam, d is the effective depth of the member section, and t_f is the total thickness of the composite material (usually taken equal to the thickness of the fabric times the number of layers). According to Triantafillou and Papanicolaou [25], the format of Eq. (2) can also be used for the calculation of the shear force carried by TRM jackets. In particular, assuming that: (a) the shear crack forms an angle θ = 45° with respect to the member axis, and (b) a two-directional textile is applied with one direction of the fibers perpendicular to the member axis and the other parallel to it, Eq. (2) can be used without any modifications.
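The body of Eq. (2) does not appear above. Based on the definitions just given and the usual form of such models (Triantafillou [1]-type expressions), a reconstruction under the common conventions — the 0.9d lever-arm factor is an assumption here, not taken from this paper — is:

```latex
% Jacket contribution to shear resistance (assumed standard form):
V_f \;=\; \rho_f \, b_w \, (0.9\,d)\, \sigma_{\mathrm{eff}},
\qquad
\rho_f = \frac{2\,t_f}{b_w},
\qquad
\varepsilon_{\mathrm{eff}} = \frac{\sigma_{\mathrm{eff}}}{E_f}
```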
Application of Eq. (2) to the (SB and UW jacketed) beams tested in this study, with t_f equal to the nominal thickness of the fibers, results in the values of σ_eff and ε_eff given in Table 4 (ε_eff is the so-called effective strain and is calculated by dividing σ_eff by the modulus of elasticity of the fibers, E_f). In addition, Table 4 includes the effectiveness factor k, which is defined as the ratio of the TRM to FRP jacket effective stresses. It should be noted that Triantafillou and Papanicolaou [25] obtained a value of k = 0.55 from tests on two rectangular RC beams retrofitted with TRM and FRP closed jackets. In the present study the effectiveness factor k varies significantly, not only with the strengthening configuration (SB or UW jacketing) but also with the number of layers. In particular, it varies from 0.09, which corresponds to one layer of SB jacket, to 0.92, which corresponds to two layers of UW jacket. Hence, the results of this study indicate that TRM jackets are less effective than FRP jackets, as in Ref. [25], but the effectiveness is sensitive to parameters such as the strengthening configuration and the number of layers. By increasing the number of layers from one to two, the effectiveness factor increases substantially, and the same happens when UW jackets are applied instead of SB jackets. It seems that beyond a critical value ρ_f,crit, TRM jackets develop full composite action and behave similarly to FRP jackets (e.g., specimen UW_M3). It should be noted that the development of full composite action of TRM jackets depends on several factors, such as the mortar mechanical properties, the strengthening configuration, the number of layers and possibly the textile geometry. Further investigation regarding this critical value is beyond the scope of this paper. Future studies should be directed towards investigating the effectiveness of TRM versus FRP jackets for a wider range of ρ_f values, in parallel with the validation of the complex local phenomena in TRM jackets that strongly influence their effectiveness.

Conclusions

This paper presents an experimental investigation on the effectiveness of TRM and FRP jackets in shear strengthening of rectangular RC beams. Key parameters of this study were: (a) the strengthening system (TRM versus FRP), (b) the strengthening configuration (SB, UW or FW jacketing) and (c) the number of layers. For this purpose, fourteen shear-deficient beams were subjected to three-point bending under monotonic loading: one was tested as-built, whereas the remaining thirteen were strengthened prior to testing. The main conclusions drawn from this study are summarized as follows: TRM is generally less effective in increasing the shear capacity of RC beams than FRP jacketing, but the effectiveness depends on both the strengthening configuration and the number of layers. The TRM versus FRP effectiveness factor varies from 0.09, which corresponds to one layer of side-bonded jacket, to 0.92, which corresponds to two layers of U-wrapped jacket. TRM jackets are more effective in increasing the beams' deformation capacity (expressed as the average shear strain of the shear-critical span) than FRP jackets. The U-wrapping (UW) strengthening configuration is much more effective than side-bonding (SB) in the case of TRM jackets. On the contrary, in the case of FRP jackets the UW configuration was found to be only slightly more effective than the SB configuration. Full-wrapping (FW) is the most effective strengthening configuration for both strengthening systems.
A major difference between the TRM and FRP strengthening systems is observed when increasing the number of layers from 1 to 2. In particular, the effectiveness of FRP jackets increases by 1.35 and 1.2 times for the SB and UW configurations, respectively, whereas the effectiveness of TRM jackets increases by 7.8 and 2.6 times, respectively. The considerably higher effectiveness of TRM jackets when two textile layers are applied instead of one is linked to the change in the failure mode. The local damage that the TRM jackets experience when one layer is applied (partial fiber rupture and slippage of the fiber filaments through the mortar) is shifted to the concrete substrate when two layers are applied (debonding of the jacket with peeling-off of the concrete). This is attributed to the better mechanical interlock conditions created by the overlapping of at least two textile layers. The above conclusions should be treated carefully, as they are based on a limited number of half-scale specimens. In this respect, future research should be directed towards investigating a wide range of jacket reinforcement ratios for different strengthening configurations, as well as testing full-scale beams retrofitted with TRM jackets, in order to increase the level of confidence, especially on the effective strain, and thus to allow for the development of reliable design models. The authors would like to thank the technician Nigel Rook, and the technicians Gary Davies and Luke Bedford, for their assistance in the experimental work. The research described in this paper has been co-financed by the UK Engineering and Physical Sciences Research Council (EP/L50502X/1) and the University of Nottingham through the Dean of Engineering Prize, a scheme for pump-priming support for early-career academic staff.
The Comparative Value Relevance of Donation and Advertising Expenditure Before and After the Global Financial Crisis in Korea

This paper investigates the comparative value relevance of donation and advertising expenditures before and after the 2008 global financial crisis in the listed Korean stock markets between 2004 and 2011. To test whether the value relevance of donation and advertising expenditures is associated with the 2008 global financial crisis, this paper first divides its sample into pre- and post-December 31, 2007, periods and then divides those into several subgroups to observe value relevance changes according to the characteristics and conditions of listed firms in the Korean stock markets. This paper's empirical results offer important evidence concerning the comparative changes in the value relevance of advertising and donation expenditures. First, advertising and donation expenditures have positive value relevance before and after the global economic crisis and show a positive association with firm value in every subsample group divided according to firm characteristics. Second, the results show significant time-period differences in the value relevance of donation and advertising expenditures before and after the global financial crisis. The results also show that value relevance changes according to the circumstances and contexts of the firms (e.g., KOSPI vs. KOSDAQ, large vs. small and medium, high technology vs. low technology).

Introduction

Ohlson (1995) assumes firm value to be a function of measurable and immeasurable net assets. Measurable assets are usually published as tangible assets, and immeasurable assets as intangible assets. For decades, many researchers have recognized the importance of intangible assets as a value-relevant factor, with most reporting empirical results of studies on R&D investment. Early R&D investment studies document the significant value relevance of R&D investment, and later studies further suggest that R&D activity indeed has a positive impact on firm value (Griliches & Mairesse, 1984; Hirschey, 1982; Hirschey & Weygandt, 1985; Bublitz & Ettredge, 1989; Chauvin & Hirschey, 1993; Sougiannis, 1994; Lev & Sougiannis, 1996; Hall, 1999; Choi & Jung, 2001; Chung & Cho, 2004; Luo, 2005; Ahn & Kwon, 2006). These studies inspired many countries, including Korea, to change the accounting treatment of R&D investment from expensing to capitalizing.

However, many studies have been indifferent to other intangible assets, such as donation and advertising expenditures. Advertising is the process of announcing merchandise, products, and corporate images to unspecified individuals in order to promote sales. Donation is the non-business activity of giving free gifts, such as merchandise, products, and money, to the needy for charitable purposes. Whether intended for business purposes or not, both expenditure types can enhance a firm's reputation. Fombrun et al. (2000) and Sen and Bhattacharya (2001) show that donations have positive effects on financial performance dimensions such as sales. They also indicate that a firm's charitable activities can enhance the image of its products and merchandise, thus enhancing its overall reputation and, ultimately, its value.
Other studies, such as Keller and Lehmann (2003), demonstrate the value relevance of advertising expenditure through the mechanism of the brand value chain model. They assume that advertising promotes a brand's image, which affects customers' buying motivation and eventually leads to higher firm value. Many studies have thus documented the significant value relevance of donation and advertising expenditures, but have not treated them as capitalized items in financial statements; most countries' accounting rules demand the expensing of donations and advertising. Thus, accounting and finance research must focus on the possibility of capitalization.

Amid the global financial crisis (GFC) triggered in 2007 (Ryan, 2008) by the U.S. subprime mortgage crisis, many countries suffered severe economic recessions and stock market failures. Asian countries had already experienced a financial crisis in 1997. After that Asian financial crisis (AFC), many studies reported that the AFC changed the value relevance of earnings components. For example, Johnson et al. (2000), Janice and John (2008), and Choi et al. (2010) suggest that the value relevance of accounting variables changed after the AFC. They argue that the AFC significantly reduced the information value of accounting variables, thus reducing the value relevance of accounting information. This paper assumes that the GFC and the AFC have had similar impacts and have thus produced similar value relevance changes.

Therefore, this paper compares the value relevance of donation and advertising expenditures before and after the GFC in the listed Korean stock markets between 2004 and 2011. To test whether the value relevance of donation and advertising expenditures is associated with the global financial crisis, this paper divides its sample into pre- and post-December 31, 2007, periods and further divides those into several subgroups to observe how the value relevance changed according to the characteristics and conditions of the firms. The study divides its sample into several subgroups (such as KOSPI vs. KOSDAQ, large vs. small and medium, and high vs. low technology) to test the characteristics of the value relevance and the market response to firms' donations.

The remainder of this study is structured as follows. Section 2 outlines the literature on the value relevance of donation and advertising expenditures. Section 3 develops the hypotheses and designs the empirical models. Section 4 analyzes the empirical results of the main tests. Finally, Section 5 summarizes the paper, discusses the limitations of this study, and proposes future research plans.

The Cause and Effects of the Global Financial Crisis in Korea

Since 2007, many countries (including Korea) have suffered economic recessions due to the GFC caused by U.S. subprime mortgage defaults. Korea had already experienced the AFC of 1997, consequent to which many studies documented value relevance changes in accounting variables (Graham et al., 2000; Swanson et al., 2003; Davis-Friday et al., 2006; Janice & John, 2008). For example, Davis-Friday et al. (2006), comparing the value relevance of accounting variables in Asian countries such as Korea, Indonesia, Thailand, and Malaysia, show that the value relevance changed everywhere except in Korea. By contrast, Ho et al.
(2001) indicate that the value relevance decreased in Korea, by comparing the value relevance of net income and net assets before and after the AFC in the listed Korean financial markets; they report that the value relevance of accounting variables decreased slightly after 1997. Several studies have also investigated whether the value relevance of accounting variables has changed since the GFC. For example, Choi and Choi (2010) report that the value relevance of accounting information increased significantly after the GFC in the listed Korean stock markets.

Despite their different causes and processes, the AFC and the GFC produced similar economic impacts. This paper thus assumes that the value relevance changes that occurred in accounting variables after the AFC also occurred after the GFC.

McGuire, Sundgren, and Schneeweis (1988), for example, argue that firms usually perform charitable acts to gain a social reputation. Smith (2003) also reports that CSR may increase a firm's reputation and thus enhance firm value. Moreover, Fombrun et al. (2000) and Sen and Bhattacharya (2001) document that CSR is positively associated with a firm's financial performance.

In the same vein, Choi et al. (2009), Choi and Lee (2009), Choi et al. (2009), Kim and Choi (2011), Kim and Kim (2011), and Shin et al. (2011) have examined whether donation expenditures, proxying for corporate social responsibility, have value relevance in the listed Korean stock markets. They suggest that firms' donation activities enhance their reputation, thus promoting their value.

Other studies, such as Yu and Kim (2006), have sought to determine the most important factors in deciding donation expenditure levels. Their empirical results indicate that debt ratio, liquidity, and financial performance have significant impacts on donation amounts. Kim et al. (2008) have examined whether corporate ownership structures are associated with CSR, using donation expenditure as a proxy for CSR. Their results suggest that the percentage of majority shareholdings, firm scale, R&D investment, and cash flows are significantly associated with donation expenditures.

These studies offer evidence that CSR activities have the power to enhance a firm's reputation (which is very important to businesses) and thus promote its value. As CSR activities usually take the form of donations, CSR activity levels can be quantified by consulting the donation expenditures reported in financial statements.

Literature Review on the Value Relevance of Advertising Expenditure

Comanor and Wilson investigated the value relevance of advertising expenditures in 1967 and offered empirical evidence that they are significantly associated with accounting earnings proxying for firm value. After this study appeared, many others investigated the value relevance of advertising expenditure on the assumption that advertising strengthens a firm's brand name and reputation, thus promoting the firm's intangible assets and value.
Studies on the value relevance of advertising expenditures have not produced conclusive empirical results. Some have reported that advertising expenditures have significant value relevance (Peles, 1971; Abdel-Khalik, 1975; Clarke, 1976; Hirschey & Weygandt, 1985; Lee, 1994; White & Miles, 1996; Cho & Jung, 2001; Paek & Jeon, 2004; Jung & Cho, 2004; Cho & Ryu, 2006; Lee & Choi, 2007; Huh et al., 2007), whereas others do not (Picconi, 1977; Bublitz & Ettredge, 1989; Hall, 1993; Choi, 1994; Chung & Lee, 1996; Kwon & Lee, 1999; Yook, 2003; Parke, 2005; Kim et al., 2006). For example, White and Miles (1996) show that advertising has long-lasting effects on firm value and thus argue that advertising expenditures should not be expensed but capitalized. Other studies have sought to confirm whether advertising has long-lasting value relevance effects. Abdel-Khalik (1975) investigates the value relevance of advertising in various industries, showing that advertising provides long-lasting value relevance in the food, drug, and cosmetics industries but not in the tobacco, soap, and cleaning industries. Chauvin and Hirschey (1993) also report that advertising expenditures positively affect value relevance and that this value relevance is greater for large firms than for small and medium firms. Contrariwise, other studies find no value relevance for advertising expenditures. For example, Bublitz and Ettredge (1989) and Hall (1993) show that R&D investment has a long-lasting impact on firm value, whereas advertising expenditures have a value relevance of only one year. On that basis, they conclude that R&D investments should be capitalized and that advertising should not.

Hypotheses and Empirical Model

Many studies have investigated the value relevance of intangible investments such as donations and advertising. Smith (2003) indicates that CSR may enhance firm reputation, which is usually linked to firm value. White and Miles (1996) argue that advertising expenditures should be capitalized, by documenting advertising's multi-period value relevance. The GFC changed the value relevance of accounting variables in the world's listed stock markets (Choi & Choi, 2010), but no study has yet connected these variables with economic conditions.

Therefore, this study addresses the value relevance of intangible investments such as donations and advertising in the context of the changes the GFC has inflicted on global financial markets. This paper examines the changes in the value relevance of donations and advertising in the listed Korean stock markets between 2004 and 2011, before and after the GFC, and compares the value relevance of donation and advertising expenditures before and after the GFC. This paper divides its sample into several subgroups to investigate the value relevance changes according to the characteristics and conditions of the companies. The study proposes the following hypotheses:

Hypothesis 1: The value relevance of donation expenditures before and after the GFC differs.

Hypothesis 2: The value relevance of advertising expenditures before and after the GFC differs.
To test these hypotheses, this paper replicates the empirical models in Myers (1977) and Ohlson (1995). Myers (1977) divides a firm's value into measurable and immeasurable parts, as in equation (a) below:

V = V(A) + V(G), (a)

where V refers to the firm's value, measured as the sum of V(A) and V(G). In equation (a), V(A) is defined as the value of measurable net assets, and V(G) is the value of immeasurable net assets. V(G) is not recorded in financial statements because it cannot be calculated. V(G) usually includes the intangible assets not published in the financial statements because of their immeasurable character, such as entertainment costs, donation expenditures, advertising expenditures, corporate reputation, brand name, and customer loyalty (Barth et al., 1998; Black, 1998). Moreover, since Myers (1977), many studies have argued that immeasurable net assets can promote corporate future earnings and operating cash flows (Bernard, 1994; Biddle et al., 1995; Ohlson, 1995; Collins et al., 1997; Barth et al., 1998; Black, 1998). This paper adapts Myers (1977) by adding immeasurable variables such as donation and advertising expenditures, as in equation (b):

MV = BV + IMA, (b)

where MV refers to the market value of equity, calculated as the sum of BV and IMA. BV is defined as the book value of equity, and IMA proxies for immeasurable assets. In equation (b), IMA includes the intangible assets not recognized in financial statements, such as donation and advertising expenditures (Barth et al., 1998; Black, 1998). This paper converts the model in Ohlson (1995) into the three empirical models below by adding donation and advertising expenditures:

Model 1: P_t = α0 + α1 BV_{t-1} + α2 NI_D,t + α3 DON_t + ε_t
Model 2: P_t = β0 + β1 BV_{t-1} + β2 NI_A,t + β3 ADV_t + ε_t
Model 3: P_t = γ0 + γ1 BV_{t-1} + γ2 NI_DA,t + γ3 DON_t + γ4 ADV_t + ε_t

where P_t refers to the stock price at the end of fiscal year t (year t being the event year), BV_{t-1} is the book value of equity at the end of year t-1, NI_D,t is the accounting earnings before deducting donation expenditures in period t, DON_t is the donation expenditure in period t, NI_A,t is the accounting earnings before deducting advertising expenses in year t, ADV_t is the advertising expenditure in period t, NI_DA,t is the accounting earnings before deducting donation and advertising expenditures in year t, and ε_t is the error term in all equations. All variables are standardized by dividing by the total number of shares outstanding at the end of fiscal year t.

To analyze these empirical models, this study splits its sample into several subgroups (KOSPI vs. KOSDAQ, large vs. small and medium, and high technology vs. low technology) according to stock market, firm size, and technology level, to test the firms' characteristics in relation to the value relevance of their donation and advertising expenditures.
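As an illustration of how Model 1 and the pre/post-GFC Chow test can be estimated, here is a minimal sketch. The column names (price, bv, nibd, don, post) are hypothetical placeholders, since the paper does not give the KIS-FAS field names.

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import f as f_dist

# Hypothetical per-share columns: price = P_t, bv = book value, nibd =
# earnings before donations, don = donation expenditure; post = 1 if the
# fiscal year ends after 2007-12-31, else 0.
def chow_test_model1(df: pd.DataFrame):
    """Estimate Model 1 on the pooled sample and on the pre/post-GFC
    subsamples, then form the Chow F statistic from the restricted vs.
    unrestricted sums of squared residuals."""
    formula = "price ~ bv + nibd + don"           # Model 1
    pooled = smf.ols(formula, data=df).fit()
    pre = smf.ols(formula, data=df[df["post"] == 0]).fit()
    post = smf.ols(formula, data=df[df["post"] == 1]).fit()
    k = len(pooled.params)                         # parameters incl. intercept
    ssr_u = pre.ssr + post.ssr                     # unrestricted SSR
    df_den = len(df) - 2 * k
    F = ((pooled.ssr - ssr_u) / k) / (ssr_u / df_den)
    p_value = f_dist.sf(F, k, df_den)
    return F, p_value
```

A significant F indicates that the coefficient vector differs across the two periods, which is the statistic the paper reports (e.g., F = 110.49 for the total sample).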
Sample Selection

This study obtained its sample data from the KIS-FAS (Korea Investors Service-Financial Analysis System) databases; the data cover the period from 2004 to 2011 and are drawn from the listed Korean stock markets. This paper excludes firms without stock prices, book values, accounting earnings, donation expenditures, or advertising expenditures. This study also excludes firms in the financial and banking industry and firms with impaired capital, and removes outliers by excluding sample data with a Cook's distance greater than 0.5 and an absolute value of studentized residuals greater than 1. Table 1 presents the selected sample data and their sources.

The empirical results show that R², indicating the explanatory power of the empirical model, is 0.8629, 0.8744, and 0.8768 in Models 1, 2, and 3, respectively. The results also show that all independent variables (book value, earnings, donations, and advertising) are positively associated with the market value of equity at the 1% significance level. This result is similar to Kwon (2004), which reports the value relevance of book value and accounting earnings at the 1% significance level. The coefficient of donations is higher than that of advertising (8.03662 > 1.13011), a result contrary to the common intuition that advertising is done for business purposes while donations are not.

The empirical results further show that R² is over 0.8619 in every model, both before and after the GFC. The independent variables (book value, earnings, donations, and advertising) are positively related to the market value of equity at the 1% significance level in every model, both before and after the GFC. This result is similar to that shown in Table 4 and in Kwon (2004), which shows that book value and accounting earnings are positively associated with firm value.

The coefficient of donations is higher than that of advertising before the GFC (62.8231 > 1.65911), but smaller than that of advertising after the GFC (1.41847 < 3.87286), indicating that donations were more value relevant than advertising before the GFC, and that advertising had more value relevance than donations in the listed Korean stock markets after it. The Chow test supports these results, with the F value significant at the 1% level (110.49), indicating that the differences between the variables' coefficients before and after the GFC are statistically significant. These results suggest that donations can increase firm value more than advertising in a steady-state phase, but that advertising has more power to promote company value than donations in an economic-crisis phase.

The Korean financial markets are classified into the KOSPI and KOSDAQ markets; "KOSPI" stands for "Korea Composite Stock Price Index," and "KOSDAQ" for "Korea Securities Dealers Automated Quotation." The listing examination standards of the KOSPI are stricter than those of the KOSDAQ, and KOSPI firms are larger than KOSDAQ firms.
Table 6 shows the total number of samples (4,273 firm-years) and the subgroups for before (2,083 firm-years) and after (2,190 firm-years) the GFC in the KOSPI market. The empirical results show that the R²s are over 0.8536 in every model, both before and after the GFC, in the KOSPI market. This result also shows that book value, earnings, donations, and advertising are positively associated with the market value of equity at the 1% significance level in every model before the GFC, while donations have a negative relationship with market value in model 3 after the GFC. This result differs from those shown in Table 4 and Table 5, in which all independent variables are positively related to market value. The donation coefficient is higher than that of advertising before the GFC (60.79429 > 0.05619), whereas the donation coefficient is smaller than that of advertising after the GFC (2.61458 < 4.57795), and the donation coefficient shows a negative estimate (-10.30385) in model 3 after the GFC. These results indicate that donations are more value relevant than advertising before the GFC but that advertising has more value relevance than donations in the KOSPI market after it. The F value of the Chow test is significant at the 1% level (110.49), indicating that the coefficients of the donation and advertising expenditures before and after the GFC are significantly different, a result similar to that in Table 5. This suggests that a firm's donations can increase its market value more than its advertising in a steady-state phase (before the GFC) and that advertising has more potential to promote firm value than donations do in an economic-crisis phase (after the GFC). Table 7 shows the total number of samples (5,734 firm-years) and subgroups (2,619 firm-years before the GFC; 3,115 firm-years after the GFC) in the KOSDAQ market. The results show that the R²s are between 0.4887 and 0.5613 in every model, both before and after the GFC, lower than those in Tables 5 and 6. The R² results indicate that the explanatory power of the main variables (book value, earnings, donation, and advertising) in the KOSDAQ is smaller than that in the KOSPI. Table 7 also shows that book value, earnings, donations, and advertising are positively related to the market value of equity at the 1% significance level in every model before and after the GFC, a result similar to those shown in Tables 4 and 5. Table 7 also shows that the coefficients of donation expenditure are much higher than those of advertising both before (15.32321 > 0.59853) and after (41.11454 > 1.28871) the GFC. Contrary to the results shown in Table 6, these results suggest that donations are a more value relevant factor than advertising in both the pre- and post-GFC KOSDAQ markets. The pre- and post-GFC value relevance differences in donations and advertising are also supported by the F value of the Chow test, which is significant at the 1% level (2.71). This suggests that a firm's donations could increase market value more than advertising could in both the pre- and post-GFC KOSDAQ markets. Under Korea's basic small enterprise law, firms with more than one thousand employees or assets amounting to more than 500 billion won (USD 550,000,000) are classified as "large," and all others as "small and medium."
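The Chow F statistics quoted in this section can in principle be reproduced by comparing a pooled regression with separate pre- and post-break regressions. The sketch below is a generic implementation under the same hypothetical column names as before, with the break placed at fiscal years from 2008 onward.

```python
# Generic Chow test sketch:
# F = [(SSR_pooled - SSR_pre - SSR_post) / k] / [(SSR_pre + SSR_post) / (n1 + n2 - 2k)]
import statsmodels.formula.api as smf

def chow_f(df, formula, post_mask):
    pooled = smf.ols(formula, data=df).fit()
    pre = smf.ols(formula, data=df.loc[~post_mask]).fit()
    post = smf.ols(formula, data=df.loc[post_mask]).fit()
    k = len(pooled.params)  # parameters per regime, including the intercept
    num = (pooled.ssr - pre.ssr - post.ssr) / k
    den = (pre.ssr + post.ssr) / (pre.nobs + post.nobs - 2 * k)
    return num / den

# e.g. chow_f(sample, "mv ~ bv_lag + ni_bda + don + ad", sample["year"] >= 2008)
```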
To examine the value relevance changes in large firms' donation and advertising expenditures before and after the GFC, this paper classifies the large firm sample into pre- and post-December 31, 2007, groups. Table 8 shows the total large firm sample (3,767 firm-years) and its subgroups (1,835 firm-years before the GFC; 1,932 firm-years after the GFC). All the R²s are over 0.8423 in every model, both pre- and post-GFC. These results are similar to those in the total sample and in the KOSPI group (see Tables 5 and 6) but much higher than in the KOSDAQ sample (see Table 7). Table 8 also shows that donation expenditures are positively associated with firm value (58.99558), whereas advertising has a negative value relevance (-0.2437), at the 1% significance level before the GFC. Similarly, donations are positively associated with firm value (5.89785), while advertising has a negative value relevance (-0.2437), at the 1% significance level after the GFC. These results show that donations have more value relevance than advertising both before and after the GFC in large firms. The difference in the value relevance of donation and advertising expenditures before and after the GFC is supported by the F value of the Chow test, which is significant at the 1% level (41.52), indicating that large firms' donations increased their market value more than their advertising both before and after the GFC. This result differs from this study's prediction and from the results of previous studies (Peles, 1971; Abdel-Khalik, 1975; Clarke, 1976; Hirschey and Weygandt, 1985; Lee, 1994; White and Miles, 1996; Cho and Jung, 2001; Paek and Jeon, 2004; Chung and Cho, 2004; Cho and Ryu, 2006; Lee and Choi, 2007; Huh et al., 2007), which report that advertising is positively associated with firm value. Table 9 shows the comparative value relevance of pre- and post-GFC donation and advertising expenditures from 2004 to 2011 for small and medium firms. To investigate the changes in the value relevance of donation and advertising expenditures, the study divides the small and medium group into pre- and post-December 31, 2007, subgroups. Table 9 shows the total number of small and medium firm samples (6,242 firm-years) and the subgroups (2,867 firm-years before the GFC; 3,375 firm-years after the GFC). The R²s of models 1, 2, and 3 show estimates above 0.6017 both before and after the GFC, similar to the estimates of the KOSDAQ group (see Table 7). Table 9 also shows that donation and advertising expenditures are positively related to company value before and after the GFC. The value relevance of donations is much higher than that of advertising before the GFC (12.80328 > 3.53605), whereas the value relevance of advertising is higher than that of donations after the GFC (3.10695 > 1.49918). These results are supported by the F value of the Chow test, which is significant at the 1% level (6.02), as presented in Table 9. This suggests that a small and medium firm's donations had more potential to promote the market value of its equity than its advertising before the GFC, whereas advertising had more value relevance than donations after it. Table 10 shows whether donations or advertising expenditures had more value relevance, and which was the more value relevant factor, in both the pre- and post-GFC periods, from 2004 to 2011, for high-technology firms.
To investigate the value relevance changes and the comparative value relevance of donation and advertising expenditures before and after the GFC, this paper splits the sample into high-tech and low-tech company groups. This paper divides its sample into high- and low-technology firm groups according to Himmelberg and Petersen (1994), who define the chemical, pharmaceutical, metal, electronic component, medical, precision and optical instruments, and electrical equipment industries as comprising the high-technology industry and the other industries as comprising the low-technology industry. Table 10 shows the total number of high-tech firm samples (4,730 firm-years) and their subgroups (2,177 firm-years before the GFC; 2,553 firm-years after the GFC). The R²s of models 1, 2, and 3 are above 0.7982 both before and after the GFC. Table 10 shows that donation and advertising expenditures are positively associated with firm value before and after the GFC. The value relevance of donation expenditures is higher than that of advertising both before (74.49984 > 10.44624) and after the GFC (24.30043 > 18.36903). The F value of the Chow test, significant at the 1% level (69.96), supports the value relevance difference between donation and advertising expenditures and indicates that a firm's donation activity had more value relevance than its advertising activity both before and after the GFC for high-tech firms. This section examines the value relevance change and the comparative value relevance of donation and advertising expenditures from 2004 to 2011 in the low-technology firm group. Table 11 shows the total number of low-tech firm samples (5,279 firm-years) and subgroups (2,525 firm-years before the GFC; 2,754 firm-years after the GFC). All R²s in models 1, 2, and 3 show estimates above 0.8513 both before and after the GFC. Table 11 also shows that donation expenditure is positively associated with firm value, while advertising expenditure is negatively associated with company value, before and after the GFC. The results also show that the value relevance of donation expenditures is much higher than that of advertising both before (43.76693 > -0.46831) and after the GFC (11.97270 > -2.49709) for low-tech companies. The F value of the Chow test, significant at the 1% level (59.66), supports the superiority of donation expenditure in the valuation. This result provides evidence that a firm's donation activity was significantly positively associated with firm value, whereas its advertising activity had a negative effect on company value, both before and after the GFC in low-tech firms, contrary to the expectation of this paper and to previous studies (Peles, 1971; Abdel-Khalik, 1975; Clarke, 1976; Hirschey & Weygandt, 1985; Lee, 1994; White & Miles, 1996; Cho & Jung, 2001; Paek & Jeon, 2004; Jung & Cho, 2004; Cho & Ryu, 2006; Lee & Choi, 2007; Huh et al., 2007), which found that advertising has a positive value relevance.
Conclusions This paper investigates the comparative value relevance of, and the value relevance change between, donation and advertising expenditures from 2004 to 2011 in the Korean stock market, comparing the two before and after the GFC. To do this, it classifies its sample data according to firm characteristics such as market, size, and technology level. This paper proposes two hypotheses, H-1 (the value relevance of donation expenditures before and after the GFC differs) and H-2 (the value relevance of advertising expenditures before and after the GFC differs), and designs empirical models replicating the theory in Myers (1977) and the model in Ohlson (1995) by including donation and advertising expenditures. The empirical models show that hypotheses 1 and 2 are significantly supported as applied to Korea's stock markets. The test results concerning the comparative value relevance of donation and advertising expenditures are unexpected, however. First, the examination of the total sample involving the independent variables (donation and advertising expenditures) shows their positive association with firm value at the 1% significance level, but the comparative value relevance contradicts the general expectation that advertising occurs for business purposes while donations do not: donation expenditures show a higher value relevance than advertising expenditures in all sample firms. Second, this paper divides its sample into two subgroups (pre- and post-GFC) to investigate value relevance changes after the global financial crisis. As in the overall test results, donation and advertising expenditures show significant value relevance, but the detailed results differ. Donation expenditures display much higher value relevance than advertising before the GFC, while advertising shows higher value relevance than donations after the GFC, suggesting that economic crises such as the GFC affect the value relevance of donations and advertising, that donations can increase firm value more than advertising in a steady state (as from 2004 to 2007), and that advertising, which focuses on business, can promote firm value more than donations can in economic crises (as from 2008 to 2011). Third, this study extracts firms in the KOSPI and KOSDAQ markets to test the comparative value relevance of, and the value relevance change between, donation and advertising expenditures. The results show that donation expenditures had much higher value relevance than advertising before the GFC in the KOSPI market but that advertising had higher value relevance than donations after the GFC, suggesting that economic crises such as the GFC have a significant effect on the value relevance of donation and advertising activities. As in the empirical results described above, this result also shows that donations enhance firm value more than advertising does in a steady state (as from 2004 to 2007) and that advertising can promote firm value more than donations can in a financial-crisis state (as from 2008 to 2011) in the KOSPI market. In contrast, donation expenditures have more value relevance than advertising expenditures in the KOSDAQ group both before and after the GFC, indicating that the comparative value relevance of donation and advertising expenditures changes according to the financial market a firm belongs to.
Fourth, this study extracts both large and small and medium firms from the sample to examine the comparative value relevance of, and the value relevance change between, their donation and advertising expenditures. The results show that donation expenditures had much higher value relevance than advertising expenditures and that advertising had a negative value relevance, both before and after the GFC, in large firms. For small and medium firms, advertising had positive value relevance at the 1% significance level both pre- and post-GFC. In addition, advertising had more value relevance than donations did after the GFC in small and medium firms. These results suggest that the global economic crisis had a significant effect on the value relevance of the donation and advertising activities of small and medium firms, while the GFC did not change the relative value relevance of donations and advertising in large firms. Fifth, this paper extracts high- and low-tech firms from the sample to investigate the comparative value relevance of, and the value relevance change between, their donation and advertising expenditures. The results show that donation expenditures had much higher value relevance than advertising both before and after the GFC in high-tech firms, with the same result seen in low-tech firms. Advertising expenditures have a negative value relevance both before and after the GFC in low-tech companies, contradicting this study's general expectation and the literature. This result suggests that the comparative value relevance of donation and advertising expenditures changes according to the level of a firm's technology. Nevertheless, this paper has a limitation. It does not cover a large sample of countries (e.g., the U.S., China, Japan). It will be necessary to extend the coverage of the sample to include more global companies.

MV: market value of equity; BV: book value of equity; IMA: proxy for immeasurable assets.

Table 1. Sample selection.

Table 2. Descriptive statistics of the main variables of the sample firms (based on the Ohlson (1995) model). The dependent variable, MV_{i,t}, has a mean value of 20,036, a minimum value of 21, and a maximum value of 1,707,000. The independent variable, NI_{i,t}, has a mean value of 1,508, a minimum value of -56,641, and a maximum value of 343,507. MV_{i,t}: stock price at the end of fiscal year t, where year t is the event year; BV_{i,t-1}: book value of equity at the end of year t-1; NI_{i,t}: accounting earnings in period t; DON_{i,t}: donation expenditure in period t; AD_{i,t}: advertising expenditure in period t. Although the Pearson correlation analysis does not show cause and effect between the dependent and independent variables, the results demonstrate a strong possibility that the independent variables NI, BV, DON, and AD are positively related to MV.

Comparative Value Relevance of Donation and Advertising Expenditures: Total Firms Table 4 presents the value relevance of donation and advertising expenditures from 2004 to 2011 in the listed Korean stock markets. This paper uses a modified Ohlson (1995) model that includes donation and advertising expenditures to investigate the comparative value relevance of the two variables.
4.2.2 Correlation Analysis Table 3 presents the Pearson correlation analysis between the dependent and independent variables of this paper. MV, NI, BV, DON, and AD are positively correlated at the 1% significance level. As some variables, such as MV, NI, and BV, show a high correlation, this paper tested for multicollinearity. The results indicate low multicollinearity, with all VIFs (variance inflation factors) smaller than 10 (a sketch of this check follows the table list below).

Table 4. Comparative value relevance of donation and advertising expenditure: Total sample.

Comparative Value Relevance of Donation and Advertising Expenditures before and after the GFC: Total Firms Table 5 shows the comparative value relevance of donation and advertising expenditures before and after the 2007 global financial crisis, between 2004 and 2011, in the Korean stock markets. To test whether the value relevance of donation and advertising expenditures is associated with the global financial crisis, this paper divides its sample into pre- and post-December 31, 2007, groups. Table 5 covers the total number of samples (10,009 firm-years) from 2004 to 2011 in the Korean stock markets.

Table 5. Comparative value relevance of donation and advertising expenditure before and after the GFC: Total firms.

Comparative Value Relevance of Donation and Advertising Expenditures before and after the GFC: KOSPI Market Table 6 presents the comparative value relevance of donation and advertising expenditures before and after the 2007 global financial crisis from 2004 to 2011 in the listed Korean stock market (KOSPI). To examine the value relevance changes in donation and advertising expenditures, this paper splits the sample firms into pre- and post-December 31, 2007, groups.

Table 6. Comparative value relevance of donation and advertising expenditure before and after the GFC: KOSPI market.

Table 7. Comparative value relevance of donation and advertising expenditure before and after the GFC: KOSDAQ.

Comparative Value Relevance of Donation and Advertising Expenditures before and after the GFC: Large Firms Table 8 presents the comparative value relevance of donation and advertising expenditures before and after the 2007 global financial crisis from 2004 to 2011 for large firms, classified according to Korea's basic small enterprise law as described above.

Table 8. Comparative value relevance of donation and advertising expenditure before and after the GFC: Large firms.

Comparative Value Relevance of Donation and Advertising Expenditures before and after the GFC: Small and Medium Firms

Table 9. Comparative value relevance of donation and advertising expenditure before and after the GFC: Small and medium firms.

Comparative Value Relevance of Donation and Advertising Expenditures before and after the GFC: High-Tech Firms

Table 10. Comparative value relevance of donation and advertising expenditure before and after the GFC: High-tech firms.

Comparative Value Relevance of Donation and Advertising Expenditures before and after the GFC: Low-Tech Firms

Table 11. Comparative value relevance of donation and advertising expenditure before and after the GFC: Low-tech firms.
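As flagged in the correlation analysis above, the multicollinearity check reduces to one variance inflation factor per regressor. A minimal sketch, again under hypothetical column names, with VIF < 10 as the usual rule of thumb:

```python
# Sketch of the multicollinearity check: one VIF per regressor.
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def vif_table(df, cols=("bv_lag", "ni_bda", "don", "ad")):
    X = sm.add_constant(df[list(cols)])
    return {name: variance_inflation_factor(X.values, i)
            for i, name in enumerate(X.columns) if name != "const"}
```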
7,904.4
2013-04-27T00:00:00.000
[ "Business", "Economics" ]
A Pleiotrophin C-terminus peptide induces anti-cancer effects through RPTPβ/ζ Background Pleiotrophin, also known as HARP (Heparin Affin Regulatory Peptide), is a growth factor expressed in various tissues and cell lines. Pleiotrophin participates in multiple biological actions including the induction of cellular proliferation, migration, and angiogenesis, and is involved in carcinogenesis. Recently, we identified and characterized several pleiotrophin proteolytic fragments with biological activities similar or opposite to that of pleiotrophin. Here, we investigated the biological actions of P(122-131), a synthetic peptide corresponding to the carboxy-terminal region of this growth factor. Results Our results show that P(122-131) inhibits in vitro adhesion, anchorage-independent proliferation, and migration of DU145 and LNCaP cells, which express pleiotrophin and its receptor RPTPβ/ζ. In addition, P(122-131) inhibits angiogenesis in vivo, as determined by the chicken embryo CAM assay. Investigation of the transduction mechanisms revealed that P(122-131) reduces the phosphorylation levels of Src, Pten, Fak, and Erk1/2. Finally, P(122-131) not only interacts with RPTPβ/ζ, but also interferes with other pleiotrophin receptors, as demonstrated by selective knockdown of pleiotrophin or RPTPβ/ζ expression with RNAi technology. Conclusions In conclusion, our results demonstrate that P(122-131) inhibits biological activities that are related to the induction of a transformed phenotype in PCa cells, by interacting with RPTPβ/ζ and interfering with other pleiotrophin receptors. Cumulatively, these results indicate that P(122-131) may be a potential anticancer agent, and they warrant further study of this peptide. Background Pleiotrophin, also known as HARP (Heparin Affin Regulatory Peptide), is a 136-amino acid, secreted growth factor that, along with Midkine, constitutes a two-member subfamily of heparin-binding growth factors (HBGFs). Although pleiotrophin has been shown to promote neurite outgrowth in the developing brain [1], elevated concentrations of this growth factor are found in many types of tumors as well as in the plasma of patients with different types of cancer [2][3][4]. Pleiotrophin induces a transformed phenotype in several cell lines [5,6] and exhibits mitogenic, anti-apoptotic, chemotactic, and angiogenic actions in vitro as well as in vivo [7][8][9][10]. Growth factors can be hydrolyzed by proteases, leading to the production of biologically active peptides. Previous studies indicate that pleiotrophin is cleaved by enzymes in the extracellular environment, such as plasmin, trypsin, chymotrypsin, and MMPs. Moreover, the resulting peptides exert altered biological functions compared to the whole molecule. The proteolytic cleavage of pleiotrophin is also affected by the presence of glycosaminoglycans (GAGs), suggesting that a complex system serves to regulate the overall effect of this growth factor [19,20]. Furthermore, pleiotrophin and pleiotrophin peptides modulate the biological actions of other growth factors such as VEGF, contributing to the complex mode of growth factor actions [21]. Prostate cancer (PCa) is the most common cancer among men in Western countries, although the development of PCa as well as the signals contributing to the transformed phenotype of PCa cells remains incompletely understood [22].
During adulthood, maintenance of normal prostate function depends on mesenchymal-epithelial interactions, which contribute to the homeostatic equilibrium of the glandular prostate epithelial cells. Disturbances in this equilibrium lead to the development of diseases like PCa. Although the mechanisms that control the mesenchymal-epithelial interactions are poorly understood, numerous studies suggest that growth factors have a key role in prostate homeostasis. Pleiotrophin has been implicated in PCa progression and acts as an autocrine growth factor in various prostate-derived cell lines including DU145, PC3, and LNCaP [23,24]. Truncated forms of pleiotrophin or synthetic peptides corresponding to defined domains of this growth factor have been studied in an attempt to understand the structure/function relationship of pleiotrophin [25][26][27]. We previously reported that the biological effects of this growth factor were inhibited by the truncated mutant PTNΔ111-136 and the corresponding synthetic peptide P(111-136) [28]. In the context of defining peptides with anti-tumor actions, we sought to identify the minimum sequence responsible for the inhibition of pleiotrophin activity. Since an obvious feature of P(111-136) is its stretch of basic residues, we investigated whether the basic sequence P(122-131) (KKKKKEGKKQ) may have biological activities that are related to the induction of a transformed phenotype in PCa cells. Here, we investigated the effect of P(122-131) on the adhesion, proliferation, and migration of two prostate epithelial cell lines, as well as on in vivo angiogenesis. Results In a previous work, we reported that P(122-131) inhibits anchorage-independent growth of DU145 prostate cancer cells [29]. In the present work, we tested the effect of P(122-131) on other tumor phenotypes in the well-established prostate carcinoma cell lines DU145 and LNCaP. We also investigated the effect of these peptides on angiogenesis in vivo, using the CAM assay. Since P(122-131) contains seven lysines and is highly charged, we also examined the effects of two "mock" peptides in parallel. One consisted of D-amino acids (designated AAD), while the other consisted of five lysines (designated 5K). P(122-131) inhibits adhesion of DU145 and LNCaP cells The effect of P(122-131) on the adhesion of DU145 cells was tested using three approaches. First, an equal number of cells was incubated with increasing concentrations of peptides and immediately seeded. In the second approach, cells were incubated with different concentrations of the peptides for 30 min before seeding. In the final approach, cells were pre-incubated for 30 min with increasing concentrations of peptide, then washed and seeded. After a 10-min incubation period, adherent cells were quantified by the crystal violet assay. Under all conditions, P(122-131) inhibited adhesion in a concentration-dependent manner, having a maximal effect (50% inhibition relative to control) at a concentration of 20 μM (Figure 1A). To confirm this result, we tested for an inhibitory effect of P(122-131) on the adhesion of LNCaP cells. As shown in Figure 1A, P(122-131) inhibited LNCaP adhesion in a concentration-dependent manner, having a maximal effect (40% inhibition relative to control) at a concentration of 20 μM. P(122-131) inhibits anchorage-independent proliferation of DU145 and LNCaP cells The effect of P(122-131) on the proliferation of DU145 and LNCaP cells was investigated.
We found that P(122-131) inhibited anchorage-independent proliferation in a concentration-dependent manner, having a maximal effect (60% inhibition relative to control) at a concentration of 20 μM (Figure 1B). P(122-131) inhibits migration of DU145 and LNCaP cells We next investigated the effect of P(122-131) on DU145 and LNCaP chemotaxis, as measured using Transwell assays. Similar to its effects on adhesion and proliferation, P(122-131) inhibited chemotactic migration in a concentration-dependent manner (55% inhibition relative to control). The maximal effect was observed at the concentration of 20 μM (Figure 1C). P(122-131) inhibits in vivo angiogenesis The inhibitory effects of P(122-131) on DU145 and LNCaP adhesion, proliferation, and migration are consistent with a possible anti-tumor action for this peptide. Therefore, we tested the effect of this peptide on in vivo angiogenesis. Tumor angiogenesis plays a key role in cell proliferation by providing nutrients and oxygen. It also facilitates metastasis through the formation of new, leaky vessels. We observed that P(122-131) reduced the total length of blood vessels in the CAM assay in a concentration-dependent manner. Angiogenesis was inhibited up to 45%, with maximal inhibition occurring in the presence of 2 nmol P(122-131) (Figure 1D). In contrast to P(122-131), neither AAD nor 5K had any measurable effect on angiogenesis. Similarly, neither peptide affected adhesion, proliferation, or migration of DU145 or LNCaP cells (Figure 2). P(122-131) binds to RPTPβ/ζ and is endocytosed P(122-131) harbours a cluster of basic residues known to bind to cell receptors [25]. To begin to understand the mechanism through which P(122-131) exerts its biological actions, we investigated the effect of this peptide on signaling mediated by the pleiotrophin receptors. In a previous work, co-immunoprecipitation/Western blot analysis of P(122-131) and RPTPβ/ζ indicated that this pleiotrophin receptor interacts with P(122-131) [29]. To provide additional support for an interaction between P(122-131) and RPTPβ/ζ, DU145 cells were co-labelled for B(122-131) and RPTPβ/ζ. After a 30-sec incubation of cells with B(122-131), confocal microscopy revealed B(122-131) bound to the cell surface and co-localized with RPTPβ/ζ (Figure 3A). After a 20-min incubation, endocytotic vesicles containing both B(122-131) and RPTPβ/ζ were detected in the cytoplasm (Figure 3B).

Figure 1 legend. (A) Adhesion assays. An equal number of DU145 or LNCaP cells was incubated with increasing concentrations of P(122-131) for 30 min before seeding. After a 10-min incubation period, adherent cells were quantified by the crystal violet assay. (B) Soft agar growth assays showing anchorage-independent proliferation. An equal number of DU145 or LNCaP cells was resuspended in growth medium containing 10% FBS, 0.3% agar, and increasing concentrations of P(122-131), and seeded onto the bottom agar, which consisted of growth medium containing 10% FBS and 0.8% agar. The top agar was allowed to solidify, and standard growth medium supplemented with peptide was added to each well. The cells were incubated for 12 days, after which cell colonies larger than 50 μm were quantified by counting the entire area of each well. (C) Migration of cells through Transwell filters. The lower compartment of Transwell filters (8 μm pores) was filled with growth medium containing 2.5% FBS, 0.5% BSA, and increasing concentrations of P(122-131). An equal number of DU145 or LNCaP cells was resuspended in growth medium containing 2.5% FBS and 0.5% BSA, and transferred into Transwell inserts. Cells that successfully migrated through the filter pores were fixed, stained, and quantified by counting the entire area of each filter. (D) Effect of P(122-131) on angiogenesis, as measured by the chicken embryo CAM assay. A 1 cm² area of chicken embryo CAM, restricted by a silicon ring, was incubated with increasing concentrations of P(122-131). 48 h later, total vessel length was quantified as described in Materials and Methods. Results are mean values ± SE from at least 3 independent experiments.

Furthermore, as shown in Figure 3C, P(122-131) association with RPTPβ/ζ was displaced by pleiotrophin. As expected, incubation of cells with only streptavidin-FITC or rhodamine-conjugated secondary antibodies produced no signal (data not shown). Elucidation of the mechanism through which P(122-131) exerts its biological actions DU145 and LNCaP cells synthesize and secrete pleiotrophin, which stimulates the cells in an autocrine manner [3,24]. To examine whether the inhibitory effects of P(122-131) on DU145 adhesion, proliferation, and migration may be the result of endogenous pleiotrophin inhibition, we stably transfected DU145 cells with a pcDNA3.1+ plasmid encoding the antisense mRNA of pleiotrophin. After 1 month of selection with neomycin, clones were screened for down-regulation of pleiotrophin expression. Strong down-regulation of pleiotrophin expression was observed in clones #2 and #5 (DU145-HM2 and DU145-HM5, respectively). No reduction of pleiotrophin expression was observed in cells transfected with pcDNA3.1+ alone (DU145-NC1, DU145-NC3) (Additional file 1-A). As shown in Figure 4A, pleiotrophin knockdown decreased DU145 adhesion, and P(122-131) further decreased it. However, the inhibitory effect of P(122-131) on the adhesion of DU145-HM2 cells was up to 20%, while its inhibitory effect on DU145 adhesion was up to 50%. No difference between the adhesion of DU145 and DU145-NC3 cells was observed (data not shown). These results indicate that P(122-131) not only inhibits pleiotrophin-mediated adhesion, but also may interfere with the actions of other growth factors. Furthermore, since pleiotrophin enforces dimerization of RPTPβ/ζ, inactivates its catalytic activity [30], and inhibits cellular adhesion (unpublished data), the inhibitory effect of P(122-131) may also be the result of its interaction with RPTPβ/ζ. To examine whether RPTPβ/ζ mediates the inhibitory effect of P(122-131), we tested the effect of P(122-131) on DU145 cells with stable down-regulation of RPTPβ/ζ expression (DU145-RM6). We found that P(122-131) inhibits DU145-RM6 adhesion up to 20% (Figure 4A), indicating that the peptide interferes with other pleiotrophin receptors or with other growth factors. To determine whether P(122-131) may interfere with other growth factors, we transiently transfected DU145-HM2 cells with a siRNA targeting the mRNA of RPTPβ/ζ. In parallel, DU145-HM2 cells were transiently transfected with a siRNA that does not target any mRNA (negative control) (Additional file 1-B). As shown in Figure 4A, pleiotrophin/RPTPβ/ζ double knockdown decreased DU145 adhesion and blocked the inhibitory effect of P(122-131). No difference between the adhesion of DU145-HM2 cells and DU145-HM2 cells transfected with the negative control siRNA was observed (data not shown).
Taken together, these results indicate that P(122-131) interacts with RPTPβ/ζ and induces the RPTPβ/ζ-mediated inhibitory effect on cellular adhesion, while in parallel it antagonizes the interaction of pleiotrophin with its other receptors, probably SDC3, inhibiting pleiotrophin-induced adhesion. As shown in Figure 4B, pleiotrophin knockdown decreased DU145 anchorage-independent proliferation, and P(122-131) further decreased it. However, the inhibitory effect of P(122-131) on this biological action was not as strong as on wild-type cells. These results indicate that P(122-131) interferes with pleiotrophin and other growth factor actions. However, the effect of RPTPβ/ζ knockdown on the formation of cell colonies was so strong that we cannot draw any conclusion about the mechanism by which P(122-131) inhibits anchorage-independent proliferation. As shown in Figure 4C, pleiotrophin knockdown decreased DU145 chemotactic migration, and P(122-131) further decreased it. However, the inhibitory effect of P(122-131) on the migration of DU145-HM2 cells was up to 20%, while its inhibitory effect on DU145 migration was up to 55%. No difference between the migration of DU145 and DU145-NC3 cells was observed (data not shown). These results indicate that P(122-131) not only inhibits pleiotrophin-mediated migration, but also may interfere with other growth factor signaling. Furthermore, since pleiotrophin enforces dimerization of RPTPβ/ζ, inactivates its catalytic activity [30], and inhibits cellular migration (unpublished data), the inhibitory effect of P(122-131) may also be the result of its interaction with RPTPβ/ζ. To examine whether RPTPβ/ζ mediates the inhibitory effect of P(122-131), we tested the effect of P(122-131) on DU145-RM6 cells. We found that P(122-131) inhibits DU145-RM6 migration up to 20% (Figure 4C), indicating that the peptide interferes with other pleiotrophin receptors or with other growth factors. To determine whether P(122-131) may interfere with other growth factors, we examined its effect on cells in which both pleiotrophin and RPTPβ/ζ expression was down-regulated. As shown in Figure 4C, pleiotrophin/RPTPβ/ζ double knockdown decreased DU145 migration and blocked the inhibitory effect of P(122-131). No difference between the migration of DU145-HM2 cells and DU145-HM2 cells transfected with the negative control siRNA was observed (data not shown). Taken together, these results indicate that P(122-131) interacts with RPTPβ/ζ and induces the RPTPβ/ζ-mediated inhibitory effect on cellular migration, while in parallel it antagonizes the interaction of pleiotrophin with its other receptors, probably SDC3, inhibiting pleiotrophin-induced migration. P(122-131) inactivates Src, Fak, and Erk1/2, and activates Pten Src activation is strictly regulated and depends on dephosphorylation of Y527 in the carboxy-terminal tail, which is a prerequisite for its subsequent activation by autophosphorylation of Y416 in the activation loop of the kinase [31]. To determine whether Src may be affected by P(122-131), DU145 cells were serum-starved for 4 h, then incubated with increasing concentrations of P(122-131) for 3 to 45 min. Src inactivation was indirectly assessed by Western blot analysis of Src phosphorylated at site Y416, with HSC70 used to normalize the results. As shown in Figure 5A, P(122-131) promoted a swift decrease in Src phosphorylation within 3 min in a concentration-dependent manner, having a maximal effect (70% inhibition relative to control) at a concentration of 20 μM.
This inactivation returned to near-basal levels by 45 min (Figure 5A). We next investigated the effects of P(122-131) on the inactivation of other molecules known to interact with Src. We found that Fak phosphorylation was decreased 15 min after incubation of DU145 cells with P(122-131) in a concentration-dependent manner, having a maximal effect (70% inhibition relative to control) at a concentration of 20 μM. This inactivation returned to near-basal levels by 45 min (Figure 5C). As shown in Figure 5D, Erk1/2 was also inactivated after a 15-min incubation with P(122-131), an effect that was sustained up to 45 min (70% inhibition relative to control). Finally, Pten was activated after a 15-min incubation with P(122-131), an effect that was sustained up to 45 min (80% inhibition of Pten phosphorylation relative to control) (Figure 5B). Elucidation of the mechanism through which P(122-131) affects Src, Fak, Pten, and Erk1/2 activity In a previous study, we showed that pleiotrophin enforces RPTPβ/ζ dimerization and inactivation, and reduces the phosphorylation levels of Src, Fak, Pten, and Erk1/2 (unpublished data). In this study, we showed that P(122-131) interacts with RPTPβ/ζ and induces the RPTPβ/ζ-mediated inhibitory effect on cellular adhesion and migration, while in parallel it antagonizes the interaction of pleiotrophin with its other receptors, probably SDC3, inhibiting pleiotrophin-induced biological actions. To confirm that P(122-131) interferes with pleiotrophin signaling, we tested its effect on the activation of Src, Fak, Pten, and Erk1/2 in DU145-HM2 cells. As shown in Figure 6A, the phosphorylation levels of Src, Fak, Pten, and Erk1/2 are increased in DU145-HM2 cells compared with wild-type cells, while pleiotrophin knockdown partially blocked P(122-131)-induced Src, Fak, and Erk1/2 inactivation, and Pten activation. Since pleiotrophin expression levels are low in DU145-HM2 cells, RPTPβ/ζ cannot be dimerized, and as a monomer it reduces the phosphorylation of Src at site Y527 and induces autophosphorylation of Y416, resulting in increased Fak, Pten, and Erk1/2 phosphorylation. However, when DU145-HM2 cells are treated with P(122-131), the peptide cannot inhibit pleiotrophin signaling, but it can reduce the phosphorylation levels of Src, Fak, Pten, and Erk1/2 through RPTPβ/ζ. Furthermore, as shown in Figure 6B, RPTPβ/ζ knockdown partially blocked P(122-131)-mediated Src and Fak inactivation, and inhibited P(122-131)-mediated Pten activation and Erk1/2 inactivation. Taken together, these results indicate that the P(122-131)/RPTPβ/ζ interaction triggers a signal transduction pathway that reduces the phosphorylation levels of these four signal transduction molecules and inhibits cellular adhesion and migration, while in parallel P(122-131) interferes with other pleiotrophin receptors, probably SDC3, and inhibits pleiotrophin-mediated Src and Fak activation, resulting in inhibition of pleiotrophin-induced cellular adhesion and migration. Discussion During the last decade, pleiotrophin has come to be recognized as a pleiotropic growth factor that participates not only in neurite outgrowth in the developing brain [1], but also in angiogenesis and malignant transformation of many cell types. Pleiotrophin is elevated in sera or tumors from patients with colon, stomach, pancreatic, and breast cancer [2][3][4][5][6][7][8][9][10]. Moreover, the differential expression of pleiotrophin mRNA and protein among normal and malignant prostate epithelial cells implicates this protein in the induction of a transformed phenotype [24].
NMR studies showed that pleiotrophin contains two β-sheet domains connected by a flexible linker. In addition, its two lysine cluster sequences within the N- and C-terminal domains lack a detectable structure and appear to form random coils [32]. To date, pleiotrophin activities have been attributed either to the entire molecule or to specific domains. From previous studies, it is known that either the N- or the C-terminal domain, but not both, is required for pleiotrophin activity [27], and that the C-terminal domain is involved in the mitogenic, angiogenic, and tumor formation activities of this growth factor [25,28]. Furthermore, pleiotrophin peptide fragments have been detected in cell supernatants as well as in tissues [33,34], and such peptides can also be generated in vitro by proteolytic cleavage of pleiotrophin [20]. Our group has already characterized the biological actions of several pleiotrophin peptides [10,20,25,28]. It is noteworthy that although the pleiotrophin N- and C-terminal domains lack a detectable structure, peptides corresponding to these domains induce in vitro and in vivo angiogenesis [10,33]. Therefore, the biological actions of pleiotrophin should always be considered the overall outcome of its secretion, degradation, and specific cleavage, with the latter event possibly generating pleiotrophin peptides with diverse, or even opposite, biological actions. To illustrate this point, a study on glioblastoma cell proliferation and migration revealed that cleavage of the 12 C-terminal amino acids from pleiotrophin (124-136) leads to distinct biological activities through differential activation of the RPTPβ/ζ or ALK signalling pathways [15]. In this study, we sought to identify the minimum sequence of the C-terminal region of pleiotrophin that is responsible for the inhibition of biological activities that are related to the induction of a transformed phenotype in PCa cells. Since an obvious feature of the pleiotrophin C-terminal domain is its stretch of basic residues, we investigated the effect of the basic sequence P(122-131) (KKKKKEGKKQ) on tumor phenotypes. Our results showed that P(122-131) inhibits DU145 and LNCaP cell adhesion, anchorage-independent proliferation, and migration in a concentration-dependent manner. Furthermore, the CAM assay revealed that P(122-131) suppressed the formation of new blood vessels, a process important for tumor growth and metastasis. These biological activities of P(122-131) could be attributed solely to its high positive charge. Nevertheless, this does not seem to be the case, since, in the same set of experiments, neither AAD nor 5K exerted any detectable biological activity. Thus, the action of P(122-131) is more likely due to its specific amino acid sequence and charge. To reveal the mechanism through which P(122-131) exerts its biological actions, we investigated the effect of this peptide on signaling mediated by the pleiotrophin receptors. Pleiotrophin binds to specific cell surface receptors such as SDC3 [11], ALK [13], and RPTPβ/ζ [12]. RPTPβ/ζ is synthesized as a membrane-bound CS proteoglycan, and its extracellular variant, which is generated by alternative splicing, is phosphacan, a major soluble CS proteoglycan [12,35]. Pleiotrophin binding to RPTPβ/ζ depends on the CS portion of this receptor, and the removal of CS results in a remarkable decrease in binding affinity [36].
However, treatment of cells with chondroitinase had no effect on the binding of P(122-131) to DU145 cells, suggesting that P(122-131) does not bind to the RPTPβ/ζ-derived glycosaminoglycans, in spite of its basicity [29]. Our results demonstrate that P(122-131) actions are mediated by RPTPβ/ζ. P(122-131) was co-localized with RPTPβ/ζ at the cell surface and eventually became cytoplasmic, likely as a result of endocytosis. Moreover, immunoprecipitation followed by Western blotting confirms the interaction between P(122-131) and RPTPβ/ζ [29]. RPTPβ/ζ is a receptor phosphatase with intrinsic catalytic activity [30]. In a previous study, we showed that the pleiotrophin/RPTPβ/ζ interaction leads to different biological responses according to the RPTPβ/ζ substrates. The pleiotrophin/RPTPβ/ζ-Src interaction reduces the phosphorylation levels of Src, Fak, Pten, and Erk1/2, and inhibits cellular adhesion and migration (unpublished data). Investigation of the transduction mechanism revealed that P(122-131) induced Src, Fak, and Erk1/2 inactivation in a concentration- and time-dependent manner. Furthermore, P(122-131) activated Pten, a tumor suppressor whose activity has been proposed to reduce cell migration and proliferation [37,38]. The finding that the inhibitory effect of P(122-131) on cellular adhesion and migration could be reduced by down-regulation of pleiotrophin or RPTPβ/ζ expression demonstrates that this peptide not only interacts with RPTPβ/ζ and inhibits cellular adhesion and migration, but also antagonizes the interaction of pleiotrophin with its other receptors. P(122-131) interference with other pleiotrophin receptors was confirmed by the finding that P(122-131) induced Src and Fak inactivation in cells with RPTPβ/ζ knockdown. Furthermore, we excluded the possibility of P(122-131) interference with other growth factors, since the peptide did not exert any biological action on cells in which the expression levels of both pleiotrophin and RPTPβ/ζ were down-regulated. Our results also showed that P(122-131) inhibits anchorage-independent proliferation, but the effect of RPTPβ/ζ knockdown was so strong that we cannot draw any conclusion about the mechanism through which the peptide inhibits this biological action. It is known that RPTPs show structural and functional similarity to CAMs. Although certain RPTPs mediate homophilic interactions [39], there are no data indicating that RPTPβ/ζ is implicated in such interactions. Conclusions In the context of studying the functions of specific domains of pleiotrophin and defining peptides with anti-tumor actions, we identified the minimum sequence responsible for the inhibition of pleiotrophin activity. Our results demonstrated that P(122-131) interacts with RPTPβ/ζ and triggers a signal transduction pathway that inhibits DU145 and LNCaP adhesion and migration, while in parallel it antagonizes the interaction of pleiotrophin with its other receptors, inhibiting pleiotrophin-induced biological actions (Figure 7). Cumulatively, these results indicate that P(122-131) may be a potential anticancer agent, and they warrant further study of this peptide. Adhesion assay 24-well culture plates were coated with 10 μg/ml fibronectin for 1 h at 37°C. Wells were then incubated with a 0.5% solution of bovine serum albumin (BSA) for 1 h at 37°C to block further nonspecific adsorption of protein. 50,000 cells resuspended in RPMI-1640 medium supplemented with 2.5% FBS were then seeded.
After a 10-min incubation period, unattached cells were removed by shaking the plates at 2,000 rpm for 10 sec and by three washes with PBS. Attached cells were fixed with 4% paraformaldehyde and stained with crystal violet. Crystal violet assay Adherent cells were fixed with methanol and stained with 0.5% crystal violet in 20% methanol for 20 min. After gentle rinsing with water, the retained dye was extracted with 30% acetic acid, and the absorbance was measured at 590 nm. Soft agar growth assay Anchorage-independent growth was assessed by measuring the formation of colonies in soft agar. Twelve-well plates were layered with bottom agar, which consisted of growth medium containing 10% FBS and 0.8% agar. After the bottom agar had solidified, 2,000 cells were resuspended in growth medium containing 10% FBS, 0.3% agar, and peptide, then seeded onto the bottom agar. The top agar was then allowed to solidify, and standard growth medium supplemented with peptide was added to each well. The cells were incubated at 37°C in 5% CO2 for 12 days. Cell colonies larger than 50 μm were quantified by counting the entire area of each well. Transwell assay Migration assays were performed in Boyden chambers using filters (8 μm pore size, Costar, Avon, France) coated with fibronectin (7.5 μg/cm²) for 1 h at 37°C. Filters were washed, blocked with 0.5% BSA for 1 h at 37°C, and dried. Assay medium (RPMI-1640 medium supplemented with 2.5% FBS and 0.5% BSA, with or without the chemoattractant) was added to the lower compartment, and 10^4 cells were added into the insert. After incubation for 30 min at 37°C, filters were fixed. Non-migrated cells were scraped off the upper side of the filter, and filters were stained with crystal violet. The number of migrated cells was quantified by counting the entire area of the filter. Chicken embryo chorioallantoic membrane (CAM) assay The in vivo CAM angiogenesis model was used as previously detailed [10]. Immunofluorescence confocal microscopy DU145 cells grown in 8-well tissue culture slides (Nunc) were incubated with 100 μM biotinylated P(122-131) (B(122-131)) or with pleiotrophin at 4°C for the indicated time. The cells were then fixed in 4% paraformaldehyde for 10 min at room temperature, rinsed three times with PBS, quenched with 50 mM Tris buffer pH 8.0 and 100 mM NaCl, permeabilized for 15 min in PBS containing 0.3% Triton X-100 and 0.5% bovine serum albumin (BSA), and blocked in PBS containing 3% BSA for 1 h at room temperature. Cells were incubated for 1 h with streptavidin-FITC (1:100), anti-RPTPβ/ζ antibody (1:100), and rhodamine-conjugated goat anti-mouse IgG (1:600) in permeabilization buffer. After three rinses in PBS, cells were mounted using Sigma mounting fluid. Labelling was observed using a Nikon confocal microscope and photographed. Pleiotrophin-antisense RNA transfection Stable transfection of DU145 cells to down-regulate pleiotrophin expression was performed as previously described [23]. siRNA transfection RNA oligonucleotide primers and the siPORT NeoFX Transfection Agent were obtained from Ambion Inc. shRNA transfection Stable transfection of DU145 cells using shRNA targeting RPTPβ/ζ expression was performed using the pSilencer 4.1-CMV expression vector and the siPORT XP-1 Transfection Agent obtained from Ambion Inc. Based on the siRNA sequence, shRNA was designed, ligated into the pSilencer 4.1-CMV expression vector, and transfected into cells according to Ambion's instructions.
Briefly, siPORT XP-1 and shRNA were mixed at a final ratio of 1:6 in OPTI-MEM media. The transfection complexes were then overlaid onto 24-well plate cultures grown in RPMI-1640 supplemented with 10% FBS. After 1 month of selection with 300 μg/ml G418, clones were screened for down-regulation of RPTPβ/ζ expression. Double-stranded negative control shRNA from Ambion was also used. Immunoprecipitation Media from DU145 cultures grown in 60 mm plastic dishes were aspirated, cells were washed twice with ice-cold PBS, and cells were lysed in 1 ml buffer containing 50 mM HEPES pH 7.0, 150 mM NaCl, 10 mM EDTA, 1% Triton X-100, 1% Nonidet P-40, 1 mM phenylmethylsulfonyl fluoride, 1 mM sodium orthovanadate, 5 μg/ml aprotinin, and 5 μg/ml leupeptin. Cells were harvested, sonicated for 4 min on ice, and centrifuged at 20,000 g for 10 min at 4°C. Approximately 400 μg of the supernatant was then incubated with 30 μl of protein A-Sepharose bead suspension for 60 min at room temperature. Beads were collected by centrifugation, and the supernatants were incubated overnight at 4°C with anti-RPTPβ/ζ (1:200) or anti-Src (1:1000) primary antibodies. The mixtures were then incubated with 80 μl protein A-Sepharose beads for 3 h at 4°C. The beads and bound proteins were collected by centrifugation (10,000 g, 4°C), washed three times with ice-cold lysis buffer, and resuspended in 60 μl 2× SDS loading buffer (100 mM Tris-HCl pH 6.8, 4% SDS, 0.2% bromophenol blue, 20% glycerol, 0.1 M dithiothreitol). Samples were then heated to 95-100°C for 5 min and centrifuged. Fifty microliters of the supernatant were analyzed by Western blotting. Western blot analysis Cells were starved for 4 h, then incubated with P(122-131) for varying times. Cells were subsequently washed twice with PBS and lysed in 250 μl 2× SDS loading buffer under reducing conditions. Proteins were separated by SDS-PAGE and transferred to an Immobilon-P membrane for 3 h in 48 mM Tris pH 8.3, 39 mM glycine, 0.037% SDS, and 20% methanol. The membrane was blocked in TBS containing 5% non-fat milk and 0.1% Tween 20 for 1 h at 37°C. Membranes were then probed with primary antibody overnight at 4°C under continuous agitation. Anti-RPTPβ/ζ antibody was used at a 1:500 dilution. All other antibodies were used at a 1:1000 dilution. The blot was then incubated with the appropriate secondary antibody coupled to horseradish peroxidase, and bands were detected with the ChemiLucent Detection System Kit (Chemicon International Inc., CA), according to the manufacturer's instructions. Where indicated, blots were stripped in buffer containing 62.5 mM Tris-HCl pH 6.8, 2% SDS, 100 mM 2-mercaptoethanol for 30 min at 50°C and reprobed. Quantitative estimation of band size and intensity was performed through analysis of digital images using the ImagePC image analysis software (Scion Corporation, Frederick, MD).
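The quantitative read-outs used throughout the Results (percent inhibition relative to control from crystal violet absorbance at 590 nm, and phospho-band densitometry normalized to the HSC70 loading control) reduce to simple ratios. The following is a minimal sketch; the numeric values are purely illustrative, chosen only to match the maximal effects reported above, and are not measured data.

```python
# Percent inhibition relative to control from A590 crystal violet readings.
def percent_inhibition(a590_treated, a590_control, a590_blank=0.0):
    treated = a590_treated - a590_blank
    control = a590_control - a590_blank
    return 100.0 * (1.0 - treated / control)

# Phospho-band densitometry normalized to the loading control (e.g. HSC70),
# expressed as a percentage of the untreated control lane.
def normalized_signal(phospho, loading, phospho_ctrl, loading_ctrl):
    return 100.0 * (phospho / loading) / (phospho_ctrl / loading_ctrl)

# Illustrative values only: a treated well at half the control absorbance
# corresponds to the 50% maximal adhesion inhibition reported for 20 uM,
print(percent_inhibition(0.45, 0.90))                    # 50.0
# and a phospho-Src lane at 30% of control matches a ~70% decrease.
print(normalized_signal(300.0, 1000.0, 1000.0, 1000.0))  # 30.0
```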
7,015.4
2010-08-25T00:00:00.000
[ "Biology", "Medicine" ]
Splitting ore from X-ray image based on improved robust concave-point algorithm Image segmentation is a key part of the ore separation process based on X-ray images, and its segmentation result directly affects the accuracy of ore classification. In the field of ore production, conventional segmentation methods struggle to meet the requirements of real-time performance, robustness, and accuracy during the ore segmentation process. In order to solve the above problems, this article proposes an ore segmentation method for pseudo-dual-energy X-ray images that is composed of a contour extraction module, a concave point detection module, and a concave point matching module. In the contour extraction module, the image is first split into its high-energy and low-energy parts, and an adaptive threshold is then used to obtain the ore binary image. After filtering and morphological operations, the image contour is extracted from the binary image. The concave point detection module uses vector angles to detect concave points on the contour. As the main contribution of this article, the concave point matching module removes the influence of interfering boundary concave points by drawing auxiliary lines and judging the relative position of each auxiliary line and the ore contour. Once the matched concave points are connected, the ore segmentation is complete. In order to verify the effectiveness of this method, a comparative experiment was conducted between the proposed method and conventional segmentation methods using X-ray images of antimony ore as data samples. The results of the industrial experiment show that the proposed intelligent segmentation method can remove the interference of pseudo concave points on the contour, achieve accurate segmentation results, and satisfy the requirements of processing X-ray images of ore. INTRODUCTION In the process of ore production, separation is an indispensable procedure, and the effect of ore separation directly determines whether the raw ore can be fully utilized. At present, the main beneficiation methods are hand separation, gravity separation, flotation separation, and so on (Qin et al., 2017). Among them, hand selection is based on the differences in color, luster, and shape between target minerals and gangue in raw ore. Although laborious, this method can often obtain a higher-grade concentrate. Gravity separation makes use of the density difference between ore and gangue. Flotation is the main method for mineral extraction, but it is not suitable for all minerals. Taking antimony ore as an example, while antimony sulfide is a floatable mineral, antimony oxide is a refractory ore. The above methods have significant problems, such as low identification accuracy, large space requirements, and high investment costs. With the development of computer science and image processing technology, machine vision has been applied in the field of mineral separation in recent years (Jung & Choi, 2021). Separation methods based on dual-energy X-ray have attracted more and more attention, and scholars have carried out a great deal of research on mineral separation. While most of this research focuses on how to conduct the separation according to the physical characteristics of the ore itself (Von Ketelhodt & Bergmann, 2010), there are few studies on the separation of adherent ores. The traditional image segmentation process mainly uses methods based on thresholds, edges, regions, and clustering.
The essence of the image segmentation method based on threshold (Otsu, 1979) is to classify the image gray histogram by setting different gray threshold values (Huang, Zheng & Liang, 2020). Edge detection (Rosenfeld, 1981) consists of serial edge detection and parallel edge detection (Khan, Bhuiyan & Adhami, 2011). The serial edge detection method first detects the starting point of an edge, from which adjacent edge points are searched and connected by a similarity criterion to complete the image edge detection; the parallel edge detection method segments by using a spatial calculus algorithm and convolving its template with the image in parallel. In practice, the parallel edge detection method can complete the segmentation by direct convolution with differential operators such as Roberts (Rosenfeld, 1981), Sobel (Gao et al., 2010) and Canny (Er-sen et al., 2009). The region-based image segmentation method uses the spatial information of the image for classification, and there are many such methods (Pham, Xu & Prince, 2000), among which the region growing algorithm, the splitting and merging algorithm (Tremeau & Borel, 1997) and the watershed algorithm (Chandra, Supraja & Bhavana, 2017) are the most commonly used. The region growing algorithm collects pixels with similar properties to form independent regions to get segmentation results. The essence of the splitting and merging algorithm is to obtain each sub-region of the image by constantly splitting and merging. The watershed algorithm (Liu, 2019; Chien, Huang & Chen, 2003) treats the image it operates on as a topographic map, in which the brightness value of each pixel represents its height. The image segmentation method based on clustering (Huang, Zheng & Liang, 2020; Rosenfeld, 1981) gathers pixels with similar features into the same area, iterates the clustering results repeatedly until convergence, and finally divides all pixels into several different categories to get the segmentation results. With the development of deep learning, the convolutional neural network has been introduced into the field of image segmentation as an important means of image processing. It can make full use of the semantic information of the image to realize segmentation. A series of image semantic segmentation methods based on deep learning, such as FCN (Long, Shelhamer & Darrell, 2015), PSPNet (Zhao et al., 2017), DeepLab (Chen et al., 2016) and Mask R-CNN (He et al., 2017), have been proposed. However, although deep learning methods have strong adaptability, they still have some shortcomings, such as requiring large datasets for training and difficulty obtaining real-time segmentation results. For specific segmentation scenarios, some of these algorithms can achieve good segmentation results. However, ore adhesion patterns vary widely, so the methods mentioned cannot be applied directly to ore segmentation. The method based on concave point detection is a very effective way to segment circular adherent objects. It can be observed that multiple concave points must exist in the outline when circular objects adhere to each other, and we can use this prior knowledge to segment such adherent objects. For example, Yao et al. (2017) use a concave point detection algorithm to segment rice, and Song, Zhao & Liu (2014) propose a method combining concave point detection and the watershed algorithm to segment adherent cells.
The segmentation method based on concave point detection is not only fast enough to meet the needs of tasks requiring high real-time performance, but can also achieve good results for objects with smooth surfaces such as cells. However, although ore is also a kind of circular object, its physical characteristics mean that there are many small interfering concave points on its surface. If a conventional concave point detection method is used directly, these interfering concave points will not be filtered out, which inevitably leads to a large amount of over-segmentation. To solve the above problems, inspired by the traditional concave point detection algorithm, a new concave point matching algorithm is proposed in this article to filter out the small concave points on the edge caused by the ore's own characteristics; the proposed method can greatly improve the accuracy of concave point matching. The proposed algorithm includes three parts: a contour extraction module using adaptive threshold segmentation, image binarization and Suzuki's contour extraction algorithm (Suzuki & Abe, 1985); a concave point detection module based on vector angles; and a concave point matching module based on concave point auxiliary lines. In order to validate the proposed method, industrial experiments have been carried out on the antimony ore dataset and compared with other methods. The experiments show that the proposed concave point matching method based on auxiliary lines can remove small interfering concave points well and meets the requirements of ore X-ray image segmentation. Industrial background The typical structure of an ore sorting device based on X-ray is shown in Fig. 1. The sorting process is as follows: first, raw ore is crushed into small stones of uniform size, which are sent to the pseudo-dual-energy X-ray identification equipment by the belt conveyor. Then, the industrial computer determines the grade of the ore from the high- and low-energy images obtained from the X-ray device and passes the coordinates of the ore to the valve controller, which uses the coordinates to adjust the direction of the air flow to blow the ore into the correct separator box. In this process, the image segmentation algorithm must be able to obtain the correct position of the ore from the X-ray image, otherwise the subsequent valve controller will not work correctly. The pseudo-dual-energy X-ray processing subsystem used in the system is shown in Fig. 2. It contains an X-ray source and a pseudo-dual-energy X-ray detector. The X-ray source emits an X-ray beam with a continuous spectrum, and the dual-energy detector has a two-layer structure: the upper layer collects low-energy signals, the middle layer uses a copper sheet to filter out the low-energy part of the ray, and the lower layer collects high-energy signals. The pseudo-dual-energy X-ray system is widely used in industrial inspection due to its simple structure, high precision and good cost performance. At present, the mainstream sorting method based on pseudo-dual-energy X-ray relies on the fact that different grades of ore have different physical characteristics and thus different absorption abilities for X-rays, so that the gray levels of the pictures obtained from the X-ray detector differ (one example is shown in Fig. 3). Once the ore is separated from the original image, these gray-level features, combined with a classification algorithm, allow different grades of ore and gangue to be separated.
The difficulty of ore image segmentation based on X-ray Real-time performance Industrial computers must segment and classify ore from the image and send ore locations and categories to the downstream controller within a limited time. If the algorithm takes too long, the antimony will not be blown into the corresponding box by the air valve in time, which leads to sorting failure. Accuracy The classification algorithm uses each pixel of the segmented ore to classify the mineral. If over-segmentation occurs, the number of pixels available for ore classification is reduced, resulting in errors in the classification results. Likewise, if under-segmentation occurs, the classification results will be inaccurate because the samples to be tested are mixed with different kinds of ores. Robustness In the actual production process, there will be many small stones and other debris on the belt, and these debris eventually appear on the X-ray gray image in the form of noise. If these noises are not processed, over-segmentation will occur and the accuracy of recognition will be reduced. ROBUST ORE IMAGE SEGMENTATION ALGORITHM The ore image segmentation algorithm proposed in this article is implemented on the system shown in Fig. 1. It consists of three modules: the contour extraction module, the concave point detection module and the concave point matching module. The contour extraction module consists of image cutting, image binarization, noise processing and contour extraction. The concave point detection module determines whether a point on the contour is concave by calculating the angle between three points along the contour. As the main contribution of this article, the concave point matching module filters out interfering concave points on the boundary by drawing lines parallel to the line connecting a pair of concave points and judging the relative position of these parallel lines and the ore contour, so as to reduce the probability of over-segmentation. Image cutting As shown in Fig. 4, the output image of the X-ray processing subsystem consists of two parts, a high-energy part and a low-energy part, so it needs to be cut apart via a matrix slicing operation. The image obtained after cutting is shown in Fig. 5. Image binarization In order to find the ore contour using Suzuki's method (Suzuki & Abe, 1985), the image obtained in the Image cutting section needs to be binarized. Fixed threshold segmentation, Otsu threshold segmentation, and adaptive threshold segmentation are the most commonly used methods. It is found in practice that, for samples obtained from the actual production process, fixed threshold and Otsu threshold segmentation, as global threshold methods, cannot effectively binarize the ore image given the dark background and the heavy noise and interference caused by stone powder, while adaptive threshold segmentation, which uses local thresholds, adapts better to the complex situations in different scenes. Therefore, this article adopts the adaptive threshold segmentation algorithm to segment non-adherent images and to carry out preliminary segmentation of adherent images. By selecting the low-energy or high-energy image in Fig. 5, the binary image can be obtained after processing with the adaptive threshold segmentation algorithm; the result is shown in Fig. 6. Noise filtering In the actual production process, the binary image of ore has a lot of noise.
If the noise is not filtered, the program will treat the noise as tiny ores, which not only increases the computing load of the computer but also interferes with the process of recognizing normal ores. We use morphological operations (Comer & Delp, 1999) to remove noise from the image. First, the noise inside the stones was removed by a dilation operation, as shown in Fig. 7. Similarly, in order to remove noise outside the ore, an erosion operation is used; the effect is shown in Fig. 8. Contour extraction In this article, the method proposed by Suzuki (Ren, Zhang & Zhang, 2019; Suzuki & Abe, 1985) is used to extract the contour from the binary image, and the extracted contour is shown in Fig. 9. Concave point detection It can be observed that a concave point is a point of locally maximal curvature on the contour formed by two or more stacked circular objects. For a single smooth elliptical object, the contour curve will not have a large curvature mutation, nor will it form a concave region caused by the overlapping of different objects. On the contrary, there must be points whose curvature changes suddenly on the contour of adherent ores, which is the basis for determining concave points. As shown in Fig. 10, two vectors named u and v are formed from three points on the contour (the lengths of u and v are greater than the hyperparameter d), and the angle between the two vectors is calculated to obtain the concavity corresponding to the point. For points on the contour [p_1, p_2, ..., p_k, ..., p_n], in order to find the concavity corresponding to point p_k, Eq. (1) is used to solve for m = i and m = j separately (in Eq. (1), i > k for one solution and j < k for the other; in this article, d is set to 6):

arg min_m |k − m| s.t. |p_k − p_m| > d. (1)

To calculate the angle between the vectors u and v, the function θ(x, y) defined in Eq. (4) is used to calculate the angle between the x-axis and the vector u, and the same is done for v, so that we obtain the values θ(u_x, u_y) and θ(v_x, v_y). Let θ_1 be the difference between θ(u_x, u_y) and θ(v_x, v_y). Note that θ(x, y) in Eq. (4) is 0 in the positive direction of the x-axis and gradually increases in the counterclockwise direction, with a maximum value of 2π. As θ_1 ∈ [−2π, 2π], the result obtained in the previous step is processed using Eq. (6) to obtain the angle α. We can judge the convexity and angle of the local contour from the value of α: if α ∈ [0, π] the point is concave, otherwise it is convex. In practical use, in order to reduce interference, relatively flat local contours are excluded; that is, a point on the contour is regarded as a concave point only if α ∈ [0, θ_t], where θ_t < π. After repeated experiments, it is found that selecting θ_t = 3π/4 as the threshold achieves a good detection effect in this article. By using the concave point detection algorithm described above, the concave points can be detected in the extracted contour (b) in Fig. 9; these concave points are annotated in red in Fig. 11 (the contour drawn in the Contour extraction section is hidden to highlight the concave points). Concave point matching The physical characteristics of the ore often lead to a large number of concave points on its edge even in the absence of adhesion with other ores. For example, because some antimonite exists in the form of crystals, many concave points will be detected when the concave point detection algorithm is run.
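The detection step above reduces to a chord-spacing search plus a wrapped angle test. Below is a minimal Python/OpenCV sketch of the contour extraction and concave point detection stages; it is not the authors' C++ implementation. The adaptive-threshold block size and offset are illustrative assumptions, while d = 6 and θ_t = 3π/4 follow the values quoted in the text; OpenCV's findContours implements Suzuki and Abe's border-following algorithm, so it stands in for the contour extraction module.

```python
import math
import cv2
import numpy as np

D = 6                      # chord-length hyperparameter d from Eq. (1)
THETA_T = 3 * math.pi / 4  # concavity threshold theta_t from the text

def extract_contours(gray):
    """Binarize with an adaptive threshold, clean up with morphology,
    then extract outer contours (Suzuki & Abe border following)."""
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, 51, -5)
    kernel = np.ones((3, 3), np.uint8)
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)  # fill interior noise
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)   # drop exterior specks
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    return contours

def concave_points(contour):
    """Indices k whose local angle alpha falls in [0, THETA_T]; the sign
    convention assumes the contour's default traversal direction."""
    pts = contour.reshape(-1, 2).astype(float)
    n = len(pts)
    hits = []
    for k in range(n):
        i = k  # walk backward until the chord |p_k - p_i| exceeds D (Eq. (1))
        while np.linalg.norm(pts[i % n] - pts[k]) <= D and k - i < n:
            i -= 1
        j = k  # walk forward until the chord |p_k - p_j| exceeds D
        while np.linalg.norm(pts[j % n] - pts[k]) <= D and j - k < n:
            j += 1
        u = pts[i % n] - pts[k]
        v = pts[j % n] - pts[k]
        # difference of the two theta(x, y) angles, wrapped into [0, 2*pi)
        alpha = (math.atan2(v[1], v[0]) - math.atan2(u[1], u[0])) % (2 * math.pi)
        if alpha <= THETA_T:
            hits.append(k)
    return hits
```

Whether the wrapped angle marks a concave or a convex corner depends on the direction in which the contour is traversed, so the comparison may need to be inverted for contours stored clockwise.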
For this kind of ore, using the concave line directly as the dividing line will lead to over-segmentation. This problem can be solved by the three-line method proposed in this article, whose schematic diagram is shown in Fig. 12. As can be seen from Fig. 12, due to the characteristics of the ore itself, concave points can be detected even on the outline of a single ore. Obviously, in such a case, the line connecting the concave points cannot be directly used as the dividing line (marked in green). In order to eliminate the false segmentation caused by such points, auxiliary lines are drawn in the figure (marked in black). The line connecting a pair of concave points can be used as a dividing line only if the auxiliary lines are entirely inside the contour. The detailed steps of the method are as follows: Finding the auxiliary line As shown in Fig. 12, an auxiliary line is a line parallel to the line connecting the two concave points, with the same length as that connecting line. Therefore, the auxiliary line can be obtained by combining the slope k of the connecting line between the concave points with a given distance d. Let E_0 = (r_0, c_0) and E_1 = (r_1, c_1) be the endpoints of the connecting line. First, calculate its slope k using Eq. (7):

k = (c_1 − c_0) / (r_1 − r_0). (7)

The distance differences (Δx and Δy) between a concave point and the corresponding endpoint of the auxiliary line are obtained from the slope k and the distance d. The endpoints (r_0, c_0) and (r_1, c_1) are translated by Δx and Δy in the horizontal and vertical directions to obtain the endpoints of the auxiliary lines: P_0, P_1, N_0 and N_1. Connecting P_0 and P_1 gives the auxiliary line L_P; similarly, the auxiliary line L_N is obtained by connecting N_0 and N_1. Judging the position relation between auxiliary line and contour In order to judge the position relationship between an auxiliary line and the contour, it is necessary to judge the position relationship between each point of the auxiliary line and the contour. In this article, the PNPoly algorithm (Haines, 1994; Zhang & Zhou, 2022) proposed by W. Randolph Franklin is used to determine the position relationship between a point and the contour (the point to be measured can be on the contour, inside the contour or outside the contour). The idea of the algorithm is shown in Fig. 13. As can be seen from the figure, the position relationship between the point and the contour can be judged by drawing a horizontal line and counting the number of times that the line crosses the contour. Finding the true dividing line As shown in Algorithm 1, by repeatedly applying the methods described in the Concave point detection and Concave point matching sections to each candidate concave point splitting line, the true splitting line of the adhered ore can be found. Complete segmentation process The overall segmentation process is shown in Algorithm 2; one segmentation example is shown in Fig. 14. EXPERIMENTS LNPC12-80, a pseudo-dual-energy X-ray sorting device produced by the Longi company, was selected for the experiment; the equipment is shown in Fig. 15. In order to verify the performance of the proposed algorithm, a self-made antimony ore dataset is used for the experiment. The relevant parameters of the sample are shown in Table 1.
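The matching test itself needs only two pieces: constructing the auxiliary lines and querying point-in-polygon membership. The sketch below follows that structure; the PNPoly crossing test is W. Randolph Franklin's published algorithm, while the offset distance d = 3.0, the sampling density and all function names are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def point_in_polygon(pt, poly):
    """PNPoly (W. R. Franklin): cast a horizontal ray from pt and count
    edge crossings; an odd count means the point lies inside poly."""
    x, y = pt
    inside = False
    j = len(poly) - 1
    for i in range(len(poly)):
        xi, yi = poly[i]
        xj, yj = poly[j]
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

def auxiliary_lines(e0, e1, d=3.0):
    """Translate the split segment E0-E1 by +/-d along its unit normal,
    giving the two equal-length auxiliary lines L_P and L_N."""
    e0, e1 = np.asarray(e0, float), np.asarray(e1, float)
    seg = e1 - e0
    normal = np.array([-seg[1], seg[0]]) / np.linalg.norm(seg)
    return (e0 + d * normal, e1 + d * normal), (e0 - d * normal, e1 - d * normal)

def is_true_split(e0, e1, contour, samples=20):
    """Accept the concave pair as a dividing line only if every sampled
    point of both auxiliary lines lies inside the contour."""
    for a, b in auxiliary_lines(e0, e1):
        for t in np.linspace(0.0, 1.0, samples):
            if not point_in_polygon(a + t * (b - a), contour):
                return False
    return True
```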
In this experiment, the host has 16 GB of memory, the CPU model is 28375CX2, and the Windows 10 operating system is used. C++ and the OpenCV library are used to implement the proposed algorithm, and 134 single-channel 512 × 5,632 grayscale images are tested. Analysis of results In order to better demonstrate the superiority of the proposed algorithm, it is compared with a simple concave point matching algorithm and the watershed algorithm based on distance transform, using exactly the same dataset. Several typical images are selected from the experimental results to illustrate the performance of each algorithm. As shown in Fig. 16, if the overlapping area of two ores is large, they cannot be separated by the watershed algorithm (as shown in the first row). Even when the watershed algorithm can divide the adherent ore, the dividing line in the segmentation result is not very accurate (as shown in the second row). Due to the physical characteristics of the ore, even a single ore may have a large number of concave points on its contour; in this case, simple concave point matching inevitably leads to over-segmentation (as shown in the third row). In all the above cases, the algorithm proposed in this article works well. In this article, P_u (under-segmentation rate), P_o (over-segmentation rate) and P_a (accuracy rate) are used as performance indicators to reflect the performance of the algorithms in practical industrial applications; they are defined as P_u = N_u / M, P_o = N_o / M and P_a = N_a / M, where N_u is the number of under-segmented ores, N_o is the number of over-segmented ores, N_a is the number of ores correctly divided and M is the total number of ores tested (M = 134). As can be seen from Table 2, compared with the other algorithms, the proposed algorithm significantly reduces the probability of under-segmentation and over-segmentation and improves the accuracy of segmentation. CONCLUSION This article presents a strategy for ore segmentation based on concave point detection. The strategy contains two main innovations. First, an ore segmentation framework for pseudo-dual-energy X-ray images is proposed, composed mainly of a contour extraction module, a concave point detection module and a concave point matching module. Second, in order to reduce the influence of concave points caused by the physical characteristics of ore on the segmentation process, this article proposes a concave point matching algorithm which uses the position relationship between the auxiliary lines and the contour to judge whether a candidate segmentation scheme is valid. Comparison of the experimental results shows that the proposed algorithm obtains a satisfactory segmentation effect, can be applied to the actual industrial ore separation process, and also paves the way for ore classification.
5,475.4
2023-02-23T00:00:00.000
[ "Computer Science" ]
The Importance of Policies: It’s not just a pipeline problem A.J. Halford, NASA/GSFC<EMAIL_ADDRESS>M. Jones Jr<EMAIL_ADDRESS>A.G. Burrell, NRL<EMAIL_ADDRESS>M. S. F. Kirk, GSFC<EMAIL_ADDRESS>D. Malaspina, University of Colorado<EMAIL_ADDRESS>J.E. Stawarz, Imperial College London<EMAIL_ADDRESS>S. Lejosne, Space Sciences Laboratory, University of California, Berkeley<EMAIL_ADDRESS>C. Dong, Princeton University<EMAIL_ADDRESS>C. Bard, NASA/GSFC<EMAIL_ADDRESS>M.W. Liemohn, University of Michigan<EMAIL_ADDRESS>L.H. Regoli, Johns Hopkins Applied Physics Lab<EMAIL_ADDRESS>J. L. Verniero, NASA/GSFC<EMAIL_ADDRESS>K. Sigsbee, University of Iowa<EMAIL_ADDRESS>J. Klenzing, NASA/GSFC<EMAIL_ADDRESS>L. Blum, University of Colorado Boulder/LASP<EMAIL_ADDRESS>N. Turner, Trinity University<EMAIL_ADDRESS>J. P. Mason, Johns Hopkins University Applied Physics Laboratory<EMAIL_ADDRESS>K. Garcia-Sage, NASA/GSFC<EMAIL_ADDRESS>M. Hartinger, Space Science Institute<EMAIL_ADDRESS>N. Viall, NASA/GSFC<EMAIL_ADDRESS>L. Brandt, Aurorasaurus, New Mexico Consortium, NASA/GSFC<EMAIL_ADDRESS>S. Badman, Harvard-Smithsonian Center for Astrophysics, Space Sciences Laboratory, University of California, Berkeley<EMAIL_ADDRESS>V. Ledvina, Predictive Science Inc<EMAIL_ADDRESS>D. Turner, JHU Applied Physics Lab<EMAIL_ADDRESS>M. Zettergren, Embry-Riddle Aeronautical University<EMAIL_ADDRESS>C.A. Young, NASA/GSFC<EMAIL_ADDRESS>A. Maute, National Center for Atmospheric Research High Altitude Observatory<EMAIL_ADDRESS>S. T. Lepri, University of Michigan<EMAIL_ADDRESS>H. Connor, NASA/GSFC<EMAIL_ADDRESS>L. Habash Krause, NASA/Marshall<EMAIL_ADDRESS>J.-M. Jahn, Southwest Research Institute, UTSA<EMAIL_ADDRESS>L. Goodwin, New Jersey Institute of Technology<EMAIL_ADDRESS>B. Kosar, NASA/GSFC, The Catholic University of America<EMAIL_ADDRESS> Introduction For decades, a leaky pipeline analogy has been used when discussing diversity issues in STEM fields. However, this imagery is overly simplistic and does not capture critical issues that contribute to people leaving the field. It puts distance between structural issues, our actions, and why people leave the field. When we view our research structure as something more complex, we can start taking ownership and frame more impactful solutions instead of misidentifying important issues and providing ineffective short-term solutions. Many of the issues discussed in the "Cultivating a culture of inclusivity in Heliophysics" position paper have counterparts within our policies and our institutions. To fully address and mitigate the current issues within our field, we have identified a need to cultivate a positive, safe, inclusive, and effective environment. However, we need both cultural and programmatic changes. We will try to identify systemic issues that inhibit many from fully participating, potential solutions, and groups and fields producing best practices for creating and enabling effective environments where innovation can occur. 2 The scientific process Science occurs through collaborations, but we have not always acknowledged this [1]. Discoveries increasingly require scientists to cooperate, as evidenced by the increasing size of scientific collaborations [2]. How we do science and collaborate directly impacts the results we achieve. How we build collaborative teams, mission teams, proposal teams, and even the selection of conference coordinators, chairs, and speakers impacts who can participate in science.
Perhaps even more importantly, this also determines who drives the conversation about how our science questions should evolve [3,4]. Open Science: Open Science has many schools of thought, but it is based on a few key ideas: open data, open code, and open journals. All of these lower barriers to entry into science and help with the reproducibility of scientific results. Some groups within our field are already adopting these best practices, and groups like TOPS are working to make the field more open [5]. The Python development community within heliophysics is one such community; best practices identified for open code are referenced in [6]. Best practices in team formation (a move away from collaboration cliques): Science is a team endeavor. The formation of teams impacts who participates and how science is conducted. Science of Team Science (STS) is a field of research that looks at how scientists work best within teams and collaborative environments [7,8]. The National Academies have reviewed STS and best practices for different types of teams (geographically dispersed, culturally diverse, different types of leadership, etc.) [3]. The field of Team Science will allow us to more easily link the sciences to other disciplines such as industry or the humanities, which is vital to our goal of achieving a more diverse, inclusive, and safe research environment [9,10]. For instance, it matters who is invited to a given team's very first or first few meetings. Inviting only those we think of first, typically those who look like and have similar backgrounds to ourselves, when forming a collaboration or a proposal team is exclusionary. It limits knowledge transfer between groups and a team's ability to identify blind spots. If diverse people are added later in the process, they have missed out on the opportunity to become essential. Individuals added later must expend extra time and effort to catch up to the rest of the team. This may include learning the team's jargon, tools and codes, and the background of the work. This inhibits an individual's ability to be a fully functioning member, and some infer an inability of new team members to be constructive contributors. Thus, new members need support and resources to come up to speed and to feel that they can be full members who belong to the team. Consequently, when minority and underrepresented groups within our community are continually added after initial meetings, they will continue to feel looked over, secondary, and not fully valued. Interdisciplinary scientists and projects require a home: Interdisciplinary expertise is required to understand the interconnectedness of the heliosphere. Therefore, making it easy to participate in multidisciplinary work is necessary for Heliophysics to flourish beyond the advancements made in the past decades [4]. The high-level best practices in the Science of Team Science lead to effective teams, improved creativity, and innovative scientific results. Often, we see that individuals who do interdisciplinary work are not considered to belong to any sub-field and at times find themselves outside these close networks. It is crucial that hiring and committee appointment decisions treat interdisciplinary expertise as a strength. Similarly, genuinely interdisciplinary projects often struggle to find a funding source, as funding agency divisions may not consider interdisciplinary proposals as core to their objectives.
Likewise, interdisciplinary science questions are often not seen as compelling by review panels, which tend to look for very focused science topics with clear outcomes. A possible way to mitigate this is to build funding sources and academic departments within the field whose core objectives are to foster interdisciplinary projects, such as a trans- or interdisciplinary division within NASA, recognizing the potential for scientific discoveries in our field in the vast unknowns between disciplines. Soft money science Most of us will be or have been on soft money for at least a portion of our careers [11]. The Heliophysics community often regards soft-money positions as temporary, filled by graduate students or early career researchers. However, many members of the Heliophysics workforce are supported by soft money throughout their careers. Soft-money positions can have benefits, such as fewer or no teaching obligations and greater flexibility in work locations and hours, but there are also pitfalls. Some difficulties that soft-money employees encounter are directly related to the HR and grant and contract policies of their employers and funding agencies. Heliophysics research can bring millions of dollars to universities and other institutions, but the departments and investigators who secured this funding often see little or no return on their overhead. For example, the facilities and administration (F&A) costs charged by universities on grants and contracts that support soft-money employees may go directly into the general education funds of these institutions. This can make it difficult for departments to provide adequate computing resources, laboratory access, office space, and furniture to soft-money employees, as these things often cannot be directly paid for by grants and contracts. Additionally, many institutions include a separate line item in grant/contract budgets for fringe benefits. When soft-money employees are classified as full-time, regular employees by their institutions, they usually receive these benefits. However, soft-money employees classified as temporary or independent contractors may not have access to these benefits, providing little incentive for these individuals to continue working in Heliophysics. Policies that encourage hiring full-time employees over temporary workers would contribute to a more stable, experienced Heliophysics workforce. The short time frames and budgets of grants and contracts drive the need for soft-money researchers and employees working at full-cost-accounting institutions to write new proposals constantly. Anxiety over job security can motivate researchers to leave academia and the field. For example, researchers with Ph.D.s supported through soft money are often regarded as less capable than those holding tenure-track faculty positions, even though they are equally qualified. Many soft-money researchers mentor students and post-docs, manage projects, and serve on committees. In effect, soft-money researchers carry out many of the same duties as faculty. Still, they are often ineligible for many opportunities that support professional development, mentoring, and large-scale or long-term projects (e.g., NSF CAREER awards, Major Research Infrastructure). Including soft-money researchers in these policies and proposal calls would help ease this anxiety and improve Heliophysics workforce morale. For example, an overhead allocation to support bridge funds could support all employees who are between grants for a month or two.
Another idea would be to return a fixed portion of each grant's overhead (2%, 5%) directly to each researcher on the grant, pooled into a discretionary "rainy day" fund that does not expire. Every step to improve financial and funding security helps keep people in Heliophysics. 4 Accessibility and Equity across different sections of our community Many communities within heliophysics have different needs to fully participate in day-to-day science activities. For example, physics buildings at research institutions are often old and "grandfathered" into not meeting ADA requirements. Due to the lack of funding at many institutions, these challenges are not adequately addressed, and the burden falls on the disabled individual to navigate campus support. While renovating an entire building may be impossible under budget constraints, we must consider smaller measures, including retrofitting automatic doors on restrooms or adding wheelchair lifts. Additionally, participating in conferences is physically demanding and presents limitations to many. One often must move quickly from a poster hall to another room to catch a talk. Scientists with physical limitations may stay in one area and miss out on other opportunities. If one cannot stand for several hours in a poster session, one can request a chair, but this can also cause issues: from a chair, it is difficult to engage a crowd of people visiting one's poster. The standards for ADA accommodations at conferences need to change from special requests, which burden the disabled individual, to standards that present minimal barriers to networking. There are many more elements than conferences and building layouts that can be adapted to make community members feel welcome. Unfortunately, we are not able to list them all in this paper. Still, we have tried to highlight some key areas where more work is needed surrounding accessibility and equity across different sections of our community: • Consider the needs of those with visible and invisible disabilities in the initial phases of policy making and planning. • Accommodation for scientists with disabilities (e.g. teleworking, virtual conference participation). • Reasonable deadlines that fit into the month-long clearance processes that many within our community are tied to. 5 Promoting hybrid meetings. With the increasing pace of technology and online connection tools, we have greater flexibility than ever in how we collaborate. We are no longer limited to being in the same physical space for meaningful discussions. There are benefits and challenges unique to in-person or virtual collaboration. Hybrid meetings allow for the best of both worlds: more accessible in-person discussions and networking for those who can come on-site, and the ability to contribute viewpoints and scientific debate for those unable to travel. However, we must be careful that this physical separation between on-site and online colleagues does not also produce a "participatory" bias. Care must be taken in establishing the culture and norms of these hybrid meetings, ensuring online voices are adequately heard. Some possible suggestions include: • Having someone on-site with the specific responsibility of raising the voices of those not physically present (e.g. reading out questions, raising a hand on behalf of a virtual participant). • Having laptops/phones/etc. out for engaging with the remote team members via chat.
• Dual online/in-person poster sessions; webcams and screens for live chat with online participants • Asynchronous collaboration: e.g. recorded talks, persistently available poster access, question and answer in a message board format 6 Common, collaborative, affordable tools. Science is a collaborative endeavor and is often done best when we collaborate across institutions. However, many institutions, especially within the government and industry sectors, limit employees' access to different collaborative tools. This impacts the ease and effectiveness of collaborations across institutions. Additionally, we have many different tools for virtual collaboration available to us. Today, we can communicate and collaborate via options as diverse as email, Google Meet, Stack Overflow, Overleaf, GitHub, and Jupyter Notebook. However, this also means that there are a large number of spaces we have to monitor. Finally, although internet-based collaboration tools may always be "on," we must develop a culture that does not expect us to always be on and interacting with those tools. A healthy balance between synchronous and asynchronous collaboration will maintain connection and productivity. Whether it is a feeling of isolation because your institute doesn't support a specific tool, e.g., Overleaf, or a feeling of constant work leading to burnout, our collaboration tools and our relationship with them can greatly impact how welcome we feel within the community. Need to address power imbalances In the current academic infrastructure, there are inherent power imbalances at all career levels. Whether it is a graduate student at the mercy of their Ph.D. advisor, a postdoc who is unsupported by their supervisor, or a senior scientist who experiences unhealthy dynamics with their mission PI, these individuals deserve a structural system that allows them to report abuses and harassment safely. Everyone deserves to be able to exist in a safe environment to perform their research, see abusers held accountable, and help ensure our field is safe for those who come next. In short, they deserve a chance at justice [12]. We must build institutional systems that check power imbalances, such as dual anonymous reviews [13,14]. Accountability for Both Good and Bad Behavior Accountability is a necessary but complex topic. We want to acknowledge that people can grow and change. However, we need precise mechanisms for reporting and accountability for bad actors and continual harassers. At the moment, there is a quantifiable risk to the careers and reputations of people who bring forward complaints (see the documentary "Picture a Scientist"). This can include further implicit bias when the harasser, or supporters of the harasser, review papers and proposals. While the risk may never be zero, some mechanisms can help mitigate it and address other issues of bias. There are currently no real accountability mechanisms in place for unethical behavior. The current institutional mechanisms are fundamentally flawed. Non-retaliation policies only apply within an institution, but our careers require communication across institutions and around the globe. There is currently no non-retaliation policy preventing an influential scientist from convincing their powerful peers that a subordinate is unworthy of employment. The Geoff Marcy case is just one example of how powerful scientists can maintain positions of power and continue to influence individual careers and the culture of a field [15].
Consequently, individuals take on an inherent career risk when reporting harassment and seeking justice for enduring harmful working conditions. This is unacceptable and must be addressed immediately. Therefore, we recommend that government institutions like NASA and NSF create trans-institutional Human Resources (HR) support for safe, anonymous reporting. As harassment can occur and impact a person's career at any stage, scientists at all career levels would benefit from trans-institutional HR support. One way funding institutions such as NSF and NASA can help hold researchers accountable is to create an ombudsperson role for missions (institutions within themselves) and non-mission-related funded projects (such as a proposal call). These ombudsperson roles can start as an extension of a Project Scientist on a mission (or an equivalent point of contact on proposal calls) and eventually be integrated into a newly created position to ensure maximum accountability for unethical behavior in all forms. Recommendations Individuals need the support of organizations to help create a culture of inclusion, openness, and innovative science. The recommendations below help empower individuals and institutions to ensure our community is welcoming to all. • Work more closely with experts in the Diversity, Equity, Inclusion, Accessibility, and Justice (DEIAJ) research community and adopt the best practices they have identified for creating a positive climate and culture for our field. • Create a database of resources and models/frameworks for cultivating an open and inclusive climate. • Create and maintain clear and easily accessible tools for reporting bad conduct as well as a way to hold individuals and institutions accountable. • Coordinate across agencies to bring awareness to reports of harassment. Create and maintain a list of convicted harassers shared within the field. This is one way to address the disconnect between institutions, societies, organizations, funding agencies, etc. when it comes to reporting harassment. • Create effective and thorough protection against retaliation for reporting cases of harassment, especially in imbalanced power dynamics (faculty vs. graduate student, civil servant vs. contractor, and so on). • Enable access to bystander/allyship and other types of training to encourage fundamental change by enabling people to speak up and act when they see something. • Codify codes of conduct for the field, e.g. for mentoring relationships, workshops, or committees. • Address wage gaps. While not discussed here, this is an important part of why some people leave the field. Not everyone is yet convinced that having a culture where all are respected, accepted, and welcomed will benefit science. Likewise, not everyone is yet convinced that these issues affect them, are something they should worry about, or are something that they have control over. Therefore, it is important to emphasize the following: • Equity and inclusion benefit everyone. • Both intentional and unintentional actions by peers and organizations have a major impact. • Everyone has unconscious biases. The key is to understand them and implement a conscious ethic of identification, detection and mitigation. • Antiracism is an important principle to understand. It focuses on what we are doing to address racism at all levels and encourages all to help eliminate both individual and institutional racism. • Power imbalances, particularly indirect power imbalances, do impact careers.
• People tend to interact socially (both at work and after work) with people they feel most comfortable with. This can result in exclusion from important connections and networking opportunities, and in severe cases, the climate phenomenon of "invisibility." • Microaggressions are commonplace, often unintentional actions that contribute to a climate of exclusion or hostility. Studies show that many identify microaggressions, integrated over time, as more harmful and damaging than explicit racism or sexism. Parts of our culture and set of policies systematically push parts of our community out of heliophysics. For example, we often use metrics such as the number of scientific publications to determine promotions and awards, while women, non-binary people, and people of color typically have disproportionate DEIAJ and service responsibilities pulling them away from their research and paper writing. If the burden of growing, supporting, and retaining an inclusive community falls disproportionately on a subgroup, it should be recognized and valued professionally. If this same subgroup is also disproportionately subjected to implicit and explicit biases, we will continue to see a leaky pipeline. Allies and our institutions must change the culture, our policies, and our spaces to support everyone. Otherwise, the very actions aimed at improving DEIAJ will have the opposite effect and end up pushing these groups disproportionately out of the field.
4,519.2
2023-07-31T00:00:00.000
[ "Physics" ]
Phase Modulation Schemes in Food Free Space Optical Coherent System In this study, the most appropriate modulation format for the food free space optical (FFSO) system is identified by comparison with other modulation schemes. The BER performance of BPSK, DPSK and QPSK is discussed in detail under a Gamma-Gamma turbulent channel with intensity scintillation and phase fluctuations. Closed-form average BER and channel capacity expressions are derived by the method of generalized hypergeometric functions. The experiments show that BER and transmission power degrade seriously for all three modulation formats owing to intensity scintillation and phase fluctuations. Comparing the performance of the three modulation methods under atmospheric turbulence, the results for BPSK and QPSK are almost identical, while for a given SNR it is clear that BPSK has the lower BER. Therefore, BPSK modulation achieves a good BER and effectively resists turbulence without requiring increased transmission power. INTRODUCTION In recent years, the Food Free Space Optical (FFSO) communication system has attracted more attention and has been widely applied to high-speed data transmission in wireless communication systems owing to its large spatial capacity, high communication efficiency, good secrecy, immunity to interference between users, good resistance to electromagnetic interference and low power consumption. FFSO is the future development trend for high-speed, large-capacity, long-distance optical communication, and since the end of the 20th century it has made significant progress. FFSO is a technology that uses laser light as a carrier to transmit information through the free-space channel. When the laser beam propagates in the atmosphere, it may suffer power attenuation and multipath effects caused by absorption and scattering by atmospheric gases, and the signal light carrying the information may also be affected by atmospheric turbulence, intensity scintillation and beam drift, seriously degrading the receiving SNR and the BER performance of the FFSO system (Ho, 2005; Gagliardi and Karp, 1995). Therefore, atmospheric effects seriously restrict the development of FFSO. In order to suppress the atmospheric effects on FFSO performance, a great deal of research has been conducted by scholars at home and abroad; in particular, the United States and Japan have made remarkable achievements, and in China important progress has also been made in recent years. At the receiver, decreasing the receiving aperture can effectively mitigate atmospheric turbulence, but this method is only applicable in good weather conditions. Shin and other scholars have experimentally verified that MIMO technology and space-time coding can improve the performance of food free space optical communication systems and reduce the effects of atmospheric turbulence. Tyson proposed that adaptive optics (AO) technology can improve the performance of FFSO. The analysis of adaptive optics in optical communication systems has been carried out by Wu Yunyun and other scholars in China; their experimental results show that the average received optical power improves when AO correction is placed at the transmitting terminal. Similarly, many studies show that phase shift keying modulation methods can effectively suppress turbulence (Ales and Lubomir, 2012; Wang et al., 2009).
MATERIALS AND METHODS Model for coherent FFSO system: A typical diagram of a coherent FFSO optical communication system is shown in Fig. 1 (Model of coherent FFSO system). The CW laser beam is modulated by the RF signal in an electro-optic phase modulator, and the light waves carrying the information are then expanded to a certain diameter and transmitted into the atmospheric turbulence channel by the sending telescope (Kiasaleh, 2015). At the receiving end, a receiving telescope collects the beam after transmission over a certain distance. The received signal is coherently detected by a balanced avalanche photodiode (APD) detector. Coherent detection is selected in this system because coherent optical communication has high sensitivity compared with Intensity Modulation/Direct Detection (IM/DD) and suppresses background light well. Because adjusting the APD gain can reduce thermal detection noise, an APD is used in the coherent detection design of this study (Kiasaleh, 2009). Channel model under turbulence: Light intensity during transmission fluctuates randomly. Lognormal, K, exponential, I-K and Gamma-Gamma distribution models are commonly used to describe the atmospheric turbulence channel. The Gamma-Gamma model is adopted here because its modeling parameters are closest to the actual system parameters and it effectively describes the intensity scintillation of both strong and weak turbulence. Under the Gamma-Gamma model, the probability density function (PDF) of the beam intensity fluctuation can be expressed as (Majumdar, 2005)

f(I) = [2(αβ)^((α+β)/2) / (Γ(α)Γ(β))] · I^((α+β)/2 − 1) · K_(α−β)(2√(αβI)), I > 0, (1)

where K_(α−β)(·) is the modified Bessel function of the second kind, Γ(·) is the Gamma function, and α and β are the parameters of the strong and weak light intensity fluctuations. Phase fluctuation model: At the receiver, atmospheric turbulence causes a frequency offset of the received signal, which is treated as phase fluctuation. A widely accepted theory of phase fluctuation is the perturbation approximation theory derived by Tatarskii based on the theory of Rytov (Patnaik and Sahu, 2013). By this theory, the phase fluctuation Δφ follows a Gaussian distribution; in its defining expression, f_s is the signal rate, equal to 1/T_b.
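As a numeric sanity check on the channel model, the Gamma-Gamma PDF in Eq. (1) can be evaluated directly; the sketch below assumes the standard unit-mean form and uses scipy's kv for the modified Bessel function K that appears in the density. The (α, β) pair shown is the strong-turbulence pair quoted later in the paper.

```python
import numpy as np
from scipy.special import gamma, kv

def gamma_gamma_pdf(I, alpha, beta):
    """f(I) = 2(ab)^((a+b)/2) / (Gamma(a)Gamma(b)) * I^((a+b)/2 - 1)
              * K_{a-b}(2 sqrt(ab I)),  I > 0 (unit mean irradiance)."""
    ab = alpha * beta
    coeff = 2.0 * ab ** ((alpha + beta) / 2) / (gamma(alpha) * gamma(beta))
    return coeff * I ** ((alpha + beta) / 2 - 1) * kv(alpha - beta, 2.0 * np.sqrt(ab * I))

# sanity check: the density should integrate to ~1 with mean ~1
I = np.linspace(1e-6, 20, 400_000)
f = gamma_gamma_pdf(I, alpha=4.0, beta=1.72)
dI = I[1] - I[0]
print(float((f * dI).sum()), float((I * f * dI).sum()))  # ~1.0, ~1.0
```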
Phase modulation technology: Optical signal modulation has two formats, referred to as internal modulation and external modulation. Loading the signal directly onto the laser drive current is called internal modulation; this format is generally applicable to OOK modulation but easily produces frequency chirp. FFSO systems usually use phase modulation formats based on an external modulator. This study uses an MZM modulator to produce three kinds of PSK modulation formats, Binary Phase Shift Keying (BPSK), Differential Phase Shift Keying (DPSK) and Quadrature Phase Shift Keying (QPSK), which constitute the FFSO system respectively. BPSK is the simplest PSK modulation format; a binary digital signal controls the two carrier phases, with the symbols '1' and '0' usually representing phases 0 and π, respectively (Prabu et al., 2013). The bandwidth needed for BPSK is equal to the bit rate, so theoretically the bandwidth efficiency approaches 1 bit/s/Hz. The bit error ratio (BER) of BPSK can be expressed as

P_BPSK = (1/2) erfc(√SNR).

DPSK uses the phase difference between successive code elements, which effectively avoids the phase ambiguity caused by absolute phase decisions. The bandwidth needed for DPSK is also equal to the bit rate, so theoretically the bandwidth efficiency approaches 1 bit/s/Hz. The BER of DPSK modulation can be expressed as

P_DPSK = (1/2) erfc(√(SNR/2)).

Unlike DPSK and BPSK, QPSK transmits the discrete digital signal using four different carrier phases. The modulated QPSK signal can be seen as a linear combination of two BPSK signals. Assuming the bit rate of QPSK modulation is R_b, after serial-to-parallel conversion the transmission bit rate of each of the I and Q branch signals is R_b/2, the same as a BPSK signal. That is to say, for the same bandwidth, the QPSK modulation rate is twice that of BPSK; theoretically the bandwidth efficiency therefore approaches 2 bit/s/Hz, though the actual bandwidth efficiency is 1.4~1.6 bit/s/Hz. The BER of QPSK modulation is expressed in terms of the APD detection parameters: g is the average APD gain, e is the electron charge, K is the excess noise factor of the APD, K_s is the average received photon count related to the received pulse strength, and K_b is the photon count of the average background noise. RESULTS AND DISCUSSION Performance analysis of BPSK, DPSK and QPSK: BER theoretical analysis: At the receiving end, amplitude and phase distortion occur after the light beam propagates through the turbulent atmospheric channel. The received signal light carries the distorted signal information; after mixing with the local oscillator laser in the 180° mixer behind the receiving telescope, two beams of light with a phase difference of π are obtained. After mixing, the two beams enter the balanced detector, and coherent homodyne detection is used to produce the output current. Considering the intensity scintillation and phase fluctuations, the SNR of the system can then be expressed accordingly. Assuming A_r = A_LO and considering the effects of light intensity and phase fluctuations on the optical signal under the Gamma-Gamma atmospheric turbulence channel, the generalized hypergeometric formula is used to simplify the calculation. The average BER for the BPSK modulation system under the Gamma-Gamma channel is obtained as Eq. (5). Substituting the generalized hypergeometric Meijer-G formula (6) into formula (5), the average BER expression (7) is obtained.
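A short numeric sketch of the three back-to-back BER curves follows. The BPSK form matches the erfc fragment visible in the text; the DPSK form is the reconstruction used above, chosen so that DPSK needs roughly twice the SNR of BPSK, consistent with the 1.505 dB penalty derived later; the QPSK curve shown is the Gray-coded per-bit rate equal to BPSK, an assumption standing in for the paper's APD photon-counting expression, which could not be recovered.

```python
import numpy as np
from scipy.special import erfc

def ber_bpsk(snr):
    return 0.5 * erfc(np.sqrt(snr))

def ber_dpsk(snr):
    # reconstruction: needs ~2x the SNR of BPSK for the same BER
    return 0.5 * erfc(np.sqrt(snr / 2))

def ber_qpsk(snr):
    # assumption: Gray-coded per-bit rate, equal to BPSK
    return 0.5 * erfc(np.sqrt(snr))

snr_db = np.linspace(0, 16, 9)
snr = 10 ** (snr_db / 10)
for name, fn in [("BPSK", ber_bpsk), ("DPSK", ber_dpsk), ("QPSK", ber_qpsk)]:
    print(name, np.round(fn(snr), 8))
```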
It can be further reduced using the Meijer-G representation (8); substituting formula (8) into formula (7), the closed expression of the average BER for the homodyne BPSK modulation scheme is derived as Eq. (9). In the same way, under the influence of atmospheric turbulence and phase fluctuations, the closed average BER expression for DPSK homodyne detection is given by Eq. (10). Similarly, the closed average BER expression for QPSK modulation under atmospheric turbulence and phase fluctuations is given by Eq. (11). Average optical power analysis: In the FFSO system, the average received optical power for the three modulation formats can be derived from the BER expressions. From the above analysis, under the influence of atmospheric turbulence and phase fluctuations, the SNR needed for DPSK modulation is about twice that of BPSK; that is, for a given BER performance, DPSK modulation needs more light energy than BPSK, about 1.505 dB more optical power. On the other hand, the required optical power for QPSK modulation is approximately equal to, or only about 0.5 dB higher than, that of BPSK modulation. Therefore, under the same BER condition, the required optical power of BPSK modulation is the lowest. Average capacity: The capacity is a quantitative measure of the limiting data transmission rate that can be achieved through a non-deterministic fading channel with a minimum probability of error. It is an important index for evaluating the system link. By definition, under the Gamma-Gamma channel, the average channel capacity can be expressed as Eq. (13), where ⟨C⟩ denotes the expectation and B is the signal transmission bandwidth. Using the Meijer-G formula and simplifying Eq. (13), the closed expression of the average capacity for BPSK modulation is obtained. Outage probability for BPSK modulation: Outage probability is a measure used to guarantee reliable communication. In a slow fading channel, the amplitude and phase change imposed by the channel are approximately constant over time. The probability that the end-to-end output SNR falls below a specified threshold is the outage probability, expressed as P_out = P_r(SNR(I) ≤ SNR_th). When atmospheric turbulence and phase fluctuations are considered, using Meijer-G functions to simplify the analysis, a closed expression for the outage probability of BPSK modulation is obtained. SIMULATION AND RESULT ANALYSIS Simulations are carried out on the Matlab software platform in order to verify the performance; the simulation parameters are selected as shown in Table 1, and transmission link loss is ignored. Parameter pairs (α, β) are selected to represent weak, moderate and strong turbulence; for the moment, phase fluctuation is not considered (∆f_IF is fixed at 200 MHz). The simulated relationship between average BER and SNR as the turbulence intensity changes is shown in Fig. 2. It can be seen that the BER performance of PSK modulation is seriously affected by atmospheric turbulence. However, in the turbulent channel BPSK modulation resists atmospheric turbulence better, and the average BER is almost equal for BPSK and QPSK modulation. For the system in a weak turbulence environment, the simulation of BER versus transmission power is shown in Fig. 3.
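The closed-form Meijer-G results above can be cross-checked numerically: Gamma-Gamma irradiance samples are generated as the product of two unit-mean Gamma variates, and the instantaneous BPSK BER is averaged over them; the same samples give the outage probability. The mapping SNR(I) = SNR_0 · I is an assumption for this sketch; swap in SNR_0 · I² if the system's SNR scales with the square of the irradiance, as the power analysis above suggests.

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(0)

def gg_samples(alpha, beta, n=1_000_000):
    # Gamma-Gamma irradiance = product of two independent unit-mean Gammas
    return rng.gamma(alpha, 1 / alpha, n) * rng.gamma(beta, 1 / beta, n)

def avg_ber_bpsk(snr0, alpha, beta):
    I = gg_samples(alpha, beta)
    return float(np.mean(0.5 * erfc(np.sqrt(snr0 * I))))

def outage(snr0, alpha, beta, snr_th):
    # P_out = Pr(SNR(I) <= SNR_th)
    return float(np.mean(snr0 * gg_samples(alpha, beta) <= snr_th))

snr0 = 10 ** (20 / 10)  # 20 dB average SNR
for a, b in [(7.69, 4.55), (2.5, 2.63), (4.0, 1.72)]:  # (alpha, beta) pairs from the text
    print((a, b), avg_ber_bpsk(snr0, a, b), outage(snr0, a, b, snr_th=10.0))
```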
As Figure 3 shows, BER performance improves as the transmission power increases. Comparing the three modulation formats, the needed transmission power is almost equal for BPSK and QPSK modulation, while at the same BER, the required transmission power for DPSK is about 1.5 dB higher than that of BPSK. Therefore, BPSK has the better BER performance. Under weak turbulence, i.e., σ²_R = 0.5, the influence of phase fluctuation on the average BER curve as ∆f_IF is varied is shown in Fig. 4. From Figure 4, it can be seen that under weak turbulence, phase fluctuation degrades the system performance. When ∆f_IF = 100 MHz or 300 MHz, the changes in the average BER are not obvious, but when ∆f_IF exceeds 500 MHz, the performance degrades significantly. When ∆f_IF = 1 GHz, the BER degrades drastically even as the SNR increases. Therefore, adaptive optics technology is needed to correct the phase fluctuation and improve the performance of practical optical communication systems. The simulation results show that when the light intensity is fixed, BPSK modulation is more robust against the average BER degradation caused by phase fluctuation. Figure 5 indicates that increasing the APD gain leads to a steady improvement of the average BER for the various modulation schemes; it is clear that increasing the APD gain can effectively resist turbulence. Figure 5 also shows that BPSK has the best BER performance compared with the other modulations. For the channel capacity under turbulence and phase fluctuation in the Gamma-Gamma channel, the simulation results for average capacity versus average SNR are shown in Fig. 6. From Figure 6, it can be seen that the channel capacity declines as the effect of atmospheric turbulence is enhanced, while the channel capacity increases as the SNR increases. When the SNR is fixed at 20 dB and (α, β) are selected as (7.69, 4.55), (2.5, 2.63) and (4, 1.72) respectively, the simulated channel capacities are 2.05, 2.1 and 2.81 bit/s/Hz, respectively. Therefore, weak turbulence has a smaller influence on channel capacity. Also, when the SNR is constant, increasing the transmission distance leads to a decline in channel capacity.
Figure 7 demonstrates the outage probability in terms of the threshold SNR (SNR_th) for the FFSO system under different atmospheric turbulence conditions, with ∆f_IF fixed at 200 MHz. It is shown that the outage probability increases with SNR_th, and that the change of atmospheric turbulence also increases the outage probability, especially when α is higher. The outage probability for BPSK modulation was compared under the three turbulence conditions.

CONCLUSION

With consideration of atmospheric turbulence and phase fluctuation, the performance of BPSK, DPSK and QPSK is discussed in terms of BER, channel capacity and outage probability. From the simulation results, it can be seen that the performance of FFSO communication is seriously reduced under the combined effects; obviously, the combined effect has a significant influence on the development of FFSO systems. Besides, the simulation results clearly show that, when the combined effect is taken into consideration, BPSK has the best BER performance under weak, moderate and strong turbulence. On the other hand, the transmission power required for QPSK is approximately equal to that of BPSK. Considering the channel capacity and the outage probability, BPSK is also the best. Increasing the APD gain is an effective measure to resist turbulence. Consequently, compared with QPSK and DPSK modulation, BPSK modulation shows a much better performance under the various combined conditions and proves to be the best modulation method for FFSO communication.

In the above expressions, K_ν(·) denotes the modified Bessel function of the second kind of order ν, Γ(·) is the Gamma function, and α and β are the parameters of the large- and small-scale light-intensity fluctuations, determined by the refractive-index structure parameter C_n², the optical wave number k, and the communication distance L between receiver and transmitter. According to the H-V turbulence model, C_n² is determined by the wind speed and the altitude. Since the Gamma-Gamma model covers all possible turbulence conditions, it is used in this study. The remaining symbols in the noise expressions denote the Boltzmann constant, the temperature, the delay time, and the APD receiver load, respectively.
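For completeness, the sketch below collects the standard plane-wave relations that link (α, β) to the Rytov variance σ_R² and to the refractive-index structure parameter C_n². These are the conventional expressions from the turbulence literature, assumed here because the paper's own parameterization is not reproduced in the text above.

```python
# Standard plane-wave relations for the Gamma-Gamma parameters:
#   sigma_R^2 = 1.23 * Cn2 * k^(7/6) * L^(11/6)
#   alpha = [exp(0.49 s / (1 + 1.11 s^(6/5))^(7/6)) - 1]^-1,  s = sigma_R^2
#   beta  = [exp(0.51 s / (1 + 0.69 s^(6/5))^(5/6)) - 1]^-1
import numpy as np

def rytov_variance(Cn2, wavelength, L):
    k = 2.0 * np.pi / wavelength          # optical wave number
    return 1.23 * Cn2 * k ** (7.0 / 6.0) * L ** (11.0 / 6.0)

def gamma_gamma_params(s):
    alpha = 1.0 / (np.exp(0.49 * s / (1.0 + 1.11 * s ** 1.2) ** (7.0 / 6.0)) - 1.0)
    beta  = 1.0 / (np.exp(0.51 * s / (1.0 + 0.69 * s ** 1.2) ** (5.0 / 6.0)) - 1.0)
    return alpha, beta

print(gamma_gamma_params(0.5))   # weak turbulence, sigma_R^2 = 0.5 as in Fig. 4
```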
3,592.4
2016-06-15T00:00:00.000
[ "Physics", "Business" ]
TOPICAL PROBLEMS OF MULTILINGUAL EDUCATION DEVELOPMENT IN A MULTICULTURAL ENVIRONMENT The article studies the development of polylingual education in conditions of a multicultural environment. The topicality of the knowledge of the Kazakh, Russian and English languages in the polylingual environment is proved. The programs for the development of polylingual education in Kazakhstan are investigated; a brief review of "The Road Map" for the development of polylingual education in the regional university is given. The results of the development of polylingual education are described and analyzed. The content of educational programs concerning the polylingual education of university students has been analyzed. A critical analysis of this program's development is presented; threats in case of negation of the existing situation are described; available resources which can improve some aspects of the above-mentioned problems are described. Valuable guidelines which students receive while mastering educational programs with three-language content are designated. Due to creative rethinking and the development of mechanisms for adapting the existing domestic and foreign experience in the field of language education, in which the most developed and effective methods are those of teaching Russian as a non-native language and the three main European languages (German, French and English) as foreign languages, it is possible to create level models; define intercultural, communicative and professionally oriented language competences; set international standards for language learning; and monitor polylingualism [2]. What is a polycultural environment? According to M. Condratus-Bacescu, during the period of European enlargement, orientation in the international environment is definitely vital not only for its members but also for other people. Many of them meet the differences of national cultures not only as tourists but also in everyday professional life, because the fast process of internationalisation and overall globalisation brings requirements for the inevitable integration of cultures and cooperation within the framework of multinational organizations. A multicultural environment retains workers from a variety of cultural backgrounds. The work in such an environment differs in the various approaches to time, information, planning, decision making, relationships, communication style, power, resolving conflict, developing leadership and motivation [4]. Review of publications and literature In The Strategy 2050, the first President of RK N. Nazarbayev emphasized that the modernization of education in the Republic of Kazakhstan shall proceed by way of developing innovative education. It means that we need quite a new, innovative education if we want to achieve modern, creative and competitive youth in the future. The task is paramount, comparable to the transition from the raw-materials economy to the innovative one. The essence of innovative education can be expressed by the phrase: "Not to catch up with the past but to create the future". The main mission of innovative education is training a competent, aware and moral person [6,10,15].
The availability of competitive graduates of Kazakhstan's educational institutions nowadays is due to the ambitious goals set before the national education system in 2006, first of all through the implementation of the cultural project "The Unity of Languages" [18]. This idea is based on a simple formula: development of the state language, maintenance of the Russian language and study of English. At the XXIV session of the Assembly of the Peoples of Kazakhstan, the head of state noted that "knowledge of at least three languages is important for the future of children of Kazakhstan" [20]. Today this concept is reflected in "the State program of functioning and development of languages 2001-2010 and 2011-2020" [12,17]. As is known, remaining a polyethnic and poly-confessional state, Kazakhstan is experiencing a difficult and contradictory period of its development [7]. While the program of development of the state language is progressing successfully in the southern and western regions of the country, in the northern and northeastern regions the development of the state language still causes certain difficulties [19]. This is determined by historical, geographical and other background factors. The state program "Serpin" [5] is intended to promote the improvement and stabilization of the language situation in these regions. It is no secret that the majority of the region's population speaks Russian and English better than the state language [1]. In the concept of the language policy of Kazakhstan, a large place is given to the role of the Russian language as the main source of information and means of communication [8], owing to the prestige of education received in Russia and its role as a main source of knowledge in science and technology. In this regard, it would be useful to add that the problem of polylingualism as such is not so critical in Kazakhstan. Globalization today requires a multilingual professional who can be active in a multinational and multicultural environment; in other words, a person is needed who has a developed sense of understanding and respect for other cultures and who can live in peace and harmony with people of different races and religious beliefs. If we adopt and reflect culture through language, it is quite possible to regard language today as a social phenomenon. Multilingualism presupposes communicative skills such as the ability to listen to one's conversational partner and to engage in communication, and it ultimately gives a better chance of developing a career. This policy is also grounded in the strategic objectives of the economy, which can only be promoted by highly educated people who can extract information in three languages. "We hope that this major study will receive all the attention that it merits from those who devise language syllabuses, and will provide ways of meeting some of the challenges thrown up by rapid change on our continent" [13]. In other words, the aim of the language policy is the integration of Kazakhstan into the world community, which will determine the rise of culture, economy and science. In this aspect, the success of multicultural dialogue will depend on the level of multiculturalism of the participants in the dialogue. Knowledge of Kazakh, Russian and foreign languages is becoming an integral component of the personal and professional activity of a person in modern society.
All this generally creates the need for a large number of citizens who practically and professionally speak several languages and thereby get a real chance to occupy a more prestigious position in society, both socially and professionally [16]. Research Methodology The work on the introduction of a polylingual education program was started in 2006 [11]. The first area of focus is dedicated to the normative and legal framework. In accordance with the requirements of the State compulsory standard of education, the university provides instruction in the Kazakh, Russian and English languages. For example, in the educational program 6B04102 "Management", the discipline Modern History of Kazakhstan is taught in the state language and assessed by a state exam. The discipline "Kazakh language" is taught for groups studying in Russian, and "Russian language" for groups studying in the state language. The disciplines "Information and Communication Technology" and "Professional English Language" are taught in English for each educational programme. The second area of focus is dedicated to scientific research activity. In this area, the publications dedicated to polylingual problems were analyzed; according to the analysis, more than forty articles were devoted to this problem. In the direction of "Methodological and educational-methodological support", educational complexes of disciplines were developed in English, and about 20 manuals on taught disciplines were published [8]. In the area of training and professional development, language courses for teachers were organized in various formats. The courses were devoted to different specialties and different directions, for example: "Cross-border Electronic Business For the Developing Countries of 2018", sponsored by the Ministry of Commerce and organized by Harbin University of Commerce; the seminar "Agricultural and Livestock Products For the Belt and Road Countries of 2018"; the professional development course "Consecutive translation and the use of CAT-tools in the translation process"; and others. Thus, more than 40 teachers have the necessary language training to participate in a bilingual education programme. Communicative competence in a multicultural environment includes the proper use of verbal and non-verbal expressive linguistic messages: appellative (striving for impact on others), informative, evaluative, and self-revealing ones. Implicit appeals in the process of communication are expressed in an indirect way by means of establishing an emotional climate that makes other people execute the untold wish; for example, if someone looks sad, we strive to make him/her glad [14]. On the basis of the available information on the work undertaken on the development of multilingual education, a SWOT analysis was made. Strengths: 1. Availability of faculty members who have received language training, including training in subjects to be taught in English, Russian and Kazakh. 2. Experience in developing polylingual programs in the bachelor's and master's degrees. 3. Experience in preparing educational publications and educational-methodical sets of disciplines in English. Weaknesses: 1.
Shortage of teaching staff with a high level of knowledge of the Kazakh language (there is a practice of conducting classes for Kazakh groups in Russian, sometimes due to the lack of Kazakh-language specialists), and insufficient knowledge of Kazakh among students and faculty. 2. An insufficient amount of specially adapted academic literature for the students of this program, including terminological dictionaries for the specialties. 3. Insufficient study of the domestic and foreign experience in implementing polylingual education. Opportunities: 1. To develop a methodology for explaining the advantages of a multilingual education program and a mechanism for selecting interested and trained students. 2. To work out a procedure for developing adapted learning periodicals and terminological dictionaries. Risks: 1. Decrease in competitiveness in the labor market. 2. Low level of social adaptation. Thus, we consider it proved that a systematic approach is necessary for the effective development of multilingual education. Moreover, this topic is an important factor in the attractiveness of the university for applicants. As the research has shown, for this purpose it is necessary to work out a procedure describing the stages of selection of freshmen; to form the required competences before introducing the three-language disciplines; and to carry out a careful selection of educational programs for the introduction of multilingual education.
2,427.6
2020-09-13T00:00:00.000
[ "Education", "Linguistics" ]
Glutathione Supplementation of Parenteral Nutrition Prevents Oxidative Stress and Sustains Protein Synthesis in a Guinea Pig Model Peroxides contaminating parenteral nutrition (PN) limit the use of methionine as a precursor of cysteine. Thus, PN causes a cysteine deficiency, characterized by low levels of glutathione, the main molecule used in peroxide detoxification, and by limited growth in individuals receiving long-term PN compared to the average population. We hypothesize that glutathione supplementation of PN can act as a pro-cysteine that improves glutathione levels and protein synthesis and reduces the oxidative stress caused by PN. One-month-old guinea pigs (7–8 per group) were used to compare glutathione-enriched to non-enriched PN; animals on enteral nutrition were used as a reference. PN: dextrose, amino acids (Primene), lipid emulsion (Intralipid), multivitamins, electrolytes; five-day infusion. Glutathione (GSH, GSSG, redox potential) and the incorporation of radioactive leucine into the protein fraction (protein synthesis index) were measured in the blood, lungs, liver, and gastrocnemius muscle. Data were analysed by ANOVA; p < 0.05 was considered significant. The addition of glutathione to PN prevented the PN-induced oxidative stress in the lungs and muscles and supported protein synthesis in the liver and muscles. The results potentially support the recommendation to add glutathione to PN and demonstrate that glutathione can act as a biologically available cysteine precursor.

Introduction

Parenteral nutrition (PN) is essential for many patients with different gastrointestinal diseases. The indications are variable, ranging from intestinal insufficiency to the need for nutrition support in clinical conditions for which enteral nutrition is not indicated or is limited [1]. This mode of nutrition results from advanced technologies that allow all nutrients to be combined in the same solution and administered intravenously. However, these nutrients are reactive, and some of them donate electrons to dissolved oxygen, generating oxidative molecules [2][3][4][5][6][7]. Among these oxidative molecules are the peroxides, which were shown to induce oxidative stress and the loss of pulmonary alveoli in a newborn animal model [8]. They were also associated with bronchopulmonary dysplasia, characterized by low pulmonary alveolarization, in premature infants receiving PN [9]. These consequences are explained by the low capacity of newborns to detoxify peroxides by glutathione peroxidases.

Experimental Design

Twenty-four one-month-old male Hartley guinea pigs, weighing 288 ± 3 g (Charles River Laboratories, Saint-Constant, QC, Canada), were housed in the animal facility for 5 days for acclimation (23–25 °C; 12:12-h light:dark). The PN animals were anaesthetized (87 mg/kg ketamine + 13 mg/kg xylazine, with isoflurane gas for maintenance) in order to insert a catheter into the external jugular vein [8,11,15,28]. During the first two days after surgery, the animals received through the catheter 0.9% (w/v) NaCl containing 1 IU/mL of heparin i.v., and the infusion rate was gradually increased from 0.5 to 1.5 mL/h. During this time, the animals were fed ad libitum with regular guinea pig food and had free access to tap water. The average daily caloric intake is described in Table 1. Animals that recovered 90% of their initial weight were included in the 5-day exclusive PN protocol.
The representation of our study groups in a clinical situation is as follows: the PN with no GSSG supplementation represents the "standard of care" or control group, the PN with GSSG supplementation represents the "intervention group", and the ad libitum animal group represents healthy individuals, i.e., the "reference group": (1) Reference group: animals of the same age without any manipulation, fed with regular guinea pig food. (2) PN: animals exclusively fed by PN, with free access to tap water. The PN was compounded with 10% (w/v) glucose, 2% (w/v) amino acid preparation (Primene, Baxter, Toronto, ON, Canada), 2% (w/v) lipid emulsion (Intralipid 20%, Fresenius Kabi, Mississauga, ON, Canada), electrolytes, 1% (v/v) multivitamin preparation (Multi-12, Sandoz, Boucherville, QC, Canada), and 1 U/mL heparin. PN solutions were freshly prepared daily and gradually administered at an average rate of 129 mL/kg/day, giving an average caloric intake of 85 kcal/kg/day. These values are close to the recommendations in paediatric parenteral nutrition [29]. (3) PN + 10 µM GSSG: intervention group, animals receiving PN enriched with 10 µM GSSG. GSSG was used because it has a better stability in the PN solution than GSH [30] and because its affinity for γ-glutamylcysteine transferase is similar to that of GSH [31]. The chosen concentration is the same as previously used with success to prevent pulmonary oxidative stress in neonatal guinea pigs [15]. After 5 days, the animals were sacrificed. Blood, liver, lungs, and gastrocnemius muscles were collected, processed, and kept at −80 °C until the assays. Plasma was obtained following blood centrifugation at 7200× g for 4 min. In accordance with the principles of the Canadian Council on Animal Care (CCAC), the Institutional Committee for Good Practice with Animals in Research of the CHU Sainte-Justine approved the protocol.

Determinations

Peroxide concentrations in the PN solutions were assessed by the FOX assay, based on the colorimetric reaction (560 nm) between xylenol orange and the ferric iron generated after oxidation of ferrous iron by peroxides [32]. The measurements were made after a 3-h incubation, and H2O2 was used for the standard curve [23,28]. The oil red O staining was blindly assessed. Two images at two different locations were acquired for each liver under 20× magnification. Using ImageJ, each image was converted from RGB to CMYK. The cyan and magenta channels were extracted, transformed into grayscale images and then subtracted from each other (cyan subtracted from magenta) to remove the background from the lipid staining. The resulting images were thresholded, and the stained area as well as the particle size (size: 0–200 µm; circularity: 0.50–1.00) were measured. For the determination of GSH and GSSG, immediately after sampling, the tissues were homogenized in freshly prepared 5% (w/v) metaphosphoric acid (in 5 volumes for liver, lung, and muscle samples, and in 3 volumes for whole blood), and centrifuged for 4 min at 7200× g. GSH and GSSG in the supernatants were measured by capillary electrophoresis/UV according to the previously published method [8,11,15,28], while protein levels were measured in the pellets by the Bradford method. The redox potential was calculated using the Nernst equation. The low concentration of glutathione in the plasma (µM range) does not allow its determination by capillary electrophoresis; an enzymatic method based on the reduction of DTNB by GSH, generating a compound absorbing at 412 nm, was therefore used [33,34].
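As an illustration of the redox-potential calculation mentioned above, the sketch below applies the Nernst equation to the GSSG/2GSH couple. The standard potential E0′ = −264 mV (pH 7.4) and the temperature of 37 °C are assumptions for the example; the paper does not state which reference values it used.

```python
# Minimal sketch of the Nernst calculation for the GSSG/2GSH couple:
#   E = E0' - (RT / 2F) * ln([GSH]^2 / [GSSG]),  n = 2 electrons.
# E0' = -264 mV at pH 7.4 is an assumed convention, not from the paper.
import numpy as np

R, F = 8.314, 96485.0                     # J/(mol.K), C/mol

def redox_potential_mV(gsh_M, gssg_M, E0_mV=-264.0, T=310.15):
    return E0_mV - 1000.0 * (R * T / (2.0 * F)) * np.log(gsh_M ** 2 / gssg_M)

print(redox_potential_mV(gsh_M=5e-3, gssg_M=5e-5))  # illustrative tissue-like values
```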
The system includes the regeneration of GSSG into GSH by glutathione reductase + NADPH. Therefore, the increase in absorbance over time is proportional to the level of GSH + GSSG in the sample. The standard curve was performed with GSSG and the results were reported as total glutathione expressed in GSH equivalents (1 GSSG = 2 GSH). The protein synthesis index was evaluated by measuring the incorporation of [3H]-L-leucine [35,36] into the tissue on days 3 and 4, at times when the daily caloric intake was constant (Table 1). On each of these days, 100 µCi of [3H]-L-leucine was added to the PN. The last day of infusion was without radioisotope. Five hundred mg of tissue were homogenized with 5 volumes of 5% (w/v) metaphosphoric acid and centrifuged at 13,000× g for 20 min at 4 °C. Radioactivity (dpm) was measured by scintillation in the supernatant and in the pellet (protein fraction). The determination of haemoglobin and plasma urea values was used to assess the animals' overall health, including the presence of anemia, dehydration, and starvation. The plasma concentration of urea was measured by the method of Fearon [37], reviewed by Rahmatullah and Boyde [38]. The method is based on a colorimetric reaction (520 nm) following the interaction between urea and diacetyl monoxime in the presence of thiosemicarbazide. The results were extrapolated from a standard curve generated with urea. The haemoglobin assessment was based on the oxidation of haemoglobin to methaemoglobin in the presence of ferricyanide; the complex absorbs at 540 nm. A commercial kit (B4184, Sigma-Aldrich, Saint-Louis, MO, USA) was used.

Statistical Analyses

All data are presented as mean ± S.E.M. The groups were compared by ANOVA, using orthogonal comparisons, after verifying homoscedasticity by Bartlett's Chi² test. Mean daily caloric intake data were logarithmically transformed to satisfy homoscedasticity. Pearson's correlations were used to quantify the weight gain of animals over the 5-day duration of the experiment. The significance threshold was set at 0.05.

Animal Characterization

Over the course of PN, one animal was removed from the experiment due to occlusion of the jugular catheter. Thus, 23 animals were included in this study. Bodyweight (Figure 1) of the reference group increased by 14% during the five-day experiment (including all animals in this group, y = 9.7 g·day⁻¹ + 352 g; r² = 0.64, p < 0.01), while it decreased over time in the PN group (y = −3.6 g·day⁻¹ + 329 g; r² = 0.18, p < 0.01) and in the PN + 10 µM GSSG group (y = −3.8 g·day⁻¹ + 334 g; r² = 0.18, p < 0.02). The slopes, and the 6% decrease over time, were similar between the PN groups. Bodyweight at day 0 differed between the reference group and the PN groups (F(1,20) = 15.3, p < 0.01) but was similar between the PN group and the PN + 10 µM GSSG group (F(1,20) = 0.8).
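A minimal sketch of the statistical workflow described above, using SciPy: Bartlett's Chi²-based test for homoscedasticity followed by a one-way ANOVA. The orthogonal comparisons reported in the paper would additionally require an explicit contrast matrix (e.g., via statsmodels) and are not reproduced here; the data below are illustrative only, not the study's measurements.

```python
# Bartlett's test for equal variances, then one-way ANOVA across the
# three groups; alpha = 0.05 as in the paper. Values are illustrative.
from scipy import stats

reference = [352, 360, 371, 380, 390, 401]
pn        = [329, 325, 321, 318, 314, 311]
pn_gssg   = [334, 330, 326, 322, 318, 315]

print(stats.bartlett(reference, pn, pn_gssg))   # homoscedasticity check
print(stats.f_oneway(reference, pn, pn_gssg))   # one-way ANOVA F and p
```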
PN had induced hepatic steatosis (Figure 2), a well-known complication of PN [39]. The number of lipid droplets was lowest in the reference group (F(1,18) = 55.25, p < 0.0001) compared to the PN ± GSSG groups (Figure 2), and it was higher in the PN + GSSG group than in the PN group (F(1,18) = 8.13, p < 0.05). The lipid droplet average size was higher in the PN ± GSSG groups (F(1,18) = 12.27, p < 0.01). The impact of GSSG supplementation on the size of the lipid droplets did not reach statistical significance (F(1,18) = 3.23) (Figure 2). Comparisons of haemoglobin and plasma urea values (Table 2) assessed the general health status of the animals, such as anemia, dehydration, and starvation. At day 0, the haemoglobin concentrations did not differ between groups (F(1,36) < 2.9). At day 5, they were similar between the reference group and the PN + GSSG group (F(1,36) = 2.00) and were lower than those in the PN group (F(1,36) = 13.24, p < 0.01). Plasma urea concentrations on the last day of the experiment were not significantly different between the groups (F(1,20) < 0.1). Table 2. Haemoglobin and plasma urea concentrations. Hb: haemoglobin measured on the first (d0) and last (d5) day of experimentation. Urea measured in plasma on the last day of experimentation. At d0, there was no difference in Hb between groups, while at d5 it was higher in the PN group compared to the PN + GSSG and reference groups. Urea concentrations were not significantly different between groups. Mean ± S.E.M., n = 4-8 per group; **: p < 0.01. Oxidative Stress The PN solutions of the present study were contaminated with 272 ± 14 µM peroxides. This concentration was similar to levels previously measured in such solutions [3,28]. This amount of peroxide has the potential to induce oxidative stress.
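The growth-curve expressions quoted above (e.g., y = 9.7 g·day⁻¹ + 352 g; r² = 0.64) are ordinary least-squares fits of bodyweight against day, pooled across a group. A sketch with illustrative data, not the study's measurements:

```python
# OLS fit of bodyweight versus day, reproducing the reported form
# "y = slope g/day + intercept g; r^2"; the values below are hypothetical.
from scipy import stats

days    = [0, 1, 2, 3, 4, 5]
weights = [352, 361, 372, 381, 392, 400]   # one hypothetical reference animal

fit = stats.linregress(days, weights)
print(f"y = {fit.slope:.1f} g/day + {fit.intercept:.0f} g; r^2 = {fit.rvalue**2:.2f}")
```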
Discussion

The study shows that the addition of GSSG to PN may be used to prevent the oxidative stress induced by PN, as occurred in the lungs and muscles, and to support protein synthesis in the organs with the greatest need for protein synthesis during growth, such as the liver and muscles. Even though a further dose-response study is required to confirm these effects, the results already support the hypothesis that PN, as administered in a clinical setting, could be suboptimal in providing the amount of cysteine needed by the body, and that GSSG supplementation could be used as a pro-cysteine to support its biological activity. The design of the experimental protocol consisted of evaluating the impacts of GSSG supplementation of PN on oxidative stress and protein synthesis. Thus, the PN group can be considered as the control group. In order to appreciate the magnitude of the effect of the GSSG supplementation, animals without manipulation served as a reference group and not a control group. Indeed, there was a significant dissimilarity between the PN groups and this reference group. A main difference concerned the metabolizable nutritional intakes. After an acclimatization period for all animals, the caloric intake of the animals receiving PN increased gradually to reach a plateau after three days. This progression constituted a specific period of acclimatization.
In addition, the surgery to place the catheter is a metabolic stress, as demonstrated by a body weight 5% to 7% lower than that of the reference group after two days of ad libitum feeding with regular guinea pig food, before PN administration. In this guinea pig model, five consecutive days of PN resulted in ~6% weight loss in all PN (±GSSG) animals, compared to a 14% weight gain in the orally fed reference animals. These observations are consistent with previous studies using guinea pigs [40] or rats [14] as a PN model. One could argue that a nutritive deprivation would have led to catabolism. Urea was therefore measured in plasma, since it is the main nitrogen product of protein degradation. Here, the similarity of plasma urea concentrations between groups does not support the presence of catabolism. In addition, equal levels of protein per gram of muscle tissue in all groups, including the reference group, suggest that the PN animals did not suffer from an inadequate caloric intake leading to higher protein catabolism. Despite an apparently adequate caloric intake, this lack of weight gain could be explained by qualitative and quantitative differences in the metabolizable nutrient intake between the PN groups and the reference group, or by the impact of the nutrient delivery route during growth. Another possibility is that this lack of growth in PN animals could be explained, at least in part, by a partial inability to synthesize proteins de novo. Low glutathione levels in the blood of premature infants [9] and of children on chronic PN [23], combined with data indicating a shorter height in children on chronic PN [23,27], support the hypothesis of a non-optimal availability of energy and/or cysteine leading to suboptimal protein synthesis. Energy is an important factor for this synthesis. The PN groups received energy intakes similar to those recommended in the recent (2018) ESPGHAN/ESPEN/ESPR/CSPEN guidelines for preterm infants [29]. However, this recommendation does not take into account the energy required for thermoregulation, basal metabolism and activity (~50 kcal/kg/day), as suggested by Reichman BL et al. [41]. The energy cost of these parameters is unknown in our animal model. A suboptimal energy intake can compromise protein synthesis. On the other hand, it may not be appropriate to compare the energy requirements of a month-old guinea pig to those of a premature newborn. The second important factor is the availability of amino acids, including cysteine. Here, GSSG has been used as a pro-cysteine. The presence of cysteine in some parenteral amino acid preparations, as here with Primene, is controversial. In the presence of an oxidant, such as the oxygen dissolved in a parenteral solution, cysteine oxidizes rapidly to cystine, whose solubility is about 1500 times lower (Drugbank, www.drugbank.ca). Primene contains 189 mg cysteine/L (product monograph, Primene 10%, Baxter Corporation, Mississauga, ON, Canada, 2015), while the solubility of cystine is 190 mg/L (Drugbank). Thus, as a precautionary measure, it may be unwise to increase the concentration of cysteine in amino acid preparations. N-acetyl-cysteine has been proposed to enrich parenteral nutrition. However, a Cochrane meta-analysis found no significant effect of using N-acetyl-cysteine to improve low glutathione levels in premature infants or to reduce the incidence of several complications related to prematurity [42]. This report casts doubt on the usefulness of N-acetyl-cysteine in PN.
It is possible, however, that this is specific to premature neonates. On the other hand, the usefulness of a glutathione supplement as a precursor of cysteine appears to be independent of age. Here, even at a concentration of 10 µM (less than one percent of the cysteine concentration initially dissolved in the PN), the GSSG added to the PN demonstrated a biological availability of cysteine. However, the lack of correction of total glutathione in the plasma of the PN + GSSG group suggests that a concentration of 10 µM may not be optimal. Further studies need to be initiated to determine the optimal concentration of GSSG to be added to the PN before considering its clinical use. The impact of GSSG supplementation was expected to vary according to the organ. Beyond the comparisons between groups, the figures illustrate the high variability between the studied organs for both glutathione values and protein levels, suggesting different cysteine requirements. The liver contains the highest level of GSH. It is the only organ that actively exports GSH to plasma. Thus, its glutathione synthesis rate is high and depends mainly on the transformation of methionine into cysteine. With a redox potential below −240 mV, the hepatic cells are proliferating [43]; thus, protein synthesis should be high. The protein synthesis index was improved in the presence of GSSG in PN. Since the glutathione data were reported as a function of protein content (GSH, GSSG) or volume (redox potential), a confounding bias in the comparison with the reference group could be the presence of lipids in the PN groups (Figure 3). This bias could also explain the lower protein content in the animals of the PN groups (Figure 4A). Steatosis is well known to be a PN-related complication [39]. The lungs are the next organ in terms of glutathione content; they actively export GSH into the pulmonary lining fluid [44]. The high synthesis of glutathione in the lungs depends on the presence of glutathione in the plasma; the action of γ-glutamyl transferase allows the release of cysteine into the cells. With a normal redox potential of about −220 mV, the lung cells are proliferating [43]. However, the peroxides contaminating the PN [2,3] induce a lower level of GSH and, consequently, a shift of the redox potential towards a more oxidative state, associated with cellular differentiation [43]. Therefore, PN can have an impact on lung development. The addition of GSSG to PN prevented this change. Of the three organs studied, the muscle is the one with the lowest GSH level, at one-sixth of the amount measured in the liver. With a redox potential of ~−210 mV, the muscle cells are at the limit between proliferation and differentiation [43]. Thus, with a redox potential of about −200 mV in the PN group, the cells are more differentiated. These data suggest that here too, PN can have an impact on muscle development. In contrast to the lungs, the oxidation of the redox potential in the PN group is caused by a higher value of GSSG, suggesting the presence of high levels of peroxides. An oxidized redox potential and a high GSSG value are associated with abnormal calcium metabolism [45][46][47][48]. By preventing oxidative stress, a supplement of GSSG could preserve muscle function. Since the need for glutathione synthesis was relatively lower, the protein synthesis index was higher in the PN + GSSG group. In whole blood, PN and PN + GSSG did not influence glutathione concentrations or the redox potential.
This highlights the difficulty for clinical studies in using the values measured in whole blood as a reflection of glutathione in the different organs. On the other hand, the data on glutathione and protein concentrations in the plasma suggest that plasma glutathione levels are the first to be depleted in glutathione deficiency and the last to be normalized (after the needs of the different organs are fulfilled). Hence, plasma glutathione is a sensitive and specific measure of deficiency but is less sensitive to glutathione repletion. One of the main limitations of the study is the use of a single concentration of GSSG. The choice of this concentration of 10 µM was based on a previous study showing the correction of plasma glutathione levels and the prevention of oxidation of the pulmonary redox potential in newborn guinea pigs [15]. One-month-old animals might need more GSSG. Others [14] reported that the infusion of a high dose of GSSG (8.9 mM) in rats (~6 weeks old) receiving PN improved the plasma cystine concentration; the plasma concentration of glutathione was not reported. Here, using 1000 times less GSSG (10 µM), it is likely that we cannot measure a difference in plasma cysteine or cystine levels. At this relatively low concentration, circulating glutathione is used to enrich the cells, not the plasma, in cysteine. Moreover, the low level of plasma glutathione was not corrected by the addition of GSSG to PN, suggesting a suboptimal supplementation. Nevertheless, the present study is a proof of concept of the bioavailability of GSSG as a precursor of cysteine in growing guinea pigs nourished by PN. Steatosis was expected in the PN groups, but it was surprising to observe more lipid droplets in the PN group containing GSSG than in the PN group without GSSG. The size of the droplets was not influenced by the presence of GSSG in the PN. A direct impact of hepatic glutathione (GSH or GSSG) or of the redox potential was excluded because they were not influenced by PN or PN + GSSG (Figure 3). A future dose-response study will confirm whether the difference observed is due to a Type 1 statistical error or whether supplementation with GSSG induces or aggravates PN-induced steatosis. In view of a possible clinical application, a subsequent study should document the effect of increasing doses of GSSG on lipid accumulation in the liver as well as on cysteine/cystine plasma concentrations, in addition to glutathione levels and the protein synthesis index. With a half-life of approximately 15 min, as demonstrated in humans [49], the saturation of the plasma glutathione concentration could be the indicator of the optimal dose of GSSG to be added to the PN. Conflicts of Interest: The authors declare no conflict of interest.
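As a rough illustration of the half-life argument above: under a constant infusion with first-order elimination, the plasma concentration approaches its plateau within about five half-lives (~75 min for a 15-min half-life), which is why steady-state plasma glutathione could serve as a titration endpoint. The plateau level below is hypothetical.

```python
# Constant infusion with first-order elimination:
#   C(t) = (R/CL) * (1 - exp(-k t)),  k = ln(2) / t_half.
# The plateau concentration R/CL is hypothetical, for illustration only.
import numpy as np

t_half_min = 15.0
k = np.log(2.0) / t_half_min              # elimination rate constant (1/min)
plateau_uM = 10.0                         # hypothetical steady-state level

for t in [15, 30, 60, 75, 120]:           # minutes of infusion
    c = plateau_uM * (1.0 - np.exp(-k * t))
    print(f"t = {t:3d} min: {c:5.2f} uM ({100 * c / plateau_uM:.0f}% of steady state)")
```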
6,060.6
2019-09-01T00:00:00.000
[ "Biology", "Medicine" ]
Investigation on deformation of DP600 steel sheets in electric-pulse triggered energetic materials forming Electric-pulse triggered energetic materials forming (ETEF) is a high-speed manufacturing process which utilizes the chemical energy released by energetic materials (EMs), triggered by underwater wire discharge, to plastically shape metals. ETEF is not fully understood, particularly regarding the discharge characteristics of energetic materials triggered by metal wires and the deformation process of metal sheets. These two problems were investigated in this paper using experimentation and numerical simulation. Concerning the pulse discharge characteristics, the peak values of voltage and current were reduced during the triggering of the energetic materials, and the triggering energy consumption of the energetic materials was quantified to be about 200 J. The matching parameters of different capacitance-voltage devices may be insensitive to triggering the energy release of energetic materials. The maximum major strain and thinning rate of the bulged specimen under ETEF conditions were significantly reduced when compared to the quasi-static specimen with the same bulging height, and the specimen's deformation uniformity and strain distribution were improved. The simulation results showed that the addition of energetic materials significantly increased the plastic strain energy of the blank. The deformation of the blank in ETEF can be divided into two stages: the initial chemical-energy action stage and the inertia action stage. The bulging height of the sheet metal increased by nearly 301% in the inertia action stage, which accounts for 80% of the total deformation time, and the effective plastic strain distribution became more uniform.

Introduction

To improve the fuel efficiency and crashworthiness of automobiles, the use of high-strength steel to develop lighter and safer cars has become a trend in the automobile industry. Advanced high-strength steel sheets have been widely used in the production of impact-resistant and energy-absorbing components. However, the use of advanced high-strength steel in auto-body components is still limited to simply shaped automobile parts due to its poor formability, which makes it difficult to use traditional deep drawing processes for complicated auto parts. To improve its strength and formability further, a suitable forming process must be chosen. Compared with traditional forming processes, high-velocity forming (such as electromagnetic forming and electrohydraulic forming) is very effective in improving the strength and formability of materials, and many researchers have therefore studied it. At high strain rates, the flow stress of many materials increases significantly with the strain rate [1][2][3][4], showing strain-rate sensitivity. According to Psyk et al. [5], the workpiece is accelerated to velocities of up to several hundred m/s and strain rates of 10³ s⁻¹ in the EMF process, thereby improving the formability and strength of the material, which helps enhance the crashworthiness of automotive parts. Electromagnetic forming is a method that uses the Lorentz force generated by a pulsed magnetic field to deform the workpiece at high speed [6,7]. This non-contact feature of electromagnetic forming can significantly improve the surface morphology of the workpiece [8].
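The strain-rate sensitivity invoked above is often modelled in high-velocity forming studies with a Cowper-Symonds-type scaling of the quasi-static flow stress. The sketch below is only illustrative: the constants C and p are typical literature values for steels, not DP600 parameters fitted in this paper.

```python
# Cowper-Symonds strain-rate hardening:
#   sigma_dyn = sigma_qs * (1 + (strain_rate / C)**(1/p)).
# C and p below are hypothetical (typical mild-steel values), for illustration.
import numpy as np

def cowper_symonds(sigma_qs, strain_rate, C=6844.0, p=3.91):
    return sigma_qs * (1.0 + (strain_rate / C) ** (1.0 / p))

for rate in [1e-3, 1.0, 1e3]:             # quasi-static to ETEF-like rates (1/s)
    print(f"{rate:8.0e} 1/s -> {cowper_symonds(600.0, rate):6.1f} MPa")
```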
However, in practical applications, the forming capability of EMF is typically limited by the insulation strength and mechanical strength of the coil and by the energy storage of the electric pulse generator. Increased discharge energy increases the Lorentz force, but it also causes insulation breakdown and coil breakage, affecting the forming results and potentially damaging the experimental equipment. Electrohydraulic forming is a high-velocity forming technology in which shock waves, generated by the discharge between two electrodes in a liquid medium, cause the workpiece to deform plastically [9]. Water is used as a "punch" to form workpieces in electrohydraulic forming, which allows for great process flexibility. Additionally, electrohydraulic forming is more widely applicable than electromagnetic forming because it is not limited by the material's conductivity. For instance, Golovashchenko et al. [10] and Tang et al. [11] successfully applied electrohydraulic forming to the trimming of advanced high-strength steel, and Mamutov et al. [12] used electrohydraulic forming to manufacture an automotive part of complex geometry. However, the energy utilization efficiency is extremely low due to the underwater electrode discharge; even if a wire between the two electrodes is discharged, the energy utilization efficiency is only 24%, as concluded by Efimov et al. [13], so energy is wasted. Based on the aforementioned issues, Yu et al. [14] proposed ETEF, a new high-velocity forming method that uses an underwater metal-wire electric explosion to ignite energetic materials, whose released chemical energy completes the workpiece's deformation. Experiments revealed that energetic materials have a high-energy effect, and the energy level of the energetic materials was quantified as 3.04 kJ/g. The discharge characteristics of underwater wire electric explosions have been studied by several researchers. Han et al. [15] studied the underwater electrical explosion of copper wires and found that the deposited energy influenced the expansion of the discharge plasma channel and affected the shock wave characteristics. Grinenko et al. [16] conducted an experimental study on the underwater electric explosion of copper wire and discovered that the efficiency of electrical-energy deposition into the mechanical energy of the fluid flow was 25%, and that the maximum pressure obtained at the boundary of the discharge plasma channel was around 600 MPa. The metal wire in ETEF thus ignites the surrounding energetic materials to release energy, similar to explosive forming (EF). Traditional EF explosives have a high-energy effect, which can reduce the production cost of small-batch formed parts; they are widely used in manufacturing low-volume formed parts, such as thin-walled decorative spheres for city construction and artworks of copper-plate relief [17,18]. However, the discharge characteristics of energetic materials triggered by metal wires, and the energy release level of energetic materials under different capacitance-voltage matching parameters, remain unclear, and the deformation process of workpieces in ETEF is not fully characterized. Therefore, it is necessary to conduct a comprehensive study of the above aspects. A better understanding of the discharge characteristics of energetic materials triggered by metal wires, as well as of the dynamic deformation process of ETEF sheets, would aid the implementation of this forming process in the automotive industry.
As a result, the discharge characteristics of energetic materials triggered by metal wires under different capacitance-voltage matching parameters, as well as the influence of the energy release level of the energetic materials on sheet bulging, were investigated in this work. Experiments and numerical analysis were used to investigate the deformation process of DP600 steel sheet in ETEF, including the strain distribution characteristics, the deformation uniformity, and the dynamic deformation process.

Material description

The as-received material was cold-rolled DP600 steel sheet with a thickness of 0.8 mm, provided by Baoshan Iron & Steel Co., China. Table 1 shows the quasi-static tensile mechanical properties and the main chemical composition (wt%) of this material. The new energetic materials selected in this study were aluminum (Al) particles and ammonium perchlorate (AP) particles. The Al particles are smooth spheres with an average particle size of 1-3 μm; agglomeration occurs because of their small size. The AP particles are irregular spheres with an average particle size of 140 μm. Physical mixing was used to create the energetic materials used in this experiment, Al/AP (10 wt% Al, 90 wt% AP).

Energy release during ETEF

The process of instantaneous melting and vaporization of metal wires to form a plasma under a high-voltage pulsed current is referred to as the electrical explosion of metal wires. The plasma is heated and expanded by intense Joule heating, resulting in the formation of a plasma channel filled with high pressure and heat. Strong shock waves are radiated during the plasma diffusion, which are quickly converted into sound pressure pulses and then spread into the surrounding medium, as described by Timoshkin et al. [19]. The ETEF method uses the electric explosion of a metal wire to form a plasma that ignites energetic materials and releases energy in water. The energy release process of plasma-triggered energetic materials can be divided into three stages: heating, ignition, and detonation (Fig. 1). (I) Heating stage: the solid energetic materials are rapidly heated from the initial temperature T0 to the decomposition temperature Td by heat conduction from the high-temperature plasma. During this stage, no chemical reactions occur. (II) Ignition stage: as the plasma continues to diffuse, the temperature of the energetic materials rises from Td to Ts (the burning-surface temperature) and ignition occurs. At this stage, the energetic materials go through a phase transition from solid to liquid and then to vapor, producing high-temperature and high-pressure gaseous products at the surface. (III) Detonation stage: as the energetic materials surrounding the metal wire ignite, the gas temperature of stage II rapidly rises from Ts to Tf, and more energetic materials are ignited and release energy. The energetic materials decompose rapidly within a brief period to produce more gases. These gases expand quickly within the limited space, evolve into shock waves, and compress the surrounding medium to complete the detonation. Energetic materials react chemically and release high energy after ignition, delivered as heat and shock waves, as described by Pagoria et al. [20]. Heat and shock waves cause local heating inside the energetic materials, forming "hot spots" that cause the entire energetic material to release energy quickly [21]. The energy release of energetic materials is characterized by high energy, a wide pulse, and a strong shock wave. Thus, they are widely used for infrared pulse radiation, fossil energy extraction, and rocket propulsion.
Experimental setup for free bulging tests

To investigate the deformation of a DP600 steel sheet (with a diameter of 220 mm) under a biaxial stress state, free bulging tests were performed. A schematic of the experimental setups for the bulging tests is illustrated in Fig. 2. In the ETEF process, the DP600 steel sheet was deformed by discharging a metal wire with an electric pulse generator, instantly igniting the energetic materials, releasing chemical energy, and generating a shock wave in the liquid chamber (Fig. 2a). A Rogowski current waveform transducer (Power Electronic Measurements Ltd, Nottingham, UK) and a P6015A high-voltage probe (Tektronix, USA) were used to measure the current and the voltage generated by the electric pulse generator discharging the metal wire, respectively. The energetic materials were placed in a 40-mm-long EMs cylinder. The top die was an open die with an inner diameter of 100 mm and an entry radius of 10 mm. Figure 2b shows the displacement variation with time during the sheet bulging process, measured by a position sensitive detector (PSD), namely a Laser Sensor M70LL (MEL Mikroelektronik GmbH, Germany). Figure 2c shows the quasi-static forming (QSF) and quasi-static hydraulic forming (QSHF) setups. The diameter of the punch was 100 mm, and the fillet radius of the bottom die opening was 10 mm; for QSHF, the punch was replaced by a high-pressure liquid. The plastic strain distribution of the deformed specimens was measured using an optical three-dimensional (3D) deformation measuring system, ARGUS-V6.3.1 (GOM GmbH, Germany). First, electrochemical etching was used to create circular array grids with a diameter of 1 mm and an adjacent center distance of 2 mm on the surface of the initial specimens. The GOM system was then used to calculate the strain data of the deformed grids.

Capacitance-voltage matching parameter tests

The energetic materials in ETEF are triggered to release energy by the plasma generated by the wire explosion. Therefore, the effect of the capacitance-voltage matching parameters of various electric pulse generators (EPG) on the variation of the current and voltage after discharge, as well as on the energy release of the energetic materials, must be thoroughly investigated. Table 2 lists the discharge parameters of the different electric pulse generators and the mass of the ignited energetic materials. The relationship between the discharge energy E, the equipment capacitance Ci, and the discharge voltage U can be expressed as E = CiU²/2. At the same discharge energy (1.37 kJ), discharge tests of the same energetic materials (2 g) triggered by different electric pulse generators were carried out. The effect of the equipment parameters on the changes in the current and voltage waveforms, following the underwater discharge of a pure metal wire and the discharge of energetic materials triggered by a metal wire, was investigated. The current and voltage waveform data were obtained using the Rogowski current waveform transducer and the Tektronix P6015A high-voltage probe, respectively, and the waveform results were displayed using an oscilloscope (Fig. 2a). To study the influence of the capacitance-voltage matching parameters of the EPG on the energy release of the energetic materials, discharge tests on the energetic materials were conducted with different equipment parameters and evaluated by the final bulge height, the deformation speed, and the effective plastic strain of the specimens during ETEF. The acquisition parameters were set as follows: displacement range, 0-50 mm; sensitivity, 0.4 V/mm; acquisition frequency, 500 kHz.
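A quick check of the capacitor-bank energy relation E = CiU²/2: the capacitance used below is back-calculated so as to reproduce the quoted 1.37 kJ at 3.0 kV, since the actual Table 2 values are not shown here.

```python
# Capacitor-bank discharge energy E = (1/2) * C_i * U^2.
# The capacitance is inferred from the quoted operating point, not from Table 2.
def discharge_energy(C_farad, U_volt):
    return 0.5 * C_farad * U_volt ** 2

C = 2 * 1370.0 / 3000.0 ** 2              # ~304 uF reproduces 1.37 kJ at 3.0 kV
print(discharge_energy(C, 3000.0))        # -> 1370.0 J
```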
The effective plastic strain of the deformed specimens was measured by the ARGUS-V6.3.1 testing system.

Influence of capacitance-voltage matching parameters on discharge characteristics

The effect of the capacitance-voltage matching parameters on the discharge characteristics of metal wires, and on their triggering of energetic materials, is a critical link in the study of the energy release of energetic materials during the ETEF process. A metal wire (molybdenum wire with a diameter of 0.2 mm and a length of 45 mm) and an EMs cylinder were used as the discharge objects under different equipment parameters (EPG-A, EPG-B), and the discharge voltage U(t) and current I(t) were obtained, as shown in Fig. 3a-b. According to Eqs. (2) and (3), the waveforms of the instantaneous power P(t) and the deposited energy W(t) were calculated, respectively (Fig. 3c-d). According to the current and voltage curves presented in Fig. 3a, b, for the same electric pulse generator parameters, the addition of energetic materials resulted in a decrease in the maximum voltage of the wire before breakdown, and the peak value of the current waveform decreased significantly after the breakdown discharge. Generally, a wire explosion undergoes a series of physical changes, that is, phase transitions from solid to liquid, gas, and plasma. The physical process changed after the energetic materials were added: the plasma formed by the wire explosion heated and ignited the energetic materials, starting the chemical reaction. In this process, the ignition of the energetic materials occurred at the voltage peak, which reduced the current peak compared with the Mo wire explosion in water, indicating that the electrical conductivity changed while the energetic materials were being ignited by the metal wire. There are two possible explanations for this phenomenon. One is that the energetic materials are ignited; the other is that, after the wire explosion forms a plasma, the nearby energetic materials are heated by thermal radiation to form a conductive layer, and this extra conductive layer (gaseous products produced by the vaporization of the energetic materials) increases the resistance of the discharge channel [22]. Both of these factors can reduce the conductivity between the electrodes and thus reduce the current in the circuit. Furthermore, the introduction of energetic materials reduced the maximum electric power and the electric energy deposited in the discharge channel, as calculated by Eqs. (2) and (3). This phenomenon could be explained by some of the high-temperature energetic materials acting as extra conductive substances, which accelerates the breakdown of the wire-vaporization discharge channel, resulting in a decrease in the deposited energy. According to Fig. 3, the energy consumed during the ignition of the energetic materials was approximately 200 J, implying that the energetic materials were ignited during the wire explosion, followed by chemical reactions and shock waves. Although the energetic materials consumed plasma energy during the ignition process, their addition provided an additional shock-wave amplitude, namely the secondary shock-wave peak effect, which increased the impulse of the entire system, as demonstrated by Zhou et al. [23].
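The waveform post-processing implied by Eqs. (2) and (3) is conventionally P(t) = U(t)I(t) and W(t) = ∫₀ᵗ P(τ)dτ; since the equations themselves are not reproduced above, that standard form is assumed here. A sketch over synthetic oscilloscope traces (the waveforms below are hypothetical, not measured data):

```python
# Instantaneous power and cumulatively deposited energy from sampled
# U(t), I(t) traces, via trapezoidal integration.
import numpy as np
from scipy.integrate import cumulative_trapezoid

t = np.linspace(0.0, 50e-6, 2001)         # 50 us record, synthetic example
U = 3000.0 * np.exp(-t / 10e-6)           # hypothetical decaying voltage (V)
I = 8000.0 * np.sin(2 * np.pi * 5e4 * t) * np.exp(-t / 15e-6)  # hypothetical current (A)

P = U * I                                 # instantaneous power (W)
W = cumulative_trapezoid(P, t, initial=0.0)  # deposited energy (J)
print(f"peak power {P.max():.3e} W, deposited energy {W[-1]:.1f} J")
```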
Influence of capacitance-voltage matching parameters on sheet bulging

In this section, the influence of the energy released from the energetic materials triggered by the metal wire on sheet bulging is discussed under the different capacitance-voltage matching parameters of the electric pulse generators. According to our previous studies [14], compared with pure electrohydraulic forming (discharge voltage 3 kV), the bulging height of the sheet was markedly increased by the energy release of the energetic materials. Figure 4 shows the variation of the apex velocity of the bulged sheet with time: the velocity first rose rapidly, then rose slightly to maintain high-speed movement, and the deformation speed was close to zero at about 300 μs, remaining so until the end of deformation. Therefore, under the different capacitance-voltage matching equipment parameters, the apex velocity of the bulged sheet followed the same trend with time in the ETEF process. Additionally, according to our previous research [14], the variation trend of the peak velocity of the specimen obtained in the ETEF numerical simulation with the 3 kV/2.0 g parameters was in good agreement with the experimental results (Fig. 4a).

Figure 5 shows the bulged specimens and the effective plastic strain under different equipment parameters. In the φ100 mm deformation zone, the distribution of the effective plastic strain of the sheets under the EPG-A and EPG-B equipment was similar, and the maximum effective plastic strain values were 49.3% and 50.4%, respectively. Table 3 lists the final bulging height, maximum deformation speed, and maximum effective plastic strain obtained on the sheet under the different equipment parameters; their values are at the same level. According to the deposited-energy curves in Sect. 3.1 (Fig. 3c, d), the energy consumed by the ignition of the energetic materials by the metal wires was about 200 J under the different capacitance-voltage matching parameters. Based on our previous studies [14], the chemical energy per gram of energetic materials is 3.04 kJ. Taking the EPG-A equipment parameters as an example, in the 3.0 kV/2.0 g energy system, the energy deposited after the wire triggered the energetic materials was 1.07 kJ. Consequently, the energy released by the energetic materials accounted for 86% of the total energy of the system, indicating that the chemical energy released by the energetic materials was primarily responsible for the sheet's bulging height. The bulging results obtained under the different equipment parameters were essentially consistent in terms of final bulging height, velocity variation trend, and effective plastic strain value. Therefore, the initial energy storage of the electric pulse generators mainly plays the role of triggering the energetic materials to release energy, and the capacitance-voltage matching parameters of the different electric pulse generators may have no effect on the energy level released by the energetic materials. In other words, the energetic materials were insensitive to the initial equipment conditions of the electric pulse generator and had low requirements for the capacitance-voltage matching parameters of the equipment. As long as the electric pulse generator provides enough triggering energy, it can trigger the energetic materials to release energy stably, thereby increasing the flexibility of the initial equipment conditions. This will be beneficial to the popularization and application of ETEF. Subsequently, we select the EPG-A equipment parameters to study the deformation of the sheet under ETEF in detail.
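The energy-budget arithmetic for the EPG-A 3.0 kV/2.0 g case quoted above can be reproduced directly; a minimal check (the small rounding difference against the quoted 86% is expected):

```python
# Energy budget for the EPG-A 3.0 kV / 2.0 g case: chemical energy of the
# energetic materials (3.04 kJ/g, from ref. [14]) versus the electrical energy
# deposited after the wire triggered them (1.07 kJ).

chemical_energy = 3.04 * 2.0          # kJ, 3.04 kJ/g * 2.0 g
deposited_energy = 1.07               # kJ
total = chemical_energy + deposited_energy

fraction = chemical_energy / total
# ~85%, consistent with the ~86% quoted in the text after rounding
print(f"chemical share of the total energy system: {fraction:.0%}")
```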
Analysis of deformation results of sheet metal

The bulging height, maximum strain value, and maximum thinning rate of the bulged specimens during ETEF were used to analyze the deformation of the DP600 sheet, as shown in Table 4. Figure 6 shows the specimens and profiles obtained from the ETEF, QSF, and QSHF tests with the same bulging height. It can be seen that non-uniform deformation of the QSF specimens occurred in the deformation zone 20-40 mm from the apex of the sheet. The deformation of the specimen obtained by QSHF was more uniform than that of QSF. Likewise, compared with QSF, the profile of the specimen under the ETEF condition was more uniform. When the energetic materials were triggered to release energy by the metal wire, the surrounding water medium was pressed, acquired kinetic energy, and pushed the sheet through its high-speed deformation. The water medium, which has a certain fluidity, acts as a flexible "punch" and improves the profile uniformity of the specimen.

The thickness distribution is an important index for evaluating the deformation uniformity of a deformed specimen. Figure 7 shows the thickness distribution of the bulged specimens. In the φ100 mm deformation zone, the thickness distribution of the specimens under the ETEF condition was relatively uniform. Additionally, the thickness reduction of QSHF was concentrated in the central area of the specimen, and the thinning was severe, with a maximum thinning rate of 31.4%. Therefore, compared with the quasi-static bulged specimens with the same bulging heights (24 mm, 29 mm), the maximum thinning rate of the specimens under the ETEF/3.0 kV/1.0 g and ETEF/3.0 kV/1.5 g conditions was reduced by 30.8% and 13.8%, respectively. According to Table 4, the maximum major strain and the maximum thinning rate of the specimens obtained under ETEF were lower than those under quasi-static conditions, which inevitably affected the strain distribution in the deformation zone of the specimens. Figure 8 shows the strain distribution and thinning rate of the ETEF/1.5 g, QSF/29 mm, and QSHF/29 mm specimens with the same bulging height. The maximum major strain and the maximum minor strain of the specimen under the QSF condition were located 20 mm from the apex of the sheet; their values were 21.6% and 13.7%, respectively, and they were distributed symmetrically. Under the QSHF condition, the maximum major strain and the maximum minor strain of the specimen were located at the apex of the sheet; their values were 20.9% and 19.6%, respectively, resulting in a severe strain concentration. The maximum strain obtained by ETEF was also distributed at the apex of the specimen, and its maximum major strain and maximum minor strain were 15.2% and 14.2%, respectively (Fig. 8a). Under the ETEF condition, in the φ60 mm deformation zone, the strain in the two principal in-plane directions was almost equibiaxial, and the strain distribution was obviously improved. The maximum major strain was 29.6% and 27.3% lower than those of QSF and QSHF, respectively. Moreover, the thinning rate showed similar distribution characteristics, and the thinning rate of the specimen under the ETEF condition was significantly reduced compared with the quasi-static conditions (Fig. 8b).
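The quoted reductions follow directly from the strain values above; a minimal verification:

```python
# Verifying the quoted strain reductions: the ETEF maximum major strain (15.2%)
# against QSF (21.6%) and QSHF (20.9%) at the same bulging height.

etef, qsf, qshf = 15.2, 21.6, 20.9

for name, ref in [("QSF", qsf), ("QSHF", qshf)]:
    reduction = (ref - etef) / ref
    print(f"reduction vs {name}: {reduction:.1%}")  # 29.6% and 27.3%
```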
According to our previous tests [14], the specimen also cracked here under quasi-static conditions, mainly because the contact friction between the specimen and the punch increased in this deformation zone, which resulted in large deformation and serious thickness thinning [24]. Therefore, the maximum strain and thinning rate of the ETEF specimens decreased, which significantly improved the uniformity of the strain distribution in the deformation zone.

Dynamic deformation process of sheet metal

The LS-DYNA simulation software was adopted to simulate the dynamic deformation process of the sheet in ETEF. A quarter geometric model (including the Mo wire, EMs, water, air, blank, blank holder, and liquid chamber) was established based on the test tooling in Fig. 2a. Then, the energy input into ETEF was prescribed, including the electrical energy input by the metal wire (Fig. 3c) and the chemical energy of the energetic materials. The former is the electric energy deposited in the metal wire by the electric pulse generator, which primarily serves to ignite the energetic materials; the latter is the energy released by the chemical reaction of the energetic materials after they have been ignited by the metal wire. The chemical energy released by the energetic materials was primarily responsible for the sheet's deformation. Our previous work [14] contains a detailed description of the implementation of the ETEF numerical simulation. According to the description in Sect. 2.2, the energetic materials mainly produce heat, light, and mechanical energy after releasing energy and form shock waves that do work on the surrounding water medium, resulting in plastic deformation of the workpiece. Therefore, the plastic strain energy was used to evaluate the contribution of the energy released by the energetic materials to the plastic deformation of the blank [25]. Figure 9 shows the change in the plastic strain energy of the blank over time following the energy release by the energetic materials in the ETEF process. It was found that the addition of energetic materials significantly increased the plastic strain energy of the blank. Compared with the final plastic strain energy of EHF/3 kV, the energy released by the energetic materials under the ETEF/3 kV/1.0 g and ETEF/3 kV/1.5 g conditions contributed 60% and 74% of the plastic strain energy of the blank, respectively. Specifically, according to the analyses in Sects. 3.1 and 3.2, the deposited energy consumed by the energetic materials during ignition was about 200 J, which is relatively small in the whole energy system and even negligible; however, it reduces the deposited energy relative to the EHF/3 kV condition. Therefore, the contributions obtained under the ETEF/3 kV/1.0 g and ETEF/3 kV/1.5 g conditions are in fact slightly more than 60% and 74%, respectively. As a result, the energy released by the energetic materials during the ETEF process played a significant role in the plastic deformation of the blank. Furthermore, the changing trend of the blank's plastic strain energy shows that the increase in plastic strain energy can be divided into two stages. Taking ETEF/3 kV/1.5 g as an example, the plastic strain energy increased slightly within the first 60 μs and then increased significantly within 60-300 μs.
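The contribution figures can be read as the share of the final plastic strain energy in excess of the EHF/3 kV baseline. The sketch below illustrates this reading; the normalized energies are placeholders chosen only to reproduce the reported 60% and 74% ratios, since the absolute energies are not quoted here.

```python
# Sketch of the contribution calculation implied in the text: the share of the
# final plastic strain energy attributable to the energetic materials, relative
# to the EHF/3 kV baseline. The energies below are hypothetical, normalized to
# the baseline; only the 60% / 74% ratios are reported in the paper.

e_ehf = 1.0                                                 # baseline (normalized)
cases = {"ETEF/3 kV/1.0 g": 2.5, "ETEF/3 kV/1.5 g": 3.85}   # hypothetical

for name, e_etef in cases.items():
    contribution = (e_etef - e_ehf) / e_etef
    print(f"{name}: energetic-material contribution = {contribution:.0%}")
```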
Therefore, after the energetic materials released energy, the shock wave pressure and the stress and strain on the blank must change. Hence, we take the ETEF/1.5 g parameters as an example for the subsequent numerical simulation analysis. Figure 10 shows the change of the shock wave pressure with time at elements on the metal wire and on the energetic materials during the ETEF process. Elements A, B, and C were on the metal wire, and elements D, E, and F were on the energetic materials. After the electric pulse generator discharged, the shock wave pressures at the elements on the metal wire and on the energetic materials were generated almost simultaneously, and the duration from the generation of the pressure to its rapid drop was about 10 μs. Remarkably, from the peak pressures at the elements, it can be seen that the maximum shock wave pressure generated at the elements on the energetic materials was greater than that on the metal wire, indicating that the energetic materials were ignited by the metal wire and increased the peak value of the shock wave. Therefore, consistent with the analysis in Sect. 3.1, the addition of energetic materials increased the total energy of the system, that is, it increased the shock wave pressure, which agrees with the conclusion of Zhou et al. [23]. After 10 μs, the pressure at the elements decreased slowly, reaching only 8 MPa at 50 μs and close to zero at 60 μs. Therefore, the total duration of the action of the electrical energy of the metal wire and the chemical energy generated by the energetic materials was 60 μs.

Figure 11 presents the resultant velocity and effective stress of the elements on the sheet over time. First, the metal wire and the energetic materials released energy within 0-60 μs. At 24 μs, the shock wave pressure reached the sheet, which caused the effective stress on the sheet to increase rapidly, and the speed of element L rapidly increased to its maximum value of 188 m/s. Following that, due to the weakening of the initial electrical and chemical energy within 24-60 μs, the deformation speed of the sheet decreased. However, the effective stress on the sheet continued to increase, with the increase slowing at 50 μs; after 60 μs, the speed of the sheet increased again under the action of water flow pressure and inertia. Eventually, the effective stress decreased rapidly after increasing until 250 μs, the deformation speed of the sheet decreased rapidly after 200 μs, and the deformation ended at 300 μs. Therefore, the deformation process of the sheet in ETEF can be divided into two stages: (i) the early stage of deformation (within 0-60 μs), which is the initial chemical energy action stage of the energetic materials, and (ii) the late deformation period (within 60-300 μs), which belongs to the inertia action stage.

Figure 12 shows the contours/vectors of the bulging height (Y-displacement) of the tested specimen during the ETEF process. In the initial chemical energy action stage of the energetic materials, the bulging height of the specimen at 60 μs was only 7.5 mm, presenting a conical bulging profile, as shown in Fig. 12a. At 120 μs, the specimen showed an approximately ellipsoidal bulging profile (Fig. 12b), and the deformation profile was further improved. At 200 and 300 μs, the profile of the bulged specimen was hemispherical, and the bulging height of the final specimen was 30.1 mm, an error of only 3.8% from the experimental bulging height of 29 mm (Table 4).
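The stage bookkeeping above can be verified from the quoted heights and times:

```python
# Checking the quoted figures: the bulging height grows from 7.5 mm at 60 us
# (end of the chemical-energy stage) to 30.1 mm at 300 us, and the inertia
# stage occupies the remainder of the 300 us deformation window.

h_chemical, h_final = 7.5, 30.1      # mm, at 60 us and 300 us
t_chemical, t_total = 60.0, 300.0    # us
h_experiment = 29.0                  # mm, measured final bulging height

print(f"inertia-stage height gain: {(h_final - h_chemical) / h_chemical:.0%}")       # ~301%
print(f"inertia share of deformation time: {(t_total - t_chemical) / t_total:.0%}")  # 80%
print(f"simulation vs experiment error: {(h_final - h_experiment) / h_experiment:.1%}")  # ~3.8%
```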
Therefore, in the inertia action stage (within 60-300 μs), the bulging height of the specimen increased by 301% compared with the initial chemical energy action stage of the energetic materials. The inertia effect accounted for 80% of the total deformation time, which significantly increased the bulging height of the sheet metal and played the leading role in the plastic deformation.

The change of the profile of the bulging specimen during the ETEF process inevitably affects the distribution of the effective plastic strain. The variation of the effective plastic strain of the deformed specimen at different times is shown in Fig. 13. At 30 μs, an effective plastic strain with an elliptical annular distribution appeared on the specimen; at 60 μs, the effective plastic strain presented a rectangular distribution in the central deformation zone of the specimen, and at this time the width of the strain concentration zone was parallel and equal to the geometric dimension of the EMs cylinder (Fig. 2a). At 100 μs, the effective plastic strain concentration area was approximately elliptical (ratio of long axis to short axis: 1.6), the overall effective plastic strain on the blank was elliptical, and the strain distribution was extremely uneven. At 120 μs, the effective plastic strain in the central deformation zone was close to a circle (ratio of long axis to short axis: 1.1), and the effective plastic strain distribution was significantly improved. Within 200-300 μs, the effective plastic strain in the central deformation zone was uniformly distributed. These results further indicate that the energy released by the energetic materials during the ETEF process can significantly improve the distribution of the effective plastic strain, which is of great significance for forming axisymmetric parts.

Fig. 9 Variation of plastic strain energy of the bulged specimen with time in the ETEF process. Fig. 10 The curves of the element shock wave pressure on the metal wire and the energetic materials. Fig. 13 The contours of the effective plastic strain of the tested specimen during the ETEF process.

Conclusions

In this research, the technological characteristics of ETEF were revealed from the aspects of pulse discharge characteristics and the dynamic deformation process of sheet metal. To achieve this goal, experiments and numerical simulations were carried out. The conclusions of this study can be summarized as follows:

1. In the process of ETEF, due to the addition of energetic materials, the waveform amplitudes of the discharge voltage and current decreased, and the peak value of the current decreased significantly. Furthermore, the electric power and deposited energy generated by the discharges of the various pulse equipment decreased, and the triggering energy consumption of the energetic materials was estimated to be around 200 J; the energy release level of the energetic materials may be insensitive to the capacitance-voltage matching parameters of the various electric pulse generators.

2. Compared with the quasi-static specimen with the same bulging height (29 mm), the maximum major strain and the maximum thinning rate of the bulged specimen under ETEF/3 kV/1.5 g decreased by 29.6% and 13.8%, respectively, which significantly improved the strain distribution, thickness distribution, and deformation uniformity of the sheet.

3. The simulation results showed that the addition of energetic materials significantly increased the plastic strain energy of the blank. Compared with EHF/3 kV, the final plastic strain energy obtained under the ETEF/3 kV/1.0 g and ETEF/3 kV/1.5 g conditions contributed 60% and 74% to the plastic deformation of the blank, respectively.

4.
(i) In the initial chemical energy action stage of the energetic materials, the effective stress on the blank increased rapidly, and the maximum speed reached 188 m/s. (ii) In the inertia action stage, the bulging height of the specimen increased by nearly 301%, and the error between the bulging heights of the numerical simulation and the experiment was 3.8%. During ETEF, the effective plastic strain of the sheet metal was significantly improved, and the inertia effect, which accounted for 80% of the total deformation time, played the leading role in the plastic deformation.

Author contribution Xueyun Xie: investigation, writing (original draft), writing (review and editing), and validation. Haiping Yu: writing (review and editing), investigation, supervision, and funding acquisition. Yang Zhong: reviewed and improved the manuscript.

Funding This work was supported by the National Natural Science Foundation of China [Grant No. 52175304]. The authors would like to take this opportunity to express their sincere appreciation.

Data availability All data generated or analyzed during this study are included in this manuscript.

Declarations

Ethical approval Not applicable.

Consent to participate Not applicable.

Consent to publish Not applicable.
Comparison of Sampling Methods for the Annual Industry and Service Statistics Survey by TURKSTAT

The Annual Industry and Service Statistics Survey is one of the largest surveys conducted by the Turkish Statistical Institute; it aims to determine changes in the economic structure of Turkey. Both full enumeration and sampling methods are used in this survey. Nevertheless, the percentage of full enumeration increases every year. Even though efforts have been made in recent years to use administrative records, these could not supply all of the necessary information. Hence, there is a need to decrease the size of the survey. In this study, we propose a sampling method for the part of the Annual Industry and Service Statistics Survey currently conducted by full enumeration and compare the suggested methods. For that purpose, stratified sampling is used in the first phase, and the comparison is then made using three different sampling methods within the strata, namely Poisson, systematic, and simple random sampling. The size of the survey is reduced by using sampling methods, while the detail of the economic activity classification, together with the level of estimation for the regions, increases. It is concluded that the best estimates and minimum variances are obtained when the Poisson and simple random sampling methods are applied together.

Introduction

The Annual Industry and Service Statistics Survey has one of the largest sample sizes among the surveys conducted by the Turkish Statistical Institute (TURKSTAT). The main purpose of this survey is to determine changes in the social and economic structure of the country. This survey is conducted in every European Union country. The countries, both members of the Union and candidates for membership, send their results to the Statistical Office of the European Union (EUROSTAT) at the end of the survey. Each country publishes its own results, and EUROSTAT shares all the countries' results through its website. In order to compare results, the questionnaire contains questions common to all of the countries; however, the local survey also needs to contain additional questions. The frame of this study is based on the business registers of TURKSTAT, and those registers draw on some administrative records.

In recent years some studies have used administrative records for the Annual Industry and Service Statistics without any fieldwork, but the information obtained solely from administrative records does not wholly satisfy the information needs of this survey. As Brick [1] says, "The purpose of the administrative records may not require the same level of quality as is needed for sampling purposes." Furthermore, the quality of data obtained from administrative records is also questionable. Thus, an additional mini-survey is needed to get information that could not be obtained from administrative records.

Full enumeration and sampling methods are both used for the Annual Industry and Service Statistics. There are some changes each year, but generally 60-65% of the frame consists of full enumeration. Some activity codes must be covered by full enumeration because of their small sizes, but the others may be estimated by using statistical methods within a short time. This is one of the purposes of this study. As Brick [1] mentions, "The twentieth century saw a dramatic change in the way information was generated as probability sampling replaced full enumeration."
Another purpose of this study is to compare the suggested sampling methods. Mostly, stratified sampling is used. However, in some strata, full enumeration is suggested due to the small population size. Except for these strata, simple random, systematic, and Poisson sampling methods are used within the strata, and the results of these sampling methods are then compared.

Currently, the results of this survey are given at the four-digit level of the NACE Rev 2.2 classification (Nomenclature of Economic Activities) for Turkey and at the two-digit level of NUTS2 (Nomenclature of Territorial Units for Statistics) for the regions. Giving NUTS2 estimates at four digits would require a much larger sample size in the current structure, together with additional time, cost, and labor. Another purpose of this study is therefore to give the results at the four-digit level of the NACE Rev 2.2 codes not only for Turkey but also for the NUTS2 regions. This is important for determining regional policies and making decisions: such information is needed in the face of regional developments and is an aid in creating regional policies. In the future, it is expected that it could be possible to discuss estimates given at the NUTS3 (province) level.

The data used in this study are micro data and belong to TURKSTAT. Access to and use of the micro data depend on a protocol signed between the user and TURKSTAT, and the direct publication of results obtained from the data is restricted by this protocol. So, while calculated statistics can be given in this study, unfortunately the values of the parameters cannot be given due to the restriction mentioned.

Description of data

Approximately half of the total turnover is supplied by approximately 5-7% of enterprises, as shown in Table 1. This information is calculated from data obtained from the database on the TURKSTAT website. The number of these enterprises is relatively small, and any change in their structure directly affects the economic structure. This importance and the relatively small number are the reasons why full enumeration is suggested for enterprises having more than 250 employees. This separation is also compatible with European Union practice. In fact, Giovannini [2] says one should regard "enterprises with fewer than 250 employees as small and medium-sized enterprises."
Sample size

Chambers [4] indicates, "In practice, surveys are concerned with many population variables. However, most of the theory for sample surveys is developed for a small number of variables, typically one or two." Accordingly, in this study only one variable is chosen to determine the sample size. First, the sample size is calculated for all five variables that are to be estimated. Table 2 gives the sample sizes calculated for the variables D12110, D12120, D12150, D13110, and D16110. The largest sample size is obtained for the D12110 variable, and turnover is also one of the most important economic indicators. So, the D12110 variable is chosen for the calculations, but the results are given for all five variables. Calculations are made with the formulas of stratified sampling and the Neyman allocation. Cost is assumed to be unimportant, whereas variances are not. Therefore, it is decided that the Neyman allocation is the most appropriate allocation. Yamane [5] shows that the efficiency of the Neyman allocation exceeds that of the optimum allocation when the sizes and variances of the strata differ greatly. There are also some studies about how to choose allocation methods: Mathew, Sola, Oladiran, & Amos [6] and Barnabas & Sunday [7] study the efficiency of allocation methods, and both studies conclude that the Neyman allocation is the most efficient. Winkler [8] says, "The Neyman allocation is known to be theoretically optimal in comparison with proportional allocation".

The sample sizes within the strata are calculated by the formulas of the Neyman allocation for Poisson, simple random, and systematic sampling alike, because of the randomness of the sample size in Poisson sampling and the need for a fixed sample size.

Cochran [9] points out that "The specification of the degree of precision wanted in the results is an important step". Since the variance in the data is sometimes very large, which causes large bounds, another calculation method is used instead of the variance in order to maintain sensitivity. The bound on the error of estimation changes for each activity code, but a standard is needed for every activity code; using the previous year's data (in this study, data from 2012), 5% of the turnover mean is calculated. If this value is greater than or equal to 150,000 TL, the bound on the error of estimation is taken as 200,000. If this value is smaller than 150,000 TL, the calculated value is rounded down. For example, if the calculated value is 125,000, the bound on the error of estimation is taken as 120,000.

The approximate sample size is

n = \frac{\sum_{i=1}^{L} N_i^2 \sigma_i^2 / w_i}{N^2 D + \sum_{i=1}^{L} N_i \sigma_i^2},

where w_i is the fraction of observations allocated to stratum i and \sigma_i^2 is the population variance for stratum i. Since the costs per observation are ignored, the Neyman allocation gives

w_i = \frac{N_i \sigma_i}{\sum_{k=1}^{L} N_k \sigma_k},

with D = B^2/4 when estimating \mu and D = B^2/(4N^2) when estimating \tau, where B is the bound on the error of estimation. The stratum sample size is n_i = n w_i.

Poisson sampling applications

Inside the strata, simple random, systematic, and Poisson sampling are used. Except for Poisson sampling, these are well-known and renowned methods. As Lohr [10] says, a simple random sample "provides the theoretical basis for the more complicated forms". For this reason, only the Poisson sampling applications and their formulas are given in this article.
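A minimal sketch of this calculation, with hypothetical stratum sizes and standard deviations standing in for the 2012 frame values of a single activity code:

```python
import numpy as np

# Stratified sample-size calculation with Neyman allocation, following the
# formulas above. N_h and sigma_h are illustrative placeholders; the real
# values come from the 2012 frame for each activity code.

N_h = np.array([800, 1200, 400, 2600])                # stratum (region) sizes
sigma_h = np.array([0.5e6, 1.2e6, 0.3e6, 2.0e6])      # stratum std. devs, TL
B = 200_000.0                                         # bound on the error of estimation

N = N_h.sum()
D = B ** 2 / 4.0                                      # estimating the mean
w_h = N_h * sigma_h / np.sum(N_h * sigma_h)           # Neyman allocation fractions

n = np.sum(N_h ** 2 * sigma_h ** 2 / w_h) / (N ** 2 * D + np.sum(N_h * sigma_h ** 2))
n_h = np.ceil(n * w_h).astype(int)                    # stratum sample sizes

print(f"total n = {n:.1f}, per-stratum n_h = {n_h}")
```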
Poisson sampling was introduced into the literature by Hájek [11,12]. Using Hájek's work, Williams, Schreuder, & Terraza [13] define Poisson sampling "as a sampling design in which the sample units have unequal probabilities of selection; in addition, the units in the population are independent and the sample size, n, is a random variable". Aires' [14] definition is as follows: "A poisson sample may be realized by using N independent Bernoulli trials to determine whether the individual under consideration is to be included in the samples or not."

When using Poisson sampling, deciding on the sample size presents some difficulties due to the randomness of the sample size: n may take any value between 0 and N, according to the inclusion probabilities and the random numbers used in the calculations. For that reason, it is decided to use conditional Poisson sampling, since it has a fixed n. Grafström [15] says that "if a fixed sample size n is desired, it is possible to generate poisson samples and to accept the sample only if the sample size is n. The resulting design is called conditional poisson sampling". Grafström [16] also defines conditional Poisson sampling as "a modification of Poisson sampling. Each unit i in the population is included with a given probability but only samples of size n are accepted." In conditional Poisson sampling, n is fixed, and samples are drawn until this fixed n is found; in this case the number of trials is uncertain. For example, to find 100 samples of size two from a population of size 14, 342 samples had to be drawn: 100 of the 342 had sample size two, and the sizes of the others varied between zero and 14.

A characteristic (x_i) that is easily observed or previously known and exists for each unit of the population is selected. This value and the previously decided n value are used to calculate the inclusion probabilities. Ghosh & Vogt [17] define the inclusion probability as "the probability that an individual unit will be in the final sample." If a unit has to be in the sample, its inclusion probability is set to 1 without making any calculations. Saavedra [18] says, "There is no known analytic formula that permits us to calculate probabilities of selection". The most important point before selecting a sample is to decide on the x_i values. Williams, Ebel, & Wells [19] say, "In the development of poisson sampling, it was mentioned that the characteristic x is chosen so that it is positively correlated with y", and Brewer, Early, and Hanif [20] also say, "If the x_i are roughly proportional to the y_i, it is more efficient for samples of any size." Lundquist [21] defines auxiliary variables as "variables which are not our primary interest, but it is reasonable to assume they are connected to our study variable in some way." In this study, the x_i value is a transformation of the previous year's mean and variance of the turnover data. The inclusion probabilities \pi_i are then calculated using the formula

\pi_i = \frac{n x_i}{\sum_{k=1}^{N} x_k}, \quad i = 1, \ldots, N.

For determining the inclusion probabilities, a dummy variable (x_i) calculated from the mean and variance of the D12110 (turnover) variable is used. Here, y_i is the turnover value, and x_i is the dummy variable produced from the mean and variance of the turnover values for the previous year.
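A minimal sketch of the rejective scheme described above, with a hypothetical size measure x standing in for the turnover-based dummy variable:

```python
import numpy as np

# Rejective (conditional) Poisson sampling: draw independent Bernoulli trials
# with inclusion probabilities pi_i = n * x_i / sum(x) and accept the draw
# only when exactly n units are selected. x is a hypothetical size measure.

rng = np.random.default_rng(42)

def conditional_poisson_sample(x, n, max_tries=100_000):
    pi = np.minimum(n * x / x.sum(), 1.0)      # first-order inclusion probabilities
    for _ in range(max_tries):
        selected = rng.random(x.size) < pi     # independent Bernoulli trials
        if selected.sum() == n:                # accept only samples of size n
            return np.flatnonzero(selected), pi
    raise RuntimeError("no sample of the required size was accepted")

x = rng.lognormal(mean=12.0, sigma=1.0, size=200)   # hypothetical size measure
sample, pi = conditional_poisson_sample(x, n=20)
print(f"sample size: {sample.size}, first units: {sample[:5]}")
```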
Then, random numbers from the uniform distribution are generated, and every unit is assigned a random number. If the random number is smaller than the inclusion probability of unit i, unit i is selected into the sample. If the number of selected units is smaller or larger than intended, second-order inclusion probabilities are calculated using the new n value, which is the number of units selected in the first iteration. This procedure is repeated until a constant n value is obtained; that is, from this point on, whatever the iteration number is, n does not change. If this n value equals the previously decided n value, the selected units constitute the sample, and the statistics can be calculated from this sample. But if this n value does not equal the previously decided n value, the sample is rejected; in that case, new random numbers are generated, and the whole procedure is repeated.

Särndal, Swensson, & Wretman [22] give the formulas of the estimators for Poisson sampling as follows. The estimator of the population total \tau = \sum_{i=1}^{N} y_i is

\hat{\tau}_\pi = \sum_{i \in s} \frac{y_i}{\pi_i}.

Hájek [12] notes that Horvitz and Thompson show that \hat{\tau}_\pi is an unbiased estimator of the population total if \pi_i > 0, i = 1, \ldots, N, for any sampling design. An unbiased variance estimator is

\hat{V}(\hat{\tau}_\pi) = \sum_{i \in s} (1 - \pi_i) \frac{y_i^2}{\pi_i^2}.

Ardilly & Tillé [23] give the proofs of the formulas above and some examples of Poisson sampling and the other methods.

For an activity code, with the 26 regions as strata, the estimated total is \hat{\tau} = \sum_h \hat{\tau}_h, the estimated mean is \hat{\mu} = \hat{\tau} / N, and the estimated variance of \hat{\mu} is \hat{V}(\hat{\mu}) = \sum_h \hat{V}(\hat{\tau}_h) / N^2, where N = \sum_h N_h. If a full-enumeration stratum exists, n is taken as N − 1 in the formulas so as not to lose the variation; the variance calculated for an activity code then includes contributions from the full-enumeration strata, and their effects are visible.

Results

The estimated values of all variables are calculated for all NUTS2 regions and all activity codes. Since the number of activity codes (121) and the number of NUTS2 regions (26) are large, only the total estimate is given for Turkey. For the Turkey estimates, stratified sampling formulas are used, with all activity codes used as strata.

The estimated values of the Turkey totals for all variables are given in Table 3. The values from Poisson and simple random sampling together are, except for a few points, closer to the population values and have small variances. All variables are estimated with differences of approximately 0.1% or less from the real values. Variances of the estimates for the whole of Turkey, calculated using Poisson and simple random sampling together, systematic and simple random sampling together, and only simple random sampling within the strata, are given in Table 4. For the variables D12110, D12120, and D13110, the total estimates obtained from Poisson and simple random sampling are the closest and also have the minimum sample variances. For D12150, the simple random sampling total estimate is the closest, but the variance of Poisson and simple random sampling together is still the minimum. For D16110, the systematic and simple random estimate is the closest, and the simple random sampling variance is the minimum. It should also be kept in mind that all calculations are made for estimating the variable D12110.
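A sketch of the Horvitz–Thompson estimation step, again with hypothetical data; only the estimator and variance formulas mirror the text:

```python
import numpy as np

# Horvitz-Thompson total and the Poisson-sampling variance estimator given
# above: tau_hat = sum(y_i / pi_i) over the sample and
# V_hat = sum((1 - pi_i) * y_i^2 / pi_i^2). y and pi are hypothetical.

def ht_total(y_sample, pi_sample):
    return np.sum(y_sample / pi_sample)

def poisson_variance(y_sample, pi_sample):
    return np.sum((1.0 - pi_sample) * y_sample ** 2 / pi_sample ** 2)

rng = np.random.default_rng(7)
y = rng.lognormal(mean=12.0, sigma=1.0, size=200)   # hypothetical turnover values
pi = np.minimum(20 * y / y.sum(), 1.0)              # inclusion probabilities
in_sample = rng.random(y.size) < pi                 # one Poisson draw

tau_hat = ht_total(y[in_sample], pi[in_sample])
v_hat = poisson_variance(y[in_sample], pi[in_sample])
print(f"tau_hat = {tau_hat:.3e}, true tau = {y.sum():.3e}, V_hat = {v_hat:.3e}")
```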
Table 1. Turnover percentage.

Table 2. Sample sizes calculated from the variables (for year 2013).

Table 4. Variances of the estimates for Turkey (for 2013).

Some activity codes are too small to divide into 26 strata, so these activity codes are treated as full-enumeration data. For the remaining 127 activity codes, the sample size is calculated, but for six of them the results are almost equal to the population size, so these codes are also treated as full enumeration. The number of outliers added to the full-enumeration data is 1,520 out of 110,420 units. Each NACE Rev 2.2 (4-digit) class is treated as a discrete population, so there are 121 discrete populations in this study. Besides, there are 26 NUTS2 regions in Turkey. Each of the NUTS2 regions has similar economic and geographic characteristics within itself, but at the same time the regions differ significantly from one another. This is one of the reasons stratified sampling is used: each population has 26 strata consisting of the 26 regions. Scheaffer, Mendenhall, & Ott [3] say that "generally, more than five or six strata are not chosen when using this method", but in the Annual Industry and Service Statistics Survey estimates must be given for all regions and all NACE Rev 2.2 (2-digit) divisions. The reasons for having 26 strata and not including a cluster sampling method in the study can be summarized as follows: one of the aims of this study is to give the estimates for all regions and all NACE Rev 2.2 (4-digit) classes instead of just NACE Rev 2.2 (2-digit) divisions.

Before selecting the samples, outlier values are determined and removed from the data; as shown in the sketch below, values falling more than 3σ from the mean are accepted as outliers in this study. This rule is used because the interval between −3σ and +3σ covers 99.7% of all data, which keeps the data loss as small as possible.
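```python
import numpy as np

# Sketch of the 3-sigma outlier rule described above: units falling outside
# mean +/- 3*sigma are removed from the sampling frame (and surveyed with
# certainty as part of the full-enumeration data). The frame is hypothetical.

rng = np.random.default_rng(0)
turnover = rng.lognormal(mean=12.0, sigma=1.5, size=10_000)  # hypothetical frame

mu, sigma = turnover.mean(), turnover.std()
is_outlier = np.abs(turnover - mu) > 3.0 * sigma

frame_for_sampling = turnover[~is_outlier]
full_enumeration_extra = turnover[is_outlier]
print(f"outliers moved to full enumeration: {is_outlier.sum()} of {turnover.size}")
```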
Study on a Dual-Channel Lateral Field Excitation Quartz Crystal Microbalance for Measuring Liquid Electrical Properties

A lateral field excitation quartz crystal microbalance (LFE-QCM) can detect both the electrical properties (conductivity and permittivity) and the mechanical properties (viscosity and density) of a liquid. In practical applications for detecting electrical properties, the viscosity and density of the liquid will also change. This research proposes a dual-channel LFE-QCM for reducing the influence of density and viscosity. The sensing layer of one resonant element is almost bare, and the other is covered by a metal film as a reference. Different organic solutions and an NaCl solution were used to study the influence of the mechanical properties and the temperature on the measurement of electrical properties. The experimental results demonstrate that the dual-channel LFE-QCM is necessary for properly detecting the electrical properties of a liquid.

Introduction

The quartz crystal microbalance (QCM) is a piezoelectric device based on the piezoelectric effect of materials. A QCM sensitively measures the mass change on the surface of the electrode and is a well-known high-precision resonator with a measurement accuracy in the nanogram range. The sensor has high sensitivity and a simple structure, and it can be used in real time without interruption in gas or liquid environments [1-3]. In the thickness shear mode (TSM), AT-cut quartz crystal microbalance sensors with standard electrode geometry have been widely used in liquid-phase chemical sensing applications [4]. The detection mechanism is mainly based on mechanical loading effects such as mass, density, and viscosity. According to the research by Sauerbrey and Kanazawa, the resonant frequency shifts under the influence of a mechanical load and of the viscosity and density of a Newtonian liquid, which is expressed as [5-8]

\Delta f = -\frac{2 f_0^2}{A \sqrt{\mu_q \rho_q}} \Delta m - f_0^{3/2} \sqrt{\frac{\rho_l \eta_l}{\pi \mu_q \rho_q}},

where ∆f represents the frequency shift; f_0 is the fundamental resonant frequency; µ_q and ρ_q represent the shear modulus of AT-cut quartz (2.947 × 10^11 g·cm^−1·s^−2) and the density of the quartz crystal (2.648 g·cm^−3), respectively; ∆m is the attached mass; A is the piezoelectrically active area; and ρ_l and η_l represent the liquid density and viscosity, respectively.

Until 2000, the QCM was usually used as a mass-frequency sensor. Since then, studies have reported that the QCM can directly detect liquid mechanical and electrical properties. Hempel et al. [9], Wang et al. [10], and other groups [11-13] conducted a series of theoretical analyses and studies. A literature review also introduced many practical applications of and prospects for lateral field excitation QCMs [14]. There are two different QCM structures: the thickness field excitation QCM (TFE-QCM) and the lateral field excitation QCM (LFE-QCM), as shown in Figure 1. The piezoelectric bulk acoustic wave sensor in the lateral field excitation mode differs from the thickness field excitation sensor in its electrode structure: its two electrodes are on the same principal plane of the crystal substrate. This electrode structure enables the electric field to penetrate into the liquid. Therefore, the LFE-QCM has extremely high sensitivity to the electrical properties of the liquid, together with a mechanical-property sensitivity similar to that of the TFE sensor [15-18]. At present, many studies have been completed on the LFE-QCM, which has been shown to have many advantages over the traditional QCM in practical applications.
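As a rough sanity check on the magnitudes involved, the sketch below evaluates only the liquid-loading (Kanazawa) term of the equation above for a 16 MHz crystal in water, using the quartz constants quoted in the text and handbook values for water; the result is of the same order as the ~5 kHz shift reported later for the shielded QCM-R in deionized water.

```python
import math

# Kanazawa (liquid-loading) term of Eq. (1), in CGS units, for a 16 MHz
# AT-cut crystal in water. Quartz constants are the ones quoted in the text;
# the water properties are standard handbook values at 25 C.

mu_q = 2.947e11      # shear modulus of AT-cut quartz, g cm^-1 s^-2
rho_q = 2.648        # density of quartz, g cm^-3
f0 = 16.0e6          # fundamental resonant frequency, Hz

rho_l = 0.997        # density of water, g cm^-3
eta_l = 0.0089       # viscosity of water, g cm^-1 s^-1 (poise)

delta_f = -f0 ** 1.5 * math.sqrt(rho_l * eta_l / (math.pi * mu_q * rho_q))
print(f"viscous-loading shift: {delta_f / 1e3:.1f} kHz")  # roughly -3.9 kHz
```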
The study of the lateral field excitation electrode gap by Hu et al. proved that the wider the gap, the lower the sensitivity to the solution properties, and that the electric field penetration into the liquid is greatest when the sensing layer is completely exposed [19]. Abe et al. confirmed this finding: a lateral field excitation QCM with a metal film on the sensing layer was designed [20], and this QCM could not detect the electrical properties of the solution when compared with an ordinary lateral field excitation QCM.

Figure 1. Electrode configurations: (a) thickness field excitation (TFE); (b) lateral field excitation (LFE).

The temperature effect cannot be ignored during the measurement of a liquid [21]. Liquid properties such as density, viscosity, and conductivity are greatly affected by the temperature of the liquid. When used in a liquid, the QCM resonator's quality factor (Q value) drops sharply due to damping, and the liquid viscosity reduces the frequency stability; small fluctuations in the mechanical properties and the environment result in large frequency drifts, and thereby inaccurate measurements. In theory, the impact of environmental factors can be eliminated by arranging another QCM as a reference [22]. Both Winters et al. [23] and Abe et al. [24] designed two or more independent QCMs on the same quartz blank, etching a quartz groove between the two QCM bays. In another study, the QCM was individually designed as an inverted-step structure to isolate coupling [25]. However, the two channels were not provided as references. Although the above structures have been proven to work well, in practical applications the influence of the liquid electrical properties on the resonant frequency is generally greater than the influence of the liquid mechanical properties, so the influence of the mechanical properties is often neglected in research on detecting electrical properties. To reduce the influence of the mechanical properties on the resonant frequency and to study separately the influence of the electrical properties of the solution on the resonator, we designed a dual-channel LFE-QCM.
One QCM, as a reference, was set to measure only the mechanical properties of the liquid, and the other QCM, as the test element, was set to measure all the properties of the liquid. When detecting liquids, factors such as the ambient temperature can easily affect the nature of the liquid itself, thereby affecting the resonant frequency of the QCM; placing the dual-channel resonant elements on the same wafer reduces these effects.

Metamorphism of oil in industrial production and daily life affects our lives, such as in transportation, especially as the moisture content of oil is likely to cause its metamorphism. Therefore, the oil needs to be replaced or supplemented regularly. Oil detection technology can be used to evaluate the state of the oil quality and judge whether it needs to be replaced, but few sensors are sufficiently small and can monitor the oil in real time, which makes the current sensors inconvenient to use. The dual-channel LFE-QCM designed in this paper not only meets the requirements of miniaturization and real-time monitoring, but also eliminates the interference of environmental temperature and other factors, providing a good method for oil detection.
Based on the high sensitivity of the LFE-QCM to the electrical properties of a liquid, the water quality of groundwater or of rivers and lakes can be judged using this device via all-weather monitoring of the liquid conductivity, and the design of the reference resonator can eliminate the influence of the temperature variations within a day. In the experiments, organic solutions were selected as the test solutions because their relative permittivities differ, their viscosities and densities are not directly related to the relative permittivity, and they are widely used as detection solutions in research on the electrical-property response of lateral field excitation QCMs. Either NaCl or KCl is commonly used for studying electrical properties related to conductivity, so NaCl was selected as the temperature-test solution [18,26].

Design and Fabrication

The two resonant elements of the dual-channel LFE-QCM are the QCM-T (test) resonant element and the QCM-R (reference) resonant element. Both resonant elements adopt the lateral field excitation mode, and the metal excitation electrodes are all on the same side. There are many kinds of lateral field excitation electrode geometry, such as the semicircular, half-moon, and T structures, and different electrode structures have different frequency-response sensitivities. According to previous research, the excitation electrode with the half-moon structure is the most sensitive for detection, and the best sensitivity occurs when the electrode gap is parallel to the x-axis of the quartz crystal [20,27,28]. Therefore, we chose the half-moon structure and designed the gap parallel to the x-axis. In the QCM-R resonant element, the quartz blank is covered with an all-metal film that forms an electric field shield, so that the electric field does not penetrate into the liquid medium [20]. Therefore, the frequency shift of the QCM-R resonator is affected only by the viscosity and density of the solution. The lateral electric field can easily penetrate into the liquid medium because the upper surface of the QCM-T resonator is approximately bare [19]. The frequency shift of the QCM-T resonator therefore contains not only information about the viscosity and density of the solution but also information about its permittivity and conductivity. The influence of the liquid electrical properties on the QCM was obtained in the experiments from the difference in frequency between the two resonant elements.

Figure 2 depicts the schematic structure of the dual-channel LFE-QCM; the dotted parts show the bottom of the quartz blank, and the orange grid area is the metal film on the surface of the quartz blank, which forms the QCM-R. The quartz crystal has a single-sided concave structure, in which a quartz groove is etched in a non-oscillating region between the two excitation electrodes to suppress frequency interference. The separate AT-cut quartz crystal chip used in this study was designed in a rectangular shape with dimensions of 13.5 mm × 8 mm. The starting blank was 40 mm × 40 mm and 100 µm thick, with double-polished planar faces. The fundamental frequency of the QCM resonator was about 16 MHz. The metal excitation electrode had a half-moon structure with a radius of 1.25 mm. Both the metal excitation electrode and the metal film were composed of an Au/Cr double-layer metal film, with the gold film used as the main electrode and a thin chromium film as an adhesion layer between the quartz and the gold film.
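The quoted ~16 MHz fundamental can be cross-checked from the blank thickness, assuming the standard thickness-shear relation f0 = v_s / (2t) with shear velocity v_s = sqrt(mu_q / rho_q):

```python
import math

# Cross-check of the quoted fundamental frequency: for a thickness-shear
# resonator, f0 = v_s / (2 t). A 100 um AT-cut blank with the quartz constants
# quoted earlier should come out near the stated ~16 MHz.

mu_q = 2.947e11              # g cm^-1 s^-2
rho_q = 2.648                # g cm^-3
thickness_cm = 100e-4        # 100 um in cm

v_s = math.sqrt(mu_q / rho_q)            # shear-wave velocity, cm/s
f0 = v_s / (2.0 * thickness_cm)          # fundamental frequency, Hz
print(f"f0 = {f0 / 1e6:.1f} MHz")        # ~16.7 MHz
```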
The quartz groove was 3 mm × 50 µm and 20 µm deep. Five single dual-channel QCM chips could be cut out of one quartz blank. All quartz blanks were processed with a quartz wet-etching process; Figure 3 depicts the process flow: (1) the quartz blank was washed using piranha solution (H2SO4:H2O2 = 3:1); (2) Au/Cr bi-layer films were sputtered on both sides of the quartz blank, followed by photoresist (S1808) coating; (3) the Au/Cr bi-layer films on the area of the quartz groove were patterned for etching the quartz groove; (4) a resist pattern was formed on both sides of the blank, followed by wet etching of the quartz groove using a saturated ammonium bifluoride solution at 85 °C; (5) finally, the Au/Cr bi-layer films were etched to form the electrodes and the photoresist was removed, followed by dicing of the quartz blank.

Figure 3. Quartz wet-etching process flow diagram.
Evaluation

Flow cells have been fabricated from polydimethylsiloxane (PDMS) or metal, following Michalzik et al. [29,30] and Sagmeister et al. [31]. We chose to produce the flow cell from polymethyl methacrylate (PMMA), referring to the work of Liang et al. [32], because PMMA has better light transmission than metal for observing the flow of the liquid, and the produced flow cell can be reused many times. The structure of the flow cell was divided into three parts: the upper cover, the middle platform, and the lower bottom, as shown in Figure 4.

During installation, the quartz blank was first placed in the rectangular recess in the center of the middle platform; both long sides of the rectangular recess had two electrode access apertures. The side of the quartz blank with the excitation electrodes faced the middle platform, and the other side, with the floating metal electrode, faced upward toward the upper cover and the silicone gasket. The depth of the rectangular recess ensured the tightness of the seal and kept the high mechanical stiffness of the PMMA material from stressing the quartz blank. The upper cover had a sample inlet and an outlet with plastic hollow hoses to facilitate the connection of the syringe and allow the solution to circulate. Four small holes in the lower bottom corresponded to the electrode access apertures to lead out the electrodes. The electrodes were contacted using spring pins, avoiding the damage to the quartz blank or the electrodes caused by conventional welding methods, which was beneficial to the reusability of the flow cell.
An impedance analyzer 4294A (Agilent Technologies Inc., Santa Clara, CA, USA) was used to measure the vibration properties, including the resonance frequency, Q value, conductance, and equivalent circuit parameters. A high-frequency dual-channel oscillator system was developed, as introduced before [22]. The flow injection system consisted mainly of a syringe pump, a loop, an injector, a flow cell, and a measuring instrument; the measuring instrument can be the impedance analyzer alone or the oscillating circuit with an external frequency counter. Figure 5 depicts the dual-channel lateral field excitation QCM.
On one side, the sensing layer of the QCM-R was shielded by a metal film. The sensing layer of the QCM-T, also composed of Au/Cr films, added a small rectangular film to strengthen energy trapping and improve the stability of resonance (Figure 5a) [33,34]. Figure 5b shows the back electrode, which was composed of four half-moon electrodes. The excitation electrode was first connected to the impedance analyzer through the spring pins to measure the corresponding parameters of each resonant element of the dual-channel LFE-QCM, ensuring that the two resonant elements could vibrate independently and oscillate in isolation. The energy loss of the mechanical vibration of a QCM in the liquid phase is large and the vibration attenuates strongly; whether stable oscillation can be maintained in the liquid phase is therefore crucial for the later experiments with other solutions.

Results and Discussion

The two resonant elements of the LFE-QCM could oscillate independently, indicating that the design of the chip structure and the choice of flow cell were successful and could be used for the subsequent tests. The quality factor (Q value) in air was 30,000 or more, as shown in Table 1, which proved that the resonant elements could sustain stable oscillation in air. The Q value of the QCM-R element was higher than that of the QCM-T because the QCM-R sensing layer is completely covered by the metal film, so the shear-wave vibration is concentrated in the quartz blank, giving a stronger energy-trapping effect. The full metal film on the QCM-R also enhanced the vertical component of the lateral electric field, resulting in an order-of-magnitude difference in conductance. The Q value of the LFE-QCM in deionized water was more than 1100; as a rule of thumb, a QCM needs a Q value greater than 1000 to be usable in the liquid phase, so these measurements verified that the chip can be used for detection in liquid environments. Comparing the conductance of the two resonant elements in air, the conductance of the QCM-R was 200 times higher than that of the QCM-T, and in water the conductance of the QCM-R was also higher; both observations agree with the results reported by Abe et al. [20]. After working in deionized water, the resonant frequency of the QCM-T element dropped by more than 30 kHz relative to air, whereas the shift of the QCM-R was only about 5 kHz. The QCM-R element has a full metal film as the sensing layer on the back of the excitation electrode; this film acts as a shield so that the electric field cannot pass through it into the liquid, which means the frequency variation of the QCM-R resonator is caused only by viscosity and density. As mentioned above, the electrical properties of a solution are mainly reflected in its permittivity and conductivity, and the LFE-QCM is more sensitive to electrical properties than to mechanical ones. Measuring organic solutions of known viscosity, density, and relative permittivity therefore yields the frequency shifts of the two resonators.
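At the data level, this dual-channel separation amounts to a simple subtraction; a minimal sketch follows (the function and variable names are ours, not from the paper):

    def split_contributions(df_qcm_t, df_qcm_r):
        """Split the sensing resonator's total shift into an electrical part
        (permittivity/conductivity) and a mechanical part (viscosity-density),
        using the metal-shielded QCM-R as the mechanical reference."""
        electrical = df_qcm_t - df_qcm_r
        mechanical = df_qcm_r
        return electrical, mechanical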
For the experiment, we selected 12 solutions, ranging from dodecane (relative permittivity 2.012) to pure water (relative permittivity 80.2), as listed in Table 2. Figure 6 shows the relationship between the measured frequency shift Δf and the permittivity, where Δf is the fundamental frequency in air minus the resonant frequency in the liquid at room temperature (25 °C). The frequency shift of the QCM-T follows a clear overall trend, but several solutions deviate from the shift of their neighbors. The measured frequency shift of the QCM-R was proportional to the square root of the product of the viscosity and density. Comparing the two panels carefully, the deviating points correspond to the solutions with the highest viscosity, to which the QCM-R resonator is most sensitive. The points marked in red in Figure 6b are methyl oleate, dibutyl sebacate, and 1-octanol; the viscosity of these three solutions is more than five times that of water, so their frequency shifts were higher than those of the other solutions. The frequency shift caused by the permittivity alone can be obtained by subtracting the frequency shift of the QCM-R resonator from that of the QCM-T resonator in the same solution. The blue curve shown in Figure 7 is the relationship between the frequency shift caused by the permittivity and the relative permittivity itself; the fitting degree R^2 was 0.9975 after cubic polynomial fitting over the wide range from 2 to 80. In the low permittivity range, the measured frequency shift of the LFE-QCM had a good linear relationship with the relative permittivity, which also indicates that the QCM is suitable for use as a permittivity sensor in that range. The relative permittivity caused a frequency change from 151 to 2157 ppm. The above experiments therefore proved that the influence of viscosity and density cannot be neglected when measuring the frequency shift with a lateral field excitation QCM, and that our designed QCM not only has research value but also provides a reference for future practical applications. According to the Kanazawa formula and the solution viscosity and density, the theoretical frequency shift could be calculated. The yellow curve in Figure 7 represents the measured value minus the theoretical frequency shift calculated using the Kanazawa formula.
The coincidence of the two curves also proved that the frequency shift of the QCM-R resonator was affected only by the viscosity and density of the liquid. This result is due to the smooth gold shield on the QCM-R, which follows the Kanazawa formula well; a rough gold surface (such as a porous gold layer), by contrast, increases the surface area and causes a larger frequency shift [35]. Similar changes in the frequency shift occurred in the conductive solution. Especially in practical applications, when the temperature of the liquid or the environment changes, the frequency shift measured by the QCM also changes, because the conductivity, viscosity, and density are all affected by temperature. The viscosity and density of a liquid decrease with increasing temperature, while the conductivity increases with temperature within a certain range. A 0.01% NaCl solution was therefore selected for this experiment, with the temperature ranging from 5 to 45 °C. The experimental results are shown in Figure 8a,b. The frequency shift of the QCM-R element decreased with increasing temperature, in agreement with the temperature dependence of viscosity; the total shift was about 600 Hz. The frequency shift of the QCM-T element caused by temperature showed no obvious pattern, with a maximum shift difference of about 400 Hz. The total conductivity difference detected by the portable conductivity meter was about 0.1 mS/cm.
However, Figure 8c shows the frequency shift caused by the conductivity alone, obtained by subtracting the frequency shifts of the two resonant elements. This shift changed monotonically with increasing temperature. Because of the low NaCl concentration, the content of conductive ions was small; when the temperature rose to about 40 °C, the conductivity changed less markedly than at lower temperatures, and the frequency shift slowed accordingly. The variation pattern also matched the values given in Figure 8c, which are the conductivities measured with the portable conductivity meter. The conductivity had a good linear relationship with the frequency shift, consistent with the linear behavior of QCMs in low-concentration conductive solutions reported in the literature [16,36]. This experiment therefore also demonstrated that temperature has a considerable influence on the measurement of the electrical properties of a QCM, and that the influence of the liquid viscosity and density cannot simply be neglected during measurements in liquid.
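The theoretical viscosity-density shift used in the Kanazawa comparison above can be computed directly. The sketch below is illustrative only: it assumes the Kanazawa-Gordon relation Δf = −f0^(3/2)·sqrt(ρ_L·η_L/(π·ρ_q·μ_q)), the roughly 16 MHz fundamental quoted in the Conclusions, and standard AT-cut quartz constants; it is not the authors' code.

    import math

    RHO_Q = 2648.0     # density of quartz, kg/m^3
    MU_Q = 2.947e10    # shear modulus of AT-cut quartz, Pa

    def kanazawa_shift(f0, eta_l, rho_l):
        """Theoretical QCM frequency shift (Hz, negative) in a Newtonian
        liquid of viscosity eta_l (Pa*s) and density rho_l (kg/m^3)."""
        return -f0 ** 1.5 * math.sqrt(eta_l * rho_l / (math.pi * RHO_Q * MU_Q))

    # 16 MHz resonator in water at 25 C: about -3.8 kHz, the same order of
    # magnitude as the ~5 kHz liquid-phase shift reported for the QCM-R.
    print(round(kanazawa_shift(16e6, 0.89e-3, 997.0)))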
Conclusions

The design and fabrication of a dual-channel LFE-QCM were reported in this study. A suitable sensing layer was selected to separate the mechanical and electrical properties of the liquid. The complete QCM chip was designed to be smaller than 13.5 mm × 8 mm, with a fundamental frequency of around 16 MHz. The resonator comprises two resonant elements, and its behavior in air and liquid was studied experimentally. The necessity of the double resonant elements was demonstrated with respect to the permittivity, conductivity, and temperature of the liquid. Finally, we showed that the influence of liquid viscosity and density must be considered in applications of lateral field excitation QCMs, and that the double-resonator structure can eliminate the influence of the environmental temperature on the measurement. Our dual-channel LFE-QCM overcomes the drawback of single-channel QCMs, which neglect the liquid viscosity and density, and provides a good basis for future liquid-phase applications.

Author Contributions: J.L. contributed the structure design and the theoretical analysis. D.K. mainly worked on the measurement and analysis. C.L. was responsible for the fabrication process.
9,102.8
2019-03-01T00:00:00.000
[ "Engineering", "Materials Science", "Physics" ]
Differential effects of acute and chronic antagonist and an irreversible antagonist treatment on cocaine self-administration behavior in rats

According to pharmacological theory, the magnitude of an agonist-induced response is related to the number of receptors occupied. If there is a receptor reserve, then when the number of receptors is altered, the fractional occupancy required to maintain this set number of occupied receptors will change. Therefore, any change in dopamine receptor number will result in a change in the concentration of cocaine required to induce the satiety response. Rats that self-administered cocaine were treated with the irreversible monoamine receptor antagonist EEDQ, or were infused continuously for 14 days with the D1-like antagonist SCH23390, treatments known to decrease or increase, respectively, the number of dopamine receptors, with a concomitant decrease or increase in the response to dopaminergic agonists. The rate of cocaine-maintained self-administration increased or decreased in rats treated with EEDQ or withdrawn from chronic SCH23390 infusion, respectively. After EEDQ treatment, the effect ratio of a single dose of SCH23390 or eticlopride was unchanged, indicating that the same SCH23390- and eticlopride-sensitive receptor populations (presumably dopamine) mediated the accelerated cocaine self-administration. Changing the receptor reserve is a key determinant of the rate of cocaine self-administration, because the resulting increased or decreased concentration of cocaine produces an accelerated or decelerated rate of cocaine elimination, as dictated by first-order kinetics.

The satiety threshold is the concentration (level) of cocaine at which the probability of self-administration approximates one and above which the probability of self-administration is low 6. While the satiety threshold model was established using an FR1 schedule, it has also been used to describe the progressive ratio schedule 7. This model assumes constant PK elimination parameters for cocaine, resulting in a constant relationship between the unit dose of self-administered cocaine and the levels of cocaine produced in the rats. This allows us to predict cocaine levels at the satiety threshold and to make specific predictions about changes in cocaine levels as a function of dopamine receptor antagonism. Specifically, both D1-like 8,9 and D2-like 2,10,11 competitive dopamine receptor antagonists accelerate cocaine self-administration behavior in rats. According to the theory of competitive antagonism, receptor antagonists increase the agonist concentration required to produce a defined magnitude of the response 12. This equiactive agonist concentration is assumed to correspond to the occupancy of a specific number of receptors. This pharmacological theory of competitive antagonism is applicable to cocaine-maintained self-administration behavior in rats: the satiety threshold (D_ST) is assumed to represent an equiactive agonist concentration that should be increased in the presence of a competitive antagonist 13. Consequently, it was proposed that the decrease in the inter-injection interval (T) of a unit dose of cocaine is caused by a PK/PD interaction in which the absolute rate of cocaine elimination is faster at higher concentrations, as dictated by first-order kinetics, so that cocaine levels decline more rapidly to the elevated satiety threshold 13.
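The PK/PD interaction just described can be made concrete with a small numerical sketch. Under a one-compartment, first-order model, the inter-injection interval is simply the time for the cocaine level to decay from (threshold + unit dose) back to the threshold. The code below is our illustration, not the authors' software; the 500 s half-life and 3 μmol/kg unit dose are taken from the Methods, the 4.6 μmol/kg threshold from the Results, and the other threshold values are hypothetical.

    import math

    T_HALF = 500.0                  # assumed cocaine elimination half-life, s
    K_ELIM = math.log(2) / T_HALF   # first-order elimination rate constant, 1/s

    def inter_injection_interval(unit_dose, satiety_threshold):
        """Time (s) for the body level to fall from threshold + dose back
        to the satiety threshold under first-order elimination."""
        return math.log((satiety_threshold + unit_dose) / satiety_threshold) / K_ELIM

    # Raising the threshold (competitive antagonist, EEDQ) shortens the
    # interval; lowering it (supersensitive system) lengthens it.
    for d_st in (2.3, 4.6, 9.2):    # halved, baseline, doubled thresholds, umol/kg
        print(d_st, round(inter_injection_interval(3.0, d_st)))
    # prints roughly 602, 362 and 204 s, comparable to the intervals reported below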
Although there is a substantial literature on the acute effects of competitive dopamine receptor antagonists in this model, the effects of protracted treatments with these compounds have been neglected, despite their clear clinical significance. To our knowledge, only one study of chronic treatment with a dopamine receptor antagonist on cocaine self-administration has been reported, which studied the selective D1-like competitive antagonist SCH23390 in non-human primates 14. It was reported that in some monkeys there were decreases in the rate of responding to a moderate dose of cocaine after withdrawal from chronic SCH23390 treatment, though the significance of this change was not emphasized. Typically, not all of the receptors in a population need to be occupied by an agonist in order to induce a maximum response, and the remainder represent a receptor reserve. This receptor reserve (sometimes referred to as spare receptors) is the mechanism by which the sensitivity to an agonist is increased: it reduces the concentration of agonist required to occupy the number of receptors necessary to induce any defined magnitude of response 15. The aim of this study was to apply treatments that have been shown either to increase the dopamine receptor population and/or the sensitivity to dopamine agonists, or to decrease the dopamine receptor population, in order to determine the relationship between estimated dopamine receptor activity and the rate of cocaine self-administration in rats. Receptor inactivation was achieved using the irreversible receptor antagonist N-ethoxycarbonyl-2-ethoxy-1,2-dihydroquinoline (EEDQ), which has been shown to inactivate dopamine receptors in the brain in vivo without affecting the number of dopamine transporters 16,17. EEDQ deactivates both D1-like 16 and D2-like 18 dopamine receptors. Receptor supersensitivity was achieved through a chronic infusion of the D1-like competitive antagonist SCH23390. It has been shown that daily injections of SCH23390 (0.5 mg/kg/day) for 21 days result in a significant up-regulation of D1 receptor binding activity in the rat brain 19-21. Chronic administration of SCH23390 has also been shown to cause supersensitivity to dopamine receptor agonists, as demonstrated in electrophysiological and behavioral studies 21,22.

Animals. Male Sprague-Dawley rats, weighing between 200 and 500 g during the course of the study, were purchased from Harlan Laboratories (Indianapolis, IN). Rats were housed individually on a 14/10-h light/dark cycle with unrestricted access to food and water. All studies were conducted in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals, under a protocol approved by the Institutional Animal Care and Use Committee at the University of Cincinnati, and are reported in accordance with the ARRIVE guidelines.

Self-administration training. Rats were implanted with indwelling catheters into the right jugular vein under isoflurane anesthesia; the left jugular and femoral veins were catheterized as needed throughout the study. Buprenex (0.03 mg/rat s.c.) was administered for pain relief, and gentamicin (25 mg/animal s.c.) was used to prevent infection following surgery. Detailed protocols for cocaine self-administration can be found in Tsibulsky and Norman 23. In brief, beginning 5 days after surgery, rats were trained to self-administer cocaine HCl. Rats were weighed daily immediately prior to each self-administration session.
Animals were placed in isolated chambers containing an active and an inactive lever. During training, a unit dose of 3 μmol/kg was delivered on a fixed-ratio 1 (FR1) schedule with a timeout period equal to the duration of the injection or 5 s, whichever was longer. A cue-light was illuminated for the duration of the timeout. Rats had access to cocaine for 3-4 h a day, 5 days a week. Training was considered complete when inter-injection intervals did not deviate significantly and systematically from the mean for three consecutive sessions.

Self-administration procedures. The self-administration protocol used here was identical to that used previously in this laboratory 24. In short, sessions began between 8:00 and 10:00 a.m., 6 days a week (Monday through Saturday). First, rats were placed in the chamber, and the cue-light associated with cocaine injection was illuminated after every active lever press and at variable intervals of 100-600 s until no lever presses occurred for 30 min. This was done to eliminate the interference of cue-induced lever pressing with the measurement of cocaine-induced pressing. Once lever-pressing was extinguished, programmed non-contingent injections of cocaine were given every two minutes at escalating doses in order to gradually raise the concentration of cocaine in the rat. When the rat pressed the active lever 5 times with each interval less than 1 min, it was assumed that self-administration had been reinstated. If the calculated cocaine concentration reached 10.0 μmol/kg, it was assumed that the animal could not be safely primed and the session was terminated. If the animal was primed, it was allowed to receive 20 injections of a 3 μmol/kg unit dose. After that, the lever was deactivated and animals were left in their chambers until 30 min had passed since their last lever press, at which time they were returned to their home cages.

Estimations of cocaine level in the body. The cocaine level in the animal was calculated during each self-administration session. Complete protocols for the calculation of cocaine levels in the rats' bodies can be found in Tsibulsky and Norman 23. Briefly, the cocaine level in the body was calculated every second using a one-compartment pharmacokinetic model and assuming a 500 s elimination half-life.

The effects of SCH23390 on cocaine self-administration. The baseline values of the inter-injection intervals at the 0.3 and 3.0 μmol/kg unit doses were collected for at least 3 weeks prior to the Alzet osmotic pump implantations. Alzet pumps (0.5 µl/h for 14 days, model 2002) were implanted subcutaneously into the back, slightly posterior to the scapulae, under isoflurane anesthesia. These pumps were filled with SCH23390 solutions in saline at three concentrations, producing three rates of drug infusion: 26.7 ± 1.7 nmol/kg/h (n = 3), 52.5 ± 3.6 nmol/kg/h (n = 4), and 69.6 ± 2.1 nmol/kg/h (n = 9). For comparison with other published results, the infusion rates were 208, 408, and 541 μg/kg/day, respectively. Self-administration sessions were conducted on Days 1, 3, and 10 after the pumps were implanted. Fourteen days after implantation, the pumps were extracted under isoflurane anesthesia. Daily self-administration sessions were resumed 1 day after pump extraction and continued for at least 4 weeks.

The effects of EEDQ on cocaine self-administration. Rats were primed using the procedure stated above.
After rats had reinstated self-administration, they were allowed to self-administer until a stable baseline was established, for about 1 h (10-13 self-injections). Immediately following an injection, the rats were removed from the chamber, detached from the syringe, and injected with EEDQ (1 mg/kg in 10% ethanol in saline, i.v.) or vehicle (10% ethanol in saline). The animal was immediately reattached to the syringe with cocaine and put back into the chamber. The rats were allowed to continue self-administration for about 1 h; if an animal ceased self-administration (determined by no lever-pressing for at least 30 min), it was removed from the chamber and returned to its home cage. Self-administration sessions were conducted at 8, 16, 24, 32, 40, 48, 56, 68, 80, 92, and 96 h after injection, and then every 24 h until inter-injection intervals returned to baseline. Inter-injection intervals and calculated cocaine levels at the time of each lever press during maintenance were recorded in every session.

Determination of the potency of SCH23390 and eticlopride before and after EEDQ treatment. To determine the continued involvement of D1-like and D2-like dopamine receptors in the mediation of the satiety threshold following EEDQ treatment, the potencies of SCH23390 (a D1-like selective competitive antagonist) and eticlopride (a D2-like selective competitive antagonist) were measured. Another group of trained rats was allowed to self-administer until a stable baseline was established, for about 1 h. Immediately following a cocaine injection, the rat was removed from the chamber and injected with either eticlopride or SCH23390 (each at 20 nmol/kg i.v.) via the same i.v. catheter. Rats were reattached to the cocaine-containing syringe, placed back in the chamber, and self-administration resumed. Rats were allowed to continue self-administration for 3-4 h until inter-injection intervals approached baseline. Injections of each antagonist were repeated 3-4 times for each rat, with at least 2 days between sessions. Following these baseline experiments, the same rats were injected with EEDQ (1 mg/kg). Beginning 4 h after the EEDQ injection, attempts to prime the rats were made every 8 h. As soon as self-administration was reinstated, rats were allowed to self-administer for approximately 1 h to establish the new stable baseline intervals. Animals were then removed from the chamber, detached from the syringe, and injected with either eticlopride or SCH23390 (20 nmol/kg i.v.), as done prior to EEDQ treatment. Rats were reattached to the syringe, placed back in the chambers, and allowed to continue self-administration until inter-injection intervals returned to the elevated baseline. On five occasions, competitive antagonist treatment abolished self-administration behavior or the elevated cocaine concentrations induced seizures; in these cases, the antagonist injections were repeated every 24 h until reliable self-administration following treatment was achieved. The ratio of the highest level of cocaine at the time of lever presses after antagonist injection to the mean baseline level during the same session in the same rat was calculated. All injections of dopamine antagonists were given within 4 days after EEDQ administration.

Drugs. (−)-Cocaine HCl was provided by the Research Triangle Institute (Chapel Hill, NC) under the National Institute on Drug Abuse drug supply program. EEDQ, SCH23390 HCl, and S(−)-eticlopride HCl were purchased from Sigma-Aldrich, St. Louis, MO.
Cocaine was dissolved in saline at a concentration of 40 µmol/ml. EEDQ was dissolved in 95% ethanol at a concentration of 2.0 mg/ml immediately, or not more than 3 days, before the injection. Stock solutions of SCH23390 and eticlopride were prepared in 95% ethanol at a concentration of 20 μmol/ml and stored at −20 °C. Solutions were further diluted before animal injections for a maximum ethanol concentration of 10%. The dose of ethanol injected along with eticlopride or SCH23390 was 0.02 mg/kg and did not affect cocaine self-administration behavior 26.

Data analysis and statistics. Baseline inter-injection interval values within sessions were typically lognormally distributed; therefore, all statistical analyses of these values were performed on their logarithms. Inter-injection intervals were averaged for each session. Baseline trends were determined for at least 3 weeks prior to the vehicle, EEDQ, or SCH23390 injections. Baseline values of mean inter-injection intervals were extrapolated for each rat using linear regression analysis. Following treatment, the values of mean inter-injection intervals were compared with the respective baseline values expected on the same day using a paired t-test, as previously reported 24. Nonlinear regression analyses of the recovery of inter-injection intervals and the estimation of treatment effect half-lives were calculated according to a mono-exponential equation in each individual rat. The effects of competitive dopamine receptor antagonists on inter-injection intervals before and after EEDQ injections were statistically assessed using a paired t-test comparing mean ratios before EEDQ and ratios in the session immediately following EEDQ. Graphic and statistical analyses were conducted using SigmaPlot (Systat Software Inc., San Jose, CA). Multiple comparison correction was performed according to the False Discovery Rate (FDR) method 25. The significance level was set at p = 0.05.

Significance statement. Irreversible and competitive antagonist treatments that reduce or increase dopamine receptor number in the brain accelerate or decelerate, respectively, cocaine self-administration in rats. While the acute effect of competitive dopamine receptor antagonists is to accelerate self-administration behavior, withdrawal from chronic dopamine receptor antagonist treatment has the opposite effect. Dopamine receptor concentrations vary in a number of situations, including substance use disorders, and as a result of the natural aging process. Changes in receptor numbers in individual humans could influence cocaine use.

Results

Effect of SCH23390 infusion on inter-injection intervals. Representative cocaine self-administration sessions from the same rat before, during, and after a 2-week infusion of SCH23390 are shown in Fig. 1. In all sessions, after self-administration was reinstated by the programmed injections of cocaine, there was a brief loading period characterized by very short inter-injection intervals (Fig. 1A). Subsequently, the self-administration of 0.3 µmol/kg of cocaine was characterized by short and regular inter-injection intervals. When the unit dose of cocaine was increased tenfold, there was an abrupt increase in inter-injection intervals, and these intervals were also regular. The same pattern was seen in all three sessions. However, after implantation of the Alzet pumps, the inter-injection intervals were significantly shorter compared with the baselines at both unit doses.
After the withdrawal of the constant SCH23390 infusion, the inter-injection intervals at each unit dose were considerably longer than those observed before the beginning of the antagonist infusion. In both sessions, despite the large increase in inter-injection intervals when the unit dose was increased tenfold, there was little change in the calculated cocaine concentrations at the time of each lever press (Fig. 1B). However, after implantation of the Alzet pumps, the minimal maintained cocaine level increased significantly at both unit doses. After the withdrawal of the constant SCH23390 infusion, the calculated cocaine level at the time of each lever press at both unit doses was considerably lower than that observed before and during the antagonist infusion. During the infusion of SCH23390, the mean inter-injection intervals were significantly decreased at both cocaine unit doses (Fig. 2A,B) across the 14 days of infusion. Analyses showed significant effects of treatment on the inter-injection intervals at the 0.3 μmol/kg (one-way ANOVA, F(1,15) = 136.3, p < 0.001) and 3.0 μmol/kg unit doses (F(1,15) = 138.95, p < 0.001). The acceleration of cocaine self-administration was similar to the acute effect of a single dose of SCH23390 8,26. After removal of the Alzet pumps, the mean inter-injection intervals increased to 49.9 s (an increase over baseline of 49.9% at 0.3 μmol/kg) and to 447.4 s (32.6% at 3.0 μmol/kg), then gradually returned to the baseline levels over the next 2 weeks. One-way ANOVA showed a significant effect of withdrawal on the inter-injection intervals both at 0.3 μmol/kg (F(1,17) = 264.8, p < 0.001) and at 3.0 μmol/kg (F(1,17) = 91.86, p < 0.001). The half-life of the recovery was in the range of 8 to 14 days (Fig. 2).

The effects of EEDQ on maintenance of self-administration. Because the effects on intervals at the 0.3 and 3.0 µmol/kg doses did not differ in the previous experiment with chronic SCH23390, only the 3.0 µmol/kg unit dose was used in the following experiments. This shortened the self-administration session, thus minimizing exposure to cocaine while maximizing data collection. After EEDQ exposure, rats were able to reinstate self-administration, but at markedly shortened inter-injection intervals. Intervals gradually returned to the baseline levels within 7-10 days (Fig. 4A). This recovery process was approximated by the equation for mono-exponential growth to the maximum. The average recovery half-life was 2.9 ± 0.3 days.

The effects of a frequent access protocol on self-administration behavior. A group of six animals (three of them the same rats as in the EEDQ group) was injected with a vehicle control but exposed to the same frequent access self-administration schedule as the EEDQ-treated animals. Initially, the frequent access protocol caused a significant decrease in inter-injection intervals; however, this effect was variable between rats (Fig. 4B). One-way ANOVA showed that the frequent access protocol significantly decreased inter-injection intervals (F(1,18) = 16.78, p < 0.001). The effect reached its peak of −28.7% on Day 5 after the beginning of the frequent access protocol. Recovery started immediately after the interval between sessions returned to the standard 24 h and was complete in 5 more days. The average recovery half-life was 12.3 ± 4.6 days.

The magnitude of response to SCH23390 and eticlopride after EEDQ treatment.
Following the injection of the competitive antagonists eticlopride or SCH23390, the inter-injection intervals were shorter (data not shown). This acceleration of cocaine self-administration behavior is consistent with previously published observations 9,11,13. The rate of self-administration plateaued in approximately 25-30 min for both competitive antagonists and then gradually returned towards the pre-injection rate. The mean baseline inter-injection interval for sessions before eticlopride injection was 341.4 ± 20.2 s and the plateau interval was 191.6 ± 8.7 s. The mean baseline inter-injection interval for sessions before SCH23390 injection was 395.9 ± 17.4 s and the plateau interval was 181.1 ± 9.7 s. In rats administered EEDQ, the competitive antagonists also produced an acceleration of self-administration behavior, in addition to that produced by EEDQ, with the same pattern of a plateau and subsequent return to the pre-injection levels. The time-course of the competitive antagonist effects was similar to that observed in the animals not treated with EEDQ. The mean pre-eticlopride inter-injection interval was 230.0 ± 21.2 s and the plateau interval was 119.0 ± 8.2 s. The mean pre-SCH23390 inter-injection interval was 291.4 ± 23.5 s and the plateau interval was 127.6 ± 9.8 s. The ratios of the plateau rates to the pre-injection rates are presented in Table 1. There were no significant differences in these ratios for either competitive antagonist between rats administered EEDQ and rats not administered EEDQ.

The ratio of the peak satiety threshold to the baseline satiety threshold is an indication of the potency of dopamine antagonists. A representative session is shown in Fig. 5. The ratio between baseline and the peak effect was measured before and after EEDQ injection. This was used as an indicator of antagonist potency when the total number of dopamine receptors had been significantly reduced. EEDQ significantly increased the satiety threshold. Despite a large increase in the baseline satiety threshold, from 4.6 ± 0.2 μmol/kg to 8.1 ± 0.9 μmol/kg on the first successful priming session after EEDQ injection, there was no significant difference in the ratio of baseline to peak effect after SCH23390 or eticlopride injection (Table 1).

Discussion

Acute reversible antagonist treatment. The acceleration of cocaine self-administration after pre-session systemic injections of selective D1 and D2 dopamine receptor antagonists is well established 8,27. The acceleration, plateau, and subsequent slowing of cocaine self-administration after a single i.v. injection of reversible dopamine receptor antagonists during the maintenance phase of a session were observed in this study 9. The cocaine level at the time of lever press during cocaine-maintained self-administration represents the satiety threshold and is assumed to be an equiactive agonist concentration 13. Competitive antagonists increase the equiactive agonist concentration, and the ratio of this concentration before and after antagonist treatment is a measure of antagonist potency 28. Therefore, the ratio of the cocaine satiety threshold before and after eticlopride or SCH23390 is a measure of their potencies. A dose of antagonist that produces a twofold increase in the equiactive agonist concentration represents the K dose, which is approximately 20 nmol/kg for both eticlopride and SCH23390 9.

Chronic reversible antagonist treatment and withdrawal.
During the continuous infusion of SCH23390, cocaine self-administration was accelerated, with a decrease in intervals of a magnitude similar to that observed after a single injection of SCH23390. This indicates that SCH23390 was actively antagonizing dopamine receptors throughout the infusion. Withdrawal from chronic treatment with SCH23390 reveals an upregulation of D1-like receptors 20 and produces an increased behavioral response to dopamine receptor agonists 29. In the present study, the effects of a supersensitive dopamine system are characterized by a marked decrease in the rate of cocaine self-administration on Days 2-7 after discontinuing the chronic antagonist infusion (Fig. 2), which is consistent with the report of the effects of chronic SCH23390 on cocaine self-administration in monkeys 14. This deceleration of self-administration behavior resulting from a supersensitive system is opposite to the acceleration induced by a single dose of SCH23390. We have previously proposed that the SCH23390-induced acceleration of cocaine self-administration behavior is the result of a PK/PD interaction in which an increase in the satiety threshold results in an increase in the rate of elimination of cocaine 13. Similarly, a supersensitive dopamine system would result in a decrease in the cocaine satiety threshold. Consequently, the rate of cocaine elimination would be slower at the lower concentrations, as dictated by first-order kinetics, and it would take longer for cocaine concentrations after injection to fall to the lowered satiety threshold. Figure 6 illustrates this model. The magnitude of the deceleration of cocaine self-administration behavior was substantial, and the effect was measurable for more than a week before returning to baseline. Therefore, the observed effect on cocaine self-administration behavior during and after chronic antagonist treatment is opposite once the antagonist concentration declines and uncovers a supersensitive dopamine system. The supersensitivity of agonist-induced responses after chronic antagonist treatments is typically assumed to be due to the observed increase in the number of receptors 5. Indeed, the treatment of rats with an adenovirus carrying the D2 receptor gene, to upregulate D2 dopamine receptors in the nucleus accumbens, resulted in a significant decrease (75%) in cocaine consumption, and the duration of this effect corresponded to the time needed for the number of D2 receptors to return to baseline 30. It is likely that the increase in receptor number results in an increased receptor reserve, which enhances the sensitivity of the system to receptor agonists by reducing the concentration of agonist required to induce a defined magnitude of response 15.

Acute irreversible antagonist treatment. The injection of a single dose of EEDQ resulted in an immediate acceleration of cocaine self-administration. This effect remained long after the EEDQ would have cleared from the rat (Fig. 4A), for which the longest estimates are about 24 h, consistent with irreversible antagonism of the receptors underlying this behavior 31. If so, the observed acceleration of self-administration behavior was due to the decreased total number of receptors. The time course of recovery of the rate of self-administration behavior is consistent with the rate of recovery of both D1- and D2-like dopamine receptor populations in rat striatum after a single treatment with EEDQ 16,18.
This implies that the satiety response requires only a relatively low number of dopamine receptors, indicating that there is a substantial receptor reserve in the systems underlying cocaine-maintained self-administration behavior. If the effect of EEDQ is similar to that observed in previous studies 16,18, then the satiety response may be observed when only approximately 20% of D1- and D2-like dopamine receptors are present. It is possible that receptor transduction efficiency is also changed after EEDQ treatment, which might account for the rapid reappearance of cocaine self-administration behavior, typically within a day after EEDQ. The observed acceleration of cocaine self-administration after EEDQ treatment is consistent with the report that in mutant mice lacking D2 receptors the rate of cocaine self-administration was accelerated 32. In the case of the EEDQ-induced acceleration of cocaine self-administration, the reduction in the number of receptors reduced the receptor reserve, so a higher concentration of agonist is required to occupy the fixed number of receptors needed to produce a particular magnitude of response. It is a fundamental principle of pharmacology that a set number of receptors occupied by an agonist will induce a particular magnitude of response. Therefore, the number of receptors occupied by an agonist at the satiety threshold should also be constant. This number may be constant under any situation, whether there is a normal, a depleted, or an increased receptor population. If so, the mechanistic definition of the satiety threshold would be the minimum number of receptors required to induce the satiety response. In contrast to this mechanistic definition, the operational definition of the satiety threshold was based on the minimum dose of cocaine required to produce a cocaine concentration that induced the satiety response 6.

Figure 6. A pharmacokinetic/pharmacodynamic (PK/PD) interaction model. To summarize these results, a model was generated of the effects of irreversible antagonism and supersensitivity on self-administration behavior of the same cocaine unit dose, assuming that the first-order elimination rate constant of cocaine was unaltered by the treatments. Compared to baseline (green line), supersensitivity of receptors (blue line) results in a decreased satiety threshold. At the lower concentrations, the rate of elimination of cocaine is lower, as dictated by first-order kinetics, and it takes longer for the concentration to decline back to the satiety threshold, resulting in a longer inter-injection interval. Receptor antagonism by both reversible and irreversible antagonists (magenta line) results in an increased satiety threshold. At the higher concentrations, the rate of elimination of cocaine is faster, and it takes a shorter time for the concentration to decline back to the elevated satiety threshold, resulting in a shorter inter-injection interval. The horizontal lines represent the satiety threshold under each condition. The arrows represent the inter-injection interval duration for each condition.

More frequent access to cocaine. The increase in the rate of cocaine self-administration observed in the control (vehicle-treated) rats in the days following the vehicle injection is likely due to the increased exposure to self-administered cocaine.
During this time, the sessions were run every 8 h rather than daily, and the effect is similar to the reported escalation of cocaine intake observed over ten days under a frequent-access daily regimen of self-administration 33. For the first 3 days after injection of either vehicle or EEDQ, the total duration of the three self-administration sessions was about 6 h. It has been demonstrated that a single long-access session of 6 h also results in a significant decrease of inter-injection intervals over a wide range of cocaine unit doses 34. It is possible that this phenomenon can be explained by the development of tolerance to cocaine with more frequent access: more cocaine would be required to induce the same magnitude of response, which would be consistent with a down-regulation of the number of dopamine receptors.

Change in fractional occupancy. If the number of occupied receptors required to induce the satiety response is constant, a change in the total number of receptors results in a change only in the receptor reserve and, therefore, in the fraction of the total receptor population required to induce the satiety response. This fraction is increased by EEDQ, because the total number of receptors is decreased, and decreased by chronic SCH23390, because the total number of receptors is increased. Fractional occupancy by a ligand depends on the ligand's affinity and concentration. Assuming unchanged affinity, an increased fractional occupancy will require a higher concentration of the ligand. Consequently, if it is assumed that the number of occupied receptors required to induce the satiety response is constant, then the fractional occupancy of the remaining receptor population after EEDQ must be increased. At the elevated satiety threshold concentration, the cocaine elimination rate is increased, as dictated by first-order elimination kinetics. As a result, the cocaine concentration produced by a unit dose of cocaine decreases more rapidly to the elevated satiety threshold concentration, thereby shortening the inter-injection interval, similar to the effect of a single injection of a competitive antagonist. This explanation is illustrated in Fig. 6. The response to irreversible antagonism of dopamine receptors was similar to that produced after acute treatment with the competitive D1-like receptor antagonist SCH23390 (Fig. 2 and Table 1) and the competitive D2-like receptor antagonist eticlopride (Fig. 5). Despite the similar acceleration of self-administration behavior, these two classes of antagonists have distinct mechanisms of action, with only EEDQ changing the number of available receptors. The SCH23390- and eticlopride-induced acceleration of cocaine self-administration was previously explained by an increase in the cocaine concentration required to induce the same magnitude of a quantal response, corresponding to the satiety threshold 9. Importantly, it is the change in cocaine concentration that results in the change in intervals in the presence of competitive antagonists, of EEDQ, or in a supersensitive system. It has previously been shown that the potencies of competitive dopamine receptor antagonists can be determined using Schild analysis of the increase in satiety threshold as a function of antagonist dose 9. Since the cocaine concentration ratio for the same dose of SCH23390 or eticlopride was not altered after treatment with EEDQ, it is concluded that the pharmacology of the receptor populations underlying cocaine self-administration was unaltered.
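The fractional-occupancy argument can be illustrated with simple mass-action algebra: if a fixed number of occupied receptors produces satiety, the required agonist concentration follows from occupied = R_total·C/(C + K_d). The sketch below is ours, and all numbers in it are hypothetical; it only illustrates the direction of the effects.

    def threshold_concentration(n_required, r_total, kd):
        """Concentration needed to occupy n_required of r_total receptors:
        n = r_total * C / (C + Kd)  =>  C = Kd * n / (r_total - n)."""
        assert n_required < r_total
        return kd * n_required / (r_total - n_required)

    KD = 1.0           # arbitrary concentration units
    N_SATIETY = 20.0   # occupied receptors needed for satiety (hypothetical)

    # normal, EEDQ-depleted, and up-regulated receptor populations:
    for r_total in (100.0, 40.0, 200.0):
        print(r_total, round(threshold_concentration(N_SATIETY, r_total, KD), 3))
    # prints 0.25, 1.0 and 0.111: depletion raises the threshold
    # concentration, up-regulation lowers it.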
Since EEDQ is not selective among the subtypes of dopamine receptors, or even among several monoamine receptors, the continued involvement of dopamine receptors following this non-selective receptor knock-down was confirmed by the unchanged satiety threshold ratios, and therefore relative potencies, of the selective D1-like (SCH23390) and D2-like (eticlopride) competitive antagonists (Fig. 5, Table 1).

Summary and limitations

There are a few key limitations of this research. First, only male rats were included, which limits the generality of the findings. Additionally, all experiments were done using an FR1 schedule, and animals were primed using non-contingent doses of cocaine, which may limit the translation of this work to humans. Lastly, cocaine levels in the animals were all calculated, not measured. In summary, treatments that have been shown to produce an increase or a decrease in receptor number result in opposite effects on the rate of cocaine self-administration behavior. At the changed cocaine concentration required to occupy the same number of receptors, the rate of elimination of cocaine changes according to first-order kinetics, and the change in inter-injection interval is a direct consequence of this pharmacokinetic/pharmacodynamic interaction. Receptor number (or efficiency/sensitivity) and receptor occupancy thus play a key role in regulating the rate of cocaine self-administration behavior. These findings could have clinical relevance: dopamine receptor concentrations vary in a number of situations, including substance use disorders and the natural aging process 35, and changes in receptor numbers in individual humans could influence cocaine use after protracted antagonist treatment.
7,655.2
2022-01-04T00:00:00.000
[ "Biology", "Psychology" ]
Exact finite volume expectation values of conserved currents

The vacuum expectation values of conserved currents play an essential role in the generalized hydrodynamics of integrable quantum field theories. We use analytic continuation to extend these results to excited state expectation values in a finite volume. Our formulas are valid for diagonally scattering theories and incorporate all finite size corrections.

Introduction

Recently there have been interesting developments in calculating expectation values in integrable finite temperature/volume systems. The motivation came from statistical physics [1] as well as from the AdS/CFT duality [2]. In the AdS/CFT duality heavy-heavy-light three-point functions can be mapped to expectation values of local operators in finite volume multiparticle states [2,3,4,5]. In statistical physics the recent development of generalized hydrodynamics requires the knowledge of the finite temperature expectation values of conserved charges and currents, as they are the key inputs in formulating the Euler-type hydrodynamic evolution [1,6,7]. There were interesting direct calculations [8,9], which expressed the current expectation values in spin chain Bethe states in terms of the charge eigenvalues and the inverse of the Gaudin matrix. These remarkably compact and simple expressions are also valid in quantum field theories for finite volume expectation values in multiparticle states once the exponentially small vacuum polarization effects are neglected. The aim of our paper is to provide a simple derivation of this result and to extend it to incorporate all the finite size corrections. As a result we describe exactly the finite volume excited state expectation values of conserved currents. In doing so we analytically continue the structural equations of generalized hydrodynamics [1] and interpret the result in the finite volume setting. The paper is organized as follows: in the next section we recall the results of generalized hydrodynamics, which can be interpreted as finite volume vacuum expectation values. We formulate the results in terms of a pairing between functions, which includes the occupation number of the quasiparticles as the integration measure. In section 3 we use analytic continuation to modify the pairing to include also the discrete contributions of physical particles. All formulas remain the same, only the pairing has to be exchanged. Finally we perform various tests of our results and conclude.

Vacuum expectation values of conserved currents

In the generalized hydrodynamics of integrable models [1] conservation laws play a crucial role. Local thermal equilibrium can be characterized by temperature-like quantities β_i coupled to the infinite family of conserved charges, Q_i = ∫ q_i dx, leading to local averages

⟨q_i⟩ = Tr(q_i e^{−Σ_j β_j Q_j}) / Tr(e^{−Σ_j β_j Q_j}).

Here Q_1 is the energy and β_1 is the inverse of the temperature (or the volume, β_1 = L, in the finite volume situation). The collection of the β_i "temperatures" can be traded for the expectation values of the conserved charges, ⟨q_i⟩ ∝ ∂_{β_i} log Z, implying that the expectation values of the currents ⟨j_i⟩ depend on the ⟨q_i⟩, which is the equation of state ⟨j_i⟩ = F_i(⟨q⟩). Assuming local thermodynamic equilibrium and that these quantities vary slowly in space and time, they satisfy the continuity equation

∂_t⟨q_i⟩ + ∂_x⟨j_i⟩ = ∂_t⟨q_i⟩ + Σ_j J_ij ∂_x⟨q_j⟩ = 0,

an Euler-type hydrodynamic equation. Normal fluid modes diagonalize J_ij = ∂_{⟨q_j⟩}F_i and propagate as ∂_t n_i + v_i^eff ∂_x n_i = 0.
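As a toy illustration of this normal-mode evolution (ours, not part of the paper's derivation), the advection equation ∂_t n_i + v_i^eff ∂_x n_i = 0 can be integrated with a first-order upwind step; all names below are our own.

    import numpy as np

    def advect(n, v_eff, dx, dt):
        """One upwind Euler step of dt n + v_eff dx n = 0 on a periodic grid.
        Stable for |v_eff| * dt / dx <= 1 (CFL condition)."""
        backward = n - np.roll(n, 1)      # used where v_eff > 0
        forward = np.roll(n, -1) - n      # used where v_eff < 0
        dn = np.where(v_eff > 0, backward, forward)
        return n - v_eff * dt / dx * dn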
We focus on a relativistic integrable theory of a single particle which scatters on itself with the S-matrix $S(\theta_1 - \theta_2)$, where $\theta$ is the rapidity, which parametrizes the energy and momentum as $E(\theta) = m\cosh\theta$, $p(\theta) = m\sinh\theta$. In thermal equilibrium the expectation values of charges can be calculated from the density of quasiparticles $\rho(\theta)$ and the charge eigenvalue on a one-particle state $h_i(\theta)$ as
$$\langle q_i \rangle = \int d\theta\, \rho(\theta)\, h_i(\theta). \tag{3}$$
Here and from now on all integrals go from $-\infty$ to $\infty$. Thus the state in thermal equilibrium can be represented either by the $\beta_i$, by the $\langle q_i \rangle$, or alternatively by $\rho$. The normal modes, however, are neither of these; instead they are related to the occupation number
$$n(\theta) = \frac{1}{1 + e^{\epsilon(\theta)}},$$
where the pseudoenergy $\epsilon$ can be calculated from the Thermodynamic Bethe ansatz (TBA) equation [10]
$$\epsilon(\theta) = \sum_i \beta_i h_i(\theta) - \int \frac{d\theta'}{2\pi}\, \varphi(\theta - \theta') \log\left(1 + e^{-\epsilon(\theta')}\right),$$
where $\varphi(\theta) = -i\,\partial_\theta \log S(\theta)$. For later generalizations we introduce a pairing including the occupation number as the integration measure,
$$f \bullet g = \int \frac{d\theta}{2\pi}\, f(\theta)\, n(\theta)\, g(\theta).$$
The TBA equation after integration by parts takes the form
$$\partial_\theta \epsilon(\theta) = \sum_i \beta_i h_i'(\theta) + \varphi(\theta - u) \bullet \partial_u \epsilon(u).$$
It is also useful to introduce dressed quantities, which satisfy
$$f^{dr}(\theta) = f(\theta) + \varphi(\theta - u) \bullet f^{dr}(u),$$
since then the particle density can be written in terms of the occupation number as $\rho(\theta) = \frac{1}{2\pi}\, n(\theta)\, (p')^{dr}(\theta)$, where $p'(\theta) = dp(\theta)/d\theta$. This leads to the charge expectation value
$$\langle q_i \rangle = h_i \bullet (p')^{dr} = h_i^{dr} \bullet p'. \tag{10}$$
In the second equality we used the fact that the dressing operator $(1 - \varphi\,\bullet)^{-1}$ is symmetric with respect to the pairing. From relativistic invariance it follows [1] that the current expectation values take the form
$$\langle j_i \rangle = h_i \bullet (E')^{dr} = h_i^{dr} \bullet E'. \tag{11}$$
Comparing $\langle j_i \rangle$ to $\langle q_i \rangle$ we can extract the effective velocity of the quasiparticles, $v^{\mathrm{eff}}(\theta) = (E')^{dr}/(p')^{dr}$. These results can also be obtained from the Leclair-Mussardo (LM) formula [11] by taking into account the known connected form factors of the conserved charges and currents: indeed, expanding the dressing operator $(1 - \varphi\,\bullet)^{-1}$ in (10,11) leads to the LM formula. Using the fact that $\partial_{\beta_i}\epsilon(\theta) = h_i^{dr}(\theta)$, we can express the charge and current expectation values as
$$\langle q_i \rangle = p' \bullet \partial_{\beta_i}\epsilon, \qquad \langle j_i \rangle = E' \bullet \partial_{\beta_i}\epsilon. \tag{16,17}$$
These expectation values are valid in a local thermal equilibrium specified by the "temperatures" $\beta_i$. To make contact with the finite volume description in the crossed channel we need to choose $\beta_1 = L$ to be the volume and put all other $\beta_i$ to zero. Thus the TBA equation is understood as the generating function of the expectation values of conserved quantities, where, after differentiation in (16,17), we have to take $\beta_i = \delta_{1i} L$. In this simplified situation $\partial_\theta \epsilon(\theta) = L\,(E')^{dr}(\theta)$ and we can simplify the current expectation values as $\langle j_i \rangle = \frac{1}{L}\, h_i \bullet \partial_\theta \epsilon$, but the same is not true for the charges. From relativistic invariance we can reformulate the finite temperature partition function and averages in the mirror channel. In the Euclidean version it is obtained by a $\pi/2$ rotation. This is an imaginary Lorentz transformation with rapidity $i\pi/2$: $\theta \to \theta^\gamma = \theta + i\pi/2$, for which the coordinates transform as $(x, t) \to (it, ix)$, while currents and charges as $(j, q) \to (iq, ij)$, in particular $(p, E) \to (iE, ip)$. This transformation squares to the crossing transformation, which acts as $(j, q) \to -(j, q)$ and changes particles to antiparticles (in general). In the finite volume channel, indicated by a subscript $L$, the LM formula takes an analogous form, where by $j^\gamma_k$ we mean that we use $h^\gamma_k(\theta) = h_k(\theta^\gamma)$ for the corresponding charge eigenvalue. In particular, the finite volume vacuum expectation value of the conserved charges follows from the mirror current expectation value. Evaluating this expression for the energy, $h_1(\theta) = m\cosh\theta$, gives
$$E_0(L) = -m \int \frac{d\theta}{2\pi}\, \cosh\theta\, \log\left(1 + e^{-\epsilon(\theta)}\right),$$
which agrees with the ground-state energy $E_0(L)$ coming from the saddle point value of the partition function [10].
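To make the ground-state machinery concrete, the following is a minimal numerical sketch (ours, not the authors' code) that iterates the TBA equation for a single particle and evaluates $E_0(L)$. The kernel is a sinh-Gordon-like choice and the mass, volume and coupling are illustrative assumptions:

```python
import numpy as np

# Minimal iterative solver for the ground-state TBA
#   eps(th) = L*m*cosh(th) - (1/2pi) int phi(th-th') log(1+exp(-eps(th'))) dth'
# followed by E0(L) = -(m/2pi) int cosh(th) log(1+exp(-eps(th))) dth.
# Mass m, volume L and coupling p are illustrative assumptions.

m, L, p = 1.0, 2.0, 0.3
th = np.linspace(-12.0, 12.0, 1601)
dth = th[1] - th[0]

def phi(x):
    # sinh-Gordon-type TBA kernel phi = -i d/dth log S (illustrative parameters)
    a = np.sin(np.pi * p)
    return 2.0 * a * np.cosh(x) / (np.sinh(x) ** 2 + a ** 2)

K = phi(th[:, None] - th[None, :]) * dth / (2.0 * np.pi)   # convolution matrix

eps = L * m * np.cosh(th)                 # seed with the driving term
for _ in range(200):                      # simple fixed-point iteration
    eps_new = L * m * np.cosh(th) - K @ np.log1p(np.exp(-eps))
    if np.max(np.abs(eps_new - eps)) < 1e-12:
        eps = eps_new
        break
    eps = eps_new

E0 = -(m / (2.0 * np.pi)) * np.sum(np.cosh(th) * np.log1p(np.exp(-eps))) * dth
print(f"ground-state energy E0({L}) = {E0:.8f}")
```

The fixed-point iteration converges quickly here because the driving term dominates; for small $mL$ one would typically under-relax the update.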
Similarly, the finite volume vacuum expectation value of the currents can be expressed through the mirror charges. In the following we generalize these results to finite volume excited states. Excited state expectation values of conserved currents It was observed in [12] that excited state TBA equations can be obtained from the ground-state one by analytical continuation. The idea is that by doing an analytical continuation in the volume/temperature to complex values, a pole singularity of $n(\theta)$ might cross the real integration contour, whose residue should be picked up and added as a source term even when the volume is continued back to its physical value. The resulting TBA equation describes excited multiparticle states in the finite volume channel. In the thermal channel the situation might be interpreted as the presence of some defect lines which correspond to physical particles propagating in the crossed channel. These defects then modify the thermal equilibrium and change the quasiparticle density [13]. As a result we need to use the new densities and occupation numbers to calculate averages in this situation, which we denote by the same symbols as before. In analyzing the finite volume excited state expectation values in the sinh-Gordon model [14] it turned out that all effects coming from the analytical continuation can be encoded into the pairing. Thus we expect that all formulae remain the same as the ground-state ones except that the pairing has to be replaced with a new pairing, which also picks up the discrete pole contributions:
$$f \star g = \int \frac{d\theta}{2\pi}\, f(\theta)\, n(\theta)\, g(\theta) + \sum_j \eta_j\, \frac{f(\theta_j)\, g(\theta_j)}{i\,\partial_\theta \epsilon(\theta_j)}.$$
Formally we can represent the effect of the continuation with a modified contour as shown on Figure 1. The residues $\eta_j$ are $1$ for poles on the upper and $-1$ for those on the lower half plane. The rapidities $\theta_j$ are determined by $\epsilon(\theta_j) = i\pi(2m_j + 1)$ and we have to take $\partial_\theta \epsilon(\theta)\big|_{\theta_j} = \sum_i \beta_i h_i'(\theta_j) + \varphi(\theta_j - u) \star \partial_u \epsilon(u)$. In the modified convolution the occupation number is $n = 1/(1 + e^{\epsilon})$, where $\epsilon$ now satisfies the excited state TBA equation, which again can be obtained via the new convolution:
$$\partial_\theta \epsilon(\theta) = \sum_i \beta_i h_i'(\theta) + \varphi(\theta - u) \star \partial_u \epsilon(u).$$
The excited state expectation values of the conserved charges and currents are simply
$$\langle q_i \rangle = h_i \star (\mathbf{p}')^{\mathbf{dr}}, \qquad \langle j_i \rangle = h_i \star (\mathbf{E}')^{\mathbf{dr}},$$
Figure 1: Schematic integration contour for excited states. In the sinh-Gordon model the singularities have imaginary part $i\pi/2$. In the scaling Lee-Yang model they are symmetric with respect to the real line.
where the dressed quantities, indicated by boldface, are obtained by means of the new convolution, $\mathbf{f}^{\mathbf{dr}}(\theta) = f(\theta) + \varphi(\theta - u) \star \mathbf{f}^{\mathbf{dr}}(u)$. Even simpler expressions can be obtained for the expectation values in terms of $\partial_{\beta_i}\epsilon$ and $\partial_\theta\epsilon$, as in the vacuum case. These are the main results of the paper. Since they were not really derived, merely conjectured based on previous experience, we perform various consistency checks and elaborate the details. As a start we separate the contributions of the physical particles and the quasiparticles. In doing so we rewrite the new convolution in terms of the old one. In order to calculate $\partial_{\beta_i}\theta_k$ we take the quantization condition $\epsilon(\theta_k) = i\pi(2n_k + 1)$ and differentiate with respect to $\beta_i$. We can do it in two different ways. In the first we differentiate $\epsilon(\theta)$ by keeping $\theta_k$ independent of $\beta_i$ and then take into account the $\beta_i$ dependence of all the $\theta_j$'s; in the second term we recognize the Gaudin matrix $G_{jk}$. Alternatively, we can separate the $\beta_i$-dependence of $\theta_k$ in the argument. This formula can be used to show that $\partial_{\beta_i}\epsilon(\theta) = \mathbf{h}_i^{\mathbf{dr}}(\theta)$. Putting together these contributions we obtain the separation of the derivative into particle and quasiparticle pieces. In order to have a complete separation into physical particles and quasiparticles we rewrite the new dressing in terms of the old one; in doing so we note that in each term we can use either the discrete or the continuous part of the convolution.
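Before separating the two parts explicitly, note that in practice the $\theta_j$ entering the new pairing are found by solving $\epsilon(\theta_j) = i\pi(2m_j + 1)$ in the complex plane. A toy illustration of the complex Newton iteration one would use, with an arbitrary analytic stand-in for the pseudoenergy (not a genuine TBA solution):

```python
import numpy as np

# Toy illustration: find theta_j with eps(theta_j) = i*pi, where the poles of
# the occupation number n = 1/(1+exp(eps)) sit. The pseudoenergy below is an
# arbitrary analytic stand-in, not a solution of any TBA equation.

def eps(z, mL=1.5):
    return mL * np.cosh(z)

def deps(z, mL=1.5):
    return mL * np.sinh(z)

target = 1j * np.pi          # branch m_j = 0
z = 0.5 + 1.0j               # starting guess in the upper half plane
for _ in range(50):          # complex Newton iteration
    f = eps(z) - target
    if abs(f) < 1e-13:
        break
    z -= f / deps(z)

print("theta_j =", z, "  residue sign eta_j =", 1 if z.imag > 0 else -1)
print("check eps(theta_j)/(i*pi) =", eps(z) / (1j * np.pi))
```

For this stand-in the root sits at imaginary part $\pi/2$, mimicking the sinh-Gordon situation quoted in the caption of Figure 1.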
The continuous part dresses up $g(\theta)$ and $\varphi_j(\theta)$, leading to an expression involving the old dressing only. Here we used that $G_{jk} = D^{dr}(\theta_k)\left(\delta_{jk} - \varphi_j^{dr}(\theta_k)/D^{dr}(\theta_k)\right)$ in recognizing its inverse. Plugging this back into the current expectation value we obtain a form involving only the old dressing and convolutions. We note that one can show in general that $f \bullet h_i^{dr} = f^{dr} \bullet h_i$. Using this, and the analogous statement for $(E')^{dr}(\theta_k)$, we can observe that the leading part of the result, i.e. the term without any integration, is the same as the one obtained in [8,9] in a more complicated way. An analogous calculation results in the charge expectation value. In the following we use these results to calculate the finite volume excited state expectation values. For this reason we again take $\beta_i = L\delta_{i,1}$ and use the relation $\partial_\theta \epsilon(\theta) = L\,(\mathbf{E}')^{\mathbf{dr}}(\theta)$ to obtain the finite volume expressions. In the finite volume interpretation the expectation values correspond to excited state diagonal matrix elements $\langle \{\theta\} | \mathcal{O} | \{\theta\} \rangle_L$, where $\{\theta\} \equiv \{\theta_1, \ldots, \theta_n\}$ represents the excited state. The parameters $\theta_j$ appearing in the formulas above are not the rapidities of the particles, but they are related to them, although in a model-dependent way. In the following we elaborate further on these results. For the expectation value of the conserved charge we can write the corresponding decomposition. In the sinh-Gordon model $\eta_k = 1$ and $\theta_k = \bar\theta_k + i\pi/2$, where $\bar\theta_k$ is the rapidity of the particle; thus our formula reproduces the charge eigenvalue correctly, which asymptotically takes the form $Q_i = \sum_k h_i(\bar\theta_k)$. For the current expectation value we have no such simplification: in the last terms $h_k(\theta + i\pi/2)^{dr}$ means that $h_k(\theta + i\pi/2)$ is dressed. This formula is the main result of our paper, which describes the exact finite volume expectation value of conserved currents. It is equivalent to (45) but written in a form where the polynomial and exponential finite size corrections are separated. Indeed, since the convolution kernel $n$ is exponentially small, we can forget the dressing operator in each term to obtain the asymptotic results in the sinh-Gordon case. Recall that the Gaudin matrix is also the dressed version of its asymptotic form $\bar G_{jk} = \delta_{jk}\, D(\theta_k) - \varphi_j(\theta_k)$. This formula agrees with the recent direct calculations in [8,9]. We also checked these formulas in the sinh-Gordon theory against the generalization of the LM formula for excited states [15]. In doing so we had to take into account that [15] is valid in the thermal channel for operators with spins. In the finite volume channel the quasiparticle arguments of the connected form factors should be shifted, similarly to how (19) is shifted compared to (12), while the discrete rapidities take their physical values. Let us finally point out that in deriving our result we used the analytical continuation of the charge eigenvalue (3) in the thermal channel and not the current eigenvalue. Conclusions Using the analytical continuation method for the vacuum expectation values of conserved charges and currents we managed to derive exact excited state expectation values. We performed this calculation both in the thermal and finite volume settings, where the roles of the currents and charges are exchanged. In the finite volume situation the charges act diagonally and have simple eigenvalues, while the currents act nondiagonally and have more complicated expectation values. In the asymptotic limit, when vacuum polarization effects are neglected, the current expectation values can be expressed in terms of the charge eigenvalues and the inverse of the Gaudin matrix, in agreement with previous calculations [8,9].
Our results provide all the finite size corrections to the asymptotic formulas valid in a diagonally scattering integrable theory with a single species. Generalizations to several particle species with diagonal scattering are straightforward, as is the extension to flows generated by other conserved charges. It would be very interesting to derive similar formulas for non-diagonally scattering theories. The simplest such result was obtained for the topological current in the sine-Gordon theory in [16].
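As a final numerical illustration, the asymptotic input data of the formulas above, the Bethe-Yang rapidities and the Gaudin matrix, are easy to generate. The sketch below assumes momentum quantization $L\,p(\theta_j) + \sum_{l \neq j}\delta(\theta_j - \theta_l) = 2\pi I_j$ with a sinh-Gordon-like phase; kernel, mass, volume and quantum numbers are our illustrative choices, not values from the paper:

```python
import numpy as np

# Two-particle Bethe-Yang equations solved by Newton's method, plus the
# asymptotic Gaudin matrix G_jk = dQ_j/d th_k. All parameters illustrative.

a = np.sin(np.pi * 0.3)                 # sinh-Gordon-like kernel parameter
m, L = 1.0, 10.0
Iqn = np.array([0.5, -0.5])             # illustrative Bethe quantum numbers

p     = lambda th: m * np.sinh(th)
pp    = lambda th: m * np.cosh(th)                       # p'(th)
delta = lambda x: -2.0 * np.arctan2(a, np.sinh(x))       # scattering phase
phi   = lambda x: 2.0 * a * np.cosh(x) / (np.sinh(x) ** 2 + a ** 2)

def Q(th):
    d = th[:, None] - th[None, :]
    ph = delta(d)
    np.fill_diagonal(ph, 0.0)           # no self-scattering term
    return L * p(th) + ph.sum(axis=1) - 2.0 * np.pi * Iqn

def gaudin(th):
    off = phi(th[:, None] - th[None, :])
    np.fill_diagonal(off, 0.0)
    G = -off
    np.fill_diagonal(G, L * pp(th) + off.sum(axis=1))
    return G

th = np.arcsinh(2.0 * np.pi * Iqn / (m * L))   # free-particle seed
for _ in range(30):
    F = Q(th)
    if np.max(np.abs(F)) < 1e-12:
        break
    th = th - np.linalg.solve(gaudin(th), F)   # Gaudin matrix = Jacobian

h1 = m * np.cosh(th)                           # energy eigenvalue density h_1
print("rapidities:", th)
print("asymptotic charge density <q_1> ~ sum_k h_1(th_k)/L =", h1.sum() / L)
print("Gaudin matrix:\n", gaudin(th))
```

Together with the charge eigenvalues, these are exactly the objects in which the asymptotic current formula of [8,9] is expressed.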
3D-printed graphene/polymer structures for electron-tunneling based devices Designing 3D printed micro-architectures using electronic materials with well-understood electronic transport within such structures will potentially lead to accessible device fabrication for 'on-demand' applications. Here we show controlled nozzle-extrusion based 3D printing of a commercially available nano-composite of graphene/polylactic acid, enabling the fabrication of a tensile gauge functioning via the readjustment of the electron-tunneling barrier width between conductive graphene centers. The electronic transport in the graphene/polymer 3D printed structure exhibited the Fowler-Nordheim mechanism with a tunneling width of 0.79-0.95 nm and graphene centers having a carrier concentration of 2.66 × 10^12/cm^2. Furthermore, a mechanical strain that increases the electron-tunneling width between the graphene nanostructures (~38 nm in size) by only 0.19 Å reduces the electron flux by 1 e/s/nm^2 (from 19.51 to 18.51 e/s/nm^2) through the polylactic acid junctions in the 3D-printed heterostructure. This corresponds to a sensitivity of 2.59 Ω/Ω%, which compares well with other tensile gauges. We envision that the proposed electron-tunneling model for conductive 3D-printed structures with thermal expansion and external strain will lead to an evolution in the design of the next generation of 'on-demand' printed electronic and electromechanical devices. Results Figure 1 shows the depth profiling of device 1, where the G-peak position spatial mapping and the spectra were obtained at different heights, and we observed: (a) the intensity of the Raman signal at the photodetector decreases as we scan deeper into the composite (as the focal plane is moved into the composite from the surface), (b) the spectra at all depths exhibit the characteristic graphene bands, including the 2D band (2,692 ± 1 cm^-1), and (c) the arrangement of the graphene platelets is random (see supplementary information); however, its interface with PLA is consistent. The decrease in Raman signal intensity with depth (Fig. 1) is attributed to the absorption of the scattered light by the material that the light passes through before reaching the objective lens. It is known that different amounts of polymer dope graphene differently and change its Raman peak positions 22,23. The Raman G-peak position (scale: 1,570 to 1,585 cm^-1) is maintained at different depths in the printed graphene/PLA structure (even at 80 μm deep) (Fig. 1); this implies that the relative concentration of graphene with respect to the interfaced PLA remains nominally unchanged. Further, since the composite is conductive (further explained in the next section), the graphene network is percolating, implying that the microscale composition of graphene is also uniform. The Raman mapping also shows that the arrangement of graphene sheets at every depth-section is random, with similar coverage of graphene (Fig. 1 and supplementary figure). Importantly, the printing process does not modify the dispersion of graphene within the PLA matrix. When the measurements were made at different depths, we confirmed that graphene exists at every depth. However, since graphene is a large nanomaterial (several microns in lateral size), its presence in a composite cannot be uniform at the micron scale. We found that its composition is uniform at the 50 × 50 μm^2 area scale.
Using the Tuinstra and Koenig 24 relationship, the graphitic size of the samples was calculated from the $I_D/I_G$ intensity ratio:
$$L_a = \frac{C'(\lambda)}{I(D)/I(G)}, \tag{1}$$
where $L_a$ is the in-plane correlation length or cluster diameter, $C'(\lambda)$ is the wavelength-dependent scaling coefficient, and $I(D)$ and $I(G)$ are the intensities of the D and G peaks, respectively. $C'(\lambda) \sim 19.22$ nm was calculated according to Cançado et al. 25, and $I(D)$ and $I(G)$ were obtained from the Raman spectra (by integrating the area under each peak), indicating that the ordered graphitic regions with sp^2 hybridized carbon atoms in the graphene sheets are of the order of 37.75 ± 2.42 nm ($L_a$). In addition, a relationship 26 between the G-peak position and the carrier concentration was used (Eqs. 2 and 3), where h is Planck's constant, Pos(G) is the position of the G peak (derived from the Lorentzian fitting), Pos(G)_0 is the position of the G peak without doping, Γ is the dimensionless electron-phonon coupling for the LO phonons, $v_F$ is the Fermi velocity (1.1 × 10^6 m/s), and n is the carrier concentration. Equations 2 and 3 were solved in Matlab using the values from the Raman spectra, yielding an average carrier concentration of 2.63 × 10^12/cm^2. This order of magnitude of doping is consistent with that of other graphenic composites. Equations 2 and 3 are derived for graphene with a large number of sp^2-hybridized carbon atoms (an infinite sp^2 carbon lattice); however, the relatively small sp^2 domain size of graphene in this study is ~37.75 nm (or ~54,000 sp^2 carbon atoms per domain). Therefore, the validity of these equations and of the derived charge density is limited. The electron transport in the percolating network of the 3D printed structure of device 1 was studied using a cryo-probe-station under vacuum (0.75 mTorr), acquiring the current-voltage (I-V) data of device 1 at different temperatures. Figure 2 shows the I-V characteristics of the device measured at 75 K, 100 K, 125 K, 150 K, 175 K, and 200 K. To obtain the overall transport thermal barrier of the 3D printed devices for electron transfer between graphene platelets, we applied the Arrhenius law
$$\sigma(T) = \sigma_0 \exp\!\left(-\frac{E_a}{k_B T}\right)$$
to fit the I-V data. This was done to determine the mechanism that most appropriately describes the electron transport (electron tunneling or thermal hopping) (shown in Supplementary Information), where $E_a$ is the thermal barrier height, $k_B$ is the Boltzmann constant, and $T$ is the temperature (details of this calculation are shown in the supplementary information). Using the I-V data (Fig. 2a) obtained at different temperatures under high vacuum (0.75 mTorr) and fitting the impedance with the Arrhenius equation (Fig. 2b), we found a thermal barrier height of 0.15 meV. This is smaller than $k_B T$ at room temperature (25 meV). Since thermal emission occurs for thermal barriers higher than $k_B T$ at room temperature, the mechanism of carrier transport for this device is electron tunneling 37,38. Electron tunneling is a phenomenon that occurs when the electron potential is below the barrier height and can therefore occur at low electrical fields (or low electron potentials), as shown by Nakatsuji et al. 39, Takayanagi et al. 40, Wang et al. 41, and Nakatsuji et al. 42. In this work, the thermal expansion equation applied to the interparticle polymer layers governs the tunneling distance:
$$a = a_0 \left(1 + \alpha T\right), \tag{5}$$
where $a_0$ and $a$ are the average tunneling distances at zero temperature and at temperature $T$, respectively, and $\alpha$ is the thermal expansion coefficient 48.
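Before combining Eq. 5 with the tunneling expression below, note that the two fits described so far are straightforward to reproduce. The following sketch uses the Tuinstra-Koenig relation and an Arrhenius fit of synthetic conductance data; the peak areas and conductances are illustrative stand-ins for the measured values:

```python
import numpy as np

# (1) Tuinstra-Koenig: L_a = C'(lambda) / (I_D / I_G), C'(532 nm) ~ 19.22 nm.
C_prime = 19.22                   # nm, after Cancado et al.
ID, IG = 0.55, 1.08               # illustrative integrated peak areas
La = C_prime / (ID / IG)
print(f"in-plane graphitic size L_a ~ {La:.2f} nm")

# (2) Arrhenius: sigma(T) = sigma0 * exp(-Ea/(kB*T)); fit ln(sigma) vs 1/T.
kB = 8.617e-5                     # Boltzmann constant, eV/K
T = np.array([75., 100., 125., 150., 175., 200.])    # K, as in Fig. 2
Ea_true, s0 = 1.5e-4, 1.0         # illustrative barrier (eV) and prefactor
sigma = s0 * np.exp(-Ea_true / (kB * T))             # synthetic "data"

slope, intercept = np.polyfit(1.0 / T, np.log(sigma), 1)
Ea_fit = -slope * kB
print(f"fitted thermal barrier Ea = {Ea_fit * 1e3:.3f} meV"
      " (cf. ~0.15 meV reported in the text)")
```

The linearity of ln(σ) versus 1/T is exactly what distinguishes thermally activated hopping (large Ea) from tunneling-dominated transport (Ea well below kBT).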
Combining the Fowler-Nordheim electron tunneling (FNET) equation with Eq. 5, we get an expression for the temperature-dependent tunneling current, where $m$ is the mass of an electron, $t$ is the FNET constant, and $\Phi$ is the tunneling barrier height. Since the electron must dissociate from graphene before tunneling into the next graphene platelet, the tunneling barrier is assumed to be graphene's work function: 4.85 eV. Fitting the data, as shown in Fig. 2c, the tunneling distance at absolute zero temperature was found to be 0.78 nm (calculation details are provided in the Supplementary Information). The strain was induced on device 2 (the channel length, width, and height are 8 mm, 0.8 mm, and 0.4 mm, respectively, and the electrode dimensions are 2 × 3 × 6 mm^3) by a motion-controller driver, and the strain analysis of the 3D printed graphene/PLA structure was performed. Here, the device was fixed on a lever with the motion-controller driver pushing on the graphene/PLA to generate strain, and two electrodes were connected to the edges of the device, as shown in Fig. 3. The strain percentage was calculated from the bending geometry, where $\varepsilon$ is the strain, $l_0$ is the length of the device at rest (with no strain applied), and $h$ is the bending distance after strain is applied. Details of this calculation and a figure to illustrate it are shown in the Supplementary Information. The I-V measurements on the strained device were performed for different values of strain (Fig. 4). The electron transport in this device also follows FNET, and it was combined with the strain equation to model the strain-dependent conduction. Discussion In conclusion, we demonstrate 3D printed structures of graphene/PLA, applicable as components of on-demand electronic devices. We show the operation of a tensile gauge functioning via the modification of the electron-tunneling width between graphenic centers. For the graphene/PLA system, the thermal barrier to electron transport was 150 µeV (much smaller than the thermal energy at room temperature), and the electron tunneling distance was 0.78 nm at cryo-temperature and 0.95 nm at room temperature. A mechanical strain that increases the electron-tunneling width between the graphene nanostructures (~38 nm in size) by an average of 0.19 Å reduces the electron flux from 19.51 to 18.51 e/s/nm^2 in the 3D-printed heterostructure. Our work shows that a 3D printable filament with a network of 2D nanomaterials (with a low percolation threshold) within the polymer matrix can be a building block for on-demand electronic devices. Methods In this report, we fabricated two different devices for investigating the effects of temperature (device 1) and mechanical strain (device 2) on the electrical conductivity of the 3D printed graphene/PLA nano-composite structures. The devices were designed in Autodesk Inventor and loaded into the CTC Bizer series Dual Nozzle 3D Printer (0.4 mm nozzle size, 1.75 mm filament size (GRPHN-PLA, Black Magic 3D); printer settings: stage temperature 60 °C and extruder temperature 190 °C) for device fabrication. The schematic of the mechanism used to 3D print the devices is shown in Fig. 5. The printing time for the graphene/PLA structure of device 1 was 9 min (18 min for the PLA support structure); the channel length, width, and height are 8 mm, 1 mm, and 0.4 mm, respectively, and the electrode dimensions are 5 × 5 × 5 mm^3. (The same process was followed for device 2.) The structure and spatial distribution of graphene in the printed composite devices were characterized employing confocal Raman spectroscopy (WITEC Alpha-300-RA system with a 532 nm incident laser and a 100X objective lens).
The Raman spectra of the devices also provided information on the doping levels of the graphene as well as the size of the ordered graphitic regions.
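To see why sub-angstrom changes in the tunneling gap are measurable at all, a rough one-dimensional WKB estimate is instructive. This is a simplified stand-in for the full FNET expression, with the barrier height taken as graphene's work function and the gap change taken from the text; it is not expected to reproduce the fitted flux values:

```python
import numpy as np

# Rough WKB estimate: J ~ exp(-2*kappa*a), kappa = sqrt(2*m*Phi)/hbar.
# Simplified stand-in for the Fowler-Nordheim tunneling expression.

hbar = 1.054571e-34        # J*s
me   = 9.109383e-31        # kg
eV   = 1.602177e-19        # J

Phi = 4.85 * eV                            # barrier = graphene work function
kappa = np.sqrt(2.0 * me * Phi) / hbar     # decay constant, 1/m

a0 = 0.95e-9               # room-temperature tunneling width from the text
da = 0.019e-9              # 0.19 angstrom strain-induced increase

ratio = np.exp(-2.0 * kappa * (a0 + da)) / np.exp(-2.0 * kappa * a0)
print(f"kappa = {kappa * 1e-9:.2f} 1/nm")
print(f"flux ratio J(a0+da)/J(a0) = {ratio:.3f}  (simple model, not the fit)")
```

Even this crude model shows an order-10% current change for a 0.19 Å gap increase, which is the physical origin of the gauge's sensitivity.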
Solitary Wave Formation from a Generalized Rosenau Equation A generalized viscous Rosenau equation containing linear and nonlinear advective terms and mixed third- and fifth-order derivatives is studied numerically by means of an implicit second-order accurate method in time that treats the first-, second-, and fourth-order spatial derivatives as unknowns and discretizes them by means of three-point, fourth-order accurate, compact finite differences. It is shown that the effect of the viscosity is to decrease the amplitude, curve the wave trajectory, and increase the number and width of the waves that emerge from an initial Gaussian condition, whereas the linear convective term pushes the wave front towards the downstream boundary. It is also shown that the effect of the nonlinear convective term is to increase the steepness of the leading wave front and the number of sawtooth waves that are generated behind it, while that of the first dispersive term is to increase the number of waves that break up from the initial condition as the coefficient that characterizes this term is decreased. It is also shown that, for reasons of stability, the second dispersion coefficient must be much smaller than the first one and that its effects on wave propagation are relatively small. Introduction In his 1986 and 1988 papers, Rosenau [1,2] developed a formalism to treat the dynamics of discrete dense systems that can deal with wave-wave and wave-wall interactions that cannot be treated with the Korteweg-de Vries (KdV) equation. Such a formalism leads to a quasi-continuum which is endowed with the leading effects due to discreteness. Rosenau's original equation may be written as
$$u_t + u_{xxxxt} \pm \left(u + u^2\right)_x = 0, \tag{1}$$
where the subscripts $t$ and $x$ denote partial differentiation with respect to the time and the spatial coordinate, respectively. In the absence of the fifth-order derivative term, (1) reduces to a first-order nonlinear wave equation which has analytical solutions and may have shock wave solutions if $u_x(0, x) < 0$. The Rosenau equation has been the subject of several analytical and numerical studies. Barreto et al. [3] proved the existence of solutions of (1) with the plus sign in the advection-like term in moving domains by making use of the Galerkin method, multiplier techniques, and energy estimates. They also proved analogous results for the Benjamin-Bona-Mahony or regularized long-wave (RLW) equation with a linear advective term, that is,
$$u_t + u_x + u u_x - u_{xxt} = 0, \tag{2}$$
which models long waves in a nonlinear dispersive medium. A one-dimensional generalization of the Rosenau equation may be written as (3)-(4), where $f(x,t)$ denotes a forcing function and $g(u)$ the flux. Equations (3)-(4) with $g(u) = \pm(u + u^2)$ and $f(x,t) = 0$ correspond to the original Rosenau equation and include the linear (first-order) wave equation, the inviscid and viscous Burgers equations, the RLW, modified RLW, and generalized RLW equations, the Korteweg-de Vries (KdV) equation, the K(m,n) Rosenau-Hyman equation, the Camassa-Holm equation, the Olver-Rosenau equation, the Kawahara equation, the Cooper-Shepherd-Sodano equation, and combinations thereof; these equations are presented in Table 1.
Park [4] studied the global existence and uniqueness of solutions of the multidimensional generalized Rosenau equation $u_t + \Delta^2 u_t + \nabla \cdot g(u) = 0$, where $\Delta \equiv \nabla^2$ is the Laplacian or harmonic operator, while the same author [5] provided pointwise decay estimates for (3) with a power-law flux $g(u) = u^{p+1}/(p+1)$ supplemented by a dissipative term, that is, for a Rosenau-Burgers equation. Wang and Xu [6] studied the existence of global solutions of a second-order in time Rosenau equation and proved the existence of finite-time blow-up for certain power-type nonlinearities by means of a potential well technique, while H. Wang and S. Wang [7] studied the long-time behavior of small solutions of the Cauchy problem for a (second-order in time) Rosenau equation, obtained its global small solution, and analyzed the decay and scattering of such a solution. A similar study for the (first-order in time) Rosenau-Burgers (R-B) equation was performed by Liu and Mei [8], who proved that the solution of a nonlinear parabolic equation is a better asymptotic profile of the R-B equation. H. Wang and S. Wang [9] proved the global existence of the solution to the Cauchy problem for the n-dimensional Rosenau equation with a damping term when the initial data are small, while Kim and Lee [10] analyzed the convergence of a semidiscretization of a generalized Rosenau equation with a variable-coefficient flux and forcing. Esfahani [11,12] obtained solitary wave solutions to the generalized Rosenau-KdV and Rosenau-RLW equations by means of the sech-ansatz and trigonometric function methods, respectively, while Razborova et al. [13] studied the solitons, shock waves, and conservation properties of Rosenau-KdV-RLW equations with power nonlinearities by means of a semi-inverse variational method. Razborova et al. [14] used a perturbation method to study shallow water waves governed by the same equation. Choo et al. [15] obtained a posteriori error estimates of the Rosenau equation by means of a discontinuous Galerkin method and analyzed the stability of the dual problem. In Table 2, some numerical methods that have been used to study one-dimensional Rosenau equations are summarized. These methods include finite difference and finite element techniques for the space discretization, linear and iterative implicit time discretizations, and time integration with explicit Runge-Kutta procedures of third- and fifth-order accuracy.
Most of the methods presented in Table 2 are second-order accurate in both space and time, while the one presented in this paper is fourth-order accurate in space and second-order accurate in time. Moreover, the accuracy of most of the methods presented in Table 2 has been assessed by what is referred to as the method of manufactured solutions; that is, the coefficients of (3) and (4) are found so that the solution of (3) is of a specified travelling-wave, for example, hyperbolic secant, type and the initial condition corresponds to the exact solution at t = 0. For such an initial condition, u(x, t) is very smooth and does not exhibit steep gradients; as a consequence, a relatively small number of grid points is required to obtain very accurate numerical solutions.

Table 2: One-dimensional Rosenau-type equations, their acronyms, and numerical methods for their solution [36]. Equation acronym (E.A.): R = Rosenau, RLW = regularized long-wave, B = Burgers, and KdV = Korteweg-de Vries. Numerical method: dGM = discontinuous Galerkin method, FDM = finite difference method, FEM = finite element method, QBSPCM = quintic B-splines collocation method, SOR = successive overrelaxation, CN = Crank-Nicolson, RK3 = third-order accurate Runge-Kutta method, IL2TL = implicit linear two-time level, IL3TL = implicit linear three-time level, and EB = Euler's backward time discretization.

However, if the initial profile does not correspond to an exact solution of (3) and $u_x(0, x)$ is negative, the leading part of the initial condition steepens due to the nonlinear advective term and, in the absence of dispersion and diffusion, would result in the formation of a shock wave [16,17]. For the same type of initial conditions but with dissipation and no dispersion, the initial steepening is eventually balanced by viscous dissipation and a travelling wave which may be referred to as Taylor's wave is formed [17-19]; such a wave has a finite thickness. On the other hand, for the same initial conditions as the ones discussed above but with dispersion and no diffusion, the initial steepening of the leading part of the initial profile caused by the nonlinear advective term is eventually somewhat balanced by dispersion and a dispersive shock wave or conservative undular bore may form [17,20,21]. In this paper, a generalized viscous Rosenau equation that includes linear and nonlinear advective terms is studied numerically by means of a second-order accurate, linearized Crank-Nicolson method and three-point, fourth-order accurate, compact operator discretizations for the first-, second-, and fourth-order spatial derivatives. The breakup of the initial condition is studied as a function of the linear and nonlinear convective terms and the viscous and two dispersive terms that appear in the equation. The paper has been arranged as follows. In the next section, the generalized Rosenau equation considered in the study reported here is presented and the linear stability of its linear counterpart is analyzed. This is followed by a section where the numerical method employed to solve the equation and its linear stability are considered. The fourth section presents an exhaustive numerical study of the effects of the linear and nonlinear advective, viscous, and dispersive terms and initial conditions on wave breakup, generation, and propagation. Finally, a short concluding section summarizes the most important findings reported in the paper.
Governing Equation In this paper, the following generalized Rosenau equation is considered:
$$u_t + \alpha u_x + \beta \left(u^4\right)_x - \nu u_{xx} - \gamma u_{xxt} - \delta u_{xxxxt} = 0, \tag{6}$$
where α, β, ν, γ, and δ are constants that are associated with the linear and nonlinear advective, dissipative, and third- and fifth-order dispersive terms, respectively. Equation (6) is a particular case of (3) and (4), includes both linear and nonlinear terms, and will be solved subject to the initial condition $u(x, 0) = U(x)$ (7) and boundary conditions (8). Before proceeding with the discretization of (6), it is convenient to analyze its linear counterpart. Linear Analysis. The linear counterpart of (6) may be written as
$$u_t + \alpha u_x - \nu u_{xx} - \gamma u_{xxt} - \delta u_{xxxxt} = 0, \tag{9}$$
which, upon introducing $u(x, t) = A \exp(i(kx - \omega t))$, where $i^2 = -1$, ω and k denote the (angular) frequency and wavenumber, respectively, and A is the amplitude, becomes
$$\omega \left(1 + \gamma k^2 - \delta k^4\right) = \alpha k - i \nu k^2, \tag{10}$$
which, for real k and complex ω, that is, ω = ω_r + iω_i, where ω_r and ω_i denote the real and imaginary parts, respectively, of ω, becomes, upon separating the real and imaginary parts,
$$\omega_r = \frac{\alpha k}{1 + \gamma k^2 - \delta k^4}, \tag{11}$$
$$\omega_i = \frac{-\nu k^2}{1 + \gamma k^2 - \delta k^4}. \tag{12}$$
The above relations indicate that instabilities arise whenever $\omega_i > 0$ and this condition requires that $(1 + \gamma k^2 - \delta k^4) < 0$ for ν > 0, which is not fulfilled if γ ≥ 0 and δ ≤ 0. Furthermore, (11) and (12) indicate that neither $\omega_r$ nor $\omega_i$ is defined if $(1 + \gamma k^2 - \delta k^4) = 0$, provided that neither α nor ν is zero, respectively. For small wavenumbers, that is, k ≪ 1, (11) and (12) indicate that $\omega_r = \alpha k + O(k^3)$ and $\omega_i = -\nu k^2 + O(k^4)$, thus showing that long waves are stable (the viscosity coefficient ν ≥ 0). On the other hand, for very large wavenumbers, that is, very small wavelengths, these equations show that $\omega_r = -(\alpha/\delta) k^{-3} + O(k^{-5})$ and $\omega_i = (\nu/\delta) k^{-2} + O(k^{-4})$, which indicate that short waves are unstable if δ > 0. From this analysis, it may be concluded that no linear instabilities occur if γ and δ are positive and negative, respectively, and, for these values, $D(k; \gamma, \delta) \equiv 1 + \gamma k^2 - \delta k^4$ is a positive, monotonically increasing function of the wavenumber and achieves its minimum value of one for k = 0. Moreover, these conditions on γ and δ also imply that the phase velocity, that is, $v_{\mathrm{ph}} \equiv \omega_r/k = \alpha/D(k; \gamma, \delta)$, is a monotonically decreasing function of k and is equal to α, the linear wave speed in (6), for k = 0; on the other hand, $\omega_i = 0$ for ν = 0; that is, there is no damping for ν = 0. Finite Difference Discretization Equation (6) may be written as a system of equations in which the first-, second-, and fourth-order spatial derivatives are treated as unknowns. Using a second-order accurate time discretization and linearizing the nonlinear terms with respect to the previous time level, (16) becomes (20), where Δt is the time step and the superscript n denotes the nth time level. Equation (20) together with (17)-(19) represents a linear fourth-order ordinary differential equation for Δu at each time level and is subject to the boundary conditions specified in (8); note that, for example, $\Delta u_x = (\Delta u)_x$. However, (20) may not be solved analytically due to the dependence of its coefficients on x, but it becomes a discrete equation upon approximating these coefficients and $u_x$, $u_{xx}$, and $u_{xxxx}$ by finite differences.
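Before describing the discretization further, the linear stability statements above are easy to check numerically. A minimal sketch, assuming the reconstructed form of (6) and the symbol assignments used here (the coefficient values are illustrative):

```python
import numpy as np

# Linear dispersion of u_t + a*u_x - nu*u_xx - g*u_xxt - d*u_xxxxt = 0 with
# u = A*exp(i*(k*x - w*t)):  w = (a*k - 1j*nu*k**2) / D(k),
# D(k) = 1 + g*k**2 - d*k**4.  Coefficient values are illustrative.

a, nu, g, d = 1.0, 0.005, 1.0, -1.0e-5

k = np.linspace(1e-6, np.pi / 0.1, 2000)   # up to k_max = pi/h with h = 0.1
D = 1.0 + g * k**2 - d * k**4
w = (a * k - 1j * nu * k**2) / D

print("min D(k)            =", D.min())            # > 0: no singular modes
print("max growth rate w_i =", w.imag.max())       # <= 0: linearly stable
print("phase speed k -> 0  =", (w.real / k)[0])    # ~ a, as stated above
```

Repeating the scan with d > 0 immediately shows the short-wave instability discussed in the text: D(k) changes sign and the growth rate becomes positive at large k.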
It has been found that, provided that the solution of (6) is sufficiently smooth near the boundaries of the truncated domain, this overspecification of the boundary conditions does not affect the numerical solution of that equation. A similar behavior was also observed in numerical experiments where the boundary values of the derivative unknowns were determined by means of fourth-order accurate, one-sided finite difference approximations that make use of five grid points, one located at the boundary and four interior ones, expanding the values of, say, $u_i$ with i = 2, 3, 4, 5 in Taylor series about $x_1$ and eliminating the lower-order derivatives of u at $x_1$ from these expansions; an analogous procedure was also used to determine a fourth-order accurate approximation to the fourth-order derivative at the boundaries, except that, in this case, the Taylor series expansions of $u_i$ with i = 2, 3, 4, 5, 6, 7 were performed about $x_1$ and the first five derivatives of u at $x_1$ were eliminated from these expansions. These one-sided approximations, however, result in nonsymmetric, nontridiagonal matrices (cf. (22) and (24)) and were found to be less robust than those obtained with (22)-(24) and homogeneous boundary values of the derivative unknowns, especially when the solution of (6) approached the boundaries. The finite difference method presented in this section is linear at each time step, second- and fourth-order accurate in time and space, respectively, and its stencil consists of only three grid points. Moreover, (20) with (22)-(24) may be written as a block tridiagonal system for $(u_i, F_i, G_i, H_i)^T$ with i = 2, 3, ..., N, where the superscript T denotes transpose, that may be solved by means of the block tridiagonal matrix method. Alternatively, one may easily determine F = Au, G = Bu, and H = CG = CBu from (22)-(24), respectively, where the matrices A, B, and C can be easily determined from those equations and, for example, $G = (G_2, G_3, \ldots, G_N)^T$; the values of F, G, and H thus obtained can then be substituted into (20) applied at all the interior grid points to obtain a tridiagonal system for u. Although the three-point compact operator method presented here is formally fourth-order accurate in space, it must be noted that this scheme, as well as many higher-order ones, may result in oscillations in regions where steep gradients exist or the solution is not properly resolved. These oscillations are a consequence of the fact that compact and higher-order methods result in finite difference equations which have more solutions than those of the continuous problem. The spurious solutions of these methods may not cause numerical problems if their magnitude is smaller than those associated with the main roots of the characteristic polynomial of the finite difference equation. Linear Stability of the Finite Difference Discretization. Consider the linear counterpart of (6), that is, (9), which upon time discretization may be written as (26) (cf. (20)), and assume a discrete Fourier mode for the solution. Substitution of this expression into (22)-(24) yields (27)-(29), where κ = (1/2)kh. Substitution of (27) and (29) into (26) yields the amplification factor $u^{n+1}/u^n$, and the condition of linear stability, $|u^{n+1}/u^n| \le 1$, is satisfied provided that νΔt ≥ 0, which is indeed the case because Δt > 0 and ν ≥ 0. The method is, however, dispersive, and the phase of $u^{n+1}/u^n$ differs from the exact one. Results In this section, some sample results illustrating the numerical solution of (6) are presented for several values of the parameters that appear in that equation. Unless otherwise stated, the initial condition used in the study is
$$u(x, 0) = \exp\left(-\frac{(x - 30)^2}{20}\right), \tag{32}$$
which corresponds to a Gaussian function of amplitude A = 1 and width σ^2 = 20.
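As an illustration of the three-point, fourth-order compact discretization mentioned above, the classical Padé scheme for the first derivative, $f'_{i-1} + 4 f'_i + f'_{i+1} = 3(f_{i+1} - f_{i-1})/h$, leads to a tridiagonal solve. The following is a generic sketch (not the author's exact operators; the boundary closures here are simple one-sided formulas) applied to the Gaussian initial condition (32):

```python
import numpy as np
from scipy.linalg import solve_banded

# Classical three-point compact (Pade) scheme for f':
#   f'(i-1) + 4 f'(i) + f'(i+1) = 3 (f(i+1) - f(i-1)) / h   (4th order inside)
# Boundary rows use explicit one-sided 2nd-order closures for brevity.

def compact_first_derivative(f, h):
    n = f.size
    ab = np.zeros((3, n))            # banded storage for solve_banded
    ab[0, 2:] = 1.0                  # super-diagonal
    ab[1, :] = 4.0                   # main diagonal
    ab[1, 0] = ab[1, -1] = 1.0       # boundary rows reduce to identity
    ab[2, :-2] = 1.0                 # sub-diagonal

    rhs = np.zeros(n)
    rhs[1:-1] = 3.0 * (f[2:] - f[:-2]) / h
    rhs[0]  = (-3.0 * f[0] + 4.0 * f[1] - f[2]) / (2.0 * h)
    rhs[-1] = (3.0 * f[-1] - 4.0 * f[-2] + f[-3]) / (2.0 * h)
    return solve_banded((1, 1), ab, rhs)

x = np.linspace(0.0, 100.0, 1001)        # h = 0.1, as in the text
u = np.exp(-((x - 30.0) ** 2) / 20.0)    # Gaussian initial condition (32)
ux = compact_first_derivative(u, x[1] - x[0])
print("max |error| vs exact derivative:",
      np.max(np.abs(ux + (x - 30.0) / 10.0 * u)))
```

The same three-point structure carries over to the compact operators for the second- and fourth-order derivatives, which is what keeps the overall system (block) tridiagonal.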
The numerical experiments reported here were performed on a computational domain extending from x = 0 to x = 100, and the numerical method was stopped whenever the influence of the boundaries on the solution was noticeable. This explains why, in some of the presented graphs, the time axis is smaller than in other ones. Most of the calculations were performed with N = 1000, that is, h = 0.1, and Δt = 0.0001, and correspond to nearly grid-independent results. When either steep gradients or radiation tails were observed, calculations were performed with smaller time steps and grid spacings in order to ensure almost grid-independent results. The accuracy of the results was assessed in terms of the discrete L1 and L2 norms of the differences between two solutions corresponding to different time steps and/or grid sizes; they were also assessed in terms of the invariants reported before, but the preservation of invariants was found not to be a good indicator of the accuracy of the method because the invariants represent global quantities that are obtained from a spatial integration of the solution at each time level, and numerical quadrature is a smoothing operation. In many of the results presented here, it was observed that $|I_1(30) - I_1(0)|/I_1(0)$ was less than $10^{-4}$ for h = 0.1 and Δt = 0.0001, when the invariants were evaluated by means of a second-order accurate trapezoidal rule (cf. (13)). Further comments on the first invariant are made at the end of this section when assessing the effects of the initial conditions on wave formation. Before proceeding with the presentation and discussion of results, it is convenient to analyze the different terms that appear in (6), even though the superposition principle is not applicable to such a nonlinear equation. By considering only the first and second terms of (6), that is, $u_t + \alpha u_x = 0$, one obtains the wave solution $u(x, t) = U(x - \alpha t)$, which indicates that u remains constant along the characteristic lines $x - \alpha t = c$, where c is a constant and U denotes a function. If only the first and third terms of (6) are considered, that is, $u_t + 4\beta u^3 u_x = 0$, the solution of the resulting equation may be written as $u(x, t) = U(x - 4\beta u^3 t)$ provided that shock waves are not formed; since the speed of the waves is equal to $4\beta u^3$ and, therefore, increases with u, wave steepening and shock formation may occur if $u_x(0, x) < 0$. If only the first and fourth terms of (6) are considered, that is, $u_t = \nu u_{xx}$, the solution of the resulting equation may be written as $u(x, t) = \int_{-\infty}^{\infty} \Theta(x - s, t)\, U(s)\, ds$, where $\Theta(x, t) = (1/\sqrt{4\pi\nu t}) \exp(-x^2/(4\nu t))$ and $u(x, 0) = U(x)$, and it decays in both space and time. On the other hand, if only the first and fifth terms of (6) are accounted for, that is, $u_t = \gamma u_{xxt}$, an integration of this equation in time yields $u = \gamma u_{xx} + g(x)$, the integration of which yields (for γ > 0)
$$u(x) = a\, e^{x/\sqrt{\gamma}} + b\, e^{-x/\sqrt{\gamma}} + u_p(x), \tag{33}$$
where a and b are integration constants; if γ < 0, the solution to $u = \gamma u_{xx} + g(x)$ contains trigonometric terms and is not reported here. Note that the homogeneous part of (33) increases as |x| increases unless a = b = 0. Finally, if only the first and sixth terms of (6) are considered, that is, $u_t = \delta u_{xxxxt}$, an integration of this equation yields $u = \delta u_{xxxx} + g(x)$, the integration of which (for δ > 0) may be written as
$$u(x) = a \cosh(\kappa x) + b \sinh(\kappa x) + c \cos(\kappa x) + d \sin(\kappa x) + u_p(x), \tag{34}$$
where a, b, c, and d are integration constants and $\kappa^4 = \delta^{-1}$, and it indicates that u(x) increases as |x| increases unless a = b = 0. The particular solution $u_p(x)$ may be found without much difficulty by means of Lagrange's variation of parameters, but it is not reported here.
For δ < 0, the homogeneous solution of $u = \delta u_{xxxx} + g(x)$ is
$$u_h(x) = e^{\mu x/\sqrt{2}}\left[a \cos\left(\frac{\mu x}{\sqrt{2}}\right) + b \sin\left(\frac{\mu x}{\sqrt{2}}\right)\right] + e^{-\mu x/\sqrt{2}}\left[c \cos\left(\frac{\mu x}{\sqrt{2}}\right) + d \sin\left(\frac{\mu x}{\sqrt{2}}\right)\right], \tag{35}$$
where $\mu = (-\delta)^{-1/4}$. A comparison between (33) and (34) indicates that the former grows faster than the latter as |x| increases if $1/\sqrt{\gamma} > 1/\delta^{1/4}$ (with γ > 0 and δ > 0), that is, if δ ≫ γ^2, whereas the opposite holds if δ ≪ γ^2. On the other hand, a comparison between (33) and (35) shows that the former grows faster than the latter if $1/\sqrt{\gamma} > 1/(\sqrt{2}(-\delta)^{1/4})$ (with γ > 0 and δ < 0), that is, if −δ ≫ γ^2/4, whereas the opposite holds if −δ ≪ γ^2/4. Since, according to the analysis performed in Section 2, δ < 0 for (linear) stability, the contribution of the third-order derivative term of (9) is much more important than that of the fifth-order derivative one if γ ≫ −δk^2; moreover, since the shortest wave that can be represented in a grid is 2h and the largest wavenumber that may be resolved is, therefore, π/h, the condition γ ≫ −δk^2 requires that γ ≫ −1000δ. This implies that γ must be much larger than |δ|. For δ > 0, a similar condition holds. Effect of ν. Figures 1 and 2 show the solution of (6) for two different values of the viscosity coefficient ν. Figure 1 indicates that the initial Gaussian profile undergoes a large change initially, and its leading edge steepens whereas its amplitude decreases as time increases. The wave profile also widens as time increases; this widening is a consequence of the value of ν considered in Figure 1, while the steepening of the leading front is a consequence of the nonlinear advective terms, as discussed at the beginning of this section. Figure 1 also shows that, for ν = 0.5, u(x, t) ≥ 0. It should be pointed out that the combs/peaks observed in the amplitude of the leading wave in some figures presented in this paper are due to the fact that only a limited number of grid points and time levels are represented. As the value of ν is decreased, the effects of viscosity become less important than the advective and dispersive ones, the steepening of the wave's leading front increases, and sawtooth waves may form behind the leading front; these waves have a typical N-structure that has been observed, for example, in nonlinear acoustics [18], and correspond to dispersive shock waves [17,20,21]. The number and amplitude of these sawtooth waves increase as ν is decreased, and a radiation tail may form, as indicated in Figure 2, which corresponds to ν = 0.01 and the values of the parameters shown in Table 3. Figure 2 also shows the presence of four waves whose amplitude decreases slowly with time. A similar structure to that of Figure 2 has also been observed for ν = 0, thus indicating that the number of waves increases from 1 to 4 as ν is decreased from unity. The width of these waves increases whereas their height decreases as the viscosity coefficient is increased. Figure 2 also shows that the waves formed from the breakup of the initial Gaussian condition used in this study exhibit curved trajectories. Solitary waves of the EW, RLW, and GRLW equations have also been found to follow curved trajectories in viscous media [45-47].
In Figure 3, the u profile is shown at t = 2 and t = 20 in order to illustrate the initial adjustment of the wave profile from its initial Gaussian shape, the formation of N or dispersive shock waves behind the leading front, and the number of waves that are formed as functions of the viscosity coefficient. In particular, Figure 3 shows that the leading wave that emerges first from the initial Gaussian profile has a larger amplitude than those that emerge later. Effect of α. Figures 4 and 5 illustrate the effects of the linear advection term on the solution of (6). These figures show that there is an initial adjustment of the profile that generates a steep leading front that becomes a solitary wave at later times. The profile behind the trailing edge of the leading wave also steepens and results in the formation of another wave that travels at a smaller speed than the leading one. Behind the second leading wave, a third wave is formed, and this process continues, as indicated in Figures 4 and 5, until the nonlinear advective terms are smaller than the linear ones. The amplitude of the leading wave is larger than those of the trailing ones; the amplitude of the latter decreases as the distance to the upstream boundary decreases. Figure 4 also shows (cf. Figures 1 and 2) that the trailing waves are initially of the N or dispersive shock wave type and are caused by the steepening of the profile associated with the nonlinear advective term and dispersion. This is more clearly visible in Figure 5, which corresponds to a linear convective field with α = 1. As α is increased, the speed of the leading wave front increases but the number of waves that emerge from the breakup of the initial Gaussian profile decreases; this effect is due to the pushing effect associated with the linear advective term. Although not shown here, similar results to those presented in Figures 4 and 5 have been observed for ν = 0 and the same values of the other parameters as those of Figures 4 and 5. Although not shown here, numerical experiments performed with the same values of the parameters as those of Table 3 but with α < 0 show similar behavior to that observed in Figures 4 and 5, except that the leading wave speed increases and the leading wave's front becomes steeper as |α| is increased. This is illustrated in Figure 6, which shows the u profile at t = 2 and 20 for positive and negative values of α and indicates that the amplitude of the leading wave is greater than those of the trailing ones. Figure 6 also shows that as the magnitude of α for α < 0 is increased, it takes a longer time to break up the initial Gaussian profile due to the opposing linear convective term.
Effect of β. The effects of the nonlinear advective term on wave breakup and propagation are illustrated in Figures 7 and 8. For β = 0, (6) is linear and may be solved analytically by means of the Fourier transform in x. This solution is analogous to the initial Gaussian condition and moves at a speed approximately equal to α, and its amplitude decreases while its width increases for ν ≠ 0. The numerical solution obtained with the compact operator method presented in this paper may, however, exhibit a small oscillation at the wave's trailing edge if the number of grid points is not enough to ensure an adequate spatial resolution there; such an oscillation does not occur if the trailing edge is properly resolved. As β is increased, the magnitude of the nonlinear convective terms also increases; this results in a steepening of the leading front and the formation of an N or dispersive shock wave, as illustrated in Figure 7, which corresponds to β = 0.1. This figure also shows a small radiation tail behind the wave. For β = 0.5, the results presented in Figure 8 show that the initial Gaussian profile generates three solitary waves, an N or dispersive shock wave, and some radiation, which are clearly visible at t = 30. This figure also indicates that the amplitude of the waves decreases from the leading to the trailing one, and the amplitude of each wave also decreases slowly with time due to the value of ν = 0.005 employed in the simulations. In Figure 9, the u profiles at t = 2 and t = 20 are presented as functions of x for several values of β. This figure shows that, for β = 0, the profiles of u are almost identical to the initial one and that the wave propagates at a speed approximately equal to unity, as discussed previously. The profiles at t = 2 corresponding to β = 0.5 and 1 exhibit similar trends, but the amplitude of the secondary wave is smaller for β = 1 than for β = 0.5; this is due to the fact that wave steepening and wave breakup increase as β is increased, as discussed at the beginning of this section. The profiles at t = 20 show that the number of dispersive shock waves that result from the breakup of the initial Gaussian condition increases as β is increased. Effect of γ. Figures 10 and 11 illustrate the effects of γ, here referred to as the first dispersion coefficient, on wave breakup. For γ = 0.5, the initial Gaussian profile splits into two right-propagating waves; the leading one propagates at a higher speed than the trailing one, and some radiation is observed between the upstream boundary and the trailing wave.
For γ = 0.05, the results presented in Figure 11 indicate that six waves are formed; the leading wave propagates at a faster speed than the trailing ones, and the speed of the latter decreases from the downstream to the upstream boundary. Figure 11 also shows that the leading wave follows a curved trajectory and that the curvature of the trailing waves decreases from the downstream to the upstream boundary. The separation between waves also decreases from the downstream to the upstream boundary. Although not shown here, it has been observed that five waves are present at t = 30 for γ = 0.1, thus indicating that the number of waves generated from the breakup of the initial Gaussian distribution increases as γ is decreased from unity. Furthermore, for the same values of the parameters as those for Figures 10 and 11 but γ = 0.005, it has been observed that the leading part of the initial Gaussian condition steepens as a consequence of the nonlinear convective terms and results in large gradients of u at the wave front. The interaction among the steepening associated with the nonlinear convective terms and the third- and fifth-order derivative terms results in the formation of sawtooth or dispersive shock waves in the trailing part of the leading wave. The amplitude of the sawtooth waves decreases as the gradients of u decrease, and no sawtooth waves are observed behind the last trailing wave. Although not shown here, similar results to those reported in Figures 10 and 11 have been observed for the same parameters as those of these figures except that δ = 0.00001, thus indicating that the fifth-order derivative term that appears in (6) does not play an important role in wave propagation for α = β = 1, ν = 0.005, −0.00001 ≤ δ ≤ 0.00001, and γ ≥ 0.01. For the same parameters as those for Figures 10 and 11 but smaller values of γ, the contribution of the third-order derivative or first dispersive term decreases, and the number of sawtooth waves formed behind the leading wave's trailing edge decreases while their amplitude increases as γ is decreased from 0.01. As indicated at the beginning of this section, for the initial Gaussian conditions considered in this study, the nonlinear convective terms result in the formation of a shock wave for ν = γ = δ = 0 and a very steep front for small γ ≠ 0 and ν = δ = 0; therefore, it may be concluded that the sawtooth waves observed for small values of γ are caused by the fifth-order derivative term that appears in (6). Note that, for the EW, RLW, and GRLW equations [45-47], which do not contain the δ term of (6), no sawtooth waves are observed. The effects of both ν and γ on wave propagation are shown in Figures 12 and 13, which should be compared with Figures 1 and 2 and Figures 10 and 11, respectively, in order to observe the effects of these parameters. For ν = 0 and γ = 0.1, four waves can be seen at t = 25; the fourth one is followed by an N-wave and some radiation. The amplitude of successive waves increases and the separation between them decreases from the downstream to the upstream boundary, and their trajectories are straight lines compared with the curved ones observed in Figure 2; this indicates once again that not only does the viscosity decrease the wave amplitude, it also increases the curvature of the waves' trajectories. Furthermore, for γ = 0.01 and ν = 0, the results presented in Figure 13 indicate that twelve waves followed by an N-wave and some radiation are observed at t = 30; these waves move along straight lines and their speeds decrease from the downstream to the upstream boundary.
A summary of the results discussed in this subsection is presented in Figure 14, which shows the u(x, t) profiles at t = 2 and 20 for several values of γ and indicates that the number of waves that break up from the initial Gaussian one increases as γ is decreased from unity. For γ = 1 and 0.5, only two right-propagating waves and a small oscillatory tail are observed. Figure 14 also shows the initial steepening and wave breakup at t = 2. Effect of δ. As discussed previously in this paper, the linear stability of (6) places some limitations on the sign of δ, which is here referred to as the second dispersion coefficient. Since the numerical calculations are performed in finite domains with a finite number of grid points and the largest wavenumber that can be resolved is π/h, small wave instabilities have been observed for values of δ that do not satisfy the condition described at the beginning of this section. This condition implies that |δ| should be smaller than approximately one-thousandth of γ. For such small values of δ, very few differences in wave propagation were observed, as indicated in Figure 15, which shows the u profiles at t = 2 and t = 20 for several values of δ. This figure indicates that the breakup of the initial Gaussian condition and the waves generated in its breakup are nearly independent of δ. Effect of the Initial Conditions. For the same initial Gaussian condition as (32) and A = 1, but σ = 1/√0.1, Figure 16 shows that nine solitary waves and a radiation tail are present at t = 20; the amplitude of these waves remains constant after an initial transient, and their speeds decrease while the separation between successive waves increases from the downstream to the upstream boundary. As σ is decreased (for A = 1), that is, as the Gaussian initial condition becomes narrower and, therefore, its slope increases, the number of waves also increases. For the initial Gaussian condition of (7), the first invariant is $I_1 = A\sigma\sqrt{\pi}$, which clearly increases with the amplitude and the standard deviation; furthermore, the largest slope of the Gaussian condition occurs at $x - x_0 = \pm\sigma/\sqrt{2}$ and its absolute value is $(A/\sigma)\sqrt{2/e}$. This implies that the slope of the initial condition increases as A is increased and as σ is decreased, and an increase in slope results in an increase of the nonlinear advective terms and a consequent larger steepening of the propagating wave initially, as discussed at the beginning of this section. Such a steepening is eventually balanced by dispersion and results in the formation of a leading wave, as illustrated, for example, in Figure 2, that propagates towards the downstream boundary; however, steepening may still occur behind the leading wave due to the nonlinear advective terms until they are again balanced by dispersion and a second solitary wave emerges and propagates towards the downstream boundary. This process continues until the slope of the profile is so small that the nonlinearities become smaller than dispersion; when this occurs, dispersion dominates and oscillatory tails such as the ones shown, for example, in Figure 2, may be observed. In order to further assess the effects of the initial conditions on both the first invariant and the wave breakup and propagation, numerical experiments were performed with the values of A and σ shown in Table 4 and the initial condition of (32), and the results are illustrated in Figures 17 and 18.
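The quoted properties of the Gaussian condition, namely the first invariant $I_1 = A\sigma\sqrt{\pi}$ and the maximum slope $(A/\sigma)\sqrt{2/e}$ at $x - x_0 = \pm\sigma/\sqrt{2}$, are quick to verify numerically with the symbol assignments used here:

```python
import numpy as np

# Verify I1 = A*sigma*sqrt(pi) and max|u_x| = (A/sigma)*sqrt(2/e) for
# u(x,0) = A*exp(-(x-x0)**2/sigma**2), the convention used in the text.

A, sigma, x0 = 1.0, np.sqrt(20.0), 30.0
x = np.linspace(0.0, 100.0, 100001)
u = A * np.exp(-((x - x0) ** 2) / sigma ** 2)

I1 = np.trapz(u, x)               # trapezoidal rule, as used for the invariants
ux = np.gradient(u, x)

print("I1 numeric :", I1, "  analytic:", A * sigma * np.sqrt(np.pi))
print("max |u_x|  :", np.abs(ux).max(),
      "  analytic:", (A / sigma) * np.sqrt(2.0 / np.e))
print("slope peak at |x-x0| =", abs(x[np.argmax(np.abs(ux))] - x0),
      "  analytic:", sigma / np.sqrt(2.0))
```

Varying A and sigma in this snippet reproduces the trade-off exploited in Table 4: at fixed I1(0), a larger A means a smaller sigma and hence a steeper initial profile.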
Figure 17 indicates that the first invariant is very well preserved, except for (A, σ^−2) = (1, 1), which corresponds to the largest slope of the initial conditions shown in Table 4. As discussed above, the initial steepness of the wave propagation and the number of solitary waves that emerge from the initial Gaussian profile increase as the slope of the initial condition increases, as indicated in Table 4 and Figure 18; therefore, it may be concluded that the preservation of $I_1$ depends strongly on the numerical resolution because, as stated above, the number of solitary waves generated from the breakup of the initial conditions increases as the slope of the initial condition is increased (cf. Figure 18(a)) and an adequate number of grid points is needed to accurately resolve these solitary waves; the number of grid points must be increased as A is increased and/or σ is decreased. Figure 18(b) also indicates that, for the same value of $I_1(0)$, the number of solitary waves increases as A is increased and, therefore, as the slope of the initial profile increases; on the other hand, for a fixed value of A, an increase of σ results in an increase of $I_1(0)$ and a decrease of the initial slope. Figure 18(a) indicates that the number of solitary waves that are formed from the breakup of the initial Gaussian profile increases as σ is increased and, therefore, as $I_1(0)$ is increased, even though the slope of the initial profile decreases as σ is increased. Figure 18(b) clearly shows that the initial steepening of the leading front increases as A is increased and as σ is decreased; the results shown in Figure 18(a) correspond to different values of $I_1$. When A and σ are varied so that $I_1$ is the same, the results presented in Figure 18(b) show that wave breakup only occurs for (A, σ) = (1, 1), for which the maximum slope of the initial condition is of order unity. For the other values of A and σ considered in Figure 18(b), $I_1 = A\sigma\sqrt{\pi}$, but both the initial amplitude and the initial slope of the profile are so small that the nonlinear advective terms are much smaller than the linear ones, dispersion is much more important than the nonlinearities, and, therefore, no wave breakup takes place. The results described heretofore correspond to the initial Gaussian condition of (32). Numerical experiments have also been performed with other initial conditions such as, for example, $u(x, 0) = A \cosh^{-2}(\kappa (x - 30))$ (36) and show similar trends to the ones described above, that is, an initial steepening of the leading front of the initial condition, the formation of a leading solitary wave, and so forth, provided that the slope of the initial condition is larger than a critical value that depends on A and κ. In addition, the amplitude of the solitary waves formed after the breakup of the initial condition decreases whereas the separation between successive waves and the waves' widths increase from the downstream to the upstream boundary.
Conclusions

Wave generation from a generalized RLW-Rosenau equation subject to initial Gaussian conditions has been studied numerically, as a function of the linear and nonlinear advective terms, the viscosity, and the two dispersion coefficients, by means of a linearly stable finite difference method that is second-order accurate in time and employs fourth-order accurate finite difference discretizations for the first-, second-, and fourth-order spatial derivatives. It has been shown that the viscosity affects not only the amplitude and width of the waves but also their curvature and the number of waves that are generated.

The linear advective term was found to push the waves towards either the downstream or the upstream boundary, depending on its direction, whereas the nonlinear advective terms cause a steepening of the leading front and the formation of sawtooth waves and a radiation tail. The number of waves generated by the nonlinearities depends not only on their magnitude but also on those of the viscosity and the two dispersion terms.

The magnitude of the coefficient of the second dispersion term, that is, the one associated with the fourth-order spatial derivative, was found to be limited by stability considerations; otherwise, it provides exponential solutions whose growth may be larger than that associated with the dispersion term that contains a second-order spatial derivative. Such behavior is not observed if the second dispersion coefficient is much smaller than the first one, but, in this case, it has been found that the effects of the second dispersion term are small.

It has also been found that the slope of the initial conditions plays a paramount role in determining the wave formation/breakup and the number of solitary waves, and that narrower Gaussian conditions result in more solitary waves than wider ones, provided that the largest slope of the initial condition is sufficiently large to cause wave steepening and the first dispersion term is large enough that a balance between nonlinearities and dispersion is reached and a solitary wave may form. In the absence of dispersion, the nonlinearities result in shock wave formation, whereas, in the absence of nonlinearities, neither solitary waves nor shock waves are formed.

For the same value of the first invariant, it has been observed that solitary waves may only form when the slope of the initial conditions exceeds a threshold value. Below this value, the nonlinear advective terms are smaller than the linear ones and no solitary wave emerges from the initial conditions.

It has also been found that the numerical method used in this study preserves the first invariant very well and predicts with great accuracy the decay of the second one for negative values of the second dispersion coefficient in viscous media.
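The specific scheme used in the paper is not reproduced here; as a rough illustration of the kind of spatial discretization just described, the sketch below assembles the standard fourth-order-accurate central stencils for the first, second, and fourth derivatives on a uniform periodic grid and checks the convergence rate on a smooth test function. The helper `deriv` and the test setup are our own constructions, not the authors' code.

```python
import numpy as np

# Standard fourth-order-accurate central stencils (Fornberg coefficients);
# illustrative only, not the paper's actual scheme.
C1 = np.array([1, -8, 0, 8, -1]) / 12.0              # u_x,    offsets -2..2, /h
C2 = np.array([-1, 16, -30, 16, -1]) / 12.0          # u_xx,   offsets -2..2, /h^2
C4 = np.array([-1, 12, -39, 56, -39, 12, -1]) / 6.0  # u_xxxx, offsets -3..3, /h^4

def deriv(u, h, coeffs, power):
    """Apply a central stencil on a periodic grid (np.roll wraps around)."""
    k = len(coeffs) // 2
    out = np.zeros_like(u)
    for j, c in enumerate(coeffs, start=-k):
        out += c * np.roll(u, -j)   # picks up u[i + j] at each grid point i
    return out / h**power

# Order check: d^4/dx^4 sin(x) = sin(x), so the error should drop ~16x
# each time the grid spacing is halved.
for n in (64, 128, 256):
    x = np.linspace(0, 2 * np.pi, n, endpoint=False)
    h = x[1] - x[0]
    err = np.max(np.abs(deriv(np.sin(x), h, C4, 4) - np.sin(x)))
    print(n, err)
```

Each halving of the grid spacing should reduce the printed error by roughly a factor of 16, consistent with fourth-order accuracy; near non-periodic boundaries, one-sided stencils would be required instead of the wrap-around used here.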
Figure 4 also shows (cf. Figures 1 and 2) that the trailing waves are initially of the sawtooth or dispersive shock wave type and are caused by the steepening of the profile associated with the nonlinear advective term and dispersion. This is more clearly visible in Figure 5, which corresponds to a linear convective field with coefficient equal to 1. As this coefficient is increased, the speed of the leading wave front increases but the number of waves that emerge from the breakup of the initial Gaussian profile decreases; this effect is due to the pushing effect associated with the linear advective term. Although not shown here, similar results to those presented in Figures 4 and 5 have been observed for a zero value of this coefficient and the same values of the other parameters as those of Figures 4 and 5. Although not shown here, numerical experiments performed with the same values of the parameters as those of Table 3 but with a negative linear advection coefficient show similar behavior to that observed in Figures 4 and 5, except that the leading wave speed increases and the leading wave's front becomes steeper as its magnitude is increased. This is illustrated in Figure 6, which shows the profile at t = 2 and 20 for positive and negative values of the coefficient and indicates that the amplitude of the leading wave is greater than those of the trailing ones. Figure 6 also shows that, as the magnitude of the negative coefficient is increased, it takes a longer time to break up the initial Gaussian profile due to the opposing linear convective term. The figure also shows the initial steepening and wave breakup at t = 2.

Table 3: Values of the parameters used in the numerical simulations (var. = variable). Table 4: Values of the parameters and the number of solitary waves (NS) at t = 25 for initial Gaussian conditions.
9,667
2016-01-06T00:00:00.000
[ "Physics" ]
Machine learning algorithms to automate differentiating cardiac amyloidosis from hypertrophic cardiomyopathy

Cardiac amyloidosis has a poor prognosis and high mortality and is often misdiagnosed as hypertrophic cardiomyopathy, leading to delayed diagnosis. Machine learning combined with speckle tracking echocardiography was proposed to automate the differentiation of the two conditions. A total of 74 patients with pathologically confirmed monoclonal immunoglobulin light chain cardiac amyloidosis and 64 patients with hypertrophic cardiomyopathy were enrolled from June 2015 to November 2018. Machine learning models utilizing traditional and advanced algorithms were established to determine the most significant predictors. Performance was evaluated by the receiver operating characteristic (ROC) curve and the area under the curve (AUC). With clinical and echocardiography data, all models showed great discriminative performance (AUC > 0.9). Compared with logistic regression (AUC 0.91), machine learning methods such as the support vector machine (AUC 0.95, p = 0.477), random forest (AUC 0.97, p = 0.301) and gradient boosting machine (AUC 0.98, p = 0.230) demonstrated similar capability to distinguish cardiac amyloidosis from hypertrophic cardiomyopathy. With speckle tracking echocardiography, the predictive performance of the voting model was similar to that of LightGBM (AUC 0.86 for both), while the AUC of XGBoost was slightly lower (AUC 0.84). In fivefold cross-validation, the voting model was more robust globally and superior to the single models in some test sets. Data-driven machine learning showed admirable performance in differentiating the two conditions and could automatically integrate abundant variables to identify the most discriminating predictors without making preassumptions. In the era of big data, automated machine learning will help to identify patients with cardiac amyloidosis so that timely and effective intervention can improve outcomes.

Introduction

Cardiac amyloidosis (CA) is a part of systemic amyloidosis, in which misfolded amyloid proteins are deposited outside cardiomyocytes and lead to restrictive pathology of the heart, often denoting a poor outcome [1,2]. In recent years, several new therapies that significantly improve the prognosis of patients with CA have been developed, including bortezomib-based induction and consolidation strategies, autologous stem cell transplantation, immunomodulatory drugs, etc. [3]. Unfortunately, for patients with advanced cardiac involvement, current treatments are still limited. Moreover, patients with CA can easily be misdiagnosed with hypertrophic cardiomyopathy (HCM), which has a similar phenotype that is difficult to distinguish on routine echocardiography, often leading to delayed diagnosis. However, CA has high mortality and a poor prognosis, which makes early detection and differential diagnosis quite important. Because of the advantages of wide application and superior diastolic function assessment, echocardiography has become the preferred screening method for CA. Advanced two-dimensional speckle tracking echocardiography (2D-STE) with strain and strain rate imaging has been proven to differentiate CA from other causes of concentric cardiac hypertrophy [4]. Since ultrasound examination always produces large amounts of imaging data and the variables interact with each other to varying degrees, it is difficult to identify the most discriminative predictors through ordinary statistical analysis.
Therefore, more powerful data processing approaches are urgently needed to extract and analyze imaging data. Machine learning (ML) utilizes computer algorithms to seek inherent patterns in datasets with massive numbers of variables without making preassumptions. It can learn from established datasets and facilitate the prediction of risk models on new data. In recent years, ML has become an effective means for prediction and intelligent decision-making [5][6][7] and has achieved commendable success in cardiovascular medicine, such as differentiation of constrictive pericarditis from restrictive cardiomyopathy [8], risk prediction of readmission of patients with heart failure [9], diagnosing different arrhythmias [10,11], etc. Given this, we proposed an intelligent identification study of CA and HCM based on ML.

Methods

A case-control study enrolled 138 subjects, including 74 patients with verified CA and 64 patients with verified HCM, who were referred to the First Affiliated Hospital of Zhejiang University School of Medicine from June 2015 to November 2018. The amyloid type of all patients with CA, assessed by immunohistology, was light chain, and patients were eligible for inclusion if they met any of the following criteria: (1) an endomyocardial biopsy confirmed amyloid deposits; (2) a positive non-cardiac biopsy for amyloidosis combined with cardiac magnetic resonance or non-strain-based echocardiography presenting typical characteristics of CA, with relevant clinical history and laboratory findings. The characteristics of CA are consistent with the Expert Consensus Recommendations for Multimodality Imaging [12]. Cardiac involvement in CA was assessed by imaging scans: in forty patients the left ventricle was involved; in twenty-seven, the left and right ventricles; in one, both ventricles and the left atrium; and in six, all ventricles and atria. We further incorporated 64 patients with HCM as a comparator group, whose diagnoses were made according to recently published guidelines from the American College of Cardiology/American Heart Association [13], and who underwent both echocardiography and cardiac magnetic resonance imaging to further assess HCM and exclude other pathologies. Three of them also underwent genetic analysis, and all showed heterozygous mutations. An echocardiographic examination was performed in all patients with HCM, who presented unexplained asymmetrical left ventricular hypertrophy with septal wall thickness ≥ 15 mm. In the case of a positive family history (such as sudden death, cardiac hypertrophy, etc.), patients with interventricular septal thickness ≥ 13 mm were also enrolled. Subjects with left ventricular ejection fraction < 45%, secondary cardiac hypertrophy caused by severe aortic valve disease, long-term uncontrolled hypertension, or thyroid disease were excluded from the study. Patients were also excluded if the relevant data were not available. The local institutional ethics committee approved the study.

Echocardiographic examination

All echocardiographic studies were conducted on a GE Vivid E9 Color Doppler Ultrasound system (GE Medical, Milwaukee, Wisconsin, USA) equipped with a 2-dimensional M5S probe with a frequency of 2.0-4.5 MHz and a frame rate of 50-70 frames per second. The grayscale dynamic images of the 4-chamber views, the long axis view of the left ventricle, the 2-chamber views, and the short axis section with 3 consecutive cardiac cycles were obtained and stored on the hard disk.
M-mode and tissue Doppler ultrasound were used to collect ultrasonic parameters, which included: the left atrial volume index using an ellipse formula, end-diastolic left ventricular diameter, end-systolic left ventricular diameter, end-diastolic left ventricular volume, end-systolic left ventricular volume, ejection fraction using the biplane Simpson's method in 4-chamber and 2-chamber views, septal wall thickness, and posterior wall thickness. The eccentricity index was calculated as septal wall thickness divided by posterior wall thickness. Relative wall thickness was calculated as 2 × septal wall thickness divided by end-diastolic left ventricular diameter. The left ventricular mass index was calculated based on the Cube formula. Concentric hypertrophy was diagnosed in patients with relative wall thickness > 0.42 and a left ventricular mass index > 115 g/m². Diastolic parameters, including peak early (E) and late (A) diastolic mitral inflow velocity, the E/A ratio, e′, and the E/e′ ratio, were also measured (Fig. 1).

2D-STE acquisition and analysis

Offline analysis of the video clips was based on Echo PAC Version 201 software (GE Company, Fairfield, Connecticut, USA), running on Windows 10 Version 1709 (Microsoft Corporation, Washington State, USA). Selecting clear dynamic images and using the 4-chamber views, the long axis view of the left ventricle, and the 2-chamber views, the left ventricular endocardial and epicardial myocardium were automatically tracked, with manual frame-by-frame adjustment, throughout the cardiac cycle and divided into 16 segments to generate a 'bull's-eye' plot. The strain data were organized by temporal and spatial parameters. Each cardiac cycle was divided into 17 equal segments (T1-T17), and Tj represented the corresponding time points (j = 1, 2, …, 17). Strain measurements included the following: longitudinal strain (LS), global longitudinal strain, longitudinal strain velocity, longitudinal strain rate, longitudinal displacement, circumferential strain, global circumferential strain, circumferential strain rate, radial strain, global radial strain, radial strain rate, rotational rate, left ventricular twist, and left ventricular twist rate. According to the above methods, 3791 (223 × 17) variables were systematically extracted for each patient (223 strain-derived variables at 17 time points). The time-averaged strain-derived variables (223 variables) were used to train the models. Relative apical sparing was calculated as average apical LS divided by the sum of the average basal and mid LS, the septal apical to base ratio as apical septal LS divided by basal septal LS, and the ejection fraction strain ratio as ejection fraction divided by global longitudinal strain.

Establishment and assessment of prediction models

Two sets of ML-based prediction models were established: one was built using clinical characteristics, conventional echocardiography, and 2D-STE data; the other was built using only 2D-STE data.

Prediction models using clinical characteristics, conventional echocardiography, and 2D-STE data

We developed prediction models using four approaches: logistic regression, support vector machine, random forest, and XGBoost. These represent a progression from traditional logistic regression to classic ML algorithms (support vector machine, random forest) and then to advanced gradient boosting (XGBoost).
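The derived indices defined earlier in this section are simple ratios; a minimal sketch is shown below. The function and argument names are our own inventions for illustration, and the numeric example is made up rather than taken from the study.

```python
# Illustrative helpers for the derived echocardiographic indices defined
# above; names and units (mm for thickness/diameter) are our assumptions.
def eccentricity_index(septal_wt, posterior_wt):
    """Septal wall thickness / posterior wall thickness."""
    return septal_wt / posterior_wt

def relative_wall_thickness(septal_wt, lv_edd):
    """2 x septal wall thickness / end-diastolic LV diameter."""
    return 2.0 * septal_wt / lv_edd

def relative_apical_sparing(apical_ls, basal_ls, mid_ls):
    """Average apical LS / (average basal LS + average mid LS)."""
    return apical_ls / (basal_ls + mid_ls)

def ef_strain_ratio(ef, gls):
    """Ejection fraction / global longitudinal strain."""
    return ef / gls

# Made-up example: RWT > 0.42 contributes to a concentric-hypertrophy call.
print(relative_wall_thickness(septal_wt=12.0, lv_edd=48.0))  # -> 0.5
```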
To assess the validity of the models, we performed repeated cross-validation over 10 (or 5) iterations; in each iteration, the data were randomly split into 70% training and 30% test sets. We report the average results for each model on the 30% of unseen test data.

Logistic regression

Logistic regression is the most commonly used risk prediction model. First, univariate logistic regression was used to screen out the variables that were meaningful for predicting CA. Then, variables with p < 0.1 were entered into the multivariate regression analysis for modeling, guided by previous research, clinical experience, and the required relationships between variables and outcome. In addition, Spearman correlation was used to exclude the influence of collinearity among variables.

Support vector machine

A support vector machine converts data into a complex high-dimensional space and looks for the largest separating margin to differentiate diseases [14]. We applied a linear kernel and cost function to build the model and tuned the parameters to minimize misclassification.

Random forest

Random forest is a tree-based method, the essence of which is to repeatedly split variables at discrete cut points, usually presented in the form of a tree graph [14]. Each tree is built from bootstrapped data and variable subsets, and the final model is a collection of many trees.

Gradient boosting

The core idea of gradient boosting is to set up a series of initial models based on decision trees, called base classifiers [15,16]. Subsequently, weak base classifiers are iterated, with their weights adjusted, to create a single stronger classifier. Information gain (IG), a feature selection technique, is defined as a metric of effective classification. It is measured in terms of the reduction in class entropy, which reflects the additional information about the class provided by the variables.

Prediction models using 2D-STE data

Boosting-based algorithms are increasingly used because they involve the sequential creation of models, with each iteration attempting to correct errors of the previous models. LightGBM and XGBoost are two widely used algorithms. We developed predictive classifiers using 2D-STE data: (1) LightGBM; (2) XGBoost; (3) a voting model based on LightGBM and XGBoost. To evaluate the validity of the models, fivefold cross-validation was performed. We split the dataset into training and test sets in a 4:1 ratio and report the performance on the test data.

Statistical analysis

Categorical variables were expressed as numbers of cases and percentages and were compared using the chi-square test or Fisher's test. Continuous variables were expressed as mean ± SD. The Kolmogorov-Smirnov test was used to determine whether the data were normally distributed. If the data conformed to a normal distribution, the independent-samples t-test was used for comparison; otherwise, the Mann-Whitney U test was used. p < 0.05 was considered statistically significant. Sensitivity, specificity, positive predictive value, negative predictive value, accuracy, the receiver operating characteristic curve, and the area under the curve (AUC) were used to evaluate the performance of the models. The DeLong test was used to evaluate whether differences in AUC between models were statistically significant. The data analysis was implemented in SPSS (Version 23.0), R (Version 4.0.3), and Python (Version 3.7).
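As a rough, hedged illustration of the first family of models and the repeated 70/30 evaluation described above (a minimal sketch, not the authors' code), the snippet below assumes a feature matrix `X` and labels `y` (1 = CA, 0 = HCM) stored as NumPy arrays; scikit-learn's `GradientBoostingClassifier` stands in for XGBoost, which could be swapped in via `xgboost.XGBClassifier`.

```python
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

# X: (n_samples, n_features) float array; y: 0/1 labels (assumed to exist).
models = {
    "logistic": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "svm":      make_pipeline(StandardScaler(), SVC(kernel="linear", probability=True)),
    "rf":       RandomForestClassifier(n_estimators=500, random_state=0),
    "gbm":      GradientBoostingClassifier(random_state=0),  # stand-in for XGBoost
}

# Repeated 70/30 splits, mirroring the evaluation described in the text.
cv = StratifiedShuffleSplit(n_splits=10, test_size=0.3, random_state=0)

for name, model in models.items():
    aucs = []
    for train, test in cv.split(X, y):
        model.fit(X[train], y[train])
        prob = model.predict_proba(X[test])[:, 1]
        aucs.append(roc_auc_score(y[test], prob))
    print(name, np.mean(aucs), np.std(aucs))
```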
Study population

The clinical characteristics of both groups are summarized in Table 1. Patients with CA were older than those with HCM (60.9 ± 9.7 vs 50.3 ± 15.3 years, p < 0.001). There was no statistical difference in left atrial volume index, e′, global longitudinal strain, septal apical to base ratio, or ejection fraction strain ratio between the two groups, as shown in Table 2.

The models based on 2D-STE data

After training with the tuned hyperparameters, the feature importance of the voting model integrating LightGBM and XGBoost was obtained and ranked. The results indicated that RadStrain3 (the radial strain of the middle ventricular septum) was the most important predictor, followed by LongStrainEpi7 (the longitudinal strain of the anterior wall of the epicardial basal segment) and CirStrainR4 (the circumferential strain rate of the posterior wall of the basal segment) (Table 4). Among the three ML algorithms (XGBoost, LightGBM, and the voting model), the discriminative ability of the voting model was similar to that of LightGBM (AUC 0.86 for both), while the AUC of XGBoost was slightly lower at 0.84 (Fig. 3). In the fivefold cross-validation, the mean AUCs of the LightGBM, XGBoost, and voting models were 0.89 ± 0.19, 0.85 ± 0.43, and 0.87 ± 0.30, respectively. The voting model was globally more robust and outperformed the individual models on some test sets (Fig. 4).

Discussion

ML combined with 2D-STE for intelligent identification of CA and HCM showed that ML had great performance in the differential diagnosis and could automatically integrate plentiful variables to identify the most discriminative predictors without preassumption. CA is a rare and complex disease with high mortality and a poor prognosis. Although new treatments that significantly improve outcomes have been developed, the available management for advanced cardiac involvement is still very limited. In addition, clinical confusion with cardiac hypertrophy from other causes (e.g., HCM, hypertension, and aortic stenosis) often leads to delayed diagnosis, which prevents patients with CA from receiving early and effective intervention. Echocardiography has become the preferred screening approach for patients with CA due to its wide application, low risk, low cost, convenience, and superior diastolic function assessment. Therefore, many related studies have been done to distinguish CA from other causes of cardiac hypertrophy. Cardiac deformation analysis with 2D-STE can reveal early systolic abnormalities. Early reports by Sun et al. [17] suggest that global longitudinal strain, global circumferential strain, and global radial strain were significantly reduced in patients with advanced CA compared with HCM and hypertensive heart disease, and although there was some overlap between the groups, the three causes of cardiac hypertrophy could be distinguished to a certain extent. Di Bella et al. [18] showed that the epicardial strain in patients with amyloid transthyretin was significantly lower than that in patients with HCM. Subsequently, Baccouche et al. [19] made use of 3-dimensional speckle-tracking echocardiography to identify CA from HCM, showing that most of the functional parameters were reduced in both groups, with the lowest values in the CA group. The radial strain of CA patients demonstrated a "reverse pattern" from base to apex, suggesting that the two conditions could be distinguished based on functional patterns. Similarly, Phelan et al.
[4] proposed that the "relative apical sparing" of longitudinal strain could well identify CA. Liu et al. [20] showed that a septal apical to base ratio > 2.1 combined with a deceleration time < 200 ms helps to differentiate CA from other causes of ventricular hypertrophy. In recent years, Pagourelias et al. [21] proposed that the ejection fraction strain ratio has the best CA differentiation effect (AUC 0.95; 95% CI 0.89-0.98). In the challenging subgroups (maximum wall thickness ≤ 16 mm and LVEF > 55%), the ejection fraction strain ratio is still the best predictor of CA. Furthermore, Boldrini et al. [22] developed a scoring-based CA diagnostic model by analyzing morphological, functional, and strain-derived parameters. Their results show that concentric remodeling and strain-derived parameters have the best diagnostic performance. The multivariate logistic regression model, which included relative wall thickness, E/e′, LS and tricuspid annular plane contraction deviation, had the best diagnostic effect on monoclonal immunoglobulin light chain amyloidosis (AUC 0.90; 95% CI 0.87-0.92). The complexity of CA assessment has increased in terms of the large amount of data generated by ultrasound examination and the increasing number of clinical variables. Traditional statistical analysis can only explore the relationships among limited variables and achieve a certain degree of predictive performance. However, in the era of big data, it is usually necessary to integrate abundant variables, which is a great challenge for clinicians. Therefore, we presented this study of differentiating CA and HCM based on ML. Combining clinical characteristics, routine echocardiography and 2D-STE data, the support vector machine, random forest, and XGBoost showed favorable discriminative performance (AUC > 0.9). When based solely on 2D-STE data, the different gradient boosting models still performed well in the identification of CA patients. The voting model was more robust globally and superior to a single algorithm on some test sets. Although the difference in the AUC of ML was not statistically significant compared with the traditional logistic regression model (p > 0.05), it should be pointed out that this study was based on small-sample data, and the performance of ML needs to be further examined on larger data. Previously, Zhang et al. [23] employed ML to achieve automatic echocardiography interpretation. Their algorithms can not only implement view recognition, image segmentation, and structure and function quantification but also realize automatic detection of CA, HCM, and pulmonary hypertension, which further reflects the effectiveness of ML. ML, a form of artificial intelligence that eliminates preassumptions, explores unknown patterns using all useful data to avoid neglecting important but not yet recognized predictors.
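The voting model discussed above maps naturally onto scikit-learn's ensemble API. The following is a minimal sketch of soft voting over LightGBM and XGBoost with fivefold cross-validation, under assumed default hyperparameters rather than the tuned ones reported in the paper; `X` and `y` are as in the previous sketch.

```python
from lightgbm import LGBMClassifier
from xgboost import XGBClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score, StratifiedKFold

# Soft voting averages the two models' predicted probabilities.
voting = VotingClassifier(
    estimators=[
        ("lgbm", LGBMClassifier(random_state=0)),
        ("xgb", XGBClassifier(random_state=0, eval_metric="logloss")),
    ],
    voting="soft",
)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
aucs = cross_val_score(voting, X, y, cv=cv, scoring="roc_auc")
print(aucs.mean(), aucs.std())   # per-fold AUCs summarized
```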
Interestingly, ML also automatically identified traditional variables, such as ejection fraction, eccentricity index, and relative apical sparing, which further validates its potential scalability and practicability. In addition, unexpected interactions between several weaker predictors would not be overlooked. ML will not replace traditional statistical analysis; rather, it provides a supplement and extension [24]. For rapidly growing data, ML explores non-linear patterns and automatically extracts important variables, thus simplifying feature selection, improving prediction, and facilitating the differentiation of diseases with similar phenotypes. Also, ML seamlessly incorporates new data to continually update models and improve performance over time. Beyond that, ML is efficient, as it runs complex mathematical algorithms, such as gradient boosting, in a few seconds and produces easy-to-understand results with low variability and high accuracy.

Table 4: The top 50 predictors ranked by feature importance of the voting model: 1 RadStrain3, 2 LonStrainEpi7, 3 CirStrainR4, 4 LonStrain7, 5 RadStrain2, 6 RadStrain7, 7 LonStrainEpi16, 8 CirStrain10, 9 BasalRotation1, 10 BasalRotation2, 11 BasalRotation3, 12 LonStrainV8, 13 LonStrainV13, 14 LonStrainEndo7, 15 LonStrainR7, 16 RadStrain8, 17 LonStrainD9, 18 RadStrainR12, 19 CirStrain4, 20 LonStrainEpi13, 21 LonStrainR10, 22 PapiRotation6, 23 LonStrainR1, 24 RadStrain4, 25 PapiRotation3, 26 ApicalRotationR4, 27 LonStrainV14, 28 LonStrainD10, 29 LonStrainEpi15, 30 LonStrainV7, 31 CirStrainR8, 32 PapiRotation2, 33 CirStrainR16, 34 GRSAPICAL, 35 CirStrain16, 36 CirStrainR13, 37 CirStrainR5, 38 RadStrain11, 39 CirStrain15, 40 LonStrainD13, 41 CirStrainR11, 42 GRSPAPI, 43 LonStrainD2, 44 LonStrainEndo18, 45 LonStrain13, 46 CirStrainR17, 47 LonStrain3, 48 LonStrainR3, 49 BasalRotation4, 50 BasalRotationR3.

Limitations of the study

There are some limitations to this study. First of all, the ML models were established on a small number of samples, and further validation needs to be conducted in a larger dataset. In addition, considering the imbalance of data among different areas, our study was single-center with certain specific population characteristics; further research needs to be trained and verified in multiple centers and regions to improve the generality of the models. Finally, our model was only evaluated on two-dimensional echocardiography with limited temporal and spatial coverage, and further studies could include more ultrasonic sections or implement the models with other imaging methods. As samples accumulate, deep learning may improve the prediction of the models.

Conclusions

CA has a poor prognosis and high mortality and is often misdiagnosed as HCM, leading to delayed diagnosis. If CA can be identified early and effective intervention provided in time, the outcome for patients can be improved. We proposed intelligent identification of CA from HCM based on ML using 2D-STE data. The results indicated that the ML models had great discriminative performance and could automatically integrate vast numbers of variables without making any preassumptions, so as to identify the most important predictors. In the era of big data, automated ML will help to identify patients with CA, so that timely and effective intervention can be carried out to improve the prognosis.

Fig. 3: The ROC curves of the different gradient boosting models: A XGBoost; B LightGBM; C voting model. Among the three ML algorithms, the discriminative ability of the voting model was similar to that of LightGBM (AUC 0.86 for both), while the AUC of XGBoost was slightly lower at 0.84.
4,831.4
2022-10-19T00:00:00.000
[ "Medicine", "Computer Science" ]
Reversibility of hAT-MSCs phenotypic and metabolic changes after exposure to and withdrawal from HCC-conditioned medium through regulation of the ROS/MAPK/HIF-1α signaling pathway

Background: Mesenchymal stem cells (MSCs) play an important role in tumor progression; concomitantly, MSCs also undergo profound changes in the tumor microenvironment (TME). These changes can directly impact the application and efficacy of MSC-based anti-tumor therapy. However, few studies have focused on the regulation of MSC fate in the TME, which will limit the progress of MSC-based anti-tumor therapy. Herein, we investigated the effects of conditioned medium from human hepatocellular carcinoma cells (HCC-CM) on the phenotype and glucose metabolism of human adipose tissue-derived MSCs (hAT-MSCs). Methods: Passage 2 (P2) to passage 3 (P3) hAT-MSCs were exposed to conditioned medium from Hep3B, Huh7 and HCCLM3 cells for 4–8 weeks in vitro. Then, immunofluorescence, CCK-8, EdU, and Transwell assays and flow cytometry were used to assess the alterations in cell phenotype in terms of cell morphology, secretory profiles, proliferation, migration, invasion, cell cycle, and apoptosis. In addition, glucose metabolism was evaluated with commercial kits. Next, the treated hAT-MSCs were subjected to withdrawal from HCC-CM for 2–4 weeks, and alterations in phenotype and glucose metabolism were reevaluated. Finally, the molecular mechanism was clarified by Western blotting. Results: The results revealed that after exposure to HCC-CM, hAT-MSCs developed a stellate-shaped morphology. In association with cytoskeleton remodeling, hAT-MSCs showed enhanced capacities for migration and invasion, while cell proliferation was inhibited through cell cycle regulation, with downregulation of cyclins and cyclin-dependent kinases, and through activation of the mitochondrial apoptosis pathway. In terms of glucose metabolism, our results showed mitochondrial dysfunction and elevated glycolysis in hAT-MSCs. However, interestingly, when the treated hAT-MSCs were subjected to withdrawal from HCC-CM, the alterations in phenotype and glucose metabolism could be reversed, but the secretory phenotype and tumor-promoting properties appeared to be permanent. Further studies showed that these changes in hAT-MSCs may be regulated by the ROS/MAPK/HIF-1α signaling pathway. Conclusion: Taken together, the effects of long-term HCC-CM treatment on phenotype and glucose metabolism in hAT-MSCs are modest and largely reversible after withdrawal, but HCC-CM endows hAT-MSCs with a permanent secretory phenotype and tumor-promoting properties. This is the first report on the reversal of phenotype and glucose metabolism in tumor-associated MSCs (TA-MSCs); it is anticipated that new insights into TA-MSCs will lead to the development of novel strategies for MSC-based anti-tumor therapy. Supplementary Information: The online version contains supplementary material available at 10.1186/s13287-020-02010-0.

Background

It is increasingly evident that the initiation and progression of tumors are not only determined by the genetic or epigenetic changes of tumor cells but also by the regulation of the TME [1]. Cancer-associated fibroblasts (CAFs), the activated phenotype of fibroblasts within tumors, are one of the most important components of the TME [2]. CAFs can be highly heterogeneous, with distinct expression patterns and multiple sources [3], and MSCs are one of the main sources [4].
MSCs have been reported to migrate to tumors, and in response to signals from growing tumors, MSCs continuously remodel the tumor niche, thereby profoundly affecting tumor growth and metastasis [5]. Meanwhile, under the education of tumor cells, MSCs evolve into TA-MSCs, acquire expression of α-smooth muscle actin (α-SMA) and Vimentin, and become stellate in shape [6]. In this process, the fate of MSCs also undergoes profound changes, including changes in cell phenotype, metabolic pattern and cell function. However, few studies have focused on the regulation of MSC fate in the TME, which is not conducive to fully understanding the TME. As a kind of non-hematopoietic stem cell, MSCs have the ability to self-renew and to differentiate into various cell types, including adipocytes, osteoblasts and chondrocytes. Due to the characteristics of 'tumor homing' and 'immune privilege', MSCs have recently been considered viable tools for anticancer approaches [7]. However, further preclinical studies have suggested that MSCs could undergo malignant transformation in the TME [8]. There have also been reports that the degree of engraftment is generally low and transient [9]. In addition, MSCs have a special temporal-spatial pattern in tumors: MSCs can be concentrated inside the tumor in a short time and will redistribute to the tumor boundary after a certain period of time [10,11]. Thus, the problems of safety, engraftment efficiency and redistribution also restrict the progress of MSC-based anti-tumor therapy. In addition, cellular energy derives mainly from glucose metabolism, and cellular fate correlates closely with energy status. Tumor cells are known for their characteristic alteration of glucose metabolism, a shift to aerobic glycolysis regardless of oxygen availability (the 'Warburg effect') [12]. Recently, some studies have found that CAFs can show a similar effect [13]. Further analysis showed that tumor cells induced oxidative stress in CAFs, which then led to mitochondrial dysfunction and acted as a 'metabolic' motor to drive aerobic glycolysis. As a consequence, CAFs provided nutrients such as lactate and pyruvate to stimulate mitochondrial biogenesis and oxidative metabolism in adjacent tumor cells (the "reverse Warburg effect") [14]. Nevertheless, almost all research has focused on CAFs, and few reports have examined MSCs in detail. Therefore, exploring the changes in glucose metabolism of MSCs is conducive to gaining a more complete understanding of cell fate regulation, which has profound practical value. In summary, we believe that clarifying the regulation of MSC fate in the TME is of great practical value for understanding the TME and solving the current dilemma of MSC-based anti-tumor therapy. Only when we figure out the cell fate regulation during the transformation of MSCs into TA-MSCs, especially the changes in phenotype and glucose metabolism, can we understand the meaning of the corresponding functional changes, which will enable targeted engineering of MSCs to make them more suitable for the role of drug-carrying tools. Herein, we investigated the effects of conditioned medium from human hepatocellular carcinoma cells on the phenotype and glucose metabolism of human adipose tissue-derived MSCs (hAT-MSCs), and the underlying molecular mechanism, to develop novel strategies for MSC-based anti-tumor therapy.

Identification of hAT-MSCs

The hAT-MSC cell line was a gift from Dr. Zhenhua Hu (The Fourth Affiliated Hospital, College of Medicine, Zhejiang University, China).
To further identify the hAT-MSCs, we conducted flow cytometric analysis and in vitro differentiation assays. To clarify the surface markers of MSCs, we utilized a Human Mesenchymal Stem Cell Multi-Color Flow Kit (R&D Systems, Minneapolis, MN, USA) to label CD45, CD90, CD105 and CD146. hAT-MSCs were detached with trypsin, resuspended in PBS, and then incubated with antibodies for 30 mins. After washing with PBS, flow cytometric analysis was performed. To assess the differentiation potential of hAT-MSCs towards osteoblasts, adipocytes and chondroblasts, we induced the hAT-MSCs with osteogenic, adipogenic and chondrogenic media (Cyagen, Guangzhou, China), respectively. Cells were stained after 3-4 weeks of incubation. The hAT-MSCs were stained with Alizarin Red S (Sigma-Aldrich Co., St Louis, MO, USA), Oil Red O solution (Sigma-Aldrich Co., St Louis, MO, USA) and Alcian blue (Sigma-Aldrich Co., St Louis, MO, USA). The hAT-MSCs from passage 2 (P2) to passage 3 (P3) were divided into 4 groups. Three of the groups were experimental groups that were cultured in Hep3B-conditioned medium (3B-CM), Huh7-conditioned medium (Huh7-CM) and HCCLM3-conditioned medium (LM3-CM) for 4-8 weeks and were marked as treated hAT-MSCs, with the medium replaced every 2 days. The other group was treated with normal medium as a control. For phenotypic and metabolic reversal experiments, the treated hAT-MSCs were returned to normal medium for another 2-4 weeks and were marked as R-3B-CM, R-Huh7-CM and R-LM3-CM, respectively. Untreated hAT-MSCs (2 × 10⁵), treated hAT-MSCs (2 × 10⁵) and reversed hAT-MSCs (2 × 10⁵) were seeded into a 10-cm dish and cultured for 3 days. Then the MSC-conditioned medium was collected and filtered through a 0.22-μm filter. The HCC cells were then divided into 4 groups. Three of the groups were experimental groups that were cultured in untreated hAT-MSC-conditioned medium (U-MSC-CM), treated hAT-MSC-conditioned medium (T-MSC-CM) and reversed hAT-MSC-conditioned medium (R-MSC-CM) for 3 days. The other group was treated with normal medium as a control.

Cell proliferation assays

hAT-MSCs (8 × 10³) in each group (untreated hAT-MSCs, treated hAT-MSCs and reversed hAT-MSCs) and HCC cells (3 × 10³) in each group (DMEM, U-MSC-CM, T-MSC-CM and R-MSC-CM) were seeded into 96-well plates, and cell proliferation was measured at different time points over 96 h using the Cell Counting Kit-8 (Dojindo Molecular Technologies Inc., Tokyo, Japan) according to the manufacturer's protocol. We also investigated the newly synthesized DNA of hAT-MSCs in each group using the EdU assay (RiboBio, Guangzhou, China). Cells (1 × 10⁴) were seeded into 48-well plates and exposed to 50 μM 5-ethynyl-2′-deoxyuridine for 5 h at 37°C. The remaining procedures were all performed according to the manufacturer's protocol.

Cell cycle analysis

Flow cytometry was applied for cell cycle analysis. Approximately 2 × 10⁵ hAT-MSCs in each group were harvested and fixed in 75% ethanol for 24 h at −20°C, and the cells were then stained with DNA staining solution (LiankeBio, Hangzhou, China) for 30 mins to detect cell cycle distribution.

Cell apoptosis assays

Approximately 5 × 10⁵ hAT-MSCs in each group were harvested and then analyzed using an Annexin V-FITC/PI Apoptosis Detection Kit (LiankeBio, Hangzhou, China) according to the manufacturer's protocol.

Transwell assays

The chemotaxis, migration and invasion assays were performed in Transwell chambers with 8 μm pore size (Corning, NY, USA).
To measure the migratory capacity of hAT-MSCs towards different HCC-conditioned media, untreated hAT-MSCs (5 × 10⁴) in 200 μL of serum-free medium were seeded in the upper chamber, and 800 μL of HCC-conditioned medium or DMEM with 10% FBS was placed in the lower chamber. The chambers were incubated at 37°C for 20 h. Subsequently, cells were stained with Crystal Violet Staining Solution (Beyotime, Shanghai, China). The number of cells that had migrated across the Transwell membrane was counted based on the number of stained nuclei using an inverted microscope (Leica, Malvern, PA, USA). For migration assays, hAT-MSCs in each group and HCC cells in each group (5 × 10⁴) in 200 μL of serum-free medium were seeded in the upper chamber, and 800 μL of DMEM with 10% FBS was placed in the lower chamber. The remaining procedures were the same as for the chemotaxis assay. For invasion assays, 50 μL of diluted BD Matrigel (1:8) was added to the upper chambers and incubated at 37°C until completely solid. Then, hAT-MSCs in each group were added to the upper chambers, and the incubation time was extended to 72 h. The remaining procedures were the same as for the chemotaxis assay.

Western blot analysis

Approximately 5 × 10⁵ hAT-MSCs in each group were harvested, and total proteins were then extracted from the cells by incubation in RIPA cell lysis buffer with 1% PMSF and phosphatase inhibitors (Servicebio, Wuhan, China) on ice for 30 mins. After centrifugation (14,000×g, 4°C, 15 mins), the supernatant was collected, and the total protein contents were measured using a BCA protein assay kit (Thermo Fisher Scientific, Waltham, MA, USA). Next, proteins were separated on 4%-12% SurePAGE Bis-Tris gels (Genscript, Nanjing, China) at a constant 120 V for 60 mins. Then, the proteins were transferred to PVDF membranes at a constant 350 mA for 60 mins. After the membranes were blocked using 5% BSA solution for 2 h, the membranes were incubated overnight at 4°C with primary antibodies, including those against β-actin (Affinity Biosciences) and p-ERK1/2 (1:1000, ab201015, Abcam). After washing 3 times with TBST, the membranes were incubated with secondary anti-mouse or anti-rabbit antibodies (1:3000, Servicebio, Wuhan, China) for 1 h at room temperature. Detection was carried out using an ECL kit (Servicebio, Wuhan, China).

Glucose uptake, lactate, pyruvate and ATP assays

The Glucose Uptake Colorimetric Assay Kit (BioVision, Milpitas, CA, USA), Lactate Colorimetric Assay Kit II (BioVision), Pyruvate Colorimetric Assay Kit (BioVision) and ATP Colorimetric Assay Kit (BioVision) were used to determine glucose uptake and the levels of lactate, pyruvate and ATP, respectively, according to the manufacturer's protocols.

Mitochondrial staining

To stain mitochondria, we used Mito-Tracker Green (100 nM, Beyotime, Shanghai, China) to mark mitochondrial mass and JC-1 (10 μg/ml, Beyotime, Shanghai, China) to mark mitochondrial membrane potential (MMP). hAT-MSCs in each group (2 × 10⁴) were seeded into 6-well plates. After 24 h, cells were incubated with Mito-Tracker or JC-1 staining solution for 30 mins at 37°C. Later, cells were washed in normal medium. Cells were observed using a fluorescence microscope.

Measurement of ROS

Intracellular ROS were measured using a Reactive Oxygen Species Assay Kit (Yeasen Biotech Co., Shanghai, China). Approximately 2 × 10⁵ hAT-MSCs in each group were harvested and incubated with 10 μM DCFH-DA solution for 30 min at 37°C. The remaining procedures were performed in accordance with the manufacturer's protocol.
Statistical analysis

Data are presented as the mean ± SD of three independent experiments. All statistical analyses were performed in GraphPad Prism version 8.0 software (GraphPad Software Inc., San Diego, CA, USA) using one-way or two-way ANOVA. For all statistical tests, p < 0.05 was considered statistically significant.

hAT-MSCs migrated to the TME and evolved into TA-MSCs

Before analyzing the effects of HCC-CM on hAT-MSC phenotype and glucose metabolism, we first needed to characterize the obtained cells. Flow cytometry indicated that the obtained cells were positive for the MSC surface markers CD90, CD105 and CD146 and negative for the hematopoietic marker CD45 (Fig. 1a). In vitro differentiation experiments indicated that the obtained cells could differentiate into adipocytes, osteocytes and chondrocytes (Fig. 1b). Thus, we determined that the obtained cells were indeed hAT-MSCs. The next experiments were predicated entirely on the tumor-homing properties of MSCs. To verify the chemotactic capacity of untreated hAT-MSCs towards the liver cancer environment, we designed in vitro chemotaxis assays. Untreated hAT-MSCs were allowed to migrate towards HCC-CM (Hep3B, Huh7 and HCCLM3) or DMEM as a control. A significant increase in the migratory capacity of hAT-MSCs towards HCC-CM was found when compared to DMEM (Fig. 1c, d). The results suggested that hAT-MSCs were actively recruited to HCCs. Once MSCs have migrated to the TME, tumor cells can 'educate' them to evolve into TA-MSCs via paracrine interactions. As a consequence, MSCs gain expression of α-SMA and Vimentin and become stellate in shape. Therefore, we decided to study the changes in hAT-MSC properties after HCC-CM stimulation (treated hAT-MSCs). After 4-8 weeks of exposure to HCC-CM, Western blot results showed that the expression of α-SMA and Vimentin in hAT-MSCs increased significantly (Fig. 1e). To further investigate the effects of HCC-CM on morphological and cytoskeletal changes in hAT-MSCs, immunofluorescence assays were performed. The results showed that hAT-MSCs exhibited strong α-SMA expression following treatment with HCC-CM, in keeping with their cruciform or stellate morphology (Fig. 1f). All of the above results suggested that hAT-MSCs can be truly converted into TA-MSCs after induction by HCC-CM.
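As an aside on the statistical workflow summarized at the start of this section (mean ± SD of three independent experiments, compared by one-way ANOVA), a scriptable equivalent is sketched below; the replicate values are invented placeholders, not data from the study.

```python
# Minimal scriptable analogue of the one-way ANOVA described above; the
# replicate values below are placeholders, not data from the study.
import numpy as np
from scipy import stats

control = np.array([1.00, 0.96, 1.04])   # e.g., a normalized readout, n = 3
cm_3b   = np.array([1.42, 1.51, 1.38])   # hypothetical 3B-CM group
cm_huh7 = np.array([1.35, 1.29, 1.44])   # hypothetical Huh7-CM group

f_stat, p_value = stats.f_oneway(control, cm_3b, cm_huh7)
print(f_stat, p_value)   # reject the null of equal group means if p < 0.05
```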
Regarding the cell cycle, we observed that after exposure to HCC-CM, the proportion of cells in the G0/G1 phase was significantly increased while that in the G2/M phase was significantly reduced, suggesting that HCC-CM induced cell cycle arrest (Fig. 2g). Finally, we determined the expression levels of apoptosis- and cell cycle-related proteins by Western blot. The results showed that the mitochondrial apoptosis pathway was activated in hAT-MSCs after exposure to HCC-CM: there was an increase in the expression levels of the apoptotic proteins (Bax and cleaved caspase-3) and a decrease in the anti-apoptotic protein (Bcl-2) (Fig. 2h). Regarding cycle-related regulatory proteins, cyclins and cyclin-dependent kinases were generally downregulated, especially the regulatory proteins related to the G2/M phase (Cyclin A2, Cyclin B1, and CDK1) (Fig. 2i). These data demonstrate that after exposure to HCC-CM, the proliferation of hAT-MSCs was inhibited through cell cycle regulation and activation of the mitochondrial apoptosis pathway, while the migratory and invasion capacities were significantly enhanced.

Glucose metabolic changes of hAT-MSCs after exposure to HCC-CM

As described above, the mitochondrial apoptosis pathway of hAT-MSCs was activated after exposure to HCC-CM; thus, we wondered whether mitochondrial function could be influenced by HCC-CM. Maintenance of the mitochondrial transmembrane potential is essential for normal mitochondrial function. Therefore, we used JC-1 and Mito-Tracker to stain mitochondria. The accumulation of JC-1 within the mitochondria is dependent on the electrochemical gradient, so it can be used to evaluate the MMP. However, Mito-Tracker is independent of membrane potential; thus, mitochondrial mass can be monitored with it. The staining results showed that the mitochondrial mass of hAT-MSCs was essentially unaffected after exposure to HCC-CM, while there was a marked decline in MMP (Fig. 3a). These results suggest that HCC-CM induced mitochondrial dysfunction. Mitochondria are the cellular bioenergetic centres, and mitochondrial dysfunction leads to a metabolic switch from oxidative phosphorylation to glycolysis. Therefore, we tested whether HCC-CM modulated the glycolytic phenotype in hAT-MSCs. The results showed significant increases in glucose uptake, pyruvate level, lactate production and ATP level in hAT-MSCs after exposure to HCC-CM (Fig. 3b-e). Moreover, after exposure to HCC-CM, hAT-MSCs showed markedly enhanced GLUT1, GPI, GAPDH, PGK1, LDHA and LDHB, but not HK2 and PKM2, at the mRNA and protein levels (Fig. 3f, g). These data indicate that glycolysis is enhanced in hAT-MSCs after exposure to HCC-CM due to mitochondrial dysfunction.

Reversal of phenotypic changes in hAT-MSCs after withdrawal of HCC-CM treatment

The experiments above showed that the fate of hAT-MSCs was seriously threatened after exposure to HCC-CM, while the cells gained migratory and invasion capacities. Thus, we wondered whether treated hAT-MSCs would escape from the TME and reverse their fate. To test this conjecture, we cultured treated hAT-MSCs in normal medium for another 2-4 weeks and evaluated the reversibility of the phenotype and glucose metabolism. Before doing so, we needed to determine whether the treated hAT-MSCs still retained the characteristics of TA-MSCs after withdrawal of HCC-CM treatment. Western blot results indicated that hAT-MSCs maintained high-level expression of α-SMA even after withdrawal of HCC-CM treatment (Fig. 4a).
In addition, the immunofluorescence results suggested that hAT-MSCs still maintained their cruciform or stellate shape after withdrawal of HCC-CM treatment (Fig. 4b). These findings indicate that HCC-CM can induce permanent alterations during the conversion of hAT-MSCs into TA-MSCs. Once again, we evaluated the proliferation of treated hAT-MSCs after withdrawal of HCC-CM by CCK-8 assays and EdU staining. The results showed no significant difference in proliferation between the reversed hAT-MSCs and untreated hAT-MSCs (Fig. 4c-e). These findings suggest that the HCC-CM-induced inhibition of proliferation in hAT-MSCs is reversed after withdrawal of HCC-CM treatment. We then used flow cytometry to analyze apoptosis and the cell cycle in reversed hAT-MSCs. The results showed that the apoptosis rate (Annexin V+) and cell cycle distribution were not significantly different between the groups (Fig. 4f, g). These data demonstrate that both the activated apoptosis and the cell cycle arrest returned to control levels after withdrawal of HCC-CM treatment. Finally, we determined the expression levels of cell cycle- and apoptosis-related proteins by Western blot. The results revealed a slight increase in the expression of the anti-apoptotic protein (Bcl-2) and no difference in the apoptotic proteins (Bax and cleaved caspase-3) (Fig. 4h). Regarding cell cycle-related regulatory proteins, all cyclins and cyclin-dependent kinases were restored, consistent with the untreated hAT-MSCs (Fig. 4i). These data indicate that the HCC-CM-induced inhibition of proliferation, activation of the mitochondrial apoptosis pathway and cell cycle arrest in hAT-MSCs can be reversed and restored to normal by withdrawal of HCC-CM treatment.

Reversal of metabolic changes in hAT-MSCs after withdrawal of HCC-CM treatment

As found above, the activation of the mitochondrial apoptosis pathway in hAT-MSCs was reversed after withdrawal of HCC-CM treatment; thus, we wondered whether mitochondrial function could also be reversed. Therefore, we used JC-1 and Mito-Tracker to stain mitochondria again. The staining results showed that the mitochondrial mass of hAT-MSCs was essentially unaffected after withdrawal of HCC-CM treatment, while there was a marked enhancement in MMP (Fig. 5a). These results suggest that the HCC-CM-induced mitochondrial dysfunction was reversed. Next, we tested the glycolytic phenotype in hAT-MSCs after withdrawal of HCC-CM treatment. The results showed a significant reduction in lactate production, but there were no significant differences in glucose uptake, pyruvate level and ATP level of hAT-MSCs after withdrawal of HCC-CM treatment (Fig. 5b-e). Moreover, after withdrawal of HCC-CM treatment, hAT-MSCs showed markedly reduced GLUT1, GPI and PKM2, but not HK2, GAPDH, PGK1 and LDHA, at the mRNA level (Fig. 5f). At the protein level, hAT-MSCs showed reduced levels of HK2, PGK1, PKM2 and LDHB (Fig. 5g). These data indicate that the enhanced glycolysis in hAT-MSCs was reversed after withdrawal of HCC-CM treatment.

HCC-CM regulated the activation of the ROS/MAPK/HIF-1α signaling pathway

Given the central roles that mitochondria play in the phenotypic and metabolic alterations of hAT-MSCs, and since mitochondria are the major source of ROS and ROS can determine cell fate by regulating multiple signaling pathways, we used DCFH-DA probes to measure intracellular ROS. Our results showed that ROS levels significantly increased in hAT-MSCs after exposure to HCC-CM (Fig. 6a).
However, after withdrawal of HCC-CM treatment, ROS levels were restored to normal control levels (Fig. 6b). To further investigate the possible roles of ROS-associated signaling pathways, we assessed the levels of HIF-1α, MAPK (ERK1/2, JNK and p38) and AKT. We found that exposure to HCC-CM resulted in activation of the HIF-1α and MAPK pathways, but not the AKT pathway. The phosphorylation levels of JNK and p38 were increased, while that of ERK1/2 was decreased (Fig. 6c). However, after withdrawal of HCC-CM treatment, the levels of HIF-1α and phosphorylated ERK, p38, and JNK were restored to normal control levels (Fig. 6d). Next, to determine whether ROS are the upstream signal molecules of the MAPK and HIF-1α pathways, we used the ROS scavenger NAC to block ROS generation. When cells were pre-treated with 5 mM NAC for 24 h, ROS production was severely suppressed (Fig. 6e). Meanwhile, the levels of HIF-1α and phosphorylated JNK and p38 were clearly reduced, and the phosphorylation level of ERK1/2 was restored (Fig. 6f). In addition, to elucidate the relationship between the MAPK signaling pathway and cell phenotype, the treated hAT-MSCs were pre-treated with SB203580 (20 μM, 2 h), a p38 inhibitor. We found that SB203580 could significantly rescue the loss of cell viability, including cell proliferation, cell apoptosis and MMP (Figure S1a-d). These data suggest that the ROS/MAPK/HIF-1α signaling pathway plays an important role in the HCC-CM-induced phenotypic and metabolic alterations of hAT-MSCs.

HCC-CM endows hAT-MSCs with a permanent secretory phenotype and tumor-promoting properties

The fate of hAT-MSCs was seriously threatened after exposure to HCC-CM; however, it could be reversed by withdrawal of HCC-CM treatment. We wondered whether these observations have functional relevance, so we investigated the effect of HCC-CM on the secretory phenotype of hAT-MSCs; the mRNA expression of a variety of cytokines, growth factors and chemokines was analyzed. The expression levels of many factors, especially IL-1β, TGF-β and CCL-7, were much higher after exposure to HCC-CM than in untreated hAT-MSCs (Fig. 7a), and the factor-secretion characteristics of TA-MSCs appear to be permanent, since these cells kept producing these factors even after withdrawal of HCC-CM treatment (Fig. 7b). We next determined whether the altered mRNA expression in TA-MSCs has functional relevance; CCK-8 and Transwell assays were used to investigate the effect of MSC-CM on the proliferation and migration of HCC cells. No significant differences in proliferation and migration were observed when HCC cells were incubated with conditioned medium from untreated hAT-MSCs (U-MSC-CM), compared to the normal medium (DMEM) (Fig. 7c-h). In contrast, conditioned medium from treated hAT-MSCs (T-MSC-CM) and reversed hAT-MSCs (R-MSC-CM) significantly enhanced proliferation compared to DMEM and U-MSC-CM at the 96-h point (Fig. 7c-e). Transwell migration assays demonstrated that T-MSC-CM and R-MSC-CM significantly increased the migration potency of HCC cells (Fig. 7f-h).

Fig. 6: Effect of HCC-CM on the ROS/MAPK/HIF-1α signaling pathway.
a Representative histogram (left) and quantification data (right) of ROS levels in hAT-MSCs after exposure to HCC-CM or normal medium; b representative histogram (left) and quantification data (right) of ROS levels in hAT-MSCs after withdrawal of HCC-CM treatment; c HIF-1α, phosphorylated AKT, total AKT, phosphorylated JNK, total JNK, phosphorylated p38, total p38, phosphorylated ERK and total ERK protein expression in hAT-MSCs after exposure to HCC-CM or normal medium; d the same panel of proteins after withdrawal of HCC-CM treatment; e ROS levels in hAT-MSCs after exposure to HCC-CM, normal medium, or pretreatment with NAC; f the same panel of proteins after exposure to HCC-CM, normal medium, or pretreatment with NAC. *p < 0.05, **p < 0.01, vs NC, n = 2.

Taken together, these results revealed that HCC-CM endows hAT-MSCs with permanent tumor-promoting properties.

Discussion

In this study, we have identified the regulation and reversal of hAT-MSC fate in an HCC-mimicking microenvironment. Our data indicate that hAT-MSCs exposed to HCC-CM for 4-8 weeks can evolve into TA-MSCs. Correspondingly, the phenotype and glucose metabolism of hAT-MSCs changed dramatically. Moreover, for the first time, our results illustrated that after withdrawal of HCC-CM treatment for 2-4 weeks, the alterations in phenotype and glucose metabolism could be reversed and restored to normal, but the secretory phenotype and tumor-promoting properties appear to be permanent. Importantly, we also revealed underlying mechanisms involving the promotion of ROS release and activation of the HIF-1α and MAPK signaling pathways. Our experiments verified that mitochondria play core roles in the phenotypic and metabolic alterations of hAT-MSCs, as represented here by activation of the mitochondrial apoptosis pathway and reduction of MMP, respectively. Mitochondria are cellular bioenergetic centres and major sources of ROS. However, under conditions of environmental stress, the overaccumulation of ROS can cause excessive oxidative stress and oxidant injury in mitochondria [15]. The resulting increase in mitochondrial damage can then initiate the apoptosis cascade and result in cell death. A disturbance in MMP is a hallmark of mitochondrial dysfunction induced by oxidative stress. ROS can cause depolarization of the MMP and activate the mitochondrial apoptosis pathway [16]. ROS can promote the phosphorylation and activation of JNK [17], which activates proapoptotic Bax, which can further inhibit the function of anti-apoptotic Bcl-2. Inactivation of Bcl-2 can open channels on the outer mitochondrial membrane, which allows leakage of cytochrome c into the cytosol, eventually resulting in a caspase cascade that induces cell apoptosis [18]. The present work showed that ROS levels underwent significant increases in hAT-MSCs with HCC-CM treatment, which is consistent with our data concerning the decline in MMP and the activation of the JNK/mitochondrial apoptosis pathway. In addition, after withdrawal of HCC-CM treatment to reduce the oxidative stress, we found a significant reduction in intracellular ROS, which was accompanied by the reversal of the phenotypic changes.
Therefore, we have reason to speculate that the overaccumulation of intracellular ROS induced by environmental stress plays an important role in the phenotypic changes, especially in cell apoptosis. In addition to the phenotypic changes of hAT-MSCs, the present work also addressed alterations in glucose metabolism. Our results showed that the decreased MMP was accompanied by enhanced glycolytic activity under HCC-CM treatment. Accumulating evidence indicates that mitochondrial function depends on the proton motive force, which is determined by the MMP. On the one hand, the overaccumulation of ROS can induce a decline in MMP and thereby disrupt electron transport and uncouple oxidative phosphorylation; to maintain intracellular energy balance, cells must enhance glycolysis to compensate for the weakened oxidative phosphorylation. On the other hand, ROS can trigger activation of HIF-1α, initiating hypoxia signaling and enhancing glycolysis [19]. These pathways are in line with the results observed in our study. To further explore the major driving force behind the switch in glucose metabolism, we studied the reversibility of mitochondrial function. The results showed that after withdrawal of HCC-CM treatment, mitochondrial dysfunction was reversed and glycolytic activity returned to normal levels. Based on this evidence, we speculate that reprogramming of glucose metabolism is an important protective mechanism in the regulation of cell fate. Under environmental stress, to reduce the production of ROS, cells actively switch their glucose metabolism from oxidative phosphorylation to aerobic glycolysis, thereby improving their survival. Once relieved of survival stress, cells can restore mitochondrial oxidative phosphorylation and the efficient use of glucose to produce ATP. To ask whether the reversibility of hAT-MSC phenotypic and metabolic changes has functional relevance, we further explored the secretory phenotype and tumor-promoting properties of treated and reversed hAT-MSCs. To our surprise, HCC-CM appears to endow hAT-MSCs with a permanent secretory phenotype and tumor-promoting properties, since reversed hAT-MSCs exhibited a cytokine/chemokine expression pattern and tumor-promoting capability similar to those of treated hAT-MSCs. This indicates that the TME can induce permanent functional alterations during the conversion of MSCs into TA-MSCs, even when the phenotypes are reversed. Combined with our present findings that hAT-MSCs underwent cytoskeletal remodeling and a significant increase in migratory and invasive capacity under HCC-CM treatment, and with the redistribution phenomenon of MSCs mentioned above, these lines of evidence provide new ideas for studying TA-MSCs. We speculate that TA-MSCs are likely to play a "pioneer" role in tumor expansion and metastasis. Trapped in a stressful niche, MSCs have to remodel their cytoskeleton and continually invade the border between the tumor and normal tissue, thereby improving their survival and changing their fate. Meanwhile, these cells can keep functioning as TA-MSCs and constantly 'remodel' the tumor niche to further drive tumor progression, including the conversion of naïve MSCs to TA-MSCs and the recruitment of macrophages to tumor sites [20, 21]. This will be our next research direction and focus. Our study found that HCC-CM significantly inhibited hAT-MSC proliferation, which contradicts previous studies.
Numerous studies have demonstrated that MSCs obtained from tumor tissue exhibit higher growth capacity and proliferative activity [22]. Co-culture with cancer cells or induction by conditioned medium in vitro has also yielded similar findings [8]. We speculate that one reason for this inconsistency is the different induction times used in vitro. Whatever the induction method, the treatment time in past studies was typically no longer than 2 weeks, which we consider too short to simulate the long-term changes cells undergo in the body. We therefore extended HCC-CM treatment to 4-8 weeks. The treatment lengths were chosen with time costs in mind: the treatment must be long enough, while the effects of MSC aging on the cell phenotype must be minimized. hAT-MSC proliferation is exceptionally slow, with each passage taking more than a week; a 4-week treatment therefore does not excessively increase the passage number or cause cell senescence. At the same time, the phenotypic and metabolic changes of hAT-MSCs were very stable after 4 weeks of treatment; we analyzed them at different time points, and the results were consistent. In addition, using the same treatment time in each independent experiment further minimized this variable. However, since we did not dynamically monitor changes in hAT-MSC proliferation, we may have missed a possible enhancement of proliferation in the early stage of induction, or the discrepancy could stem from neglecting the reversibility of the phenotype. Choosing MSCs derived from tumor tissue as the research object can solve the problem of insufficient induction time; however, because such cells are expanded in vitro in normal medium, reversal of the cell phenotype is very likely to distort the results. In our study, although there was no significant increase in the proliferation capacity of hAT-MSCs after withdrawal of HCC-CM treatment, the values did trend upwards. Overall, the current study confirms that HCC-CM can regulate the phenotype and glucose metabolism of hAT-MSCs through the ROS/MAPK/HIF-1α signaling pathway. Our data further suggest that the accumulation of ROS caused by environmental stress leads to activation of the MAPK/HIF-1α signaling pathway, which is the key to the changes in hAT-MSCs. If antioxidants or genetic engineering were used to improve the antioxidant capacity of MSCs, they might survive better in the tumor environment and become better tools for drug delivery. Moreover, we demonstrated the reversibility of the phenotype and glucose metabolism of hAT-MSCs, which is critically important for generating novel ideas for TA-MSC research. However, our study was not designed to identify which components of HCC-CM are responsible; we therefore could not selectively and precisely target a substance to block the phenotypic and metabolic changes of hAT-MSCs. Further experiments are clearly necessary.

Conclusions

In conclusion, the effects of long-term HCC-CM treatment on the phenotype and glucose metabolism of hAT-MSCs are largely reversible after withdrawal, but HCC-CM endows hAT-MSCs with a permanent secretory phenotype and tumor-promoting properties. These alterations are mediated by the ROS/MAPK/HIF-1α signaling pathway.
This is the first report of the reversal of phenotype and glucose metabolism in TA-MSCs, and it may provide new insights into TA-MSCs for developing potential strategies for MSC-based anti-tumor therapy.
ROSINA ion zoo at Comet 67P

The Rosetta spacecraft escorted Comet 67P/Churyumov-Gerasimenko for 2 years along its journey through the Solar System between 3.8 and 1.24 au. Thanks to the high-resolution mass spectrometer on board Rosetta, the detailed ion composition within a coma has been accurately assessed in situ for the very first time. Previous cometary missions, such as Giotto, did not have the instrumental capabilities to identify the exact nature of the plasma in a coma because the mass resolution of the spectrometers on board was too low to separate ion species with similar masses. In contrast, the Double Focusing Mass Spectrometer (DFMS), part of the Rosetta Orbiter Spectrometer for Ion and Neutral Analysis on board Rosetta (ROSINA), with its high mass resolution mode, outperformed all of them, revealing the diversity of cometary ions. We calibrated and analysed the set of spectra acquired by DFMS in ion mode from October 2014 to April 2016. In particular, we focused on the range 13-39 u·q⁻¹. The high mass resolution of DFMS allows for accurate identification of ions with quasi-similar masses, separating ¹³C⁺ from CH⁺, for instance. We confirm the in situ presence of cations predicted at comets, such as CHm⁺ (m = 1-4), HnO⁺ (n = 1-3), O⁺, Na⁺, and several ionised and protonated molecules. Prior to Rosetta, only a fraction of them had been confirmed from Earth-based observations. In addition, we report for the first time the unambiguous presence of a molecular dication in the gas envelope of a Solar System body, namely CO2⁺⁺.

Introduction

Relatively small, with nucleus sizes of a few kilometres to a few tens of kilometres, comets are only detectable once they are close enough to the Sun and display a bright tail. Compared to other planetary bodies and their atmospheres, the gas envelope of a comet, the coma, behaves very differently. The coma results from the sublimation of ices near the nucleus' surface; the gas is then accelerated to several hundred m·s⁻¹, continuously replenishing the coma. Mainly made of water, the coma contains a diversity of neutral species, such as CO2 and CO (Hässig et al. 2015), and many others (e.g. Le Roy et al. 2015) that have been detected in situ at 1P/Halley (hereinafter referred to as 1P) and 67P/Churyumov-Gerasimenko (hereinafter referred to as 67P, Churyumov & Gerasimenko 1972). Extreme ultraviolet (EUV) solar radiation penetrates and ionises the neutral gas envelope, giving birth to the cometary ionosphere. In addition to EUV, another source of ionisation is energetic electrons (Cravens et al. 1987). Depending on the local neutral number density, newborn cometary ions may undergo collisions with neutrals, yielding the production of cations which cannot result from direct ionisation of the neutrals. The diversity of ions is therefore richer than that of neutrals. Cometary ions may be observed remotely at ultraviolet and visible wavelengths. Emissions at these wavelengths arise mainly from the resonant fluorescence of sunlight. Such emissions from cometary molecular ions were first observed at Comet C/1907 L2 (Daniel) (Deslandres & Bernard 1907; Evershed 1907). Although the emitting species was unknown at the time of the detection (Larsson et al. 2012), it was later identified as CO⁺. The discovery of an ion tail that is always oriented anti-sunward led to the discovery of the solar wind (Biermann 1951; Parker 1958).
Several cometary ions have since populated the list: N2⁺ and CH⁺ (Swings 1942), CO2⁺ and HO⁺ (Swings & Page 1950; Swings & Haser 1956), Ca⁺ (Preston 1967), H2O⁺ (Herzberg & Lew 1974), CN⁺ (Lillie 1976), and H2S⁺ (Cosmovici & Ortolani 1984). It is important to note that some ions are detected in cometary environments through observations in EUV and X-rays (Lisse et al. 1996). Nevertheless, they are not cometary as we define them here: the emission originates from the de-excitation of multiply-charged ions (e.g. O⁶⁺), produced after a charge exchange between the high charge state solar wind ions and the cometary neutrals. The term 'unambiguous detection', or similar formulations, should be taken with great care for Giotto data at 1P, since the mass resolution of its instruments was about ∆m ∼ 1 u. The combinations of the primary building blocks, C, H, O, and N atoms, into more complex molecules are limited at low masses (typically below 25 u, Mitchell et al. 1992). At some specific values of u·q⁻¹, there exists only one combination: C⁺ (12 u·q⁻¹), CH⁺ (13 u·q⁻¹), H3O⁺ (19 u·q⁻¹), C2⁺ (24 u·q⁻¹), C2H⁺ (25 u·q⁻¹), if one disregards isotopes and isotopologues. There is not even a candidate between 20 u·q⁻¹ and 23 u·q⁻¹. At other u·q⁻¹ (in particular 18 u·q⁻¹, which corresponds to H2O⁺ and NH4⁺), photo-chemical models are needed to infer the relative contribution of each ion or, conversely, to constrain the neutral composition (Haider & Bhardwaj 2005). At comet 67P, the Rosetta orbiter carried two instruments which performed a true mass analysis of the ambient ions: the Ion Composition Analyzer (ICA, Nilsson et al. 2007), part of the Rosetta Plasma Consortium (RPC, Carr et al. 2007), and the Double Focusing Mass Spectrometer (DFMS), part of the Rosetta Orbiter Spectrometer for Ion and Neutral Analysis (ROSINA, Balsiger et al. 2007). Although we may compare RPC-ICA with its homologue RPA-PICCA, the former suffers from limitations in probing cold cometary ions. As Rosetta was moving slowly with respect to the ambient plasma, ions were not collimated along the spacecraft velocity, and RPC-ICA has a wide field of view. In addition, its minimum energy acceptance is 4-5 eV, such that it only observed ion species after they had been energised, either as pick-up ions or by acceleration through the spacecraft potential prior to entering RPC-ICA. Nevertheless, RPC-ICA was perfectly designed for probing the energetic solar wind ions, such as H⁺, He⁺⁺, and He⁺, unlike ROSINA-DFMS. ROSINA-DFMS (described in Section 2) has the ability to probe neutrals as well as ions with two different mass resolutions: either m/∆m ≈ 500 in the 'Low Resolution' (LR) mode or m/∆m > 3000 in the 'High Resolution' (HR) mode. DFMS is the most powerful spectrometer in terms of mass resolution ever flown on board a spacecraft so far. Previous analyses of DFMS ion spectra in high resolution revealed the unambiguous detection of H2O⁺, NH4⁺, and H3O⁺ (Fuselier et al. 2016; Beth et al. 2016). Low resolution spectra have also been analysed and highlighted the presence of other species, either at large heliocentric distances (Fuselier et al. 2015) or near perihelion (Heritier et al. 2017), with the support of photo-chemical modelling. In this paper, we present in situ detections of cometary ions at 67P over the range 13-39 u·q⁻¹ in high resolution and 13-141 u·q⁻¹ in low resolution.
In HR, DFMS pinpointed the mass-per-charge ratio of impinging cometary ions with such high accuracy that their composition and identity can be ascertained without any ambiguity. The DFMS spectrometer and data processing are presented in Section 2, followed by a review of the mass spectra acquired during the period Oct 2014-Apr 2016 in Section 3. Section 4 highlights the main results, including the different ion family behaviours (4.1), the protonated molecules (4.2), water isotopologues (4.3), and dications (4.4). Discussion and conclusions are presented in Section 5.

The spacecraft potential of Rosetta was very negative during most of the escort phase (Odelstad et al. 2017) and the grid was permanently set to a small negative potential of -5 V. Once inside the instrument, ions are accelerated significantly by a large negative potential, so that their energy in the ion optics is much higher than their energy at the entrance of the instrument: they undergo a first deflection in the electrostatic energy analyser, which selects the ion energy before they exit through either the LR or the HR energy slit, the former being 6.5 times wider than the latter, into the magnetic analyser, where they are deflected according to their mass and charge. Exiting the magnetic analyser, ions impinge on the detector, which consists of a Micro Channel Plate (MCP) followed by a Linear Electron Detector Array (LEDA). Since the magnetic field intensity in the magnet varies with temperature (see De Keyser et al. 2019), the impact position of a given ion on the detector depends on the temperature as well. The LEDA is split into two identical rows (hereafter referred to as channel A and channel B) of 512 pixels, each 25 µm wide and 8 mm long perpendicular to the mean axis of the row. When an ion hits the MCP, a cascade of electrons is produced, and the total amount of negative charge collected by the LEDA, known as the MCP gain, depends on the voltage applied to the MCP. The gain is not uniform over the entire MCP area and, for each pixel, one can define a 'pixel gain', which modulates the average MCP gain and determines the actual number of electrons collected by the corresponding LEDA pixel. The pixel gains varied during the Rosetta escort phase and were regularly determined through dedicated in-flight calibrations. Pixel gains degraded during the mission (Schroeder et al. 2019), especially for pixels located close to the centre of each row (from pixel 200 to pixel 400), where H2O⁺ ions strike in both the neutral and ion modes of DFMS. To partially compensate for this degradation and the loss of sensitivity, on the 27th of January 2016 the post-acceleration was modified in order to move the central pixel p0, such that the position on the detector of the selected mass of each spectrum was moved onto pixels with a less degraded gain. Pixels at the very edge of the LEDA rows have a poor gain as well, but they are not included in the analysis. DFMS has two basic modes of operation. In the 'neutral' mode, neutral species are ionised and fragmented through electron impact in the ion source, thanks to a filament emitting electrons at ∼45 eV, before being accelerated into the ion optics. In the 'ion' mode, the filament is not powered and ions are directly admitted into the ion optics. The ion and neutral modes are not operated simultaneously but, for both of them, the total integration time for each individual spectrum is 19.8 s, made of 3000 exposures of 6.6 ms each.
For both modes, DFMS may operate in Low or High mass-per-charge Resolution (hereafter referred to as LR and HR, respectively). The HR mode, for which m/∆m > 3000 at the 1% peak height level at 28 u·q⁻¹ (Balsiger et al. 2007), allows the separation of ions with very close mass-per-charge ratios (e.g. ¹³C⁺ and CH⁺, H2O⁺ and NH4⁺, CO⁺ and N2⁺), which is not possible in LR mode, for which m/∆m ≈ 500. However, the sensitivity at a given gain step is significantly higher in LR than in HR; the LR mode was therefore of particular interest during periods of low outgassing, when fewer ion species can be detected due to limited ion-neutral chemistry (e.g. Fuselier et al. 2015). By comparison, the HR mode was of particular interest during periods of high outgassing activity, such as near perihelion, when ion-neutral chemistry takes place and many new species are present and need to be separated (e.g. Beth et al. 2016). A typical sequence of acquisition is as follows. Firstly, the first commanded (instructed to the instrument when operating) mass-per-charge ratio is 18 u·q⁻¹, both in LR and in HR. Secondly, the second commanded mass-per-charge ratio is the lowest one the instrument can perform: 13.65 u·q⁻¹ in LR, 13 u·q⁻¹ in HR. Thirdly, the commanded mass-per-charge ratio is then incremented, exponentially in LR and linearly in HR; consistently with the commanded values quoted below, the i-th commanded ratio m0(i) (i ≥ 2) follows m0(i+1) ≈ 1.1 m0(i) in LR and m0(i+1) = m0(i) + 1 u·q⁻¹ in HR. Fourthly, the penultimate commanded mass-per-charge ratio is 134.4 u·q⁻¹ (i = 26) in LR and 100 u·q⁻¹ (i = 89) or 50 u·q⁻¹ (i = 39) in HR (100 was used as an upper limit during the first half of the mission but, as nothing was detected above 50 u·q⁻¹, this limit was lowered in July 2015). Finally, the last commanded mass-per-charge ratio is 18 u·q⁻¹. The three measurements at 18 u·q⁻¹ during a sequence helped in monitoring the variability of the ambient plasma conditions and/or of the effective DFMS geometrical factor in ion mode, which depended on the spacecraft potential. A full sequence lasts between 10 and 20 minutes, depending on the resolution and the number of commanded mass-per-charge ratios. The LR and HR modes differ in terms of u·q⁻¹ coverage, since the mass-per-charge coverage for a given commanded ratio m0 is roughly 0.1 m0 in LR and 0.016 m0 in HR (see Eq. 1 below). Therefore, successive LR spectra overlap and cover the full range from 13 to 141 u·q⁻¹. Successive HR spectra may overlap only at high masses, from 64 u·q⁻¹ onwards. Finally, fewer spectra are required in LR to cover the same mass-per-charge range, because several u·q⁻¹ may be covered in a given spectrum. However, in the latter case, the peaks fall at different locations on the detector, while, in HR, peaks fall close to the centre of the detector.
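The increment laws themselves are garbled in the transcription above, but they can be inferred from the commanded values quoted in the text: a multiplicative step of ∼1.1 reproduces the LR endpoints (13.65 u·q⁻¹ at i = 2, 134.4 u·q⁻¹ at i = 26), while a 1 u·q⁻¹ additive step reproduces the HR ones (13 u·q⁻¹ at i = 2, 100 u·q⁻¹ at i = 89). A minimal Python sketch of the resulting sequences; the 10% LR factor is our inference, not a documented instrument setting:

```python
# Reconstruct the commanded mass-per-charge sequences from the endpoints
# quoted in the text. The 1.1 LR factor is inferred, not documented.
lr = [13.65]
while lr[-1] < 134.4 * 0.999:     # stop once the quoted LR endpoint is reached
    lr.append(lr[-1] * 1.1)       # exponential increment in LR
hr = [13 + k for k in range(88)]  # linear 1 u/q increment in HR (i = 2..89)

print(len(lr), round(lr[-1], 1))  # 25 commanded values, ending near 134.4
print(hr[0], hr[-1])              # 13 ... 100
```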
Thanks to the high resolution of DFMS, the ion species presented in this paper were identified through a detailed and accurate data analysis, without the need to rely on photo-chemical models. The models presented in Section 4 aim only at understanding the variability, throughout the escort phase, of those cations that were confirmed.

Data analysis

The HR mode requires the utmost care in its mass calibration, that is, determining the exact relation between the location of a pixel p on the detector and the associated mass m(p). This relation is given by (Le Roy et al. 2015):

m(p) = m0 exp[ C (p − p0,m0) / (D z_m0) ],   (1)

where C = 25 µm is the centre-to-centre distance between adjacent pixels, D = 127000 µm the dispersion factor, p0,m0 the location of the commanded m0·q⁻¹ on the detector, and z_m0 the zoom factor (1 for LR). Eq. 1 is linearisable because the argument inside the exponential is ≪ 1. As indicated by their subscripts in Eq. 1, both p0,m0 and z_m0 depend on the commanded mass-per-charge m0 and, as aforementioned, on the magnet temperature, since the exit location of a given u·q⁻¹, hence the pixel on the detector, depends on the magnetic field intensity. While the variations of p0,m0 and z_m0 between two adjacent u·q⁻¹ (e.g. 18 u·q⁻¹ and 19 u·q⁻¹) are very small, they may be significant between widely different u·q⁻¹, such as 13 u·q⁻¹ and 40 u·q⁻¹. To achieve a perfectly accurate spectrum analysis, both parameters should be reassessed for each sequence of acquisition of DFMS, which is possible when two species, with masses m1 and m2 located at pixels p1 and p2 respectively, are present in the same spectrum. The zoom factor z_m0 can then be derived from:

z_m0 = C (p1 − p2) / [ D ln(m1/m2) ],   (2)

and p0,m0 is inferred from one of the two species through Eq. 1. Although this procedure may work well in neutral mode, it is seldom applicable in ion mode, since spectra with two well-shaped and separated peaks are only observed for a few u·q⁻¹ and during favourable observation periods, such as at 18 u·q⁻¹ and at perihelion. Indeed, in ion mode, the count rates on the detector are much smaller than in neutral mode, because the effective geometrical factor of DFMS for cometary ions is lower than that for neutrals due to several combined factors (e.g. large neutral number density, high ion source efficiency, ions accelerated by the spacecraft potential). As a matter of fact, all the ion mode spectra were acquired with the highest gain step to ensure the maximum sensitivity of the instrument. Following the findings of De Keyser et al. (2015), we have set the zoom factor z to 5.5 for 13, 14, and 15 u·q⁻¹ and to 6.4 otherwise. Recent analysis of the spectra in neutral mode showed that the zoom factor is slightly lower at 13, 14, and 15 u·q⁻¹, confirming that 5.5 is appropriate. For p0,m0, we used the value determined from the most proximate spectrum at either 18 u·q⁻¹ or 19 u·q⁻¹ during the same sequence of acquisition of DFMS, that is, either p0,m0 ≈ p0,18 or p0,m0 ≈ p0,19. Indeed, spectra at 18 and 19 u·q⁻¹ show strong peaks throughout the escort phase, attributed to H2O⁺ and H3O⁺. However, as there is also NH4⁺ at 18 u·q⁻¹, we preferred to use 19 u·q⁻¹ (p0,m0 ≈ p0,19) to remove any ambiguity. This approach for deriving p0,m0 works well, except for 13, 14, and 15 u·q⁻¹, as discussed in Appendix A. p0,m0 is less constrained than z and varies more significantly in comparison. One may evaluate the uncertainty of the mass, δm, from those of p0 and z (δp0 and δz) by propagating them through Eq. 1:

δm = m (C / (D z)) sqrt[ δp0² + ((p − p0) δz / z)² ].   (3)

We found that the main source of uncertainty is δp0. In the dataset generated by the ROSINA team, the default value for |δp0| is set to 10. The reader may find additional information in the ROSINA User Guide.
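As a concrete illustration of Eqs. 1-3, the sketch below implements the pixel-to-mass relation, the two-peak zoom-factor estimate, and the first-order uncertainty propagation, using only the constants defined above. The pixel positions in the usage lines are invented for illustration, for two species that can genuinely share one HR spectrum (H2O⁺ and NH4⁺ at 18 u·q⁻¹):

```python
import numpy as np

C = 25.0       # centre-to-centre pixel distance [micron]
D = 127000.0   # dispersion factor [micron]

def mass_at_pixel(p, m0, p0, z):
    """Mass-per-charge at pixel p for commanded ratio m0 (Eq. 1)."""
    return m0 * np.exp(C * (p - p0) / (D * z))

def zoom_from_two_peaks(m1, p1, m2, p2):
    """Zoom factor from two identified species in one spectrum (Eq. 2)."""
    return C * (p1 - p2) / (D * np.log(m1 / m2))

def mass_uncertainty(p, m0, p0, z, dp0=10.0, dz=0.0):
    """First-order propagation of the p0 and z uncertainties (Eq. 3)."""
    m = mass_at_pixel(p, m0, p0, z)
    return (m * C / (D * z)) * np.hypot(dp0, (p - p0) * dz / z)

# Hypothetical pixel positions: NH4+ (18.0338 u) sits ~42 pixels to the
# right of H2O+ (18.0106 u) in the same HR spectrum.
print(zoom_from_two_peaks(18.0338, 322.0, 18.0106, 280.0))  # ~6.4
print(mass_uncertainty(300.0, 18.0, 250.0, 6.4))            # ~0.006 u/q
```

With the default |δp0| = 10 quoted above, the propagated mass uncertainty at 18 u·q⁻¹ is of the order of a few 10⁻³ u·q⁻¹, comparable to the spacing between neighbouring isobaric ions.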
For the identification in high resolution, we proceeded as follows. Firstly, we selected a u·q⁻¹ range within which species may be found; as u·q⁻¹ increases, so does the range. Secondly, we performed an additional visual inspection, if needed for low counts, to remove any suspicious spectrum (e.g. non-flat spectrum baseline, spurious peak far from any known ion species). Thirdly, we over-plotted spectra (from a few tens to hundreds, depending on the mass-per-charge ratio) with a colour coding which depends on the time of acquisition during the mission (see Fig. 1). Similar studies may be performed with different variables (e.g. latitude). In addition to ion identification, one of the main goals is also to assess under which conditions these ions were detected: low and high outgassing activity, small and large heliocentric distance, small and large cometocentric distance. Because of all of these variables, we decided to colour the spectra as a function of the time of acquisition during the mission. Fig. 1 shows the colour code used as a function of time for the spectra, the time coverage of DFMS in LR and HR, as well as the heliocentric distance, cometocentric distance, and outgassing rate with the corresponding colour.

Fig. 1. Spectra have been acquired between the 30th of October 2014 and the 12th of April 2016. A separation has been set on 27 January 2016, corresponding to the time when p0 was voluntarily shifted in DFMS (see text and Appendix A). Colour bars representing the time coverage of DFMS spectra in LR (second panel) and HR (third panel) ion mode are also displayed: white means that sequences of scans were performed on that day, black that none were. Solstice refers to the Summer Solstice over the Southern Hemisphere (solar latitude = −52°). Fourth panel: heliocentric distance, cometocentric distance, and local outgassing rate (≈ n_n r_n²) as a function of time for the period of interest. Black dots correspond to (from left to right) the inbound Equinox, Perihelion, Solstice, and the outbound Equinox. An outgassing speed of 1 km·s⁻¹ has been assumed.

Yellow corresponds to the early phase of the mission, with Rosetta far from the Sun (> 2.2 au), close to the nucleus (< 50 km), and 67P with a low outgassing rate (Q < 10²⁷ s⁻¹). Orange corresponds to the period before perihelion, with Rosetta close to the Sun (< 2.2 au), between 100 km and 200 km from the nucleus, and 67P with an intermediate outgassing rate (10²⁷ < Q < 10²⁹ s⁻¹). Red corresponds to the period after perihelion, with Rosetta close to the Sun (< 2.2 au), farther than 200 km from the nucleus, and 67P with an intermediate outgassing rate (10²⁷ < Q < 10²⁹ s⁻¹). Green and blue correspond to the period after the pixel shift, with Rosetta far from the Sun (> 2.2 au) and 67P with a low outgassing rate. We strongly advise the reader to refer to Fig. 1 for the interpretation of the figures in Section 3.

Overview

Thanks to its great sensitivity, ROSINA-DFMS allowed probing of the ion composition in LR up to very high masses for the first time. Fig. 2 shows a series of selected spectra from 13 to 141 u·q⁻¹. Above 72 u·q⁻¹, the mass calibration is not as good as at lower masses, because a different post-acceleration is applied within the instrument, which explains why peaks are not centred correctly. The highest counts are recorded at ∼18 u·q⁻¹ and ∼19 u·q⁻¹, where H2O⁺ and H3O⁺ are found. Other high-count regions are also observed at ∼28 u·q⁻¹ (e.g. CO⁺) and at ∼44 u·q⁻¹ (e.g. CO2⁺). We note gaps, low signals, or non-detections, for instance at 36 u·q⁻¹ and around 51 u·q⁻¹, similar to those shown by Mitchell et al. (1992) and already described in Section 1. In contrast, however, we have strong peaks at 21 u·q⁻¹ and 22 u·q⁻¹, where no combination of C, H, O, and N to form a monocation may fit.
As the commanded mass-per-charge ratio increases, the signal-to-noise ratio decreases, together with the signal (physical) and the sensitivity (instrumental). Moreover, at high mass-per-charge ratios, an insidious effect decreases the width of the peak. With a constant ∆m/m, the mass difference between two successive pixels, m(p+1) − m(p), increases with m0, such that ions are focused and spread over fewer and fewer pixels, down to a single pixel in extreme cases. This focusing results in sharp peaks, with high counts in one pixel (spikes), instead of broad ones, which may be misinterpreted as 'ghost' peaks, that is, sharp and spurious peaks at the location of one pixel with high counts compared with the surrounding pixels. However, over-plotting several spectra reveals that these spikes are located around each integer mass-per-charge ratio up to 141 u·q⁻¹ and are thus real. Above 40 u·q⁻¹, exact species identification cannot be achieved due to the lack of peaks in HR ion mode, a consequence of the decreased sensitivity. The following sections are dedicated to the identification of the ion species detected in the range of 13 to 39 u·q⁻¹.

3.2. Ion mass-per-charge range 13-21 u·q⁻¹

Fig. 3 shows spectra for the range 13-14 u·q⁻¹. In LR, two distinct peaks are present at each integer. In HR at 13 u·q⁻¹, there are two candidates: ¹³C⁺ and CH⁺. Once the correction described in Appendix A has been applied, spectra at mass-per-charge 13 u·q⁻¹ show a very faint signal attributed to CH⁺ (see Fig. 3, middle). A still weak, but stronger, peak is also visible at 14 u·q⁻¹ (see Fig. 3, bottom) and attributed to CH2⁺. There is no evidence for N⁺. From the LR spectra, it is relatively difficult to identify the most favourable periods for the detection of these ions. At 13 u·q⁻¹, even though some peaks appeared around perihelion, the most favourable conditions seem to be met at large heliocentric distances (in yellow, blue, and green). One should not be misled by the relatively low signal in LR: after the pixel shift on 27 January 2016, the peak at 13 u·q⁻¹ moved to the left edge of the detector, such that the left part of the peak is lost. At 14 u·q⁻¹, the behaviour is similar and, overall, the highest counts occurred on average at large heliocentric distances. Fig. 4 shows mass-per-charge 15 u·q⁻¹ in LR (top) and HR (bottom). It is one of the rare commanded mass-per-charge ratios for which the LR spectra (together with 18 u·q⁻¹ at times) cover only one integer u·q⁻¹. The peak is associated with CH3⁺, as seen in HR, and its intensity is quite strong compared with those of CH⁺, CH2⁺ (see Fig. 3), and CH4⁺ (see Fig. 5). The LR and HR spectra show that the detection is not controlled by cometary conditions, in particular the outgassing rate. The main reason is that CH3⁺ is barely destroyed through ion-neutral reactions with the dominant cometary neutral species, namely H2O, CO2, and CO, as the corresponding kinetic rates are ≤ 10⁻¹¹ cm³·s⁻¹ (Bates 1983; Herbst 1985; Luca et al. 2002). Fig. 5 shows spectra for the range 16-17 u·q⁻¹. Three ions are identified at 16 u·q⁻¹ in HR: O⁺, NH2⁺, and CH4⁺. The O⁺ signal is stronger prior to the spring equinox than near perihelion/winter solstice. At large heliocentric distances, the source of ions is mainly ionisation of the neutral molecules by electron impact (Heritier et al. 2018). Ion-neutral chemistry is limited or even negligible (Galand et al. 2016).
The major sources of O⁺ are the ionisation of CO2, followed by that of H2O, based on their respective ionisation rates and volume mixing ratios. Indeed, although the CO2 abundance depends spatially on the sub-spacecraft latitude (Hässig et al. 2015; Gasc et al. 2017), the photo-ionisation rate yielding O⁺ is an order of magnitude higher for CO2 than for H2O (Huebner & Mukherjee 2015). Alongside O⁺, two other ions are present, NH2⁺ and CH4⁺, while there is no evidence of ¹³CH3⁺. As ¹³CH3⁺ should react slowly with H2O, like CH3⁺, if any ¹³CHn⁺ (n = 1-4) were detectable, ¹³CH3⁺ would be the best candidate. Its non-detection implies that ¹³CH⁺, ¹³CH2⁺, and ¹³CH4⁺ would not be detected either, which is indeed the case. According to the isotopic ratio ¹³C/¹²C derived by Hässig et al. (2017), ¹³CH3⁺ should be at the 1% peak-height level of ¹²CH3⁺, that is, about 0.6 counts in the best case, therefore preventing its detection. The CH4⁺ signal (Fig. 5) is much weaker than that of CH3⁺ (Fig. 4), by a factor of five to ten, and is only detected at large heliocentric distances. The electron-impact ionisation of CH4 is expected to slightly favour CH4⁺ over CH3⁺, as the associated cross sections are alike (Song et al. 2015), while the ionisation potential is lower for the production of CH4⁺ (12.61 eV for CH4⁺, 14.25 eV for the dissociative ionisation of CH4 into CH3⁺ at 0 K, Samson et al. 1989). Assuming that the ionisation of CH4 is the main source of CHn⁺ (n = 1-4), CH3⁺ nevertheless dominates over CH4⁺, as it barely reacts with H2O, CO, and CO2, as found for 1P/Halley at 0.9 au (Allen et al. 1987). The photo-ionisation rate of CH4 by EUV leading to CH4⁺ is roughly twice the corresponding value for CH3⁺ (Huebner & Mukherjee 2015). The very high count ratio of CH3⁺ over CH4⁺, especially near perihelion, is a clear signature of ion-neutral chemistry occurring in the coma. Possible additional sources of CH3⁺, but not of CH4⁺, are the dissociative ionisation, or the ionisation following fragmentation, of saturated hydrocarbons (excluding CH4) found at 67P (Schuhmann et al. 2019), or the protonation of CH2, as at 1P (Altwegg et al. 1994). In contrast to CH4⁺, NH2⁺ is detected near perihelion, when the photo-ionisation and outgassing rates are larger, and not at large heliocentric distances. NH2⁺ results from the dissociative ionisation of NH3, but its yield is fourfold less than that of NH3⁺ (Huebner & Mukherjee 2015). In addition, as NH2⁺ is lost through ion-neutral chemistry with H2O, its detection at perihelion indicates a higher production rate from NH3.

Fig. 2. Concatenation of spectra recorded at each channel in ion low resolution mode, covering mass-per-charge ratios from 13 u·q⁻¹ to 141 u·q⁻¹. Dark grey regions represent ranges not covered by the instrument (below 13 u·q⁻¹ and above 141 u·q⁻¹). Light grey regions represent ranges where two consecutive scans overlap, meaning that these ranges are covered by the edge of the detector.

The peaks in LR at 16 u·q⁻¹ show a right 'shoulder' and, at times, a double peak. While this behaviour may be associated with some instrumental effects (De Keyser et al. 2015), it
more likely results from the contribution of two ion species, for instance O⁺ and CH4⁺ (∼11 pixels apart in LR) and/or O⁺ and NH2⁺ (∼7 pixels apart in LR). Overall, in view of the HR spectra, the main contributor at 16 u·q⁻¹ is O⁺, predominant at large heliocentric distances prior to the inbound Equinox, whereas NH2⁺ appeared at perihelion, when photo-ionisation is much stronger. Interestingly, CH4⁺ is more abundant near the outbound Equinox, a possible consequence of the evolution of the neutral composition: Schuhmann et al. (2019) showed a clear enhancement of the CH4/H2O ratio, by a factor of ∼20, between May 2015 and May 2016. There exist diverse causes of spacecraft pollution specific to Rosetta (Schläppi et al. 2010) for nitrogen-bearing components, which cannot be completely excluded for observations performed after and close to spacecraft manoeuvres, since UV photolysis of hydrazine (N2H4) can be a source of N2H3, N2H2 and, to a lesser extent, of NH3 and NH2 (Biehl & Stuhl 1991; Vaghjiani 1993). As this might in turn affect the detection of NH4⁺ (through the ion-neutral reaction NH3 + H3O⁺ → NH4⁺ + H2O), in particular after manoeuvres, it might affect NH3⁺ and NH2⁺ as well (Beth et al. 2016). At 17 u·q⁻¹ (Fig. 5), two ions, HO⁺ and NH3⁺, have been detected. Both are mainly produced by ionisation of their respective parent neutral molecules, H2O and NH3. HO⁺ follows the same pattern as the water production, with increased intensity as 67P gets closer to the Sun. The NH3⁺ signal is quite strong, mainly near perihelion. In addition to the ionisation of NH3, NH3⁺ can be produced through charge transfer between H2O⁺ and NH3. Although NH3⁺ may be lost through the reverse charge-exchange reaction with H2O, that reaction is slow (rates of about 10⁻¹⁰ cm³·s⁻¹), and its contribution to the ion composition therefore remains negligible compared with other ions reacting with H2O (see details in Section 4.1). To summarise, HO⁺ is seen throughout the escort phase with a maximum in intensity near perihelion, when the outgassing rate and photo-ionisation are strong. NH3⁺ follows the same pattern, with high counts near perihelion, but cannot be detected at large heliocentric distances, because its parent molecule NH3 is much less abundant than H2O. For information, we have indicated the location of ¹⁷O⁺ (see Fig. 5, bottom): even if it were present, its closeness to HO⁺ (∼7 pixels apart) and the peak deformation would prevent its detection; in addition, according to the isotopic ratio ¹⁷O/¹⁶O derived by Schroeder et al. (2019), its expected signal would in any case be too weak.

Fig. 3. √N, where N stands for the number of counts, is superimposed on the counts for information. The colour coding is given by the colour bar in Fig. 1 and relies on the time of acquisition during the escort phase. The mass-per-charge ratios of expected ions from Table 1 are also indicated and given in App. C.

The double-peak deformation is symptomatic of spectra at 16, 17 and, to a lesser extent, 18 u·q⁻¹ in HR (De Keyser et al. 2015). Without corrections, the peak is not symmetric and its maximum is slightly shifted to the right (∼5 pixels) due to DFMS' characteristic double-peak structure for this subset of masses (De Keyser et al. 2015). We did not apply the correction proposed by De Keyser et al. (2015), as it would not provide further insight for the ion identification.

Fig. 4. Same as Fig. 3, but for 15 u·q⁻¹ only. Stacked spectra in low resolution (top) and HR (bottom).

Fig. 6 shows spectra for the range 18-19 u·q⁻¹, and two comments must be made. First, the range around 18 u·q⁻¹ in both
LR and HR was scanned threefold more often than any other mass range, as a result of the organisation of the DFMS measurement sequence: while each u·q⁻¹ range was scanned successively and in increasing order, each sequence started and ended by scanning 18 u·q⁻¹. In addition, in LR, both spectra centred on 18 and 18.16 u·q⁻¹ have the 19 u·q⁻¹ peak at the edge of the detector, with its right part often lost outside the useful pixel range. HR spectra at 18 u·q⁻¹ show clear signatures of H2O⁺ and NH4⁺, which have already been reported by Fuselier et al. (2016) and Beth et al. (2016). The weak signal of NH4⁺ seen at large heliocentric distances is attributed to hydrazine during manoeuvres (Schläppi et al. 2010). Near perihelion, however, it is produced through ion-neutral chemistry within the coma, due to the high proton affinity of NH3, higher than that of H2O (Beth et al. 2016). Candidate ion isotopologues are also indicated, such as ¹⁸O⁺, H¹⁷O⁺, and DO⁺ (Altwegg et al. 2015; Schroeder et al. 2019). The H2O⁺ peak is so strong, and therefore so widely spread, that it prevents their detection, as their signal is expected to be weak if present at all. Indeed, for the strongest recorded signal, the peak covers almost 0.30 u·q⁻¹ in total, over ∼56 pixels (∼28 pixels on each side, Fig. 6, bottom panel). H2O⁺ is a peculiar ion, as both its production and its loss depend on the H2O density in the coma. Photochemical equilibrium, for which production balances chemical loss, is reached at the location of Rosetta or closer to the nucleus, depending on the outgassing activity (see Section 4.1). Under such a condition, the H2O⁺ number density is given by

n(H2O⁺) = ν / k,

where ν is the ionisation frequency of H2O and k the rate coefficient of the H2O⁺ + H2O reaction; k is relatively constant, varying only weakly with the temperature T of the gas. Overall, the H2O⁺ peak intensity exhibits this trend, increasing with decreasing heliocentric distance (and hence increasing ν, at least for the photo-ionisation, Heritier et al. 2018), though its variability over a day or during a month, in particular near perihelion, is still puzzling (Beth et al. 2016). Possible reasons, not investigated in this paper, include the variability of the generally negative spacecraft potential (Odelstad et al. 2018), the interaction of corotating interaction regions and coronal mass ejections (Hajra et al. 2018) with 67P's ionosphere, and the proximity of the diamagnetic cavity (Goetz et al. 2016a) or plasma boundaries (Mandt et al. 2019). Amongst neutral cometary species, NH3 has the highest proton affinity and, therefore, the ability to steal a proton from other protonated molecular ions (Heritier et al. 2017). NH4⁺ may only be lost through transport and, to a lesser extent, dissociative recombination, making it quite stable, together with other ions such as H2O⁺. Due to the abundance of its parent molecule H2O, H2O⁺ is observed during the whole mission with the highest counts near perihelion, whereas NH4⁺ is mainly detected near perihelion, because the necessary conditions for its production, a large NH3 number density and a large ion-neutral collision frequency, are only met during this period.
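As a back-of-the-envelope check of the photochemical-equilibrium expression above, note that n(H2O⁺) = ν/k contains no neutral density at all. With representative, assumed values for ν and k (the numbers below are illustrative, not DFMS measurements), it predicts H2O⁺ densities of a few hundred per cm³:

```python
# Photochemical equilibrium: production (nu * n_H2O) balances loss
# (k * n_H2O * n_ion), so n(H2O+) ~ nu / k, independent of n_H2O.
nu = 5e-7   # H2O ionisation frequency [1/s] near perihelion (assumed)
k  = 2e-9   # H2O+ + H2O reaction rate coefficient [cm^3/s] (assumed)
print(f"n(H2O+) ~ {nu / k:.0f} cm^-3")   # ~250 cm^-3
```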
Spectra at 19 u·q⁻¹ (see Fig. 6, bottom panel) exhibit a large and strong peak associated with H3O⁺. As H2O has a proton affinity higher than that of HO, once H2O⁺ is produced, it readily reacts with H2O to yield H3O⁺, which dominates the ion composition at short cometocentric distances (Fuselier et al. 2016; Beth et al. 2016) in the absence of NH3. Nevertheless, when the NH3 number density is large enough, NH4⁺ may become the dominant ion species at distances of a few to tens of kilometres above the nucleus' surface, for an NH3 volume mixing ratio of a few percent. Close to the nucleus, H3O⁺ dominates, whereas H2O⁺ becomes the major ion at larger cometocentric distances. At the location of Rosetta near perihelion (∼150-200 km), H3O⁺ was expected to dominate over NH4⁺, as is indeed observed.

Fig. 7. Same as Fig. 3, but for 20 u·q⁻¹. Stacked spectra in low resolution (top panel) and in HR at 20 u·q⁻¹ (bottom). The HR spectra at 20 u·q⁻¹ are already shown in Fig. 6. As one may see, the DFMS mass-per-charge resolution cannot separate H3¹⁷O⁺ from H2DO⁺. In addition, the peak in LR at 19 u·q⁻¹ is at the edge of the detector, such that it is only partially resolved.

Fig. 7 shows LR spectra at 19-20 u·q⁻¹ and HR spectra at 20 u·q⁻¹. In HR, mass-per-charge ratio 19 u·q⁻¹ is at the edge of the detector, on the left (lower-mass) side. The peak attributed to H3O⁺ is sometimes fully resolved before the pixel shift was applied, depending on the magnet temperature, but not afterwards. Indeed, the reference pixel p0 was moved towards the left by 70 pixels on the 27th of January 2016, such that a peak located at pixel p1 before the shift was relocated at pixel p1 − 70 afterwards. Consequently, the lower and upper limits of each spectrum in terms of mass-per-charge increased by ∼1.4% in LR and ∼0.23% in HR after the shift. A striking difference when comparing these spectra with LR spectra at lower u·q⁻¹ is the noise level: it reaches 10 counts, whereas at lower u·q⁻¹ it never exceeded 3-4 counts.
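The ∼1.4% (LR) and ∼0.23% (HR) changes of the spectra's mass-per-charge limits quoted above follow directly from Eq. 1 when the reference pixel moves by 70 pixels; a quick numerical check, agreeing with the quoted values to rounding:

```python
import math

# Shift of the mass scale when p0 moves left by 70 pixels (Eq. 1):
# every mass is rescaled by exp(70 * C / (D * z)).
C, D = 25.0, 127000.0
for mode, z in (("LR", 1.0), ("HR", 6.4)):
    factor = math.exp(70 * C / (D * z)) - 1
    print(f"{mode}: +{100 * factor:.2f}%")   # LR: +1.39%, HR: +0.22%
```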
The HR spectra at 20 u·q⁻¹ exhibit two peaks: one unambiguously associated with H2¹⁸O⁺ and another with either H3¹⁷O⁺ or H2DO⁺ (or both). The resolving power is not sufficient to separate these species, only 2×10⁻³ u·q⁻¹ apart (3-4 pixels). However, we favour H2DO⁺ over H3¹⁷O⁺: according to Schroeder et al. (2019), in neutral mode, the H2¹⁷O signal is too low and buried in the shoulder of the HDO peak, and we thus expect a similar behaviour for the corresponding protonated ions observed in ion mode. In addition, an estimation may be inferred from the DFMS-derived isotopic ratios of neutral species (Schroeder et al. 2019). Fig. 8, top panel, shows a peak at 22 u·q⁻¹ in LR. The associated mass spectra in HR are not displayed, because we do not observe persistent and reliable signals at 22 u·q⁻¹. The presence of a peak in the LR spectra is at first surprising, since there is no cometary neutral molecule and no candidate for a protonated one identified at 22 u·q⁻¹. Based on neutral mode observations, we identify the corresponding ion as CO2⁺⁺. In neutral mode, the ionisation of CO2 in the ion source leads to the production of both CO2⁺ and CO2⁺⁺, which is indeed observed in the data, whereas the CO2⁺⁺ dications detected in ion mode are naturally produced in the coma.

Fig. 8. Same as Fig. 3, but for 21-22 u·q⁻¹. Stacked spectra in low resolution (top panel) and in HR at 21 u·q⁻¹ (bottom). There is no evidence for a species at 22 u·q⁻¹ in high resolution and the associated spectra are not shown. In addition, the peak in LR at 23 u·q⁻¹ is at the edge of the detector, such that it is barely caught on its left edge (see also Fig. 2).

More details on this finding and a discussion of the identification of this dication are presented in Section 4.4. It may be noticed that these cations are only detected before the inbound Equinox, at large heliocentric distances when the H2O density is low, which might indicate that the corresponding ions react with H2O or may only subsist at very low ion-neutral collision rates. The non-detection of CO2⁺⁺ outside this period might also be linked to the cometocentric distance of Rosetta and the CO2⁺⁺ lifetime. When CO2⁺⁺ was detected, Rosetta was around 20-30 km from the nucleus. However, CO2⁺⁺ is known to have different lifetimes depending on its electronic state, from 4 s for the ground state (Mathur et al. 1995) down to µs for excited states (Alagia et al. 2009; Slattery et al. 2005). Consequently, in order for CO2⁺⁺ to be produced in sufficient quantity and detected by DFMS, (r − rc)/U, where r is the cometocentric distance of Rosetta, rc the nucleus' radius, and U the neutral speed, should be of the order of, or lower than, the lifetime of CO2⁺⁺. If (r − rc)/U ≫ 4 s, only a small fraction of the CO2⁺⁺ ions would have time to reach the spacecraft after they are created. During the periods when DFMS was operating in ion mode, this condition was most favourable in December 2014-January 2015; for this period, (r − rc)/U ≈ 20-30 s (see Fig. 1, assuming U ∼ 1 km·s⁻¹). This also implies that the CO2⁺⁺ ions which are detected are most likely in the ground state (though the latter is less likely to be produced than the short-lived excited states, Masuoka 1994; Alagia et al. 2009).
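To put numbers on the lifetime argument above: assuming the detected dications are in the ground state, travel radially at the neutral speed of ∼1 km·s⁻¹ used in the text, and decay exponentially, the fraction surviving the 20-30 km trip to Rosetta is small but non-zero, consistent with the faint 22 u·q⁻¹ signal:

```python
import math

tau = 4.0   # ground-state CO2++ lifetime [s] (Mathur et al. 1995)
U   = 1.0   # outward transport speed [km/s] (assumed, as in the text)

for distance_km in (20.0, 30.0):            # r - r_c when CO2++ was seen
    t = distance_km / U                     # travel time [s]
    print(f"t = {t:.0f} s -> surviving fraction {math.exp(-t / tau):.1e}")
```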
Ion mass-per-charge range 23-40 u·q⁻¹

In this subsection, we present the ions detected in this range; they are not part of the water ion group and are usually minor. Fig. 9 shows spectra for the range 23-25 u·q⁻¹ in LR and at 23 u·q⁻¹ in HR only. HR spectra at 24 and 25 u·q⁻¹ do not exhibit any reliable physical signal and are thus not displayed. The peak at 23 u·q⁻¹ cannot be fully resolved in LR since, neither in Fig. 8 nor in Fig. 9, is the full extent of the peak captured by the detector: in LR, for scans centred on 21.98 u·q⁻¹ (Fig. 8), only the left (lower-mass) tail of the peak is captured before the pixel shift, whereas for scans centred on 24.18 u·q⁻¹ (Fig. 9) only the right (higher-mass) tail is captured. In both cases, the peak is partially, or not at all, covered by the detector (see also Fig. 2). However, both indicate a strong signal near perihelion. This is consistent with the HR spectra and the detection of Na⁺ close to perihelion.

Fig. 9. Same as Fig. 3, but for 23-25 u·q⁻¹. Stacked spectra in low resolution (top panel) and in HR at 23 u·q⁻¹ (bottom). There is no evidence for a cation at either 24 u·q⁻¹ or 25 u·q⁻¹ and the associated spectra are not shown. In addition, the peak in LR at 23 u·q⁻¹ is at the edge of the detector, such that it is barely caught on its right edge (see also Fig. 2).

The associated neutral, Na (sodium), is a refractory element and was observed by DFMS early in the escort phase. Indeed, Wurz et al. (2015) reported its detection in October 2014, along with those of Si (silicon), Ca (calcium), and K (potassium). Their presence results from solar wind sputtering of dust grains on 67P's surface. However, for heliocentric distances below 2 au, the coma of 67P is dense enough to create the so-called solar wind ion cavity, within which the nucleus' surface and the dust grains close to Rosetta are shielded from solar wind ions (Behar et al. 2017). The solar wind can still sputter dust grains outside the cavity, though, and an as-yet-unknown mechanism could then transport Na/Na⁺ inwards, towards Rosetta. Another possibility might be sputtering by ENAs (Energetic Neutral Atoms), that is, neutralised solar wind ions, which are expected to be produced in large amounts near perihelion and, unlike the solar wind, still have access to the nucleus of 67P (Simon Wedlund et al. 2019). Near perihelion, the source of Na, and therefore that of Na⁺, may be different. Na⁺ was also observed at 21P/Giacobini-Zinner (Geiss et al. 1986; Ogilvie et al. 1998) and at 1P/Halley. For 21P, Geiss et al. (1986) ruled out sputtering on cometary grains and favoured the idea that Na was either trapped in the ice or sorbed in or on carbonaceous grains. Ogilvie et al. (1998) suggested that Na might come from the evaporation of ice-like grains containing Na in ionic form. Another scenario was proposed by Combi et al. (1997) for 1P/Halley, Bennett C/1969 Y1, and Kohoutek C/1973 D, with a near-nucleus Na source: the photo-dissociation of the parent molecule NaOH, characterised by a high photo-dissociation rate (10⁻³ s⁻¹ at 1 au, Plane 1991). We note that Na, as an alkali metal, has a low ionisation energy of 5.139 eV, such that it can be ionised by the intense Lyman-α radiation, with a typical ionisation rate of 7×10⁻⁶ s⁻¹ at 1 au (Huebner & Mukherjee 2015). In addition, it is interesting to point out the peculiar ion-neutral chemistry of Na and Na⁺. As an alkali metal, Na is a great electron donor and reacts with several cations (e.g. H2O⁺ and NH3⁺) in the coma through charge exchange. It also reacts with protonated molecules: given a molecule X with a high proton affinity (e.g. X = H2O, H2CO, HCN, NH3) and its protonated version XH⁺, Na⁺ results from Na + XH⁺ → Na⁺ + X + H. This reaction with protonated molecules, which dominate the ion composition near perihelion (Heritier et al. 2017), is likely a dominant source of Na⁺. Unfortunately, it is impossible to track the evolution of Na⁺ throughout the escort phase. In HR, the sensitivity is too low, only allowing the detection near perihelion. In LR, in spite of the higher sensitivity and the absence of ambiguity on the ion species at 23 u·q⁻¹, the poor quality of the measurements at the edges of the MCP prevents us from obtaining accurate data. However, according to Fig. 8 (top panel), one can conclude that the Na⁺ number density maximises near perihelion and may reach even higher levels. Even though peaks are detected in LR at 25 u·q⁻¹, no species have been identified in HR. As shown in LR, the peaks are lower than 20-30 counts and many spurious ones contaminate the spectra. From spectra at lower u·q⁻¹, we may define a rough scaling factor for the counts between the LR and HR modes: due to the differences between the slits, the sensitivity in HR is about 10% to 1% of its level in LR, explaining why no reliable peaks are often observed in HR for C2⁺ or C2H⁺.
However, the peak at 25 u·q⁻¹ observed in LR at large heliocentric distances before the inbound Equinox is likely due to C2H⁺, for two reasons: there are no other species close to 25 u·q⁻¹, and the peak disappeared at perihelion, indicating that the corresponding species should react with H2O, which is the case for C2H⁺ (Prasad & Huntress 1980). Fig. 10 shows the range 26-27 u·q⁻¹ in LR (top) and HR (middle and bottom), also allowing us to distinguish the low-u·q⁻¹ part of a peak at 28 u·q⁻¹. In HR, we barely see a peak for C2H2⁺ at 26 u·q⁻¹, without evidence of CN⁺. The analysis of minor species above 23 u·q⁻¹ in HR is made difficult by the increased background level, unfortunately requiring a visual inspection of each spectrum with two main criteria to select reliable peaks: (i) a well-shaped peak (i.e. one that can be reasonably well fitted by a single-Gaussian, two-Gaussian, Lorentzian, or Voigt profile and is not too spiky/sharp) and (ii) the existence of a candidate species with a u·q⁻¹ close enough to the peak location. Not all spectra at 26 u·q⁻¹ exhibit a peak at C2H2⁺. Only three spectra, out of more than a hundred through the escort phase, reach almost 10 counts at the location of C2H2⁺, which is nevertheless sufficient to ascertain its presence. There is no evidence for CN⁺, which can be explained by a reaction rate of CN⁺ with H2O about tenfold higher than that of C2H2⁺ (Anicich 2003). Near perihelion, the high cometary activity leads to the loss of CN⁺ through chemistry, impeding its detection. Stronger signals are observed in LR spectra at large heliocentric distances, but the counts barely exceed 100, which explains the poor detection, or lack thereof, in HR. This is consistent with the detection of a faint peak at the location of C2H3⁺ in the spectra at 27 u·q⁻¹, suggesting that the latter has a common origin with C2H2⁺. This might be investigated by correlating both signals, which is beyond the scope of this paper. There is no evidence of HCN⁺ in HR. However, at large heliocentric distances, LR spectra at 27 u·q⁻¹ exhibit two overlapping peaks of similar amplitude, separated by a few pixels only. Although the low resolution does not allow us to unambiguously separate HCN⁺ from C2H3⁺, the double peak at 27 u·q⁻¹ in LR suggests the presence of both HCN⁺ and C2H3⁺ with comparable contributions at large heliocentric distances.

Fig. 10. Same as Fig. 3, but for 26-27 u·q⁻¹. Stacked spectra in low resolution (top panel), in HR at 26 u·q⁻¹ (middle), and at 27 u·q⁻¹ (bottom).

Fig. 11 shows the range 28-30 u·q⁻¹ in LR and HR. Species at 28 u·q⁻¹ are detected at large heliocentric distances as well as near perihelion. However, the contributing species are not the same for both periods. As seen in HR, the species detected at large heliocentric distances are CO⁺ and, at times, C2H4⁺. CO is one of the three major components reported by Hässig et al. (2015), along with CO2 and H2O. However, CO⁺ reacts with H2O and CO2 and disappears, or is at least strongly attenuated, close to perihelion (Heritier et al. 2017). C2H4⁺ is seen during both periods, but its parent molecule may differ between them. At large heliocentric distances, C2H4⁺ can be produced by ionisation of C2H4 or by dissociative ionisation of C2H6, since the electron-impact ionisation of C2H6, dominant at large heliocentric distances, primarily leads to C2H4⁺ (Avakyan et al. 1998; Tian & Vidal 1998b).
By contrast, at perihelion, it may be produced either by protonation of C2H3 (C2H3 + H3O⁺) or by charge exchange (C2H4 + H2O⁺). HCNH⁺ is detected near perihelion, as it is produced through proton transfer between HCN (or HNC) and H2O⁺/H3O⁺ (Heritier et al. 2017). One spectrum early in the mission exhibits a peak at Si⁺, but it may be a spurious, ghost peak, since it is observed in a single spectrum and in one channel only. In fact, the two channels are not equally sensitive (due to ageing, temperature, tuning, etc.), and hence, for low signals, this may happen regularly. No conclusion can be drawn, though Si was detected by another mass spectrometer of Rosetta during the same time interval (Wurz et al. 2015). Si was assumed to be produced by sputtering of the nucleus' surface by solar wind ions, still able to access the surface at low outgassing activity (Behar et al. 2017). There is no evidence of N2⁺, which is consistent with Earth-based observations of other comets. The N2⁺/CO⁺ ratio is of interest in radio-astronomy and has been investigated at several comets (Cochran et al. 2000; Cochran 2002). Due to telluric contamination of N2⁺ or a lack of detection, only an upper limit of this ratio is usually given, of the same order as the ratio of the parent molecules, that is, less than ∼1%, which explains why a possible N2⁺ peak would be buried in the background. Overall, although the peak at 28 u·q⁻¹ is present at all times, its main contributors may have changed during Rosetta's escort phase: CO⁺ (and maybe C2H4⁺) at large heliocentric distances, HCNH⁺ and C2H4⁺ near perihelion. At 29 u·q⁻¹, two ion species, HCO⁺ and C2H5⁺, are clearly visible in HR spectra near perihelion. Spectra in LR at 29 u·q⁻¹ show a peak in January 2016 (green) with the same amplitude as near perihelion (orange), while, in HR mode, both species are detected mainly near perihelion. The presence of HCO⁺ is puzzling, as it should be lost through chemistry with H2O. Included in the photochemical model of Heritier et al. (2017) for 67P at perihelion, HCO⁺ is produced through ion-neutral chemistry, and its contribution is of the order of that of CO⁺, which is, however, not observed at that time by DFMS. At 1P/Halley, Haider & Bhardwaj (2005) had used the same ion-neutral chemical reactions plus another one, C⁺ + H2O, calculated to contribute up to 10% of the total amount of HCO⁺ in 1P's coma. At 67P, DFMS did not perform detections below 13 u·q⁻¹, such that C⁺ at 12 u·q⁻¹ cannot be quantitatively assessed. However, we do not expect C⁺ to be dense enough near perihelion to yield significant HCO⁺: Rosetta was close to the nucleus, between 150 and 200 km, and the potential sources of C⁺ are limited to carbon-bearing molecules, namely CO2, CO and, to a much lesser extent, H2CO. That said, the potential direct sources of HCO⁺ are the dissociative ionisation of H2CO (present in the coma, Heritier et al. 2017), the photo-dissociation of H2CO into HCO followed by its ionisation, and/or the ion-neutral chemistry of H2CO with ions.

Fig. 11. Same as Fig. 3, but for 28-30 u·q⁻¹. Stacked spectra in low resolution (top panel), in HR at 28 u·q⁻¹ (middle) and at 29 u·q⁻¹ (bottom). There is no evidence for a cation at 30 u·q⁻¹ and the associated spectra are not shown.
HCO+ barely reacts with H2O, like CH3+ (Herbst 1985): perihelion is hence a favourable period for its production, with a stronger EUV flux and limited loss through ion-neutral chemistry. However, it is difficult to assess the contribution of the different chemical pathways, as the most likely parent molecule, H2CO, may be a distributed source, as at 1P/Halley (Meier et al. 1993). Due to the poor spatial coverage of Rosetta near perihelion, it is difficult to determine the neutral number density profile of H2CO and whether or not it departs from a ∼1/r^2 dependency. However, studying distributed sources of neutrals is beyond the scope of this paper, which is dedicated to ions. No HR spectra at 30 u·q−1 are shown, as no peak was detected, which is consistent with the relatively weak signal in LR (≲60 counts). The LR spectra do not exhibit significant differences between observations at perihelion and at large heliocentric distances, meaning that the main contributor may change throughout the mission and/or may not react with H2O.

Fig. 12. Same as Fig. 3, but for 31-33 u·q−1. Stacked spectra in low resolution (top panel) and in HR at 31 u·q−1 (second panel), at 32 u·q−1 (third panel), and at 33 u·q−1 (fourth panel).

Fig. 12 shows spectra for the range 31-33 u·q−1. Peaks at 31, 32, and 33 u·q−1 are similar, showing larger intensities at heliocentric distances from 2 to 2.5 au after perihelion (green), in particular at 33 u·q−1, slightly above the levels observed near perihelion (orange). At 31 u·q−1, H2COH+ (protonated formaldehyde) is clearly identified in HR as the major ion species. H2CO has a proton affinity higher than that of H2O, such that H2COH+ is mainly produced through H2CO+H2O+ or H2CO+H3O+. There are some weak (<5-7 counts) peaks at the location of the phosphorus cation P+ near perihelion, seemingly consistent with the observations of phosphorus atoms and amino acids in neutral mode reported by Altwegg et al. (2016). The faint level of the P+ signal in HR can be explained by the loss of P+, which reacts, even if slowly, with the most abundant neutral species such as H2O, CO2, and NH3. For HR spectra at 32 u·q−1, there are several peaks with low intensity (<20 counts per 19.8 s) which can be identified by superimposing a number of spectra. S+ and CH3OH+ are clearly detected. Amongst the sulphur-bearing molecules detected in the coma of 67P by DFMS, the most abundant species near perihelion are H2S, whose dissociative ionisation leads in part to S+, and neutral S (Calmonte et al. 2016). There is no significant loss of S+ through chemistry, as it does not react with the dominant neutral species H2O, CO2, and CO. Regarding CH3OH+, the methanol cation, its parent molecule CH3OH is quite abundant around perihelion, between 0.5% and 3% with respect to H2O, based on MIRO (Microwave Instrument for the Rosetta Orbiter, Gulkis et al. 2007) sub-millimetre radio-telescope observations (Biver et al. 2019). Nevertheless, these values, derived from measurements of the column density between the surface of the nucleus and Rosetta, differ from those (0.1-0.3%) measured in situ by DFMS for the same period (see Fig. 10 from Heritier et al. 2017). This difference may be attributed to the adiabatic expansion of the gas along the line of sight (Heritier et al. 2017) and to the difference between the bulk speeds of light and heavy species.
In addition, there are also disagreements between H2O local measurements by DFMS and integrated column measurements by MIRO and VIRTIS, with associated impacts on the relative abundances of cometary species (Hansen et al. 2016; Marshall et al. 2017; Combi et al. 2020). There is one occurrence of a peak in one channel at the location of O2+, whose main sources are charge exchange (O2+H2O+) and ionisation of O2. The latter was not considered in Heritier et al. (2017), such that O2+ is underestimated in their model. While the conditions were favourable for its production, its detection cannot be confirmed. Indeed, another peak of similar intensity is also observed between the O2+ and CH3OH+ locations but cannot be assigned to a given species; it is most likely a ghost peak, that is, an unphysical peak. Concerned about spacecraft contamination, in Fig. 12 (third panel) we have added N2H4+, whose ionisation potential is 8.1 eV (Meot-Ner et al. 1984). No strong signal is detected at this mass, ruling out its contribution to 32 u·q−1. The same precaution has been taken for 33 u·q−1 with N2H5+, because N2H4 has a proton affinity just above that of NH3. As only one peak is observed (see Fig. 12, bottom panel), this is not conclusive. HR spectra at 33 u·q−1 reveal the presence of a protonated molecule, namely CH3OH2+ (protonated methanol), produced from CH3OH+H2O+ and CH3OH+H3O+. CH3OH2+ is more abundant than the ionised methanol and is the main contributor to mass 33 u·q−1. A faint accumulation of signals is observed around HS+. Considering the main sulphur-bearing molecules, H2S and S (see the discussion for mass 32 u·q−1), the possible processes to generate HS+ are rather limited. The most likely source is the photo-dissociative ionisation of H2S, though this process is not very efficient (Huebner & Mukherjee 2015). Regarding its loss, S has a low proton affinity with respect to water, and HS+ easily loses its proton to H2O, for instance.

Fig. 13. Same as Fig. 3, but for 34-37 u·q−1. There is no evidence of cations in high resolution and the associated spectra are not shown.

Fig. 13 shows stacked LR spectra for the range 34-37 u·q−1. None of the associated HR spectra present clear peaks over this range. The most curious and surprising absence is that of H3S+, even in LR. H2S is present in the coma of 67P (Calmonte et al. 2016) and its proton affinity is higher than that of H2O, though lower than those of H2CO, HCN, CH3OH, and NH3 (Heritier et al. 2017). H3S+ is thus predicted to be detectable by Rosetta near perihelion, a period favourable for proton transfer (Heritier et al. 2017). However, the sensitivity of DFMS in the ion mode is affected by (i) the instrument energy acceptance window and, to a lesser extent, (ii) the decrease of the detector efficiency at higher energies. As the mass-per-charge ratio u·q−1 increases, the DFMS energy acceptance window decreases. Indeed, to select ions with respect to their mass-per-charge, a specific post-acceleration V_acc ∝ (u·q−1)^-1 is applied within DFMS. The energy acceptance window for the electrostatic analyzer is ∼20 V ± 0.1% |V_acc|, such that ions are filtered through a narrower energy range when the mass-per-charge ratio increases. Further details are given in Schläppi (2011). In view of previous works at 1P/Halley (e.g. Eberhardt et al.
1994), the main contributors are 34S+, followed by H2S+ and 13CH3OH2+, at 34 u·q−1, and H3S+ at 35 u·q−1. However, for the latter, it is clear that the signal is not stronger in LR near perihelion, which is unexpected for a protonated molecule. Fig. 14 shows LR spectra (top panel) for the range 37-40 u·q−1 and HR spectra (bottom panel) for 39 u·q−1. No cations have been detected except at 39 u·q−1. Because the signal-to-noise ratio in HR spectra decreases at higher masses, we have only kept spectra with signals above 4 counts and present on both channels, in order to get rid of contamination by spurious and unreliable signals. Only two species are expected: C3H3+ and K+. Korth et al. (1989) argued that the peak at 39 u·q−1 at 1P/Halley, detected by PICCA (Korth et al. 1987) on board Giotto, was C3H3+ and ruled out K+. However, on three occasions, on the 8th, 9th, and 11th of August 2015, K+, and not C3H3+, was detected. K (potassium) is an alkali metal like Na, yet with a lower electronegativity. Ion-neutral reaction rates of K+ and K of interest for astrochemistry are practically nonexistent; however, as K belongs to the same group as Na, we can likely assume that K+ undergoes the same interactions with neutrals as Na+. K was detected early in the mission at large heliocentric distances by Wurz et al. (2015), along with Na. In spite of the rather faint K+ and Na+ peaks in the HR mode, we believe that the presence of these ions in the ionised coma is undoubted, supported, in particular, by the detection of K and Na neutral atoms by ROSINA instruments. The photo-ionisation of K and Na appears as a likely production mechanism. However, the ultimate origin of these neutral species is still debated, as already mentioned for Na.

Fig. 14. Same as Fig. 3, but for 37-40 u·q−1. Stacked spectra in low resolution (top panel) and in HR at 39 u·q−1 (bottom). There is no evidence of cations in HR at 37 u·q−1 and at 38 u·q−1 and the associated spectra are not shown.

Fig. 15. Same as Fig. 3, but for 41-44 u·q−1. Stacked spectra in low resolution only.

Ion mass-per-charge range > 40 u·q−1

We have decided not to look at u·q−1 above 40, as the sensitivity of DFMS in HR becomes too low to unambiguously allow a clear identification of the detected species. This is illustrated by a comparison of Fig. 15 with Fig. 8. The most contributing species at 44 u·q−1 is undoubtedly CO2+, produced, like the CO2++ dication, from CO2 (Hässig et al. 2015; Gasc et al. 2017). As explained in Section 4.4, the production rate of the CO2++ dication is at least a hundredfold lower than that of the monocation CO2+, but the relative intensity between the peaks at 22 u·q−1 and 44 u·q−1 is only about 10%, which means that the DFMS sensitivity at 44 u·q−1 is about tenfold lower than that at 22 u·q−1, a number consistent with the mass dependency reported by Schläppi (2011). In addition, CO2+ is mainly lost through charge exchange with H2O, such that near perihelion it is close to photochemical equilibrium (i.e. its loss is through chemistry, not transport, and its number density barely varies with cometocentric distance above tens of kilometres; see Fig. 7 and 8 in Heritier et al. 2017).

Highlights

The HR ion mode dataset of ROSINA-DFMS has allowed us to directly identify, for the first time, different ion species and to obtain an improved knowledge of the composition of a cometary plasma (columns 4 and 5 in Table 1).
Our analysis has revealed the complexity of the cometary ionosphere, made of ionised neutral atoms and molecules as well as ionised radicals and protonated molecules (see Section 4.2). We also confirm the presence of isotopologues (see Section 4.3). Observations from previous cometary missions, such as Giotto, had to rely on photochemical models to determine the exact nature of the numerous ions detected at a given u·q−1. From ground-based observations, only a few ions were directly identified. Table 1 shows, in the second and third columns, a compilation of the previous knowledge on cometary plasma composition from the literature (Delsemme 1985, 1991; Balsiger et al. 1995; Lis et al. 1997; Huebner et al. 1991; Haider & Bhardwaj 2005), from 13 u·q−1 (lower bound for DFMS) to 40 u·q−1. We have detected a number of new ion species that had not been predicted before the Rosetta mission, such as the alkali metal ions Na+ and K+ and the dication CO2++ (see Section 4.4). Some ion species predicted by photochemical models have not been detected by DFMS. Their presence cannot be ruled out, since favourable conditions for their detection may not have been met, such as a lower outgassing rate of the nucleus compared with 1P during the Giotto fly-by, the lack of DFMS ion mode closer to the nucleus, and, probably of great importance, the detrimental influence of the spacecraft potential on the effective acceptance of DFMS in ion mode. Most, yet not all, cometary ion species may be sorted into three main families (hereinafter F):
(F1) those produced by ionisation of a parent molecule p and lost through transport, like CH3+ (i.e. reacting slowly or not at all with H2O);
(F2) those produced by ionisation of a parent molecule p and lost through chemistry with H2O, the dominant neutral species, like HO+ and CH4+ (i.e. the ion species X+ reacts with H2O such that X+ + H2O → products);
(F3) those produced through ion-neutral chemistry only and lost either by transport (e.g. NH4+), by chemistry, or both (e.g. molecules with a proton affinity between those of H2O and NH3, such as H3O+, CH3OH2+, and H3S+).
Ions produced from high proton affinity neutrals are further discussed in Section 4.2. However, not all ion species belong to one of these families. For ions produced by ionisation, it is possible to quantify which loss process, that is, transport (F1) or chemical reactions with water (F2), dominates. Beth et al. (2019) showed that a dimensionless parameter of interest is

α_X+ = k_{X+ + H2O} n_H2O(r_c) r_c / U = k_{X+ + H2O} Q_H2O / (4π U^2 r_c),

where k_{X+ + H2O} stands for the reaction rate constant of X+ + H2O → products (see Appendix B), n_H2O(r) is the local water number density close to the comet (assumed ∝ 1/r^2), Q_H2O is the outgassing rate of H2O, U is the ion outward radial speed, assumed to be that of the neutrals and constant with cometocentric distance, and r_c is the radius of the nucleus. α_X+ gauges which loss process dominates. For α_X+ ≫ 1, X+ is mainly lost through chemistry with water close to the surface, while for α_X+ ≪ 1, it is lost through transport. At large cometocentric distances, the loss through transport always dominates, such that the ion number density decreases asymptotically as 1/r. Because the kinetic rate constants depend on the species, as shown in Table B.2, and the water number density evolves along the orbit, α_X+ also depends on the heliocentric distance.
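To illustrate how α_X+ separates the two regimes, the following minimal Python sketch (our own illustration, not from the authors) evaluates the expression above for a few outgassing rates. The rate constant k is a placeholder of the order of typical ion-H2O reaction rates, not a value taken from Appendix B; U and r_c take the illustrative values quoted for Fig. 17 (900 m·s−1 and 2 km).

import math

k = 1.0e-15      # m^3 s^-1, assumed ion + H2O rate constant (~1e-9 cm^3 s^-1); placeholder
U = 900.0        # m s^-1, radial outflow speed (illustrative value used for Fig. 17)
r_c = 2.0e3      # m, nucleus radius (illustrative value used for Fig. 17)

for Q in (1e25, 1e27, 1e29):   # s^-1, outgassing rates from low to high activity
    alpha = k * Q / (4.0 * math.pi * U**2 * r_c)
    # Rough threshold: alpha >> 1 means chemistry with H2O dominates near the surface.
    regime = "chemistry (near surface)" if alpha > 1.0 else "transport"
    print(f"Q = {Q:.0e} s^-1 -> alpha_X+ = {alpha:.2g}, dominant loss: {regime}")

With these assumed numbers, α_X+ crosses unity between Q = 10^25 and 10^27 s−1, consistent with the activity thresholds discussed below.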
In order to assess the evolution and variability of some detected ions, one should refer to Appendix B for detailed information on the photo-ionisation and kinetic rates. Electron-ion dissociative recombination is negligible at the location of Rosetta (Heritier et al. 2018; Beth et al. 2019); as a result, the ion number density profile of these ions follows an expression adapted from Eq. B.3 in Beth et al. (2019), which involves E_2, the exponential integral function; τ_c, the mean optical depth at the surface for a water-dominated coma, significant near perihelion (Heritier et al. 2017; Beth et al. 2019); ν_p, the ionisation frequency of the neutral parent molecule p yielding X+; and Q_p, the outgassing rate of the parent molecule. For low outgassing activity (α_X+ ≪ 1, i.e. Q ≲ 10^25 s−1, and τ_c ≪ 1), the main loss for the ion is through transport, regardless of the cometocentric distance, and its number density profile converges towards (Galand et al. 2016)

n_X+(r) = [ν_p Q_p / (4π U^2 r)] (1 − r_c/r).    (5)

For high outgassing activity (α_X+ ≫ 1), the loss is dominated by chemistry with H2O for distances close to the nucleus; as a result, the ion is in photochemical equilibrium and its density is given by

n_X+(r) = [ν_p Q_p / (k_{X+ + H2O} Q_H2O)] E_2(τ_c r_c/r),    (6)

valid up to tens or hundreds of kilometres above the surface, depending on how high α_X+ is. Interestingly, α_X+/4 corresponds to the ratio between the maximum X+ number density reached with transport-dominated loss (i.e. Eq. 5 at r = 2r_c) and that reached with chemistry-dominated loss, when photo-absorption is ignored (i.e. Eq. 6 with τ_c = 0). Fig. 16 illustrates the effect of increasing the relative importance of reactions with H2O on the ion number density.

Table 1. Compilation of the ions predicted and detected at comets as a function of the mass. 'Identified species' are those detected by UV, IR, visible, or radio spectroscopic ground-based observations of comets (Delsemme 1985, 1991; Lis et al. 1997, see Section 1). 'Predicted species' are those included in photochemical models for 1P/Halley (Huebner et al. 1991; Haider & Bhardwaj 2005) or considered as dominant for the corresponding mass (Balsiger et al. 1995). 'Detected species in HR' refers to those detected with ROSINA-DFMS for this study. 'Peaks in LR but not in HR' refers to peaks detected in LR while no peaks were present in HR, with a strong candidate given in parentheses (see Section 4.4). * means, for Si+, P+, and O2+, that, although a peak is located at the correct mass, it is seen once or twice in one of the two channels only, which may cast some doubt on their presence.

As Q_H2O increases, α_X+ increases and the ion density profile is damped and flattened to the photochemical value, or lower in the presence of photo-absorption (based on Eq. 6). At large cometocentric distances, the ion number density profile follows Eq. 5. As 67P got closer to the Sun, α_X+ increased from ≪1 to ≫1, and similarly for τ_c. Beth et al. (2019) assessed the importance of photo-absorption near perihelion. According to the average photo-absorption cross-section of H2O that they derived, the optical depth is ∼2-3 near perihelion at the nucleus' surface. This entails a decrease in the effective H2O photo-ionisation rate by a factor of 7-20 at the comet's surface. Fig. 17 shows, for different cometary outgassing conditions, the number density profiles between the nucleus and the location of Rosetta for ions with a number density lower than that of H3O+: H2O+, CO2+, CHn+ (n = 1-4), and NHm+ (m = 0-3).
These ions are produced by (dissociative) ionisation of parent molecules (H2O, CO2, CH4, and NH3) and lost through transport or through chemistry, mainly with H2O. H2O+ may also be produced by ion-neutral chemistry (charge transfer between cations and H2O; e.g. H2O+CO2+), but this process has been neglected here as it is significantly less efficient than photo-ionisation. As H2O is the main EUV absorber in the coma near perihelion, we have used the same optical depth (i.e. 3) in Fig. 17 for all photo-ionisation rates. The results shown in Fig. 17 for three outgassing conditions provide simple insight into the ions of main interest and their number density profiles, even if obtained using several simplifying assumptions. Firstly, only photo-ionisation has been considered, whereas Galand et al. (2016) and Heritier et al. (2018) have shown that electron-impact ionisation, rather than photo-ionisation, is usually the main ion source at large heliocentric distances. Solar-wind charge exchange has also been neglected, although it may contribute to the ionisation of neutrals (Simon Wedlund et al. 2019) in some cases at large heliocentric distances. Increasing the ionisation rate would shift the profiles to higher number densities. Secondly, the ion dynamics and any significant acceleration of the cometary plasma have been neglected. Galand et al. (2016) and Heritier et al. (2018) have shown that considering the ion velocity to be that of the neutral species is a good approximation to assess the plasma density at large heliocentric distances. Near perihelion, if the ion speed were significantly higher than that of the neutrals, as suggested by Odelstad et al. (2018), α_X+ would be lower, preventing ion-neutral chemistry from happening and chemically produced species, such as NH4+ and CH3OH2+, from being present. The results shown in Fig. 17 are thus more reliable when one compares various ions originating from the same parent molecule, such as, for example, CH3+ and CH4+.

Fig. 16. Dimensionless ion number density profile versus scaled cometocentric distance as a function of α, with τ_c = 0 (left panel) and τ_c = 3 (right panel). On the y-axis, the values of the ion number density n_i are scaled with respect to the maximum ion number density reached when only transport is considered for a given ion, i.e. α_X+ = 0. As α_X+ (coloured curves) increases, more and more ion-neutral reactions take place, especially close to the nucleus. This results in damping the number density of ions reacting with H2O.

At large cometocentric distances (see Fig. 17, top panel), transport always dominates, such that the CH4+ number density is slightly higher than that of CH3+ (assuming both are only produced from CH4). At the location of Rosetta, however, because CH4+ does react with H2O (F2) and CH3+ does not (F1), CH3+ is similar to (top panel, n_CH3+/n_CH4+ ≈ 1) or dominates over CH4+ (middle and bottom panels, n_CH3+/n_CH4+ > 10). Figures 16 and 17 may help define the most favourable conditions for detecting these ions. At low outgassing activity, when transport dominates the ion loss, the ion number density is ∝ ν_p Q_p; both parameters increase as the comet gets closer to the Sun. The ion number density profile peaks at r = 2r_c, meaning that the best location for detection is close to the nucleus. Under high outgassing activity (when photochemical equilibrium is achieved), the ion number density is approximately ∝ ν_p, i.e.
only the photo-ionisation rate and its variation with heliocentric distance will control the ion number density. However, the location at which the ion density profile peaks also depends on the absorption of solar EUV radiation by neutral species in the coma, essentially H2O. If photo-absorption is neglected, the ion density peaks closer to the nucleus' surface and has a plateau that may extend over distances ranging from a few tens to thousands of kilometres, depending on the cometary activity (see Fig. 16, left panel). Under optically thick conditions, the EUV radiation cannot penetrate deep enough into the coma to ionise neutrals, such that the ion number density peaks farther away from the nucleus' surface (see Fig. 16 and compare profiles of similar colours). Depending on the ion species and on whether or not it reacts with H2O, the peak of the ion number density is not located at the same cometocentric distance:
(F1) for ions which react neither with H2O nor with any other major neutral species and are mainly produced by ionisation of parent molecules (e.g. CH3+), the maximum number density is reached around r ≈ 2r_c and the ion density decreases asymptotically as 1/r;
(F2) for ions which do react with H2O (e.g. HO+ and CH4+), the number density peaks at different cometocentric distances, depending on the cometary activity (see Fig. 17). For low activity, the ion number density peaks at r ≈ 2r_c. For high activity, these species start to reach photochemical equilibrium, such that their number density profile exhibits a plateau from the nucleus' surface to tens or hundreds of cometary radii and then decreases asymptotically as 1/r;
(F3) for high proton-affinity molecular ions (e.g. NH4+, H3O+, and CH3OH2+), the number density peaks close to the comet's surface but decreases asymptotically faster, as log(r)/r^2.
The aforementioned statements hold when photo-absorption is negligible. For outgassing rates higher than 10^28-10^29 s−1, photo-absorption matters: the ion number densities peak farther away from the comet's surface, as the EUV solar radiation cannot penetrate deep enough into the coma to ionise neutrals (see Fig. 16, right panel).

Fig. 17. Modelled number density profiles of selected ions for different outgassing conditions (see text). We have used the same neutral composition for each case: 88.7% H2O, 10% CO2, 1% NH3, 0.3% CH4. The photo-ionisation frequencies ν_p are taken from Huebner & Mukherjee (2015) at low solar activity at 1 au and scaled with respect to the heliocentric distance. The expanding speed of the gas is set to a constant U = 900 m·s−1 and the nucleus' radius r_c to 2 km. Ions are produced from photo-ionisation or photo-dissociative ionisation of the neutral molecules (see Section B). The gas temperature is assumed constant at T = 100 K. The grey areas correspond to the cometocentric distance of Rosetta during these periods. The uncertainties from the kinetic rates on the ion number density are represented through coloured shades.

High proton affinity

The detection of protonated high-proton-affinity molecules other than H3O+ at 67P, such as NH4+, confirms the expectations from the models. Because of the insufficient resolution of the ion spectrometers on board Giotto, NH4+ was blended with H2O+, such that its contribution to the peak at 18 u·q−1 could only be assessed from photochemical modelling based on the measured neutral composition. Prior to Rosetta's arrival at 67P, Vigren & Galand (2013) attempted to assess the total contribution of these water ions inside the diamagnetic cavity.
Though this cavity, detected around perihelion, was not as extended as expected (Goetz et al. 2016b), that does not prevent these ions from being produced close to the nucleus, then transported outwards, outside the diamagnetic cavity, and finally detected by the instrument. Indeed, during the escort phase and near perihelion, Beth et al. (2016) unambiguously detected NH4+ in the coma of 67P, well separated from H2O+ thanks to the high resolution of the ROSINA-DFMS HR mode. Fig. 6 clearly shows that near perihelion, at the location of Rosetta, H2O+ was on average more abundant than NH4+. In the present paper, we have confirmed the presence of three additional protonated molecules in the coma: HCNH+, H2COH+, and CH3OH2+, which were previously predicted from photochemical modelling (Heritier et al. 2017). Their detection attests to the importance of ion-neutral chemistry and collisions in a high-activity environment. With the exception of H3S+ (undetected in HR), these protonated molecules, HCNH+, H2COH+, and CH3OH2+, are strong candidates for the peaks observed at 28, 31, and 33 u·q−1, respectively, in the LR mode. By means of photochemical modelling, Heritier et al. (2017) investigated the relative contributions at 28 u·q−1. They found that the CO+ contribution should be lower than that of HCNH+. However, their modelling did not include C2H4+, another candidate at 28 u·q−1 in LR, and its associated chemistry, although this species is clearly detected in HR simultaneously with HCNH+ in the same spectrum (see Fig. 11, middle). The non-detection of H3S+ in HR is still not clearly understood. It has been pointed out that it may be related to the energy acceptance of the instrument, which decreases at higher masses (Heritier et al. 2017). It may also be related to the origin and source of H2S. In the photochemical model, the source and background of the neutral species are assumed to come exclusively from the gas released by ice sublimation at the surface, excluding extended sources. However, Calmonte et al. (2016) have suggested from DFMS neutral measurements that part of the H2S is associated with dust grains. Finally, CH3+ might be considered part of (F3), as CH2 has a higher proton affinity than HO and H2O (Altwegg et al. 1994). However, Fig. 4 shows no evidence for higher counts of CH3+ at perihelion, when proton transfer reactions are favoured, compared with large heliocentric distances.

Water isotopologues

Figs. 7 and 8 attest to the presence of water ion isotopologues, namely H2 18O+, H2DO+, and H3 18O+, near perihelion. As a consequence, DO+ and H 18O+ must also be present, but the mass resolution of ROSINA-DFMS is not high enough to separate them from the H2O+ and H3O+ signals, respectively (see Fig. 6). In view of the isotopic ratios D/H (∼5.3 × 10^-4, Altwegg et al. 2015) and 18O/16O (Balsiger et al. 1995), one may attempt to derive the corresponding ion ratios using count rates at 19 and 20 u·q−1. However, as already noticed in Section 3.2, the quality of DFMS measurements at 19 u·q−1, close to the edge of the detector, is not good enough to provide the necessary accuracy for the data processing. Moreover, ions at 19, 20, and 21 u·q−1 were not probed at exactly the same time (approximately 30 seconds separate individual spectra), and plasma conditions (and thus the spacecraft potential) have been shown to vary on short time scales.
On the 16th of September 2015 from 08:59 UT, two successive HR spectra exhibit reliable peaks in both channels: 1) at 20 u·q−1, associated with H2 18O+ and the (not separable) doublet of H3 17O+ and H2DO+, and 2) at 21 u·q−1, with a peak associated with H3 18O+. Peaks were fitted with a single or double Gaussian for the highly abundant ion H3O+ only, while the total counts corresponding to the peaks of its isotopologues, with very small amplitudes, were simply obtained by summing the individual pixel counts. On this occasion, the isotopic ratios may be derived, although with limited accuracy. Firstly, (3D/H + 17O/16O) may be inferred from (H2DO+ + H3 17O+)/H3O+ (Eberhardt et al. 1995). We obtained ∼10^-3, to be compared with (2.05±0.3)×10^-3 from the neutral isotopic ratios obtained by DFMS observations in the neutral mode (Schroeder et al. 2019). Secondly, 18O/16O may be inferred from H3 18O+/H3O+ (Eberhardt et al. 1995). We obtained ∼10^-3, to be compared with (2.25±0.18)×10^-3 from the DFMS neutral mode (Schroeder et al. 2019). As aforementioned, the isotopic measurements have only been possible on very few occasions, when the count rates in the ion HR mode were sufficient. Moreover, the accuracy of their derivation is limited by several factors, such as the intrinsically smaller effective sensitivity of DFMS and the variable instrument energy acceptance controlled by the large and varying spacecraft potential. It is therefore out of scope to perform accurate isotopic measurements using HR ion data. However, our results show a relatively good agreement between the ion-derived and neutral-derived isotopic abundances.

Dications

As shown in Section 3.3, ROSINA-DFMS provides the first unambiguous detection of the doubly-charged ion (or dication) CO2++ in a cometary ionosphere. Dications have been previously observed in dense planetary ionospheres throughout the Solar System: O++ at Earth (Hoffman 1967), at Venus (Taylor et al. 1980), perhaps at Mars (Dubinin et al. 2008), and at Io, along with S++ (Frank et al. 1996) and C++ (Sandel et al. 1979). The recent (and only) review of ionospheric doubly-charged ions was published by Thissen et al. (2011). Additional information on organic dications and their structures, aimed at presenting their chemical properties, may be found in Lammertsma et al. (1989). Thissen et al. (2011) reviewed stable dications stemming from the most abundant atoms and molecules found in planetary atmospheres, namely: C++, N++, O++, CO++, N2++, NO++, O2++, Ar++, and CO2++. Notwithstanding a couple of ions indistinguishable by mass spectrometry (N2++ and N+, O2++ and O+), many of these dications have u·q−1 very close to monocations from different neutral atoms, molecules, and radicals (e.g. N++ and Li+, CO++ and N+, NO++ and CH3+, Ar++ and Ne+), and their detection is difficult or impossible even with high-performance mass spectrometers such as ROSINA-DFMS. Moreover, there have been only a few attempts at assessing their potential role within an ionosphere and exosphere. For instance, Lilensten et al. (2013) showed that the presence of CO2++ and its associated chemistry in a planetary upper atmosphere may play a non-negligible role in ion escape. Furthermore, Falcinelli et al. (2016) showed that the Coulomb explosion of CO2++ produces CO+ and O+ at a few eVs, which is energetic enough to overcome the gravitational attraction of the planet.
ROSINA-DFMS has two advantages for the detection of these peculiar and scarce ions compared with the ion spectrometer on Giotto: a higher sensitivity and a wider mass coverage. By covering half-odd-integer values of u·q−1, DFMS may probe dications stemming from double ionisation of odd-mass-number parent molecules. It is likely that even-mass-number dications are also present but buried within the monocations' signals (e.g. O2++ with O+ or N2++ with N+); fortunately, this is not the case for CO2++ at 22 u·q−1. In neutral mode, the neutrals are ionised by electron impact with energies of 45 eV. The recorded signal at 22 u·q−1 then stems from CO2, whose double-ionisation threshold is ∼37 eV. In the ion mode, ions have not been ionised within the instrument and originate from the cometary plasma. In order to firmly establish the nature of the ions detected at 22 u·q−1, we have considered the following possibilities. We first checked for candidate monocations: 22Ne+, D2 18O+, HD2 17O+, and H2D 18O+ (Balsiger et al. 1995). Ne (neon) has never been detected (Rubin et al. 2018) and, regarding the isotopic composition (Schroeder et al. 2019), the hydronium ion isotopologues have too low an abundance to be detected. In a second step, we checked for candidate dications and considered neutral candidates other than CO2 at 44 u: CS, CH2ON, C2H4O, C2H6N, and C3H8. Within this list, two arguments are in favour of the CO2++ dication: CO2 is dominant at 44 u, and the peak at 22 u·q−1 correlates with latitude, as CO2 does. Fig. 18 shows LR spectra at 22 u·q−1 above both hemispheres. To exclude seasonal variations, we selected the time period from October 2014 to February 2015 (pre-equinox). ROSINA-DFMS probed frequent and strong signals at 22 u·q−1 above the Southern Hemisphere, where CO2 is abundant (Hässig et al. 2015). Over the pre-equinox Northern Hemisphere, where CO2 is minor, no peak is present at 22 u·q−1. ROSINA-DFMS cannot simultaneously probe both ion and neutral compositions, such that a 'direct' correlation cannot be assessed. However, we find that, pre-equinox, both the neutral CO2 abundance and the signal at 22 u·q−1 in ion mode correlate with the spacecraft latitude. Finally, to strengthen our argument, one should compare the pre-equinox count rates at 22 u·q−1 and at 44 u·q−1. At 22 u·q−1, the count rates were almost 100, while at 44 u·q−1 they were almost 1000. However, as aforementioned, the instrument sensitivity is tenfold higher at 22 u·q−1 than at 44 u·q−1, such that the corrected 44 u·q−1/22 u·q−1 ratio is ≈ 100, consistent with the ratio of the photo-ionisation cross-sections of CO2 leading to CO2+ and CO2++ (Masuoka 1994; Tian & Vidal 1998b). Unfortunately, no detection of dications except at 22 u·q−1 could be achieved from DFMS data, even though the design of ROSINA covers half-odd-integer masses. Indeed, in neutral mode, when neutrals are ionised in the gas chamber and then travel within the instrument, dications are observed, at 13.5 u·q−1 for example. There are some hints to explain the lack of detection of stable doubly-charged ions other than CO2++. Dications are primarily produced from direct ionisation of neutrals. As ROSINA-DFMS detection starts at 13 u·q−1, only dications associated with neutral species above 26 u could be detected.
Considering in the first instance CO, H2CO, CH3OH, H2S, and hydrocarbons (saturated or not), all of them have an even-integer mass number, and the corresponding u·q−1 of the doubly-charged ion falls on a value shared with existing singly-charged ions, except at 22 u·q−1 (see Table 1). The peak at 22 u·q−1 never exceeded 100 counts. Let us assume that double ionisation of neutral species follows a similar pattern to that of CO2, i.e. it occurs hundredfold less than single ionisation (Masuoka 1994; Tian & Vidal 1998a). A neutral parent species with an odd-integer mass number (which offers no overlap of the dication peak with other species) should then exceed ∼1% of the CO2 volume mixing ratio in order for its daughter dication to generate 1 count, and even more to have the peak above the noise level. No cometary neutral species fulfils these two requirements.

Fig. 18. Spectra in LR at 22 u·q−1 over three latitudinal regions prior to the 1st of March 2015: above +30° (upper panel, ∼170 stacked individual spectra), between −30° and +30° (middle panel, ∼470 stacked individual spectra), and below −30° (bottom panel, ∼290 stacked individual spectra). Due to an uneven latitudinal coverage when DFMS was operating in ion mode, more spectra were acquired above the Southern Hemisphere, which might be a source of observational bias.

The detection of stable dications also raises questions about their effects upon plasma dynamics and behaviour. Unlike in terrestrial ionospheres, gravity is not at play at comets, such that any neutral or ionised species will leave independently of its mass. The dynamics of an ion will only depend on its mass-per-charge ratio, as the dynamics is usually dominated by electromagnetic forces. In the case of CO2++, its mass-per-charge is not so different from that of the water-group ions and, therefore, its effect on the dynamics is marginal. However, it might play a role in the ion chemistry and in the production of 'energetic' CO+ and O+, of about a few eVs. The term energetic may mislead the reader, though. Within a terrestrial-type ionosphere like those of Mars, Earth, or Titan, the ions are (almost) thermalised with the ambient neutral species, such that their temperature is equal to the neutral temperature, which never exceeds 1000 to 2000 K at most. In that case, an ion of 1 eV or more is classified as energetic compared with the neutral species. Within a cometary ionosphere, ions already have a significant amount of energy. Considering ions travelling at between 1 km·s−1 (the speed of the neutral species) and 8 km·s−1, as observed at Rosetta (Odelstad et al. 2018), the ion kinetic energy is between 0.09 and 6.0 eV for H2O+, between 0.15 and 9.3 eV for CO+, and between 0.083 and 5.3 eV for O+. In comparison, the kinetic energy released by the Coulomb explosion of CO2++ into CO+ and O+ lies within these values (Falcinelli et al. 2016), such that this process is not a significant additional source of energy. Aside from this energetic aspect, the double photo-ionisation of CO2 might be a marginal source of CO+ and O+ ions. As suggested in Section 3.3, DFMS likely detected the 'stable' CO2++, which is produced in lower quantities than its metastable/unstable counterpart, the latter quickly dissociating into CO+ + O+ (Masuoka 1994; Slattery et al. 2005; Falcinelli et al. 2016). Lastly, our analysis shows that not only the low outgassing rate (Q < 10^25-10^26 s−1) but also the proximity of Rosetta to 67P (i.e.
a few to tens of kilometres) played a key role in the detection of CO2++, because of the short lifetime of CO2++ (≤ 4 s, Mathur et al. 1995). Hence, we anticipate that its detection is possible at a comet only if these conditions are met.

Future missions

Given the instrumental, operational, and orbital constraints, we showed that DFMS had a remarkable capability to assess the ion composition of the cometary plasma, though this was not its primary goal. Unlike Giotto, Rosetta was a non-spinning and quasi-stationary spacecraft with respect to 67P. DFMS was often operating in ion mode when it was pointing towards the comet, though manoeuvres were ongoing. These different characteristics may greatly influence the ion detection. In the case of a flyby for a future mission, the spacecraft would fly at speeds of tens of km·s−1 with respect to the target. There are two advantages to such a trajectory. Firstly, it gives radial coverage of the plasma number density and composition over thousands of kilometres within a short period of time (typically a few hours), limiting time-dependent effects (e.g. changes in outgassing and neutral composition). Secondly, most of the cometary ions will be collimated into a limited region of velocity phase space in the spacecraft frame of reference (since the ion thermal speed is below or of the order of their mean speed), such that the instrument will capture the bulk of the cometary ions with a limited field of view. In addition, having a fast spacecraft minimises the troublesome effect of the spacecraft potential, which was very negative most of the time and varying for Rosetta. Indeed, the energy of a cometary ion, mainly kinetic, will be (1/2) m_i (u_i − u_SC)^2 ≈ (1/2) m_i u_SC^2 with respect to the spacecraft. Typically, for H2O+, the kinetic energy in the spacecraft frame spans from 9 eV (v_SC = 10 km·s−1) to 330 eV (v_SC = 60 km·s−1), to be compared with the largest negative spacecraft potential of Rosetta, around −30 V. In the case of an escorting spacecraft like Rosetta, being closer to the nucleus near perihelion (a few tens of kilometres instead of hundreds of kilometres, or performing the dayside excursion) would have improved the signal-to-noise ratio and the detection of ions resulting from chemistry and dissociative ionisation of neutrals. However, being too close (typically ∼10 km) may not be the best location for very active comets and is not safe for the spacecraft. Indeed, the maximum in the plasma number density is not located at the same cometocentric distance depending on the outgassing activity (see Section 4.1). Although 67P has a low-to-intermediate outgassing activity, many cations have been detected even at 150-200 km from the nucleus near perihelion. This result indicates that a very active comet is not a requirement given the current instrumental capabilities of mass spectrometers like DFMS. For weakly active comets (<10^27-10^28 s−1), an escorting spacecraft like Rosetta is the best option, as it allows the ion composition to be measured over a long period, at different stages of the outgassing activity. However, additional aspects should be considered for future similar missions: limiting the manoeuvres during the spectrometer's scans, allocating operational time for the ion mode close to the nucleus, and a more uniform time coverage throughout the mission. As shown in Fig. 1, the DFMS ion dataset is relatively sparse, excluding safe modes and excursions.
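The flyby energies quoted above follow directly from (1/2) m_i u_SC^2; the short Python sketch below (our own check, not from the paper) reproduces them for a few ions and flyby speeds.

# Minimal sketch: kinetic energy of a cometary ion in the spacecraft frame,
# E ~ (1/2) m_i * u_SC^2, neglecting the ion's own speed (u_i << u_SC).
AMU = 1.660539e-27   # kg per atomic mass unit
EV  = 1.602177e-19   # J per electronvolt

ions = {"H2O+": 18, "CO+": 28, "O+": 16}   # approximate masses in u

for name, m_u in ions.items():
    for u_sc in (10e3, 60e3):              # m/s, flyby speeds of 10 and 60 km/s
        e_ev = 0.5 * m_u * AMU * u_sc**2 / EV
        print(f"{name}: u_SC = {u_sc/1e3:.0f} km/s -> E ~ {e_ev:.0f} eV")

# For H2O+ this gives ~9 eV at 10 km/s and ~335 eV at 60 km/s, matching the
# 9-330 eV range quoted in the text, well above a -30 V spacecraft potential.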
Regular ion scans would have helped to track the evolution of the ion composition through the mission. Alternately running the LR and HR modes should also be considered. For very active comets (>10^29 s−1), a flyby is the best option from an ion composition perspective, though different instrumentation is required, such as a time-of-flight (TOF) mass spectrometer. A flyby requires a high time resolution, and spectrometers like DFMS are not suitable (one DFMS spectrum at a given u·q−1 is acquired during 20 s every 10-15 min). TOF spectrometers have a much higher time resolution, acquiring several u·q−1 all at once, at the expense of a lower sensitivity. Besides fostering ion-neutral chemistry, very active comets exhibit several boundaries (Mandt et al. 2019) and regions, including the diamagnetic cavity (Cravens 1989). The ion composition may differ inside and outside such a cavity. Although it was detected at 67P (Goetz et al. 2016a,b), diamagnetic crossings lasted on average less than 30 minutes, which corresponds to fewer than 3 scans for a specific mass-per-charge ratio by DFMS, limiting its ability to probe the composition inside the cavity. More time should be spent inside this region, and that can best be achieved at very active comets. In addition, one flyby would allow the composition inside and outside to be assessed over a short time period.

Conclusion

The mass spectrometer DFMS with its HR mode outperforms any in situ measurements made at comets so far. In particular, its HR ion mode has been extremely valuable for identifying ions present within a coma at close range. Although the time coverage in ion mode is by far more restricted than that of the neutral mode, DFMS has produced invaluable results, which are presented in this paper. Amongst all of them, a very new and interesting result is the first detection of the CO2++ dication. For future studies, to make the most out of the ion ROSINA-DFMS dataset, cross-analysis should be performed with the set of instruments from the RPC Consortium.

List of the ions displayed in the different spectra with their exact mono-isotopic mass (m_0 + Δm). Near the pixel p_0, the difference between two pixels corresponds to ∼0.03 × 10^-3 m_0 u·q−1 in terms of mass-per-charge ratio in HR. As the HR resolution is >3000 at 1% peak height, ion species should be separated by at least 10 pixels, or ∼0.33 × 10^-3 m_0, to be resolved if the counts do not exceed 100 times the noise level; the mass-per-charge separation has to be higher otherwise. First column: commanded mass-per-charge ratio m_0 to which the ions belong. Second column: algebraic mass-per-charge shift with respect to m_0. Third column: the ion species.
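As a worked check of this pixel arithmetic (our own illustration; the pixel pitch and the 10-pixel criterion are those quoted above, and the exact mono-isotopic masses are standard values), the sketch below tests whether two ions sharing a commanded integer mass are separable in HR.

# Minimal sketch: separability of two ions at the same commanded mass m0 in
# DFMS HR mode, using pixel pitch ~0.03e-3 * m0 (u/q per pixel) and a
# minimum separation of ~10 pixels for low signals.
def separable(m_a, m_b, m0, min_pixels=10):
    pixel_pitch = 0.03e-3 * m0            # u q^-1 per pixel near the peak centre
    pixels = abs(m_a - m_b) / pixel_pitch
    return pixels, pixels >= min_pixels

# CO+ (27.9944 u) vs C2H4+ (28.0308 u), both at commanded mass 28:
pixels, ok = separable(27.9944, 28.0308, 28)
print(f"CO+ vs C2H4+ at 28 u/q: ~{pixels:.0f} pixels apart -> separable: {ok}")
# ~43 pixels, comfortably above the 10-pixel criterion, consistent with the
# clean separation of CO+ and C2H4+ reported at 28 u/q.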
OSTRFPD: Multifunctional Tool for Genome-Wide Short Tandem Repeat Analysis for DNA, Transcripts, and Amino Acid Sequences with Integrated Primer Designer

Microsatellite mining is a common outcome of the in silico approach to genomic studies. The resulting short tandemly repeated DNA can be used as molecular markers for studying polymorphism, genotyping, and forensics. The omni short tandem repeat finder and primer designer (OSTRFPD) is among the few versatile, platform-independent, open-source tools written in Python that enable researchers to identify and analyse genome-wide short tandem repeats in both nucleic acid and protein sequences. OSTRFPD is designed to run either in a user-friendly, fully featured graphical interface or in a command line interface mode for advanced users. OSTRFPD can detect both perfect and imperfect repeats of low complexity with customisable scores. Moreover, the software has a built-in architecture to simultaneously filter the selection of flanking regions in DNA and generate microsatellite-targeted primers implementing the Primer3 platform. The software has built-in motif-sequence generator engines and an additional option to use the dictionary mode for custom motif searches. The software generates search results including general statistics covering motif categorisation, repeat frequencies, densities, coverage, guanine–cytosine (GC) content, and simple text-based imperfect alignment visualisation. Thus, OSTRFPD presents users with a quick, single-step solution package to assist the development of microsatellite markers and to categorise tandemly repeated amino acids in proteome databases. Practical implementation of OSTRFPD was demonstrated using publicly available whole-genome sequences of selected Plasmodium species. OSTRFPD is freely available and open source for improvement and user-specific adaptation.

The omni short tandem repeat finder and primer designer (OSTRFPD) has been designed to address some of these key issues by providing a simple yet useful tool to rapidly identify and categorise repetitive nucleic or amino acid sequences and to assist in the development of microsatellite-targeted primers with minimum user input and programming knowledge.

Implementation

OSTRFPD has been designed with molecular researchers with little or no computer programming background in mind and is optimised for small- (approximately 5 Kbp) to medium-sized (approximately 50 Mbp) FASTA sequences. In the architecture and workflow of OSTRFPD (Figure 1), FASTA sequences (DNA, RNA, or proteins) are scanned for user-configurable repetitive units. The software supports detection of both perfect and imperfect repeats with low complexity, which widens the range of potential STR analyses. Configuration options for results can vary based on the sequence type and the anticipated output format. The output format can be tabulated values (default), FASTA sequences, or an alignment type. OSTRFPD has the option to display imperfect repeats in a plain-text alignment, comparing the imperfect sequence with its nearest perfect equivalent for visually identifying indels, gaps, and mismatches. The alignment mode also generates additional information, such as the default local alignment scores, custom scores, and a rudimentary consensus sequence, based on the perfectness of the repeat. For DNA sequences, the software uses the well-established Primer3 platform with configurable parameters for simultaneously designing primers upon microsatellite detection.
Moreover, assuming that the primer-tag option is selected, OSTRFPD appends a user-defined tag to the 5′-tail of the primers, which simplifies the process of ordering tagged primers. The dictionary-based motif search is a unique feature of OSTRFPD. The dictionary is essentially a plain text file with each custom motif listed on a new line. The dictionary must contain only 1 type of molecule (not a mixture of DNA, RNA, or proteins). At runtime, motifs are processed automatically to filter out any duplicates or equivalent cyclic motifs. The current version of OSTRFPD only supports fixed-length motifs and single minimum-repeat-number-based searches, although a single dictionary file may contain collections of variable-length motifs. The dictionary mode exclusively allows searches of motifs of 1 to 30 bp or amino acids, which may enable researchers to identify user-defined simple oligonucleotides, transcription factor binding regions, or signalling peptide sequences. Dictionaries optimised for nucleotide and amino acid motifs commonly observed in Plasmodium species have been bundled with the OSTRFPD distribution.

Selection of databases

The usability of OSTRFPD was demonstrated with freely available standard reference genomic and protein databases of selected Plasmodium species from the PlasmoDB web server (http://plasmodb.org/common/downloads/release-36/).

Figure 1. Schematics of OSTRFPD software architecture and workflow. OSTRFPD can either be used as a command line console with arguments or as a fully featured graphic user interface tool. A single or multi-FASTA file (eg, .fasta, .fa, and .gz 'gunzip-compressed fasta') for nucleic acid or protein is directly accepted as the data source. All types of sequences can be scanned for short tandem repeats, and primers can be simultaneously designed for DNA-associated microsatellites using the built-in flanking sequence filter and primer3 plugin. Results can be generated with the option to include a general statistics report. Results can be of 3 major types: (1) 'Default' with tab-delimited values and associated headers, (2) 'Alignment' or 'Imperfect Alignment only' format with alignments of repeats for both perfect and imperfect repeats, and (3) 'FASTA' as a portable multi-FASTA format containing the target microsatellite with flanking sequences. MS indicates microsatellites; OSTRFPD, omni short tandem repeat finder and primer designer.

Software prerequisites for running OSTRFPD

OSTRFPD is freely available under the GNU General Public License (GPL) (https://www.gnu.org/licences/gpl-3.0.en.html). The software was tested for proper operation in both Windows (versions 7 and 10) and Linux Ubuntu (version 16.04), provided that at least Python 3.5, PyQt5 5.9.1, and Biopython 1.7 are correctly installed. 18,19 The software uses Python's built-in, powerful regular expression engine to identify patterns within DNA, RNA, or amino acid sequences and locate STRs. To generate primers, users can either directly implement the standalone primer3 binaries supplied with the software package or individually compile primer3 from the official source (https://sourceforge.net/projects/primer3/files/primer3/1.1.4/). The details of each parameter for primer design can be obtained from the primer3 documentation (http://primer3.sourceforge.net/primer3_manual.htm). 20

Ease of operation

OSTRFPD can either run as fully featured standalone OS-specific binaries or run directly from the source code within a platform-independent Python environment.
OSTRFPD supports a fully featured graphical user interface (GUI) or command line interface (CLI) in a Windows console or Linux terminal. The GUI mode (Figure 2) is equipped with tool tips and a basic level of error-handling modules to avoid invalid or unintentional inputs. A typical GUI session can be initiated using the parameters 'python3 ostrfpd.py -gui true' in the console or terminal.

Figure 2. Simplified graphical user interface (GUI) for data input. OSTRFPD provides a user-friendly graphical interface which can be initialised using the simple argument 'python3 ostrfpd.py -gui true' in a console or terminal. The user interface has a decent level of built-in error-handling modules to minimise invalid data input. The graphical user interface works along with a display of the console screen. A simple tooltip displayed on the status bar provides a short description of each option under consideration and shows an example of the command line interface parameters whenever feasible, as '<eg, -command value>'. OSTRFPD indicates omni short tandem repeat finder and primer designer.

The CLI mode (Figure 3) is suitable for advanced users who choose to conduct batch operations or implement OSTRFPD as a plugin for their own utilities. Command line interface mode is activated by default. The software generates user-configurable detailed output that can be retrieved as a tab-delimited report file (default), FASTA sequences, or in an alignment format. The details of each parameter and the syntax in the CLI mode can be accessed by following the software documentation or using the built-in help '--help' argument.

Figure 3. OSTRFPD has an advanced option for the CLI that can be initialised using no argument, 'python3 ostrfpd.py', or 'python3 ostrfpd.py -gui false' in a console or terminal. The CLI mode allows OSTRFPD to be used for batch operation as well as a plugin script that can be implemented by other software. Representative images are truncated to save space. OSTRFPD indicates omni short tandem repeat finder and primer designer.

Table 2. Summary of the amino acid repeat analysis: a proteome-wide search for 1 to 2 amino acid (aa) unit motif repeats using default settings with minimum repeats of 7 and 5, respectively. Equivalent command line parameters were supplied as 'python3 ostrfpd.py -scan protein -input source_protein_fasta -unitmin 1 -unitmax 2 -misa 7,5'. Abbreviation: OSTRFPD, omni short tandem repeat finder and primer designer.
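The 'Software prerequisites' section above notes that OSTRFPD builds on Python's regular expression engine to locate STRs. The following minimal sketch is our own illustration of that general approach, not OSTRFPD source code; the motif-length range and minimum repeat number are arbitrary assumptions.

# Minimal sketch: locate perfect short tandem repeats with Python's re
# module. A backreference (\2) turns a candidate unit into a tandem pattern.
import re

def find_perfect_strs(seq, unit_min=1, unit_max=6, min_repeats=5):
    """Yield (start, end, motif, n_repeats) for perfect tandem repeats."""
    pattern = re.compile(r"(([ACGT]{%d,%d})\2{%d,})"
                         % (unit_min, unit_max, min_repeats - 1))
    for m in pattern.finditer(seq):
        run, motif = m.group(1), m.group(2)
        yield m.start(), m.end(), motif, len(run) // len(motif)

for hit in find_perfect_strs("GGATATATATATATCCAAAAAAGT", 1, 3, 5):
    print(hit)   # (2, 14, 'AT', 6) and (16, 22, 'A', 6)

OSTRFPD additionally handles imperfect repeats, cyclic motif equivalence, scoring, and flanking-region filters, which a bare pattern like this does not attempt.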
Microsatellite-targeted candidate genotyping primers were designed for the relatively less studied P ovale curtisi GH01 (Supplemental Table 1). For amino acid repeats, the highest number was detected in P falciparum (3803) with an average density of 908 repeats per million residues ( Table 2). In addition, each motif-sequence and the associated frequency distribution of microsatellites (Figure 4), rRNA repeat motifs ( Figure 5), and amino acid sequences ( Figure 6) were automatically categorised to clearly elucidate the types of repeats involved. Identification and simple alignment view of imperfect repeats An in-depth analysis of imperfect microsatellites could be conducted by visualising the simple text-based alignment to identify indels. The example provided illustrates the results displayed for an imperfect alignment of a randomly selected Plasmodium DNA ( Figure 7A) and protein ( Figure 7B) sequence with their closest corresponding equivalent perfect repeats. In addition, the result displays Biopython's default local alignment scores, non-motif indels, and custom scores along with other minor parameters by default (Figure 7). Similar results can be obtained with user-specified command line parameters for DNA: 'python3 ostrfpd.py -scan dna -input source_dna_fasta -unitmin 1 -unitmax 3 -imperfect 10 -imalign true' and for protein: 'python3 ostrfpd.py -scan protein -input source_protein_fasta -unitmin 1 -unitmax 3 -imperfect 10 -imalign true'. Processing speed, CPU, and memory usage On average, the speed of sequence searches for perfect repeats of 1 to 6 bp long DNA motifs in 'fast search' mode is approximately 200 seconds for nearly 30 Mbp of sequence with a 2.4 GHz Core i5 processor containing 4 GB DDR3 RAM and 3 Mb cache memory. The search time was reduced to approximately 90 seconds for 1 to 4 bp DNA motifs under similar conditions. In contrast, for amino acid sequences totalling approximately 4 million residues, the speed of sequence searches for 1 to 3 and 1 to 2 amino acid long repeats in 'fast search' mode was approximately 468 and 75 seconds, respectively. However, the estimates were found to vary 5% to 10% depending on the background computing load of the system. During each scanning process, the overall CPU usage by OSTRFPD remained in the range of 15% to 35%, allowing the computer to remain operable for regular multitasking. Feature comparison with other microsatellite software An overview of OSTRFPD in comparison with other common microsatellite search tools belonging to a similar category was conducted. OSTRFPD was the only software with an option to filter out microsatellite-targeted primers based on short repeats found within flanking sequences (Table 3). In addition, OSTRFPD has the unique feature of direct analysis of nucleic acid (DNA and RNA) and amino acid sequences for tandem repeats. Other than Msatcommander, 22 OSTRFPD was the only offline tool that could simultaneously generate microsatellite-targeted primers without the need of any additional PERL scripts or manual steps (Table 3). Moreover, OSTRFPD had additional improvements over Msatcommander by identifying and categorising STRs with longer motifs. In contrast with MISA-Web 23 and SciRoKo, 24 OSTRFPD allowed a wider range of motif selection with the provision of filtering STRs based on multiple parameters including perfection threshold, flanking regions, and custom motifs. 
Feature comparison with other microsatellite software

An overview of OSTRFPD in comparison with other common microsatellite search tools of a similar category was conducted. OSTRFPD was the only software with an option to filter out microsatellite-targeted primers based on short repeats found within flanking sequences (Table 3). In addition, OSTRFPD has the unique feature of directly analysing nucleic acid (DNA and RNA) and amino acid sequences for tandem repeats. Other than Msatcommander, 22 OSTRFPD was the only offline tool that could simultaneously generate microsatellite-targeted primers without the need for any additional PERL scripts or manual steps (Table 3). Moreover, OSTRFPD improved on Msatcommander by identifying and categorising STRs with longer motifs. In contrast to MISA-Web 23 and SciRoKo, 24 OSTRFPD allowed a wider range of motif selection, with the provision of filtering STRs on multiple parameters including perfection threshold, flanking regions, and custom motifs. The dictionary-based search mode was exclusive to OSTRFPD among these tools and allows precise control over the motif sequences being scanned, with longer motif ranges (1-30 bp) for both nucleotide and protein sequences. OSTRFPD could also selectively generate alignment-formatted output for imperfect repeats with custom scores, a feature minimally available in other software.

Discussion

OSTRFPD provides an integrated solution for the identification of perfect or imperfect STRs with low complexity and for microsatellite-targeted primer design. The ease of operation and the open-source, cross-platform nature of the software make it a useful tool for genome- or proteome-wide surveys of small- to medium-sized sequence databases. Plasmodium species were suitable for validating the STR mining capacity of this software because of their high microsatellite content and diversity. 4 The capability of OSTRFPD to identify and categorise nucleic or amino acid repeats in Plasmodium species demonstrates its ease of operation and its improvement over existing software. 22,23

Figure 6 caption. Frequency distribution of unit amino acid repeat motifs in Plasmodium species using OSTRFPD. The entire known protein sequences of (A) Plasmodium falciparum 3D7, (B) Plasmodium vivax SAL-1, and (C) Plasmodium ovale curtisi GH01 were searched for 1 to 2 amino acid unit motifs with minimum repeat numbers of 7 and 5, respectively. The search criteria for the representative graph were limited to a maximum of 2 amino acid unit motifs owing to the large number of unique motif types involved. Each letter on the x-axis represents the standard one-letter notation for amino acid residues. Equivalent command line parameters: 'python3 ostrfpd.py -scan protein -input source_protein_fasta -unitmin 1 -unitmax 2 -misa 7,5'.

Other than perfect microsatellites, STRs occur in various forms and complexities. 29,30 OSTRFPD partly addresses these issues by being able to detect imperfect repeats with low complexity. Specifically, STRs that satisfy the minimum selection criteria are further examined for interruptions within the bounds of user-supplied imperfection limits. Moreover, these imperfect repeats can be scored and filtered based on the percentage of perfectness, the type of indels causing the imperfection, or a combination of both. The scoring scheme is essentially a numerical designation for the number of imperfect indels and the imperfection-associated penalties that the user assigns for imperfect repetitive sequences. Similarly, perfectness is the percentage of intact motifs within the imperfect repeat. For example, a perfect repeat containing 10 motifs scores 100% perfectness, whereas an imperfect repeat of the same length and motif containing only 9 units of perfect repeats scores 90% perfectness. The ability of OSTRFPD to identify, score, and present imperfect STRs, and to provide output in both regular and alignment formats, can foster a deeper understanding of repetitive elements in genomes and proteomes.
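Read this way, the perfectness measure can be sketched in a few lines. This is a simplified reading of the scheme; OSTRFPD's full score also weights indel types and user-assigned penalties:

```python
def perfectness(tract, motif):
    """Percentage of the repeat tract accounted for by intact motif copies."""
    n_intact, i = 0, 0
    while i + len(motif) <= len(tract):
        if tract[i:i + len(motif)] == motif:
            n_intact += 1
            i += len(motif)
        else:
            i += 1  # step past an interrupting base
    return 100.0 * n_intact / (len(tract) // len(motif))

print(perfectness("AT" * 10, "AT"))        # 100.0: 10 of 10 copies intact
print(perfectness("AT" * 9 + "GG", "AT"))  # 90.0: 9 of 10, as in the example
```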
One important bottleneck in the study of STRs is the categorisation of motifs, which may occur in cyclic, palindromic, or complementary forms. For example, (ATA)n, (AAT)n, and (TAA)n are cyclic equivalents of one another and are thus categorised as the same motif under partial standardisation. Full standardisation incorporates the cyclic equivalents and their reverse complements under the same category of repetitive sequence. Thus, (ATA)n, (AAT)n, (TAA)n, (TAT)n, (ATT)n, and (TTA)n are all categorised as the same motif under full standardisation. Options for both full and partial standardisation are available for nucleic acids, whereas amino acid sequences are restricted to partial standardisation. OSTRFPD thus resolves the motif categorisation issue, which benefits the user by allowing the customisation of results based on the motif sequence and the anticipated output format.

Table 3 footnotes. (a) Ability to design and simultaneously produce primers using Primer3 without the need for additional post-processing with PERL scripts or further manual steps. (b) The maximum unit motif length of tandemly repeated nucleotide or amino acid residues supported by each software. (c) For OSTRFPD using the dictionary-based custom motif search, the maximum unit motif length is 30 base pairs (bp) or amino acids (aa).
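Standardisation amounts to collapsing each motif class to one representative. A minimal sketch (illustrative only, not OSTRFPD's internal code) using the example above:

```python
def canonical_motif(motif, full=True):
    """Return one representative for a motif class.

    Partial standardisation groups cyclic rotations; full standardisation
    (DNA only) also folds in the reverse complement. The representative
    is the lexicographically smallest member of the class.
    """
    comp = str.maketrans("ACGT", "TGCA")
    members = {motif[i:] + motif[:i] for i in range(len(motif))}
    if full:
        rc = motif.translate(comp)[::-1]
        members |= {rc[i:] + rc[:i] for i in range(len(rc))}
    return min(members)

# The six motifs discussed above collapse to a single class:
print({canonical_motif(m) for m in ["ATA", "AAT", "TAA", "TAT", "ATT", "TTA"]})
# -> {'AAT'}
```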
Another common problem faced during microsatellite-based primer design is the occurrence of low-numbered repeats in flanking regions. For example, the occurrence of An or (AT)n within flanking regions, where n is generally less than half the value of the corresponding microsatellite detection threshold, creates problems in primer design. Manual inspection to mitigate these issues in a large data set is often not feasible. The presence of a configurable scanner to filter out microsatellites flanked by sequences harbouring low-numbered repeats therefore significantly improves optimised primer design. The implementation of all these filters for amino acid sequences is a novel feature of OSTRFPD and benefits users who wish to investigate STRs in a proteome database.

Although there are several tandem repeat identification software packages, such as SciRoKo, Msatcommander, Phobos, 25 TRF, 26 SSRIT, 28 and MISA-Web, many are either closed-source or limited to the detection of DNA sequences with no option for simultaneous primer design. 31 Unlike most microsatellite tools, the ability of OSTRFPD to directly implement Primer3 without additional PERL scripts drastically reduces the manual post-processing steps needed to construct microsatellite-targeted primers. A typical microsatellite motif for genotyping markers is 2 to 5 bp in length, which is handled easily by OSTRFPD. In addition, the software provides the option to detect tandemly repeated RNA sequences, which are rarely investigated but may still be useful for specific tasks such as the analysis of ribosomal RNA, transcriptomes, and RNA virus genomes. 32 These RNA-associated tandem repeats may influence protein folding, ribosomal constructs, and the binding activities of their target proteins or enzymes. 33,34 Implementing OSTRFPD to directly evaluate tandemly repeated RNA sequences may therefore add to the scant information available from studies of repetitive RNA sequences. In addition, lysine-rich STRs have been observed in different protozoal parasites, including Plasmodium falciparum and Leishmania major; these parasites may generate such STRs de novo to modulate host protein targeting efficiency. 8,35 Simple amino acid repeats may provide flexibility for the optimal folding of structural or functional domains; thus, OSTRFPD may assist researchers interested in proteome-wide quantification of such repeats. Furthermore, the inclusion of an option to implement a user-specified motif dictionary enables highly customisable searches for organism-specific motif identification as well as estimation of the density of specific oligonucleotide or peptide sequences.

OSTRFPD runs relatively slowly compared with natively compiled C tools (ie, Phobos and SciRoKo) owing to the limitations of Python's architecture; however, the flexibility, unique features, ease of operation, and open-source nature of this software may compensate for these few drawbacks, depending on the requirements of the user.

Author Contributions

VBM and MI designed the study. VBM wrote the source code and the manuscript and conducted the data analysis. MI and AMD assisted with logistics and theoretical overview. All authors read and approved the final manuscript.
4,310.6
2019-01-01T00:00:00.000
[ "Computer Science", "Biology" ]
The influence of protein corona on Graphene Oxide: implications for biomedical theranostics

Graphene-based nanomaterials have attracted significant attention in the field of nanomedicine due to their unique atomic arrangement, which allows for manifold applications. However, their inherent high hydrophobicity poses challenges in biological systems, thereby limiting their usage in biomedical areas. To address this limitation, one approach involves introducing oxygen functional groups on graphene surfaces, resulting in the formation of graphene oxide (GO). This modification enables improved dispersion, enhanced stability, reduced toxicity, and tunable surface properties. In this review, we aim to explore the interactions between GO and biological fluids in the context of theranostics, shedding light on the formation of the "protein corona" (PC), i.e., the protein-enriched layer that forms around nanosystems when they are exposed to blood. The presence of the PC alters the surface properties and biological identity of GO, thus influencing its behavior and performance in various applications. By investigating this phenomenon, we gain insights into the bio-nano interactions that occur and their biological implications for different intents such as nucleic acid and drug delivery, active cell targeting, and modulation of cell signalling pathways. Additionally, we discuss diagnostic applications utilizing biocoronated GO and personalized PC analysis, with a particular focus on the detection of cancer biomarkers. By exploring these cutting-edge advancements, this comprehensive review provides valuable insights into the rapidly evolving field of GO-based nanomedicine for theranostic applications.

Graphical Abstract

Introduction

Over the years, 2D nanomaterials have provided fertile ground for the emergence of high-performance technologies in nanomedicine [1]. Among 2D materials, graphene-based ones have been widely exploited in the field due to the structural characteristics deriving from their unique atomic arrangement. The manifold applications of these materials (including graphene oxide (GO), reduced graphene oxide (rGO), and graphene quantum dots) have taken root in the biological arena, including but not limited to nanocarrier fabrication [2], drug delivery [3], cancer therapy [4], and tissue engineering [5]. However, due to their high hydrophobicity, most of these materials have demonstrated high toxicity within biological systems, thus limiting their use [6,7]. Despite their interesting properties, the use of unmodified graphene flakes in biological environments has proved quite challenging. The use of GO overcomes this issue. In fact, the presence of oxygen functional groups on the GO surface guarantees enhanced dispersion in aqueous solution and easier functionalization with biological molecules, finally providing a material with improved stability, reduced toxicity, and tunable surface properties [8,9]. In addition, the presence of oxidised functional groups confers on the nanomaterial a high affinity towards biomolecules, such as DNA or proteins, allowing easy functionalization for targeting purposes or biomarker detection [10-12]. All these aspects open new and promising opportunities in biomedical research, particularly in the domain of cancer investigation. Despite the abundant progress, there are still primary concerns and urgent challenges that need to be addressed before the clinical application of GO. One major concern is the toxicity and biosafety of GO, as nanomaterials require rigorous evaluation before clinical approval.
Although numerous studies have investigated the in vitro and in vivo toxicity of GO and its derivatives, there are still uncertainties regarding their clinical application. To facilitate the clinical translation of GO, factors such as stability under physiological conditions, interaction with cells, cellular response, uptake mechanism, biodistribution, and transformation and metabolism in vivo need to be carefully considered. Size and surface properties significantly influence the toxicity of nanomaterials, and researchers can tailor suitable GO-based nanomaterials by controlling their size, degree of oxidation, and surface modification with biocompatible agents. Surface engineering of GO is crucial to endow nanomaterials with superior properties for biomedical applications, such as hydrophilicity, stability, affinity, and biodegradability. Covalent or non-covalent modification enables the decoration of the GO surface with various agents, including PEG, PEI, PLA, PLL, and RGD [11,13]. However, some surface agents are not biodegradable in vivo and may pose risks, while others may be unstable in physiological environments [14]. Achieving a suitable conjugation ratio while maintaining a balance between defects and the desired biomedical functions are both critical factors for a successful application of GO. The size of GO is important for efficient passive tumor targeting through the enhanced permeability and retention (EPR) effect, considering the limitations of endocytosis for large-sized nanomaterials and the rapid clearance of ultra-small-sized nanomaterials. Tumor targeting performance plays a key role in tumor diagnosis and therapy, where agents need to be efficiently delivered to and retained in the tumor tissue. Specific active tumor targeting can be achieved by conjugating targeting agents to GO and concomitantly exploiting the overexpression of receptors on tumor cell membranes. Moreover, leveraging endogenous and exogenous stimuli to achieve smart regulation of GO-based nanoplatforms within tumors is essential for precise diagnosis and therapy. The rapid development of personalized medicine necessitates the integration of multiple functions within a single nanoparticle. Building on the foundation of GO, functional agents can be added to provide multimodal functions. However, current strategies face challenges such as complex design, laborious synthesis, low integration efficiency, lack of synergistic functions, and uncertain biological responses. Designers must carefully consider the rational combination of necessary functions on GO, in line with the biological demands of clinical practice. The application of nanotechnology in cancer research has allowed many limitations of conventional therapeutic or diagnostic technologies to be tackled [15,16]. Notably, emerging studies on the interaction between nanomaterials and biological systems have provided novel insights and perspectives for the design of nanomedicines. In a physiological environment, nanomaterials encounter various fluids, including blood. Blood has a protein concentration of about 60-80 mg/ml, with 3700 types of proteins identified to date, including high-abundance proteins such as human serum albumin (HSA) and transferrin, signalling proteins such as receptor ligands and cytokines, and low-abundance proteins such as those derived from tissue or cell secretions [15]. Given their high abundance, proteins inevitably attach to the surface of nanomedicines, leading to the formation of a "protein corona" (PC) [17].
The PC alters the surface conformation and physicochemical properties of the pristine nanomaterial (i.e., its "synthetic identity"), thus shaping a new "biological identity" that ultimately leads to a specific physiological response [18,19]. Exploring the bio-nano interactions with the biological milieu has therefore emerged as the missing link between benchtop discoveries and the clinical applicability of nanomedicines. The formation of the PC on graphene-based materials has been the subject of recent studies [20,21]. For instance, Liu et al. studied the influence of HSA on the GO surface at different pH values and demonstrated that the attachment of GO to a model cell membrane was reduced in the presence of an HSA corona [22]. In another study, a thorough examination was conducted to understand the impact of GO nanosheets on cells exposed to various levels of fetal bovine serum (FBS). When the FBS concentration was low (1%), human cells exhibited sensitivity towards GO and demonstrated cytotoxicity that varied with FBS concentration. Surprisingly, the cytotoxic effect of GO was significantly reduced when the FBS concentration was increased to 10%, the level typically used in cell culture media [23]. In contrast to the numerous review papers already existing in the literature, this work aims to discuss the role of the bio-nano interactions between GO and plasma proteins in the theranostics field. To this end, we will first detail the use of GO for the delivery of nucleic acids and drugs. In particular, we will show how the physicochemical and functional properties of GO are modified by the adsorption of a PC, allowing for active cell targeting and efficient cargo release but also alteration of cell receptor interactions and cell signalling pathways. Lastly, a comprehensive exploration of the diagnostic applications of biocoronated GO will be provided, emphasizing the emerging concept of the personalized PC. In this context, the focus will be on the analysis of PCs derived from clinically relevant biological fluids, showcasing the potential and relevance of this approach. Notably, we will present the possibility of cancer detection through an outstanding analytical technology that exploits the personalized PC of GO as a sensor for biomarker detection. With this review, our aim is to offer readers a comprehensive overview of the latest and most noteworthy advancements in the realm of biocoronated GO applications. By doing so, we strive to provide a refreshed perspective on the significant discoveries in this field (Fig. 1).
Exploring the evolution of graphene oxide-based gene vectors: from synthetic constructs to biological entities

The impressive progress made in gene therapies, such as gene silencing and editing, has spurred efforts to identify nucleic acid delivery vectors that are efficient and safe and can be easily scaled up and produced consistently. To date, viral vectors have been the most popular option in gene-therapy clinical trials, outshining their non-viral counterparts in gene-transfer efficiency [24]. However, packaging restrictions and large-scale production constraints, in addition to a controversial safety profile, have limited the introduction of viral vectors into clinics [25]. On the other hand, promising developments in non-viral carriers, mainly consisting of NPs of different sorts, have circumvented some of these limitations [26]. Among these, 2D nanomaterials, including GO, have gathered considerable interest for biomedical applications thanks to their high surface-to-volume ratio and their ability to enhance cargo loading and transport [27]. Notably, GO is characterized by oxygen functional groups on its surface that allow for covalent and non-covalent functionalization, high aqueous dispersibility, and compatibility with biological environments [28], making it a building block for the fabrication of versatile functional nanomedicines. Despite these advantages, the application of GO in nucleic acid delivery is hindered by unfavorable electrostatic interactions resulting from the negative charges of both vector and cargo. This is particularly relevant when double-stranded oligonucleotides are used, since the hydrophobic and π-π interactions between nucleobases and the GO lattice are stymied [29]. Previous studies have used GO to deliver double-stranded nucleic acids intracellularly [30], including plasmid DNA and small interfering RNA, but they relied on functionalizing the material with cationic polymers (e.g., polyethyleneimine (PEI), amine-functionalized dendrimers, polystyrene, etc.) [31-33], polysaccharides (e.g., chitosan, starch, alginate, hyaluronic acid, and cellulose) [34,35], or cell-penetrating peptides [36,37] that have less-than-ideal biocompatibility. For instance, among cationic polymers, PEI suffers from the critical shortcoming of non-degradability, which leads to severe cytotoxic effects [38]. Amine dendrimers interact with negatively charged cell membranes, disrupting their integrity and promoting cell apoptosis [39]. Studies have also shown a correlation between cytotoxicity and dendrimer physicochemical properties. For example, the cytotoxicity of poly(amidoamine) (PAMAM) and poly(propylene imine) (PPI) dendrimers is directly proportional to their concentration and the number of primary amine terminal groups [40]. Cationic polysaccharides, on the other hand, are hampered by their large size and potential immunogenicity. To surpass these limitations, one fascinating possibility involves coating GO sheets with lipids to create hybrid platforms. However, since the interaction between GO and lipid molecules is difficult to monitor, this strategy has always proved challenging. In fact, hybrid platforms that include lipids are generally prepared either by breaking down their larger counterparts or by assembling them from their building blocks. The latter technique can be driven by a change in solvent polarity or temperature, or by mixing [40]. However, when anionic liposomes were replaced by cationic ones, the resulting composites aggregated in solution.
Recent research by Frost et al. demonstrated that the interaction between GO and liposomes is strongly influenced by particle size [41]. If the liposome size is similar to or larger than that of the GO sheets, the liposomes remain intact and undesired aggregates form. When the size of the GO sheets is much larger (500 nm-5 μm) than that of the liposomes (200 nm), liposome rupture occurs, resulting in decoration of the GO surface. Thus, it became clear that control over size had to be a priority to guarantee efficient transfection. To this end, microfluidic devices provide ideal conditions for preparing hybrid nanosystems for gene delivery [42]. Microfluidics involves the manipulation of fluids in the microscale range. Under these conditions, minute volumes of fluids injected or pumped into the device are efficiently mixed under controlled flow conditions. We employed a microfluidic device to produce a hybrid gene delivery system made of GO nano-sheets surface-functionalized with the cationic lipid 1,2-dioleoyl-3-trimethylammonium-propane (DOTAP) and loaded with plasmid DNA [43] (Fig. 2a). The resulting gene delivery complexes, hereafter referred to as grapholipoplexes, were then validated through a multistep experimental strategy involving (i) physicochemical characterization in terms of size and surface charge through dynamic light scattering (DLS), (ii) biological validation through transfection efficiency (TE) and cell viability experiments, and (iii) cell internalization studies through confocal microscopy. To ascertain the optimal DNA/grapholipoplex ratio for cellular administration, we investigated the changes in complex size and zeta potential as a function of the DNA-to-grapholipoplex weight ratio (Rw) (Fig. 2b). DOTAP grapholipoplexes exhibited typical features of lipoplexes, such as charge inversion and re-entrant condensation as a function of Rw [44]. Rw = 2 was chosen because it combined small dimensions with a negative surface charge, assuring complete coating of the surface with DNA. These optimized grapholipoplexes demonstrated remarkable efficiency in transfecting human cervical cancer cells (HeLa) while exhibiting minimal cytotoxicity compared with pristine GO and DOTAP liposomes (Fig. 2c). To further interpret the TE data, we explored HeLa uptake through confocal microscopy of both DNA-labeled GO and grapholipoplexes (Fig. 2d). HeLa cells treated with GO/DNA complexes contained just a few bright spots, suggesting that the complexes were not efficiently internalized within cells. By contrast, most of the cells treated with DOTAP grapholipoplexes were highly fluorescence-positive. This result aligned with the TE findings and supported the conclusion that grapholipoplexes were more efficient than pristine GO in transfecting HeLa cells.

A well-established concept in lipid-mediated gene delivery states that lipid mixtures are more fusogenic than single lipids [45,46]. Incorporating very different lipid headgroups and/or aliphatic chains in lipid shells has been shown to generate asymmetric vesicles that enhance the biocompatibility and flexibility of conventional systems [47]. To take advantage of this, we decorated GO with blends of cationic and zwitterionic lipids [48]. The generated library of multicomponent grapholipoplexes was validated by the same multistep experimental strategy used for DOTAP grapholipoplexes (Fig. 2e).
Since positively charged gene vectors can efficiently interact with cells through electrostatic attraction to negatively charged cell proteoglycans, we selected both positively and negatively charged (Rw = 2) grapholipoplexes for the subsequent biological validation. As expected, for each particle formulation, positively charged complexes were more efficient than their negatively charged counterparts. Furthermore, we noticed a significant impact of lipid composition on the TE of positively charged complexes. This led to TE values that varied by approximately one order of magnitude across formulations. This finding aligns with the transfection behaviour commonly observed with cationic lipid-based systems. In fact, several studies have indicated that lipid composition plays a crucial role in determining the endosomal escape of lipid vesicles and the subsequent cytosolic release of the gene payload [49,50]. Among the positively charged grapholipoplex formulations (i.e., #4, #5, #6 and #8 in Fig. 2f), grapholipoplex #8 (Rw = 0.2), made of DOTAP, (3β-(N-(N′,N′-dimethylaminoethane)-carbamoyl))-cholesterol (DC-Chol) and neutral cholesterol (Chol) (25%, 25%, and 50% molar ratios, respectively), proved to be the best compromise between high TE and low cytotoxicity, even when compared with Lipofectamine 3000, the gold standard for lipid transfection. This can be attributed to the increased presence of cholesterol and cholesterol-like molecules, which promote the formation of nonlamellar phases in the membranes of endosomes, thereby enhancing their propensity for endosomal escape [51]. Cellular uptake experiments performed on the worst and the best formulations (#4 and #8, respectively) confirmed the TE results (Fig. 2g). Only approximately 20% of HeLa cells treated with grapholipoplexes #4 showed positive fluorescence, with a very limited number of cells engaged in DNA delivery, as represented by the intracellular size distribution of the complexes in the left panel. Conversely, when grapholipoplexes #8 were administered to HeLa cells, approximately 90% of cells were fluorescence-positive, with most complexes arranged in the perinuclear region, as quantitatively confirmed by the intracellular size distributions shown in the right panel. In summary, hybrid platforms comprising lipid-covered GO have emerged as ideal candidates for gene transfection. These platforms demonstrate efficient gene condensation and protection, enhanced cellular uptake, controlled gene release, and high TE, making them highly promising for gene delivery applications. In a more recent work, we asked whether the biomolecular corona of grapholipoplexes might have an impact on their TE and cytotoxicity [52]. To this end, we incubated the complexes with different percentages of human plasma (HP), and we investigated the impact of protein concentration on their size and zeta potential (Fig. 2h).
Biocoronated grapholipoplexes demonstrated a significant increase in size and a rapid transition of zeta potential from positive to negative values. As plasma proteins are predominantly anionic at physiological pH, even at a low protein concentration of 1% HP the cationic surface charge of grapholipoplexes quickly shifted to negative values (zeta potential around −20 mV). With increasing HP concentration, the zeta potential remained consistently negative with minimal fluctuations, indicating complete protein coverage of the complexes. Furthermore, a more complex size evolution pattern was observed. At 1% HP, the complexes exhibited larger sizes, indicating rapid particle clustering due to charge neutrality. As the HP concentration increased, there was a notable decrease in size until a plateau was reached at around 5% HP. As a next step, biocoronated grapholipoplexes were administered to two breast cancer cell lines, MDA-MB and MCF-7, and one colorectal cancer cell line, CACO-2 (Fig. 2i). TE exhibited a decreasing trend with increasing protein concentration, while a non-monotonic trend was observed for cell viability across the different conditions. Pristine grapholipoplexes reduced cell viability by up to 59.3%. On the other hand, biocoronated grapholipoplexes increased cell viability up to 94.3% as the plasma fraction rose to HP = 10% vol; further increases in protein concentration led to a renewed decrease in cell viability. Our findings seem to suggest that the interplay between the composition of the PC and the receptor profiles of cancer cells can influence the association between particles and cells, as well as the signalling of apoptosis-inducing ligands. While more in-depth research is necessary to confirm this suggestion, the findings displayed in Fig. 2 are in accordance with previous studies [53,54]. In general, the PC can have both detrimental and protective effects. On one hand, the PC may undergo denaturation and expose immunogenic epitopes, leading to a cytotoxic mechanism [55]. On the other hand, it can provide protection by creating a stealth effect that reduces the uptake of nanosystems by immune cells [56]. In addition, the PC has also been shown to influence the intracellular localization of NPs [57]. Among the possible intracellular destinations, lysosomes are detrimental to gene vectors, posing a significant obstacle to efficient transfection [58]. Therefore, we investigated the fate of fluorescently labelled grapholipoplexes. In Fig. 2j we report confocal microscopy images of MDA-MB cells treated with fluorescently labelled pristine (left panel) and biocoronated (HP = 20%) grapholipoplexes (right panel). Lysosomal staining (red) was performed on the cells. As a result, the colocalization of grapholipoplexes with lysosomes led to the formation of yellow clusters. Pristine grapholipoplexes demonstrated a favorable capacity to evade lysosomal degradation, while their coronated counterparts tended to accumulate within lysosomes. These findings align with the results obtained from the TE experiments and support the hypothesis that the PC formed in a protein-rich environment, such as the physiological one, can impede the escape of gene delivery systems from endosomes. This, in turn, leads to their accumulation in lysosomal compartments, diminishing their effectiveness. However, recent research has demonstrated that pre-coating NPs with plasma proteins allows for the creation of artificial coronas with tailored physicochemical properties, enhancing transfection outcomes.
According to these findings, biocoronated grapholipoplexes coated with artificial coronas formed at low protein concentration (HP < 2.5%) exhibited excellent TE while minimally affecting cell viability. This indicates that pre-coating grapholipoplexes could be a viable strategy to modulate their transfection behavior in vivo.

Graphene oxide potential in drug delivery and cancer therapy: protein corona studies

GO has attracted increasing interest in the fields of drug delivery and cancer therapy owing to its planar and π-conjugated structure, which endows it with an excellent ability to immobilize substances such as metals, drugs, and biomolecules [59-61]. Additionally, the high concentration of reactive oxygen groups on the GO surface enhances its ability to be functionalized with polar polymers or polar molecules, making it an excellent candidate for GO/polymer composites [62,63]. These active groups are also well suited to immobilizing molecules on the GO surface, making it hydrophilic and an excellent choice for drug delivery. D. Ananya and R. Vimala developed a drug delivery system made of chitosan-polymerised GO for anticancer drug delivery to MCF-7 breast cancer cells [64]. Among the functionalization methods used to improve GO properties, PEGylation (functionalization with polyethylene glycol, PEG) has proved the most suitable, since it enhances the biocompatibility, solubility, and stability of GO under physiological conditions. For instance, the use of PEG-functionalized GO as a nanocarrier to bind water-insoluble anticancer drugs was evaluated for its cytotoxicity towards human colon cancer cells by Z. Liu et al. [65]. When the comparison is extended to conventional delivery systems, such as lipid-based systems, graphene-based nanomaterials have in several cases proved more efficient for drug loading and delivery [4]. For instance, in a previous study we demonstrated the superior efficiency of GO in delivering the anticancer drug doxorubicin (DOX) with respect to a commercially approved DOX-loaded liposomal formulation (Doxoves®), whose use has raised numerous controversies over its potential toxicity at high dosages [66]. DOX exerts its therapeutic effects by intercalating into nuclear DNA. Consequently, to maximize the anticancer efficacy of DOX, the drug must be efficiently internalized by cancer cells and subsequently delivered to the cell nucleus. To investigate the intracellular distribution of DOX in cancer cells, we employed confocal microscopy. Figure 3a displays representative confocal images of two breast cancer cell lines, MCF-7 and MDA-MB-231, treated with Doxoves® and GO-DOX complexes. The quantitative analysis of nuclear and cytoplasmic signals presented in the histogram plots shows that the nuclear fluorescence in cells treated with GO-DOX complexes was about five times higher than that observed in cells treated with Doxoves® for both cell lines.
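The nuclear-to-cytoplasmic comparison reduces to a ratio of mean fluorescence over segmentation masks. A minimal sketch of this kind of quantification (illustrative only; the masks and numbers below are invented, not the study's actual image pipeline):

```python
import numpy as np

def nuc_cyto_ratio(dox_img, nucleus_mask, cell_mask):
    """Mean DOX fluorescence in the nucleus relative to the cytoplasm.
    Masks are boolean arrays with the same shape as the image."""
    cytoplasm_mask = cell_mask & ~nucleus_mask
    return dox_img[nucleus_mask].mean() / dox_img[cytoplasm_mask].mean()

# Toy 4x4 'image': a bright nucleus (top-left 2x2) inside a dim cell.
img = np.array([[50., 50., 10., 10.],
                [50., 50., 10., 10.],
                [10., 10., 10.,  0.],
                [10., 10., 10.,  0.]])
nuc = np.zeros((4, 4), dtype=bool); nuc[:2, :2] = True
cell = np.ones((4, 4), dtype=bool); cell[2:, 3] = False  # exclude background
print(nuc_cyto_ratio(img, nuc, cell))  # -> 5.0, i.e. 'five times higher'
```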
To get insights into the intracellular and intranuclear behaviour of DOX, we conducted fluorescence lifetime imaging microscopy (FLIM) on cells treated with GO-DOX, using the free drug as a control (Fig. 3b). FLIM can distinguish free DOX from DOX adsorbed on or attached to GO. In the upper panels, the FLIM analysis is presented as a phasor representation of the lifetimes measured in cells exposed to free DOX (left panels) and GO-DOX (right panels). The phasor plot displays clusters of data points representing pixels with similar lifetime spectra. These clusters can be identified and isolated using specific regions of interest (ROIs). In the left panels, the green ROI and red ROI identify the areas with pixels related to DOX in the cytoplasm and the nucleus, respectively. In the right panels, violet, orange, and yellow clusters identify ROIs related to the naked carrier (GO) and the released drug (either free or associated with cellular membranes). These findings collectively emphasize the presence of specific micrometric patches along the cell border, as better illustrated in the lower right panels. We attributed these patches to areas where GO-DOX complexes adhere to the cell membrane and eventually release the drug. Our data are in agreement with previous evidence indicating that GO likely binds to integrins at the cancer cell's plasma membrane, activating the integrin-FAK-Rho-ROCK pathway and rendering cancer cells more susceptible to chemotherapeutic agents [67].
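The phasor representation used above is a standard transform of the per-pixel decay histogram. As a minimal NumPy sketch (not the pipeline used in the study), each pixel is mapped to coordinates (g, s) so that pixels with similar lifetimes cluster together:

```python
import numpy as np

def phasor_coordinates(decay_stack, n_harmonic=1):
    """Per-pixel phasor transform of a time-domain FLIM stack.

    decay_stack: array of shape (T, H, W), photon counts per time bin.
    Returns (g, s) images: cosine and sine transforms of each pixel's
    decay, normalised by its total intensity."""
    t_bins = decay_stack.shape[0]
    phase = 2 * np.pi * n_harmonic * np.arange(t_bins) / t_bins
    intensity = decay_stack.sum(axis=0).clip(min=1e-12)  # avoid divide-by-zero
    g = np.tensordot(np.cos(phase), decay_stack, axes=1) / intensity
    s = np.tensordot(np.sin(phase), decay_stack, axes=1) / intensity
    return g, s

# Toy mono-exponential decay (tau = 8 time bins) on a 2x2 image:
t = np.arange(64).reshape(-1, 1, 1)
g, s = phasor_coordinates(np.exp(-t / 8.0) * np.ones((64, 2, 2)))
```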
To harness the full potential of GO, it is imperative to gain a comprehensive understanding of the mechanisms that govern GO-cell interactions. By unravelling these intricate mechanisms, we can pave the way for innovative strategies and drive advancements in the market of nanoparticle-based therapies for cancer treatment [68]. However, the successful incorporation of GO into cancer therapeutics requires a comprehensive understanding of the interface between GO itself and the biological environment [69]. Motivated by the necessity of developing reliable GO-based anticancer therapeutics, we validated the anticancer capacity of GO in both its synthetic and biological forms and gained insights into the molecular mechanisms underlying the anticancer potential of GO [70]. We found that exposing GO to increasing percentages of HP had a strong impact on its anticancer activity, with a marked increase in cell viability, relative to naked GO, in three different cancer cell line models: the U-87 human glioblastoma multiforme cell line, the HeLa cell line, and the CaSki human cervical epidermoid carcinoma cell line (Fig. 3c). This suggested that in a protein-enriched physiological environment the anticancer effect of GO may be impaired, probably owing to reduced cell penetration. To validate this hypothesis, we further studied the impact of naked GO and GO incubated with a high percentage of HP on human epidermal growth factor receptor 2 (HER-2) expression in SK-BR-3 human breast cancer cells, a model system of HER-2-positive cancer cells. Western blot analysis of treated SK-BR-3 cells showed that GO treatment led to a significant reduction in overall HER-2 levels, accompanied by downregulation of the expression and activation of HER-2-driven signalling pathways such as the phosphatidylinositol-3-kinase (PI3K)/protein kinase B (AKT) and mitogen-activated protein kinase (MAPK)/extracellular signal-regulated kinase (ERK) pathways, which mediate cancer cell survival and proliferation. However, the PC reversed the impact of GO on HER-2 expression and its downstream molecular effects, bringing them back to control levels (Fig. 3d). These results demonstrated that the PC overrides the anticancer ability of GO by interdicting its physical interaction with the HER-2 exposed on cell membranes. In conclusion, the PC plays a significant role in modulating the behaviour and efficacy of nanocarriers. Understanding the interactions between nanocarriers and the PC is essential for harnessing their full potential in clinical translation. Further studies are needed to explore and optimize the bio-nano interactions, considering the complex biological environment, to pave the way for advanced nanomedicine design and improved cancer therapies.

Interrogating the personalized protein corona of graphene oxide: a new approach for early disease detection

Numerous investigations have elucidated that the protein patterns bound to nanosystems are not mere representations of the composition of the human proteome [71]. In fact, only a few dozen plasma proteins, accounting for approximately 99% of the total plasma protein content, are typically present on the surface of nanosystems. Conversely, nanomaterials serve as effective protein accumulators, exhibiting a distinctive affinity and a low dissociation rate for proteins [72]. Recent studies have highlighted that a protein with low abundance in plasma can become one of the most abundant proteins in the PC around a nanosystem [73,74]. These discoveries have introduced the concept of the "personalized PC", whose composition is influenced by changes in the concentration and structure of individual plasma proteins in each patient [75,76]. In other words, when nanoparticles are incubated with plasma from patients with different pathologies, distinct PCs may form. Several diseases, including cancer, are associated with alterations in the patient's proteome, leading to significant changes in the identity of PCs. The discovery of personalized PCs has revolutionized the field of nanomedicine, expanding its applications to tumour diagnosis and prognosis. Currently, most techniques for PC analysis rely on proteomics, with mass spectrometry (MS) being fundamental to most of the proposed experiments [77]. The exceptional sensitivity of MS enables the detection of subtle changes in the human proteome, allowing the identification of individual protein biomarkers and providing information about the composition and function of PCs. However, these approaches have limitations owing to their labour-intensive and costly procedures, making them unsuitable for large-scale deployment. The World Health Organization (WHO) emphasizes that cancer screening and detection procedures must meet the REASSURED (Real-time connectivity, Ease of specimen collection, Affordable, Sensitive, Specific, User-friendly, Rapid and robust, Equipment-free, and Deliverable to end-users) criteria [78]. Therefore, researchers are exploring the integration of low-resolution benchtop techniques to develop cost-effective and efficient screening procedures. In this regard, nanoparticle-enabled blood (NEB) tests have emerged as a rapid and economical technology for characterizing PCs in early cancer detection [79-81]. NEB tests involve the evaluation of NP-PC characteristics, such as size, surface charge, and composition, using simple techniques like DLS, microelectrophoresis (ME), and one-dimensional sodium dodecyl sulfate-polyacrylamide gel electrophoresis (1D-SDS-PAGE). For instance, by incubating NPs with biological fluid from healthy individuals and from those affected by cancer, information about the clinical status of subjects can be obtained by analysing the upregulation or downregulation of corona proteins within specific molecular weight (MW) ranges of the SDS-PAGE profile [82].
Compared with conventional proteomic techniques such as MS, the key advantage of NEB tests lies in their ability to provide a comprehensive evaluation of the protein pattern. This allows differentiation between donor groups based on systematic alterations in multiple proteins, considering changes in NPs, tumour stage, or cancer type. Typically, NEB tests are performed in a step-by-step workflow, as schematically represented in Fig. 4a. These steps include (i) the collection of clinically relevant body fluids from healthy and oncological subjects (to date, only serum and plasma have been used, while other fluids are currently under investigation); (ii) the synthesis of a library of NPs with different physicochemical properties; (iii) the choice of exposure conditions between NPs and body fluids to generate nanoparticle-protein complexes; (iv) the analysis of the protein composition of the complexes; and (v) the statistical analysis of the experimental data to obtain the final diagnosis. The test structure has many degrees of freedom that can affect its prediction ability, including the physicochemical properties of the NPs, the exposure conditions (such as protein concentration, shear stress, exposure time, and temperature), and the biological source (e.g., plasma, serum, saliva, etc.) [83]. Among these, the detection technique for PC analysis may follow two different methodological approaches: direct analysis of the PC isolated from the nanoparticle surface, and indirect analysis of the PC, which consists of an in-situ evaluation of the NP-PC complexes. Finally, the outcomes of NEB tests can be further paired with clinically relevant parameters in multiplexed strategies to improve the classification ability of the test. As an illustrative example, the combination of blood levels of haemoglobin (Hb), albumin, lymphocytes, and platelets has emerged as a paramount prognostic factor for postoperative survival among patients diagnosed with pancreatic ductal adenocarcinoma (PDAC) [84]. Additionally, systemic inflammatory response biomarkers (SIRBs), including white blood count (WBC), neutrophil-to-lymphocyte ratio (NLR), derived NLR (d-NLR), and platelet-to-lymphocyte ratio (PLR), have garnered significant attention in the realm of tumour diagnosis and prognosis [85]. Consequently, the vast amount of information amassed by medical and laboratory teams can be systematically evaluated and interlinked to yield a highly accurate diagnostic test. In line with this notion, a considerable portion of our recent research efforts has been dedicated to developing multiplexed tests that integrate clinical biomarkers with the readouts obtained from NEB tests [86]. Among the nanomaterials selected for our NEB tests, GO stood out for its low-cost production, high dispersibility in aqueous solvents, and the presence of reactive oxygen groups on its surface. Additionally, the lower affinity of GO toward albumin, the most abundant blood protein, allows preferential binding of proteins present at lower concentrations in blood, enhancing the sensitivity of differentiation between different protein classes [87]. In one of our works, we adopted a multiplexed GO-based blood test that paired the outcomes of the SDS-PAGE profile, performed on personalised PCs derived from healthy and PDAC-affected donors, with clinical biomarkers such as Hb, lymphocyte count, WBC, NLR, d-NLR, and PLR [88].
1D-SDS-PAGE is particularly suitable for distinguishing protein patterns within NEB tests, as it offers qualitative outcomes that enable the simultaneous resolution and distinction of the various protein coronas resulting from different NP incubation conditions [89]. We observed that the judicious fusion of low-molecular-weight proteins between 20 and 30 kDa (referred to as Area 2 in Fig. 4b) with Hb blood levels (Fig. 4c) resulted in an area under the curve (AUC) of 0.961, thus exceeding the prediction ability of either single parameter (Fig. 4d). Over ten years, our research has conclusively demonstrated that NEB tests serve as powerful tools for early cancer detection and hold the potential to catalyse the development of innovative technologies for the discovery of new biomarkers. Nonetheless, it is important to acknowledge that this technology is not exempt from limitations. Among the challenges faced, the isolation of the PC necessitates a multitude of intricate steps, which may introduce inter-operator variability, thereby compromising the reliability of the obtained results. To address this concern, indirect methods for PC characterization have gained prominence in recent years as promising alternatives that streamline the experimental steps without compromising the effectiveness of the test, while concurrently enhancing reproducibility, especially when dealing with extensive datasets [79]. Indirect approaches to PC characterization involve examining the NP-protein complex as a cohesive entity, enabling the extraction of valuable information about its size, shape, surface charge, nanostructure, and mass. Techniques such as DLS, ME, and fluorescence lifetime analysis have proven invaluable in this regard. Notably, magnetic levitation (MagLev) has emerged as a robust technique for the indirect characterization of NP-protein complexes [90,91]. This methodology leverages the application of an intense magnetic field to differentially separate objects [92]. When a diamagnetic NP is injected into the test cuvette of a MagLev device, it levitates and equilibrates at a height that depends on the intensity of the magnetic field gradient, the exposure time and, most importantly, the particle density. Since personalised PCs have different compositions and densities, the levitation profiles along the magnetic field gradient can be used to distinguish healthy from oncological donors. In several of our recent studies, we harnessed the power of MagLev to characterize GO-PCs originating from both healthy subjects and oncological individuals affected by various types of cancer [93]. Among the various MagLev signatures, the 'starting position' of the PC-NP complexes, i.e., the position reached when the complexes are first exposed to the magnetic field, and the area of the levitating fraction of the sample at equilibrium (the 'levitating fraction area') were identified as the most discriminant signatures for distinguishing healthy from oncological subjects.
In particular, as shown in the left scatterplot of Fig. 4e, linear discriminant analysis (LDA) performed by coupling the MagLev starting position and the levitating fraction area of corona-coated GO complexes derived from 10 healthy and 10 PDAC-affected individuals allowed high discrimination between the two classes, with only two PDAC subjects misclassified, corresponding to a specificity of 80%, a sensitivity of 100%, and an overall classification accuracy of 90%. To validate this classification by MagLev fingerprints, a blind validation test was also performed on a cohort of 5 healthy and 5 PDAC samples. As shown in the right panel, only one healthy sample was misclassified by the test, which thus reached a global accuracy of 90%. Finally, since we had demonstrated that a proper combination of non-specific laboratory data (e.g., low Hb levels) with the outcomes of GO-based NEB tests discriminated PDAC patients from healthy controls with high diagnostic accuracy, in a recent work we assessed the ability of the MagLev test to detect PDAC when coupled with blood levels of glycemia, cholesterol, and triglycerides (Fig. 4f) [94]. The multiplexed strategy was validated on a sample cohort of 24 PDAC patients and 22 healthy volunteers, and its most optimised version, obtained by coupling the starting position with the patients' glycemia levels, achieved an AUC of 0.96 (Fig. 4g). Although still in the exploratory phase, the potential implications of this technology, if substantiated in a large cohort, are poised to revolutionize clinical practice by enabling rapid and robust cancer detection methodologies.
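The classification step described above is standard two-feature LDA. A minimal sketch with scikit-learn; the MagLev values below are invented placeholders, not the study's data:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical two-feature signatures (starting position, levitating
# fraction area) for 10 healthy and 10 PDAC donors; illustration only.
healthy = rng.normal([20.0, 1.0], 0.5, size=(10, 2))
pdac = rng.normal([22.0, 1.8], 0.5, size=(10, 2))
X, y = np.vstack([healthy, pdac]), np.array([0] * 10 + [1] * 10)

lda = LinearDiscriminantAnalysis().fit(X, y)

# Blind validation on a held-out cohort, mirroring the study design.
X_val = np.vstack([rng.normal([20.0, 1.0], 0.5, size=(5, 2)),
                   rng.normal([22.0, 1.8], 0.5, size=(5, 2))])
y_val = np.array([0] * 5 + [1] * 5)
print("validation accuracy:", accuracy_score(y_val, lda.predict(X_val)))
```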
Conclusions

In summary, a glimmer of opportunity is opening up in the development of clinically applicable theranostic solutions thanks to the exploitation of GO in the biomedical field. From gene delivery to drug delivery and diagnostics, GO appears to provide interesting new alternatives for the development of high-performing vectors for nucleic acids, drugs, and biomolecules, in many cases surpassing the technologies already on the market in terms of biocompatibility, reproducibility, and cost. Notably, GO also holds great promise in the fields of gene therapy and drug delivery for cancer treatment. Efforts have been made to improve the efficiency and safety of nucleic acid delivery vectors, with GO emerging as a valuable non-viral option. Functionalizing GO with lipids has been explored to enhance its gene delivery capabilities, and microfluidic devices have been used to assemble GO-based hybrid gene vectors, which have demonstrated efficient gene transfection with low cytotoxicity. In drug delivery, the planar structure of GO and its amenability to functionalization have made it suitable for loading and delivering drugs. It has shown advantages over conventional lipid-based systems in terms of drug loading and stability, exhibiting superior efficacy in delivering anticancer drugs compared with approved formulations. Based on the collective experimental findings presented in this review, it can be inferred that the PC has a substantial impact on the various interactions involving GO. The PC exerts inhibitory effects on the cytotoxicity induced by GO in tumour cells and influences immune response activity and biodistribution. Given the challenges associated with precisely controlling protein interactions in vivo, many strategies aimed at modulating the PC rely on functionalization with artificial coronas that suppress protein adsorption and reduce lysosomal entrapment. The unique properties of biocoronated GO hold potential for specific cell-targeting applications. Although the composition of the GO corona is still being studied, initial data are encouraging. For example, the enrichment of ApoE residues in the corona of graphene-based materials could facilitate traversal of the blood-brain barrier and enable targeting of the cerebrovascular endothelium for the treatment of neurological diseases [95]. Moreover, when immersed in plasma from oncological patients, GO-PC exhibits unique characteristics that can be exploited to develop PC-based diagnostic methods.

In this review, we summarized and critically discussed the main achievements regarding the use of GO in biomedical applications over the past decade. The upcoming decade is expected to definitively bring GO technologies from basic research to clinical practice. Notably, the arising concept of the PC, in addition to revolutionizing most nanotechnologies, will bring new opportunities, especially for graphene-based materials. In conclusion, we expect that the achievements thus far represent just the beginning of a long journey towards new fascinating applications of graphene-based materials in theranostics.

Fig. 1 Applications of biocoronated graphene oxide in gene delivery, drug delivery, and diagnostics.

Fig. 2 Synthetic evolution of hybrid gene delivery systems made of GO nano-sheets surface-functionalized with lipids, described by a multi-step validation approach. a Sketch of the synthesis procedure of the GO-based complexes from their 'synthetic identity' (grapholipoplexes) to their 'biological identity' (biocoronated grapholipoplexes). b Physicochemical characterization of DOTAP grapholipoplexes in terms of size and zeta potential changes by varying the DOTAP/complex weight ratio (Rw) through DLS measurements. c Transfection efficiency (TE), measured as relative light units (RLU) per milligram of protein, and cell viability of GO, DOTAP, and grapholipoplexes once administered to HeLa cells. d Confocal microscopy images of HeLa cells treated with DNA-red-labeled GO complexes (left panel) and grapholipoplexes (right panel); cell nuclei are marked with DAPI. The same three characterization steps (i.e., DLS characterization, TE and cell viability, and confocal microscopy experiments) were performed on multicomponent grapholipoplexes (panels e, f, and g, respectively) and on multicomponent grapholipoplexes incubated with different percentages of human plasma (HP) (panels h, i, and j, respectively). Sketch adapted from Di Santo et al., Nanoscale 11.6 (2019): 2733-2741; Di Santo et al., Applied Physics Letters 114.23 (2019): 233701; and Quagliarini et al., Pharmaceutics 12.2 (2020): 113.
Fig. 3 a Confocal microscopy images of MCF-7 and MDA-MB-231 cells treated with commercial liposomal doxorubicin (DOX) Doxoves® and a DOX-loaded graphene oxide (GO) formulation (GO-DOX). The histogram plots show the fluorescence intensity of nuclear and cytoplasmic signals in cells treated with Doxoves® and GO-DOX complexes. b Phasor fluorescence lifetime imaging microscopy (FLIM) analysis performed on MDA-MB-231 cells treated with free DOX (upper left panel) and GO-DOX (upper right panel). The phasor plots contain clusters of points corresponding to pixels with similar lifetimes. The clusters are identified by specific regions of interest (ROIs) related to each molecular species (e.g., free DOX with the red ROI, DOX attached to biological membranes with the green ROI, etc.). In the bottom panels, intensity and lifetime images of DOX-treated and GO-DOX-treated cells are coloured according to the ROIs. c Cell viability of U-87, HeLa, and CaSki cells treated with naked GO and with GO incubated with different percentages of human plasma (HP). d Densitometric quantification of HER-2, ERK, and AKT expression, normalized to β-actin, and of pHER-2/HER-2, from three independent experiments; one-way ANOVA test followed by Tukey's multiple comparison test (*p < 0.05, **p < 0.01, ***p < 0.001). Adapted from Quagliarini et al., Nanomaterials 10.8 (2020): 1482, and Cui et al., Nanoscale Advances 4.18 (2022): 4009-4015.

Fig. 4 a Schematic workflow of the nanoparticle-enabled blood (NEB) test for cancer detection. Human plasma is collected from healthy and oncological individuals and incubated with nanoparticles (NPs) to generate personalised NP-protein corona (PC) complexes, which are further characterised by direct or indirect analysis. The PC characterization readouts can be paired with clinical blood levels to enhance the diagnostic power of the test. b 1D profiles obtained from SDS-PAGE images derived from direct analysis of personalised graphene oxide (GO)-PCs from 34 healthy (green) and 34 oncological (red) individuals. Black lines identify the most discriminant molecular weight (MW) region, between 20 and 30 kDa (Area 2). A boxplot of the computed Area 2 for all processed samples is reported in the inset; ** indicates a Student's t-test p-value < 0.001. c Box plots of electrophoretic and clinical blood levels for oncological (red) and healthy (green) sample distributions. Asterisks correspond to Student's t-test p-values: * p < 0.05; ** p < 0.001. d AUC obtained by coupling Area 2 and haemoglobin (Hb) as classifiers. e Scatter plot of the MagLev signatures derived from indirect analysis of personalized NP-PC complexes from 10 healthy and 10 oncological subjects; the black line is the output of linear discriminant analysis (left panel). The output of a blind validation test performed on 5 healthy and 5 oncological samples, superimposed on the distribution of the training set (ellipses) (right panel). f Distributions of the MagLev fingerprint and blood levels for 22 healthy and 24 oncological subjects. g Receiver operating characteristic curve and AUC calculated from the coupling of glycemia blood level and MagLev starting position for the 22 healthy and 24 oncological subjects. Figure adapted from Caputo, D. et al., Cancers 13.1 (2020): 93; Digiacomo, L. et al., Cancers 13.20 (2021): 5155; and Quagliarini, E. et al., Cancer Nanotechnology 14.1 (2023): 1-12.
9,472.4
2023-08-11T00:00:00.000
[ "Medicine", "Materials Science" ]
Teaching our grandchildren to suck eggs? Introducing the study of communication technologies to the "Digital generation"

Abstract

It has been argued that age-related and generational differences in communication technology use, and more generally in learning style and mindset, increasingly divide lecturers from students. This paper reports an investigation of one cohort of level 1 students' current communication practices and learning styles, conducted in order to adapt a module in direct response to student need. A small-scale survey of communication and web use was undertaken and students completed the Kolb learning style inventory. The results demonstrate that sweeping generalizations of generational or age-related difference are not a firm foundation for pedagogy. For example, familiarity with and use of Web2.0 technologies was patchy, and students seemed to prefer to be consumers rather than producers, though they did show a preference for immediate communication. This reinforced our sense of the need to teach students about many Web2.0 technologies, especially their content creation aspects. Students had diverse learning styles and their preferences did not suggest a radical change from the past. The need continues to be to offer a variety of learning opportunities for a diverse student body. The paper demonstrates the value of systematic data collection about students' existing knowledge and practices, and of assessed reflective activities to stimulate students to be more active in negotiating a successful learning experience for themselves.

Introduction

In the light of the many claims that students' use of the web and other communication technologies, indeed their fundamental learning styles, may be changing, this paper reports a small-scale investigation into level one undergraduates' use of Web2.0 and other technologies. The objective was to fit the module design to student need and behaviour patterns.

Background

It is a recurrent fear in Higher Education that lecturers are out of touch with students, especially undergraduates straight out of school. Sacks (1996), for example, articulated fears about the conflict resulting from different views of education between "baby boomers" and "Generation X". Another generation gap has seemingly opened up with the identification of "Millennials", aka Generation Y, the Internet Gen or Nexters (e.g. Zemke, 2001; Oblinger, 2003; Raines, 2007), who are "digital natives" or the "google generation". Apparently Millennials are "sociable, optimistic, talented, well-educated, collaborative, open-minded, influential, and achievement orientated" (Raines, 2007). They are supposedly tied together by a shared set of demographics and by having lived through a set of defining historical events. A degree of scepticism about these alleged trends is surely justified. It seems odd, as the concept of a 'Millennial' generation does, for example, to bracket together everyone born since 1982; and the concept globalises American social trends. Even the Wikipedia entry on the subject of "Generation Y" at the time of writing contained a large number of banners marking parts of the text as containing "original research or unverified claims" (Wikipedia, 2007). Discussion of these alleged changes tends to be rather woolly and speculative. For example, Oblinger (2001) quotes Frand (2000) for a number of trends characteristic of the "information-age mindset" of the "new students". However, when probed, these trends seem questionable. The first claim is that "computers aren't technology", i.e. that computers are taken for granted.
that computers are taken for granted. However, clearly some technologies remain new and exciting, even if other usages have slipped into the background as obvious and taken for granted. Other claims in this discourse are simply not true, e.g. "The Internet is better than TV" implies that TV has been overtaken by the Internet. In fact, while the Internet is eroding TV watching time, TV still occupies more hours (Ofcom, 2006). Furthermore, the Internet may be used for watching TV. One accepts the truth of many of Frand's trends, but they seem to affect us all, e.g. the growing intolerance of delay. The death of the real was heralded in the 1970s. Very broad changes do seem to be occurring, and it is reasonable to suppose that those who are young now are more affected by them, but the generational framework for thinking about it seems at best simplistic. It is more plausible to see many of the differences identified as generational as really reflecting differences of life stage. Thus Szeto (2005) quotes a schematic for seven ages of financial behaviour, based on life style and income tied to age. Presumably such a logic also applies to many aspects of behaviour. For example, university students have a certain set of communication needs and their personal social network has a particular (rapidly changing) shape; one that alters radically as they enter employment or get married.

A related strand of argument centres on the concept of the "digital native" (Prensky, 2001a, 2001b; Update, 2007; Digital natives, 2007). In this concept the digital native is habituated to the "twitch-speed, multitasking, random-access, graphics-first, active, connected, fun, fantasy, quick-payoff world of their video games" (Prensky, 2001b). More empirical accounts can be found in survey-based typologies of technology users in the USA (Horrigan, 2007) or of "e-engagement" in the UK (spatial-literacy, 2007). Age is not necessarily the dominant variable, but commentators have found significant differences emerging that may be relevant to learning: "There is also evidence of a significant difference in communications usage patterns between young adults and the general population: for example, 16-24 year olds spend on average 21 minutes more time online per week, send 42 more SMS text messages, but spend over seven hours less time watching television" (Ofcom, 2006). Like many past generations, "children in each of the past several decades have always been exposed to new technologies - and made emotional and rational tradeoffs among them" (Szeto, 2005). It is important to be precise here, however. Excited writing about Web2.0 tends to imply that it has been adopted most actively by "young people". Yet the most active in adopting Web2.0 technologies according to a 2007 Pew study, "the Omnivores", have a median age of 28 (Horrigan, 2007, p. 6). The Spire project survey gives a different impression, though again it does not indicate marked generational differences (White, 2007). Interestingly, the 2005 Oxford Internet Survey actually found a small and declining number of people were trying to set up a web page (Dutton et al., 2005, pp. 4, 6). Setting aside the more millenarian writing about generational difference, on purely pragmatic grounds it seems important to learn systematically about students' existing knowledge and patterns of communication use. Doubly so, as in our case, for a module on communication.

The module: "Information and Communication Networks in Organizations"

There is much discussion of corporate interest in Web2.0 (Hamm, 2007), so Web2.0 technologies may be quite quickly adopted in organizational settings. This process potentially empowers the students by valuing their knowledge of the latest communication technologies.
Certainly, we have anecdotal evidence that students' familiarity with the latest communication technologies will be valued by first employers. In the module we also try to build up general principles that can be used to apply web paradigms inside organizations. Naturally, students are particularly interested in research in mobile communication or IM because they themselves use these daily.

A second strand in the module is the encouragement of students to reflect more about their own personal learning and communication preferences. Practical sessions discuss learning styles, and this is assessed by a weekly, online learning log. Our premise was that students reflecting on their learning and communication within the module, and being more aware of their own style/preferences, would encourage a "deeper" approach to learning (as conceptualized by Entwistle, 1998) and hence greater understanding of organisational information and communication networks. Experience of encouraging students to reflect upon their learning during this module suggested this was not a straightforward task. Students prioritise their time and balance their University commitments with social and other needs. Unless there is some element of compulsion, many students would delay recording any reflective thoughts until nearer the coursework "hand-in" date. There is also a tendency among some students to "simply" describe their learning experiences without engaging in any meaningful personal and academic reflection. Our solution was to use one of the University's Virtual Learning Environment (VLE) tools (WebCT Personal Journal) that enabled students to draft and post entries throughout the semester. These could be viewed and formatively commented upon by tutors, minimising the risk of students misunderstanding the coursework aims. The postings were "time-stamped" by the VLE, enabling us to know how regularly and frequently the students posted. As the regularity and frequency of their postings was a component of the final mark, it was hoped that this would be sufficient motivation to take this aspect of the coursework seriously. To introduce this strand of the coursework, and for students to gain a better understanding of their own learning style and of the various key conceptions and debates surrounding learning styles (e.g. Coffield et al., 2004), a series of "practical" sessions was planned. These sessions were also used as an opportunity to promote the reflective element in the Department's framework for Personal Development Planning (PDP).

Our research

The purpose of the research reported here, therefore, was to collect some systematic data from students at the beginning of the module about their use of communication technologies. This would be used to help shape the module to better meet student need/ability and knowledge. We also planned to investigate learning preferences and encourage students to be more reflective about their practices and preferences. Our research question in response to claimed generational changes was "to identify students' communication practices, Web use and learning preferences" to shape the content and style of the teaching of the module. More specific objectives were, firstly, to gauge students' familiarity with and use of Web2.0 technologies. Secondly, we wished to explore their communication channel preferences. Thirdly, we wanted to investigate their learning styles.
Method

In undertaking the research, the module teaching team worked closely with Stephen Tapril, a research student undertaking "An investigation into the impact of the Millennials Generation on academic library services and the skills of library staff". Together we designed a short questionnaire that students could be asked to fill in. It encompassed use of classic Web2.0 sites, general internet and mobile use, and preferred learning styles. The full questionnaire is reproduced below as an appendix. Questions were derived from our own knowledge of the field, both of new technologies and of characteristic issues, such as around addiction or willingness to meet people first encountered online in person. The research was cleared in accordance with University of Sheffield ethics principles. Also, in administering the questionnaire, it was emphasised to students that there was no requirement to participate. Submissions were anonymous. Students were asked to complete the questionnaire in the first practical, and preliminary results were reported to them in the lecture in week two. 25 out of 45 registered students completed the survey. This response rate was probably influenced more by technical difficulties saving the file after download and by attendance rates than by a reluctance to participate in the research.

5. Survey results

5.1. Knowledge and use of Web2.0

The main question in the questionnaire asked students to say how frequently they used 13 resources or types of tool. If they reported that they had never heard of an item it was scored as 0, having heard of it but never used it 1, using it occasionally 2, weekly 3 and daily 4.
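As an aside, this scoring scheme lends itself to a very simple computation of mean familiarity per tool. The following is a minimal sketch (not the authors' actual analysis script; the tool names and responses are invented purely for illustration) of how such responses could be scored:

```python
# Illustrative scoring of the 0-4 familiarity/frequency scale described above.
SCALE = {
    "never heard of it": 0,
    "heard of it but never used it": 1,
    "use it occasionally": 2,
    "use it weekly": 3,
    "use it daily": 4,
}

def mean_scores(responses):
    """responses: list of dicts mapping tool name -> verbal response."""
    totals, counts = {}, {}
    for response in responses:
        for tool, answer in response.items():
            totals[tool] = totals.get(tool, 0) + SCALE[answer]
            counts[tool] = counts.get(tool, 0) + 1
    return {tool: totals[tool] / counts[tool] for tool in totals}

if __name__ == "__main__":
    demo = [  # hypothetical respondents, not survey data
        {"YouTube": "use it daily", "Flickr": "heard of it but never used it"},
        {"YouTube": "use it weekly", "Flickr": "never heard of it"},
    ]
    print(mean_scores(demo))  # {'YouTube': 3.5, 'Flickr': 0.5}
```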
The majority of respondents gave as their favourite website or resource a site that could be seen as a content-sharing site: that is, video-sharing portals such as YouTube and Alluc.org. Sports websites followed in popularity. In particular, sports sites tended to relate to football clubs of which presumably respondents were fans. Social networking sites (such as MySpace and Facebook) were mentioned less than might have been expected.

Figure 1: blog/website ownership

Of the total respondents, less than one third acknowledged owning or maintaining a blog or website. While Web2.0 is based upon the premise of content sharing, it appears that, in this small sample at least, individuals in this age group were satisfied with consuming content and did not participate in using technologies to create or share content.

Communication practices

The survey examined the use of a variety of methods of communication, including traditional telephone calls, in order to assess the extent to which there was a preference for online or offline communication. We were also interested in whether the advent of voice over IP (VoIP) and instant messaging had displaced traditional methods of communication. The results indicate that students still place emphasis on the telephone for communicating with family and friends. Instant messaging does seem to have made some inroads into traditional methods and was rated higher than physical face-to-face visits. The question did include blogs as an option, but none of the respondents considered the method to be important. Interestingly, email was rated quite low in importance. It could be that "real time" conversation is valued more by respondents, which would explain why telephone use and instant messaging are rated so highly. The low rating for email is still surprising when it is considered that all respondents acknowledged frequently using e-mail accounts other than that provided by the university (Question 10). The majority (56%) reported having one additional account, while the remaining 44% reported having two. No respondent acknowledged running three or more additional email accounts. Since the students reported the main reason for using the Internet was to "communicate with family and friends" (followed by "music" and "studying"), the relatively low importance assigned to email in the findings might be explained by an overall assessment of online versus offline communication methods. That is, the findings seem to suggest:

• The student group placed heavy emphasis on the Internet as a medium for communication
• Methods of communication are valued for being able to support "real time" conversation, and for convenience, not in terms of "offline" or "online" preference

Respondents were also asked whether they maintained online friendships, and whether those friendships had extended to meeting people offline. The findings, illustrated by Figure 5, suggest that in fact most of the sample did not have relationships with someone they had first met online. This suggests that the majority of communication taking place online is with existing friends and relations. However, those that do maintain online friendships seem to exhibit enough trust in those relationships to warrant meeting people offline: 80% of these respondents said they had met people from the Internet in person.

Learning preferences

The final survey question asked students to describe their learning preferences by responding to four groups of paired descriptors. The descriptors were kept as simple as possible in order that participants could relate to them more readily. The aim was to investigate the claim in the literature that this age group are collaborative and active learners.

Figure 5: learning preferences by paired descriptors

While scarcely conclusive, these findings illustrate that the majority of the students preferred to work alone, with instructions and support available. Participants appeared to be evenly divided on the matter of preferring practical-based learning or theory-based learning. Quite apart from the small size of the sample, it is possible that there is a bias in the result: not all students attended the practicals or lectures where the questionnaire was distributed and so did not complete it, and students who saw themselves as more self-reliant would be more likely to be non-attenders. Yet the findings contrast quite sharply with the socially-oriented but independent learning preference claimed for this age group in the literature. The primary conclusion to be drawn was that there was a diversity of learning preferences which needed to be accommodated in the module.

Learning styles

In addition to reporting the survey results to them, at a later practical session we also asked students to complete a more standard learning style self-assessment. Kolb's (1999) Learning Style Inventory (LSI) is recognised to be a potentially useful tool for profiling a group of students (Coffield, 2004). Whilst the LSI is not a diagnostic tool, all students said that they recognised many of the characteristics in their individual profiles. The results are set out in Table 3. Our approach was to stimulate students themselves to reflect more on their learning style and media use. We decided to encourage this through an assessed weekly learning journal.
This would also give us further insight into the learning preferences of our students, with the potential to fit the module to their apparent needs.

Learning Journal

Anecdotal evidence suggested that in previous years even the most conscientious of students made few reflective notes during the course of the module, and the quality of their reflections was generally descriptive and evaluative. In this presentation of the module, the use of a VLE to "timestamp" students' postings throughout the module had a significant effect upon the regularity of the students' reflective thoughts. 42% (19/45) of students were judged to have reflected regularly (i.e. approximately once every two weeks). Unfortunately, the remaining 58% of students were deemed not to have posted regularly to the VLE. Furthermore, 29% of students were judged to have been sufficiently critical, deep and personal in their postings. For those students who did engage with the activity there were signs of a sophisticated awareness of personal need that could feed into module design; one posting, for example, offered a particularly personal and deep reflection. However, some students, occasionally by their own admission, continued to adopt a "surface" approach (Entwistle, 1998). Overall, there was a strong sense of diversity and of the need to provide diverse routes into learning, rather than assuming all students have the same style.

Conclusions

In relation to our first research objective, to find out about patterns of Web2.0 use, there was quite a clear message that familiarity with Web2.0 sites was patchy, and even those that were known were used as consumers, not contributors. The finding for our second research objective was to emphasize the importance of IM, text messaging and communication through SN sites (probably Facebook). No clear pattern emerged in relation to the third objective, in the area of learning styles, but if anything these students were seeking guidance and support. For our immediate purposes the response rate on the questionnaire was satisfactory, and subsequent experience suggested that the major findings for the group were representative. For the reader, the sample must be problematically small. However, the findings do challenge easy assumptions about trends in the use of Web2.0, for example. The general point is made that systematic investigation of the student skill set is of value in shaping learning content and support styles.

6. Practical implications

6.1. Teaching of Web2.0

Perhaps they should not have been, but several of the results of the study were unexpected to us. For example, we were surprised that many of the classic Web2.0 sites, such as Del.icio.us and Flickr, were unknown or at least little used. This could possibly have reflected our being behind the times in identifying the most "cool" sites. Like Poulter (2007), we were unfamiliar with Veoh, which was mentioned frequently as a favourite site. The stress on IM and mobile was expected, but it was salutary to see the preference confirmed in hard statistics. Despite students' relative lack of use of blogging, we retained the topic in the module as a technology with momentum towards increased use in the corporate sector (e.g. Cass et al., 2005; Lewis Global Relations, 2007). We set up practicals to examine how both blogs and wikis worked behind the scenes. Talking to students in these sessions confirmed the hunch that Wikipedia is heavily used as a source, but that few students contributed to the content or understood how it worked (cf. White, 2007, p. 12).
Many "Web2.0" technologies do need to be taught to the digital generation. In addition, in the final session of the module a substantial block of time was devoted to an exercise in which students were invited to explore how the principles of management taught in the module could be applied to online communities.

Learning journal

Our approach in relation to learning styles, recognizing the diversity in the group, was twofold. Firstly, we offered a diversity of learning experiences, e.g. a rich mix of lectures, practicals, group work and online material. Much of the practical material could be completed independently, since everything required was available via WebCT. We did consider offering "virtual practicals", where there was a requirement to complete the work but it could be done at any time. It could perhaps be supported using IM. In fact, the main obstacle here was the difficulty of implementing this within WebCT. Ultimately, however, the results of our surveys did suggest a desire for support and instruction. More importantly, we tried to use the assessed reflective journal to stimulate students to think harder themselves about their own preferences and to empower them to make choices about how they managed their own learning within the resources made available in the module. This was not entirely successful. Whilst the individual reflective journal was a relative success, in that there was a significant improvement in the regularity and the quality of reflections compared to previous years, we were still concerned that almost a third of the students failed this aspect of the coursework. Our conclusion is not that the approach is wrong, but that we should recognize how far such reflective work needs to be supported. Implicit in our demand to complete a journal was a requirement to reflect at a personal level and to write in the first person. Writing reflectively is a specific style, which is difficult to learn. Elsewhere, of course, we were also requiring writing in an academic style where the passive voice is usually preferred and the approach is to be critical and synthetic rather than reflective. Anecdotal evidence suggests that our students struggle to decide when and where it is acceptable to include their opinion in their work. Furthermore, sharing reflections relies upon a particular form of trust between the tutor and the student. Students were unlikely to explore and reveal sensitive and deep issues with a tutor they know little about or relate to only in a particular way. Arguably, therefore, there may be a correlation between the mark for this aspect of the coursework and the relationship between the tutor and student (if it could be measured). We conclude, therefore, that there is a conflict in our demands for this aspect of the coursework, and that for a Level 1 undergraduate module we need to be more explicit about what is expected and simplify the requirements.

Conclusion

The results of the survey were from a tiny sample: one cohort in one department in one university in one country. We might have had very different results (and drawn different conclusions) if we had been teaching English, for example, because of a different pattern of preference about communicating using IT. However, we do think the results are interesting at a general level as undercutting simplistic thinking about how student knowledge and attitudes are changing. Certainly our own patterns of communication technology use, as middle-aged adults (Stephen excepted), are quite different from those of our students.
We use email heavily, IM not at all. For all its virtues, Wikipedia is not terribly good for academic work. Blogs seem rather outmoded. We are only slowly coming to see a value in YouTube. Students stressed the importance of services like YouTube or Veoh being free, whereas we have money but less time. Nevertheless, their knowledge of Web2.0 technologies, for example, is quite patchy, and the theoretical constructs developed for CMC continue to be relevant. Conducting the survey was a useful way to reflect on differences and contrasts in behaviour between ourselves and students, and to discuss them explicitly within the module. It empowered the students to recognise their own expertise in certain technologies, some of which we frankly acknowledged our own ignorance of, but equally it identified gaps we needed actively to fill. It helped us to think through how we needed to support an arguably increasingly diverse student population, while avoiding the easy assumption that they are equally knowledgeable across all "new" technologies or wish to learn in a particular way. The introduction of a substantial level of reflective work into the module proved challenging, but can be built on to encourage students to negotiate the learning experience that fits their needs.
5,607
2008-06-01T00:00:00.000
[ "Education", "Computer Science" ]
Ancient bacterial genomes reveal a formerly unknown diversity of Treponema pallidum strains in early modern Europe

Sexually transmitted (venereal) syphilis marked European history with a devastating epidemic at the end of the 15th century, and is currently re-emerging globally. Together with non-venereal treponemal diseases, like bejel and yaws, found in subtropical and tropical regions, it poses a prevailing health threat worldwide. The origins and spread of treponemal diseases remain unresolved, including syphilis' potential introduction into Europe from the Americas. Here, we present the first genetic data from archaeological human remains reflecting a previously unknown diversity of Treponema pallidum in historical Europe. Our study demonstrates that a variety of strains related to both venereal syphilis and yaws were already present in Northern Europe in the early modern period. We also discovered a previously unknown T. pallidum lineage recovered as a sister group to yaws and bejel. These findings imply a more complex pattern of geographical prevalence and etiology of early treponemal epidemics than previously understood.

Introduction

Treponemal infections, namely yaws, bejel (endemic syphilis) and, most notoriously, the sexually transmitted syphilis, represent a recurring, global threat to human health. Venereal syphilis, caused by Treponema pallidum ssp. pallidum (TPA), infects the worldwide human population with millions of new cases every year 1,2. The two endemic treponemal subspecies closely related to T. pallidum ssp. pallidum are T. pallidum ssp. pertenue (TPE) and T. pallidum ssp. endemicum (TEN). TPE is common in the tropical regions of the world, where it causes yaws and a form of treponematosis in non-human primates. TEN is the causative agent of bejel, which is mostly found in hot and arid environments. Both of these treponematoses are usually milder in manifestation than syphilis and have a lower incidence on the population level, yet their transmission has persisted over recent years 3,4. Resistance against second-line antibiotics has recently developed in T. pallidum ssp. pallidum 5, whereas penicillin treatment still remains effective 1,6. Sexually transmitted syphilis progresses slowly, while inflicting considerable damage to bone, internal organs and the nervous system 7,8. The endemic types of T. pallidum (TPE, TEN) are transmitted and mainly manifested through skin lesions, but can also affect bones and joints in a way comparable to the venereal form 4,6. TPA frequently transmits congenitally, resulting in various disorders for both mother and child during pregnancy, birth and infancy 7,8. For TPE and TEN infections today, this form of transmission is atypical, if not entirely unprecedented 9,10. Although the three T. pallidum subspecies can be separated by genetic distinctions 11,12, their clinical manifestations in skeletal material are difficult to distinguish 6.

The re-emergence of syphilis is a reminder of the formidable threat it may represent with its continuous adaptations 5,13. Devastating outbreaks of syphilis have been documented in historical times. Early medical reports from the late 15th century portray the most well-known epidemic, a rapid and Europe-wide spread of venereal syphilis in the wake of the 1495 Italian war 14,15. These statements also describe it having gradually changed into a milder, more chronic disease in the subsequent decades, similar in manifestation to cases of modern-day syphilis 16,17.
These events coincide with historical expeditions, and have ignited a long-persisting hypothesis suggesting that syphilis was introduced to Europe by Columbus and his crew upon their return from the New World in 1493 18,19. The alternative multiregional hypothesis contradicts this assumption and presumes a pre-Columbian prevalence of syphilis on the European continent, potentially as a result of prehistoric spread of the disease through African and Asian routes 20-22. Skeletal evidence from human remains carrying pathological marks characteristic of treponematoses, and dated prior to Columbus' return from the New World, has been reported 23,24. However, to date no genetic evidence exists that could confirm the existence of pre-Columbian syphilis in Europe 18,25. Potential syphilis infections have been mentioned in the literature since medieval times, but these diagnoses may have been confounded by symptomatic similarities with other diseases, and misleadingly called "venereal" or "hereditary" leprosy 17,25. Misdiagnoses occurred until recently, due to the challenges of distinguishing T. pallidum infections from other diseases, and its subspecies from each other 26. In the present day, the treponematoses are largely differentiated in medical diagnostics either with traditional serological tests or with the more recently developed multi-locus sequence typing (MLST) schemes 27-30. Before the modern genetic classifications were introduced, supporters of the "unitarian hypothesis" claimed that all treponematoses were in fact one and the same disease 31,32. Although this theory was justified in questioning the geographical distribution and clinical symptoms as the main means of categorization, the understanding of phylogenetic cladality between the treponemal subspecies has since disputed its more general principle. Genomic studies have found that the TPA and TPE strains, although clearly separated phylogenetically, remain extremely similar, and their geographical origin and time of emergence have proved complicated to confirm 11,15,33,34. A recent genetic study on modern lineages of treponematoses supported a common ancestor of all current TPA strains in the 1700s 34, whereas the more general diversification of T. pallidum into subspecies has been addressed in previous studies and assumed to have happened in prehistoric times 19,35. Since modern genomes reflect the evolutionary situation at their time of isolation, mutation rate estimates drawn from them can be biased by natural selection that has yet to act 36. Past lineages may also reveal lost variation unrepresented by the currently known pathogen strains available for research 37. For these reasons, reconstructed ancient bacterial genomes have an unprecedented potential to illuminate their species' unresolved divergence times and origins. Several historically significant pathogens have been successfully reassembled for investigation, and their reconstructed genomes have greatly contributed to our understanding of the evolution and spread of re-emerging infectious diseases 38-40. Ancient DNA (aDNA) studies concerning treponematoses have so far remained scarce for both biological and methodological reasons. The treponemal spirochetes survive poorly outside their host organism and are present in extremely low quantities during late-stage infections, often evading detection even in living patients 41.
The final, tertiary stage produces the most notable alterations to the skeleton in response to the human immune system, making these treponemal cases the most frequently recognized, but less likely to yield genetic evidence due to the clearance or latency of the pathogen 42,43. Most notably, the bones likely to contain a large amount of treponemal agents belong to congenitally infected neonates. These fragile remains rarely survive and are, even when present, often overlooked in the archaeological record 44,45. Previously, it would not have been feasible to use samples with low bacterial loads to detect pathogen DNA; however, recent advances in target enrichment, high-throughput sequencing and sensitive screening methods for aDNA have helped to overcome this issue 46. Currently, these technical advancements, together with careful selection of samples affected by treponemal pathogenesis, are enabling genomic studies on this elusive pathogen for the first time. The means to recover T. pallidum from historic human remains were recently established in a study on perinatal and infant individuals from colonial Mexico, in which two ancient genomes of T. pallidum ssp. pallidum and one ancient T. pallidum ssp. pertenue genome were described 45. However, attempts to retrieve T. pallidum DNA from historical adult individuals have so far been unsuccessful. Here, we analyze ancient bacterial genomes from four novel historical T. pallidum strains retrieved with target enrichment from pathological human remains, including adult and subadult individuals, originating in central and northern Europe. The newly reconstructed ancient genomes represent a variety of T. pallidum subspecies, including a formerly unknown form of treponematosis phylogenetically basal to both the bejel and yaws lineages. For the first time, treponemal genomes dated temporally close to the New World contact have been retrieved from European samples, including strains closely related to the endemic types today mostly restricted to the tropics and subtropics.

Geographical origins and osteological analyses of samples

For this study, remains from nine individuals were included: five from the Crypt of the Holy Spirit in Turku, Finland, one from the Dome churchyard in Porvoo, Finland, one each from St. Jacob's cemetery and from St. George's cemetery in Tartu, Estonia, and finally, one from Gertrude's Infirmary in Kampen, the Netherlands (Supplementary Table 1). All four positive samples were radiocarbon dated (at the Klaus-Tschira-Archäometrie-Zentrum am Curt-Engelhorn-Zentrum, Mannheim, Germany; the Laboratory of Chronology, Finnish Natural History Museum, Helsinki; and the AMS laboratory, ETH Zürich). For PD28, the calibrated dates range from 1666 to 1950 CE. The church cemetery, however, was replaced in 1789 CE, indicating the last possible burial time for the individual 49. CHS119 and SJ219 both show AMS results dating the samples starting from the 15th century CE. For CHS119 the upper limit of dating ranges to the 17th century, whereas for SJ219, two independent laboratory estimates confirmed an upper limit within the 15th century CE. The disarticulated bone KM14-7 is 14C-dated to a range from the late 15th to early 17th century CE 50. Marine and freshwater reservoir effects can cause an offset in 14C ages between contemporaneous remains of humans or animals relying mainly on terrestrial food sources and those principally using food sources from aquatic environments 51.
Calibration corrections for the affected radiocarbon results are feasible, but require a local baseline of isotopic signatures, which is currently unavailable for many regions of the world. Alternatively, other available carbon-based materials from the grave can be used to confirm the dating of an individual in uncertain cases. The putatively pre-Columbian samples CHS119 and SJ219 underwent additional verification of the dating procedures: a fragment of the wooden coffin of individual SJ219 was used for an additional dating analysis, and reservoir effect corrections were produced for sample CHS119 52,53. The resulting estimates, however, gave both individuals a date range with an upper limit reaching the 17th century CE. For more details on radiocarbon dating and reservoir effect correction, see Supplementary Note 2.

Genome reconstruction and authenticity estimation of ancient DNA

A screening procedure using MALT 54 and the MEGAN extension for visualization 55 identified the treponemally positive samples. The recombination analysis included the ancient genomes from an earlier study 45 and three of our historical European genomes, namely PD28, CHS119 and SJ219 (Table 2). Sample KM14-7 was excluded from the recombination analysis due to its sporadic placement in the Maximum-Likelihood (ML) tree topologies, which were derived for entire genomes and for each gene individually. Congruence between the complete genome alignments and gene trees was tested after evaluating the corresponding phylogenetic signal for each gene. For 40 loci, the phylogenetic signal and incongruence were significant. For those cases, we further verified the presence of at least three consecutive SNPs supporting a recombination event. Twelve loci passed this test and also correspond to those found as recombinant loci in a more extensive study of modern T. pallidum genomes (n=75) 76. Two of the recombining genes identified in Arora et al. 22 were also confirmed in association with the ancient European genomes. Of our ancient genomes, PD28 was possibly involved in one recombination event of the TP0136 gene as a putative recipient, along with the Nichols clade, with the TPE/TEN clade, CHS119 and the colonial Mexican genome 133 as putative donors. The same possibility was observed in the recombination event detected in the TP0179 gene, although only with the TPE/TEN clade and 133 as presumptive donors. One putative recombination event concerning the TP0865 gene was identified between the TPE/TEN clade, including the CHS119 and 133 genomes, and the SEA86, NE20 and SEA81-4 lineages. Finally, there is another recombination event concerning the TP0558 gene, with the TPE/TEN clade and the CHS119 genome as potential donors and the SS14 clade, MexicoA, 94A and 914B from colonial Mexico, and PD28 as recipients. Other putative recombination events between the modern strains and the previously published ancient genomes from the New World are listed in Supplementary Table 5.
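To make the consecutive-SNP criterion used in this screen concrete, the following is a minimal illustrative sketch (toy sequences and a deliberately simplistic definition of clade consensus; it is not the study's actual pipeline) that flags runs of at least three consecutive clade-discriminating SNPs at which a putative recipient carries the donor-clade allele:

```python
def clade_allele(seqs, i):
    """Allele fixed within a clade at alignment column i, or None if variable."""
    alleles = {s[i] for s in seqs}
    return alleles.pop() if len(alleles) == 1 else None

def recombination_runs(recipient, own_clade, donor_clade, min_run=3):
    """Return runs of >= min_run consecutive discriminating SNP positions
    at which `recipient` carries the donor-clade allele."""
    runs, current = [], []
    for i in range(len(recipient)):
        a = clade_allele(own_clade, i)
        b = clade_allele(donor_clade, i)
        if a is None or b is None or a == b:
            continue  # not a clade-discriminating SNP; skip this column
        if recipient[i] == b:  # recipient carries the donor allele
            current.append(i)
        else:
            if len(current) >= min_run:
                runs.append(current)
            current = []
    if len(current) >= min_run:
        runs.append(current)
    return runs

# Hypothetical aligned fragments, for illustration only.
own = ["AAAAAAA", "AAAAAAA"]
donor = ["AGGGAAG", "AGGGAAG"]
print(recombination_runs("AGGGAAA", own, donor))  # [[1, 2, 3]]
```

Here "consecutive" means consecutive among the discriminating SNP positions, which is one reasonable reading of the criterion; requiring the supporting reads and genes to be verified separately, as the study does, remains essential.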
Median estimates and 95% HPD intervals are given in Supplementary Tables 6.B and 6.C. The TMRCA (time to the most recent common ancestor) calculated for the whole T. pallidum family is placed far in the prehistoric era, at least 2500 BCE. However, time-dependency of molecular rates (TDMR) may lead to underestimating deep divergence times when mutation rates are inferred from genomes collected within a relatively restricted time period 77,78. Applying a model accounting for TDMR may be possible, but would require the inclusion of genomes sampled over wide and distinct time periods 79,80. The latest common ancestor of the venereal syphilis strains was placed between the 10th and 15th century CE. The divergence of TPE and TEN (yaws and bejel) was dated between the 9th century BCE and the 10th century CE, while the most recent common ancestor of TPE was placed between the 14th and 16th century CE. Among the TPA strains, the TMRCA of the Nichols clade (13th to 17th century CE) was clearly older than that of the SS14 clade (18th to 20th century CE). Due to the inclusion of four historical genomes, the above divergence times are substantially older than the times reported in Arora et al. (2016) 34. Similarly, the estimated mean molecular clock rate (median estimate 1.037 × 10⁻⁷ substitutions/site/year (s/s/y), 95% HPD 6.856 × 10⁻⁸ to 1.447 × 10⁻⁷ s/s/y) is slower than the clock rates reported in recent studies 5,34 for either T. pallidum as a whole or for TPA strains exclusively. Nonetheless, the 95% HPD of the mean clock rate overlaps with the estimates previously reported 5. For more information, see Supplementary Figure 5. Molecular clock dating allows us to refine the sampling date estimates of three of the four historical genomes (Figure 4b). The posterior distributions of the sampling dates of PD28 and CHS119 place most of the weight on more recent dates, while that of 133 favours an older sampling date. This is especially pronounced for CHS119 and 133, with the 95% HPD interval not including any dates older than 1526 CE for CHS119 or younger than 1773 CE for 133. On the other hand, for SJ219, the 95% HPD of the sampling date spans nearly the entire range defined by radiocarbon dating, making it impossible to exclude a pre-Columbian sampling date (posterior probability 0.26) (Supplementary Table 6.C).

Early emergence of syphilis in Europe

In this study, four ancient Treponema genomes were retrieved from human skeletal remains dating to early modern Europe, providing unprecedented insights into the first reported epidemics of syphilis at the end of the 15th century. Two of our ancient genomes, PD28 and SJ219, were identified as TPA strains, the causative agent of syphilis, representing the first molecularly identified specimens of T. p. pallidum from historical Europe. These genomes fall within the modern variety of the TPA strains. They form a sister clade to the modern TPA branch, relatively basal to all lineages, although their precise position with regard to the two major clades, Nichols and SS14, could not be recovered with high confidence. The TPA-carrying sample PD28 was placed in the early modern period by combined analyses of archaeological context and 14C dating. Two independent radiocarbon analyses were performed on the sample SJ219.

Yaws in Europe

Of our four historical genomes, two fall outside the variation of TPA. One of them, the genome CHS119 from Finland, clusters with the present TPE group (causative agent of yaws). Although the direct radiocarbon dating places the sample in the 15th-16th century, full confidence in the exact date cannot be attained, due to the potential marine reservoir effect 52,53. This sample provides the first evidence of yaws infections in historical Northern Europe, far from the tropical environment in which present-day yaws is typically found. Strikingly, the contemporaneous genome KM14-7 from the Netherlands falls basal to both the bejel-causing lineage and all known strains causing yaws, unveiling a previously unidentified lineage of T. pallidum.
Due to the limited endogenous DNA coverage retrieved for the KM14-7 sample, the inclusion of this genome in the recombination and time-calibrated phylogenetic analyses was restricted. Despite this, the ML tree topology with KM14-7 in the position basal to the TPE and TEN clades could be further confirmed by closer nucleotide-level inspection. The lineage shows genetic similarities to both currently existing syphilis (13 unique SNPs) and yaws (25 unique SNPs), but it represents a form distinct from both, having apparently diverged from their common root before the cluster consisting of present-day yaws and bejel, which we dated to at least 1000 years BP. Altogether, these different ancient treponemal genomes from northern and central Europe point to an early existing variety of T. pallidum in the Old World. Their existence does not refute the potential introduction of new strains of treponemes from the New World in the wake of the European expeditions, but lends credibility to an endemic origin of the 15th-century epidemics. Whereas recombination events between the three modern-day treponemal subspecies are deemed rare 76,86, such events were observed across the subspecies in our study. These recombination events presumably happened in the past, before the geographical niches were acquired by the TPA, TPE and TEN agents 34,45,76. The historical co-occurrence of syphilis and yaws in an overlapping area provides a plausible opportunity for recombination. The potential recombination events observed in this study involved both the lineages present in the modern-day variation and the ancient genomes PD28 and CHS119 from Europe and 94A and 914B from Mexico 45. Overall, our observations point towards recombination events happening in the direction of the syphilis-causing clades from the yaws- and bejel-causing ancestors. These recombinations between the clades further support a geographically close common history of the TPA and TPE lineages, which cannot be concluded from the geographical distribution of modern-day lineages. The lineage that KM14-7 belonged to carried genetic similarities to both currently existing syphilis and yaws, yet it appears to be a distinct form from both. The pathogenesis of this agent may have resembled the endemic types of treponematoses, since the majority of its recovered SNPs are shared with the yaws and bejel lineages. It has been suggested that yaws or its ancestors represent the original form of treponematosis that appeared and spread around the world thousands of years ago, was re-introduced to the Iberian Peninsula via the Central and Western African slave trade, some 50 years before Columbus' travels, and eventually gave rise to venereal syphilis 35,87. It is indeed possible that the more severe venereal form in the Old World developed from a mild endemic type of disease, enhanced by genetic recombination events or in response to competition between the various existing pathogens 31,84. Likewise, recombination events may have occurred between the endemic European strains and novel lineages introduced in the wake of the New World contact, precipitating the epidemic events at that time. While cladality between the different subspecies clearly exists in both the past and the modern day, it now seems likely that recombination has interconnected these clades in the past, and that the genetic differences do not necessarily define the treponemal pathogenesis observed in the archaeological remains.
Since diagnostic signs of yaws and venereal syphilis are hard to distinguish in skeletal remains, and so far undiscovered early treponematoses may have existed simultaneously, only further genetic studies on samples originating from all continents can properly address hypotheses about the direction of spread and the order of the epidemic events. Presumably many past treponemal lineages remain unknown today and, once revealed, will prove pivotal in uncovering the relationships between treponemal strains and in dating their emergence.

Outlook and implications on sampling strategies

Retrieving treponemal DNA from skeletal material is highly challenging, and the feasibility of the effort had been seriously questioned before the recently published colonial Mexican genomes 42,43,45. Here, four out of the nine included individuals yielded a sufficient amount of treponemal aDNA for in-depth genomic analyses (Supplementary Table 1a). While the previously published Mexican genomes were obtained from neonates and infants only 45, we were able to recover T. pallidum DNA also from subadult (KM14-7, SJ219) and adult (CHS119) individuals, including one with only a tentative diagnosis of the disease (SJ219). Using bone tissue directly involved with ongoing inflammation or possessing ample blood flow, such as an active lesion (KM14-7) and dental pulp (CHS119), probably facilitated the successful sampling, although in the case of the neonate (PD28), even a petrous bone proved highly successful, yielding an entire genome of a historical TPA strain at up to 136-fold coverage. This first retrieval of pathogen DNA from a petrous bone was likely due to a systemic condition related to an early congenital infection with an extremely high bacterial load 88. Notably, one of the colonial Mexican infants from a previous study 45 likely suffered from a yaws infection, as did one of the Finnish individuals (CHS119) in this study. These cases lend credibility to the notion that the different treponemal agents cause essentially similar skeletal alterations and are highly adaptable to environmental circumstances 33,89. We therefore propose that geographical separation criteria between the treponemal diseases should be used with caution, especially when it comes to earlier forms of treponematoses and their diagnostic manifestations in the archaeological record. Overall, the reconstruction of novel treponemal genomes from these various ancient sources further proves the feasibility of retrieving treponemal aDNA from skeletal material and raises the hope of achieving progress with the prevailing cases of advanced and latent infections. Improving methodologies targeted at samples with low bacterial load and genomic coverage may soon aid in recovering positive aDNA results from putative cases of treponematoses from early- to pre-historic contexts, thereby illuminating the most persistent quandaries of the field, such as the ultimate origin of venereal syphilis.

Sample processing

Documentation and UV-irradiation of the bone material for decontamination, as well as laboratory procedures for sampling, DNA extraction, library preparation and library indexing, were all conducted in facilities dedicated to ancient DNA work at the University of Tübingen, with necessary precautions taken, including protective clothing and minimum contamination-risk working methods.
Sampling and DNA extraction

Before extracting DNA from the samples, all surfaces were irradiated with ultraviolet light (UV-irradiated) to minimize potential contamination from modern DNA. DNA extraction was performed according to a well-established extraction protocol for ancient DNA 56. For DNA extraction, 30-120 mg of bone powder was used per sample. The bone powder was obtained by drilling bone tissue using a dental drill and dental drill bits. For different individuals, variable amounts of extract were produced. During each extraction, one positive control (an ancient cave bear bone powder sample) and one negative control were included for every ten samples.

Library preparation

In this study, double-stranded (ds) and single-stranded (ss) DNA libraries were produced. All DNA library preparation procedures applied in this study are described in the following paragraphs. The whole-genome capture was performed as described above using the same array enrichment strategy. In addition to the blocking oligonucleotides for double-stranded libraries, specific blocking oligonucleotides 4, 6, 8 and 11 57 were used for single-stranded libraries. The whole-genome enrichment for treponemal DNA was produced in three rounds of array capture, and a maximum of two libraries from different individuals were pooled for each array. Enrichment pools were diluted to 10 nmol/L for sequencing.

In-solution capture for KM14-7

An additional in-solution capture procedure was performed for sample KM14-7 to obtain higher coverage. Genome-wide enrichment of single-stranded libraries was performed with custom target enrichment kits (Arbor Bioscience). RNA baits with a length of 60 nucleotides and a 4-bp tiling density were designed based on three reference genomes (including Nichols, GenBank: CP004010).

Read Processing, Mapping and Variant Calling

The capture data from the sequencing runs were merged sample-wise, and data processing was performed using EAGER version 1.92.37 (Efficient Ancient GEnome Reconstruction) 62.

Genomic dataset and multisequence alignment

We constituted a genomic dataset representative of the extant diversity of T. pallidum, including the three previously published ancient genomes from Mexico (Supplementary Table 2). Raw sequencing data were gathered for strains that had been high-throughput sequenced. The procedure described above was then applied to obtain VCF files for each genome. We then used MultiVCFAnalyzer 97 to produce alignments with the following parameters: bases were called if covered by at least two reads with a mapping quality of 30 and a consensus of at least 90% (with the one-read-exception rule implemented in MultiVCFAnalyzer). The resulting alignment was realigned with already assembled genomes (isolates BosniaA, CDC2, Chicago, Fribourg, Gauthier, MexicoA, PT_SIF1002, SS14, and SEA81_4_1), using AliView version 1.21 98.
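The consensus-calling rule just quoted is straightforward to state in code. The sketch below is illustrative only (it is not MultiVCFAnalyzer, and it omits the one-read-exception rule): a base is accepted when at least two reads pass the mapping-quality cutoff and at least 90% of them agree.

```python
from collections import Counter

def call_base(pileup, min_depth=2, min_mq=30, min_consensus=0.90):
    """pileup: list of (base, mapping_quality) tuples for one position.
    Returns the consensus base, or 'N' if the criteria are not met."""
    bases = [b for b, mq in pileup if mq >= min_mq]  # drop low-MQ reads
    if len(bases) < min_depth:
        return "N"
    base, count = Counter(bases).most_common(1)[0]
    return base if count / len(bases) >= min_consensus else "N"

print(call_base([("T", 37), ("T", 35), ("T", 40)]))  # 'T' (3/3 agree)
print(call_base([("T", 37), ("C", 35)]))             # 'N' (50% < 90%)
print(call_base([("T", 37)]))                        # 'N' (depth < 2)
```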
SNP quality assessment

Calling SNPs from ancient bacterial DNA data is challenging due to DNA damage, potential environmental contamination and low genome coverage, which may lead to the recovery of artifactual genetic variation in reconstructed DNA sequences. This can interfere with all subsequent analyses and, in particular, lead to artificially long branches in phylogenetic trees and impede time-calibrated analyses. Artifactual SNPs resulting from environmental contamination shared between several samples may also lead to biases in inferred phylogenetic tree topologies or generate misleading evidence of genetic recombination. In order to filter artifactual SNPs, we used the SNPEvaluation tool 99, as proposed by Keller and colleagues 100. More specifically, for all newly generated ancient genomes, as well as for all previously published genomes for which the mean sequencing coverage was below 20, we reviewed any unique SNP, and any SNP shared by fewer than six genomes, that had at least one of the following features in a 50-bp window around the SNP: (i) some positions were not covered, (ii) the reference was supported by at least one read, or (iii) the coverage changed depending on the mapping stringency (i.e. we compared the initial alignments with "low-stringency" alignments produced with bwa parameter n=0.01). Any SNP supported by fewer than four reads was excluded (i.e. N was called at that position) if at least one read supported the reference or if the SNP was "damage-like" (i.e. resulting from a C-to-T or G-to-A substitution). Furthermore, the specificity of the reads supporting the SNPs was verified by mapping them against the full GenBank database with BLAST 101. Any SNP supported by reads mapping equally well or better to organisms other than T. pallidum was excluded. Since many of these false SNPs, likely arising from non-specific mapping, were located in tRNAs, we excluded all tRNAs from the alignments. The list of excluded positions was written to a gff file (Supplementary Data 1). Because the full alignment generated by MultiVCFAnalyzer still contained the excluded SNPs, we removed them using an in-house bash script.
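The read-support part of this exclusion rule can be summarized compactly. The sketch below mirrors the thresholds described above but is a simplified illustration rather than the SNPEvaluation tool itself (the 50-bp window checks and the BLAST verification are omitted):

```python
# Typical aDNA deamination pattern: (reference base, observed variant).
DAMAGE_PAIRS = {("C", "T"), ("G", "A")}

def exclude_snp(ref, alt, n_alt_reads, n_ref_reads):
    """Return True if the candidate SNP should be masked with 'N'."""
    if n_alt_reads >= 4:
        return False                        # well supported: keep the SNP
    damage_like = (ref, alt) in DAMAGE_PAIRS
    return n_ref_reads >= 1 or damage_like  # weakly supported and suspicious

print(exclude_snp("C", "T", n_alt_reads=2, n_ref_reads=0))  # True (damage-like)
print(exclude_snp("A", "G", n_alt_reads=3, n_ref_reads=1))  # True (ref support)
print(exclude_snp("A", "G", n_alt_reads=3, n_ref_reads=0))  # False (kept)
```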
Phylogenetic and recombination analysis

KM14-7 SNP analysis

In the phylogenetic tree, KM14-7 was placed basal to the TPE and TEN clades. Although bootstrap support was very high, we decided to further evaluate the authenticity of this remarkable position because this genome contained a large fraction of missing data. We investigated genomic positions for which the ancestral variant of TPE/TEN and TPA was likely different. Our rationale was that if KM14-7's position on the branch connecting TPE/TEN and TPA was authentic, the genome should contain a significant number of both TPE/TEN-like and TPA-like variants. In practice, we looked at positions (i) resolved in KM14-7, (ii) for which the majority variant differed between the TPE/TEN and TPA clades, but (iii) shared by more than 90% of the (modern) genomes within each clade. We then looked at the proportion of TPE/TEN-like and TPA-like variants in KM14-7 and compared that with all other genomes. Because KM14-7 was not included in the recombination analysis, we did not trim the recombining regions for this analysis, in order to avoid a bias. Finally, we also produced ML trees based on positions resolved in KM14-7 (141 SNPs). The resulting trees corresponded to the previously observed topology, with KM14-7 recovered basal to TPE/TEN (Supplementary Figures 2 and 3).

The strength of the molecular clock signal in the dataset was investigated by regressing the root-to-tip genetic distance (measured in substitutions/site) of genomes against their sampling dates 106,107 (Figure 3.A). Root-to-tip genetic distances were calculated on a midpoint-rooted ML tree estimated in RAxML v. 8.2.11 108 using the same procedure described above (Figure 3.B). Sampling dates of historical sequences were fixed to the middle of the date range defined by radiocarbon dating (Supplementary Table 6.A). To assess the significance of the correlation, we permuted the sampling dates across genomes and used the Pearson correlation coefficient as a test statistic 106,109 (Figure 3.C). We performed 1,000 replicates and calculated the p-value as the proportion of replicates with a correlation coefficient greater than or equal to the observed value (obtained with the unpermuted sampling dates). Divergence times and substitution rates were estimated using BEAST v. 2.6 110. To confirm clock-like evolution, we performed a Bayesian date randomization test (DRT) 107,109,115 by permuting sampling dates across genomes and repeating the analysis. We performed 50 replicates and assessed significance by comparing the molecular clock rate estimates of the replicates to those estimated under the true sampling dates. As in the permutation test for the root-to-tip regression analysis above, we fixed the sampling dates of historical genomes to the middle of the date range defined by radiocarbon dating. MCMC chains were run for 50 million steps, and parameters and trees were sampled every 5,000 steps. Convergence was assessed in Tracer 116 after discarding 30% of the chains as burn-in, and TreeAnnotator was used to compute MCC trees of the resulting posterior tree distributions. Results were visualized in R using ggplot2 117,118, ggtree 119 and custom scripts.

Virulence analysis

Virulence factors represented by the four ancient European genomes were assessed through a gene presence/absence analysis, as described by Valtueña and colleagues 81. A set of 60 sequences previously associated with putative virulence functions 45,82,83,85 was examined based on the annotated Nichols reference genome (NC_021490.2) and without preliminary quality filtering. The coverage over each gene was calculated using genomeCoverageBed in BEDTools version 2.25.0 120. The heatmap visualization of the gene-by-gene read coverage was created using the ggplot2 package in R 117,118.
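As an illustration of the gene-by-gene coverage summary underlying such a presence/absence analysis, the following sketch computes mean read depth per gene from a per-base depth table and makes a crude presence call. It is a sketch under stated assumptions, not the study's implementation: the input format mimics per-base depth output (chromosome, 1-based position, depth, as produced by tools like genomeCoverageBed -d), and the presence cutoff is a hypothetical choice; the study itself used genomeCoverageBed and ggplot2.

```python
import csv
from collections import defaultdict

def gene_mean_depth(depth_file, genes):
    """depth_file: TSV with exactly three columns (chrom, pos, depth).
    genes: dict of gene name -> (start, end), 1-based inclusive coordinates."""
    totals = defaultdict(int)
    with open(depth_file) as fh:
        for _chrom, pos, depth in csv.reader(fh, delimiter="\t"):
            pos, depth = int(pos), int(depth)
            for name, (start, end) in genes.items():
                if start <= pos <= end:
                    totals[name] += depth
    # Mean depth = summed per-base depth / gene length.
    return {n: totals[n] / (e - s + 1) for n, (s, e) in genes.items()}

def present(mean_depths, min_depth=1.0):
    """Crude presence call: mean depth over the gene above a cutoff."""
    return {n: d >= min_depth for n, d in mean_depths.items()}
```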
7,032.2
2020-06-10T00:00:00.000
[ "Medicine", "Biology", "History" ]
Reassessment of the genetic basis of natural rifampin resistance in the genus Rickettsia

Abstract

Rickettsia, a genus of obligate intracellular bacteria, includes species that cause significant human diseases. This study challenges previous claims that the Leucine-973 residue in the RNA polymerase beta subunit is the primary determinant of rifampin resistance in Rickettsia. We investigated a previously untested Rickettsia species, R. lusitaniae, from the Transitional group and found it susceptible to rifampin, despite possessing the Leu-973 residue. Interestingly, we observed the conservation of this residue in several rifampin-susceptible species across most Rickettsia phylogenetic groups. Comparative genomics revealed potential alternative resistance mechanisms, including additional amino acid variants that could hinder rifampin binding and genes that could facilitate rifampin detoxification through efflux pumps. Importantly, the evolutionary history of Rickettsia genomes indicates that the emergence of natural rifampin resistance is phylogenetically constrained within the genus, originating from ancient genetic features shared among a unique set of closely related Rickettsia species. Phylogenetic patterns appear to be the most reliable predictors of natural rifampin resistance, which is confined to a distinct monophyletic subclade known as Massiliae. The distinctive features of the RNA polymerase beta subunit in certain untested Rickettsia species suggest that R. raoultii, R. amblyommatis, R. gravesii and R. kotlanii may also be naturally rifampin-resistant species.

The most extensively studied species within this genus are the pathogenic ones causing human diseases, such as R. prowazekii, the causative agent of epidemic typhus, and R. rickettsii, responsible for Rocky Mountain spotted fever (Gillespie & Salje, 2023; Perlman et al., 2006; Weinert, 2015). Rifampin (also termed rifampicin) is one commonly prescribed antibiotic for treating bacterial infections (Goldstein, 2014; Tupin et al., 2010), although it is not considered a first-line treatment for rickettsial infections in humans (Blanton, 2019). Rifampin is a broad-spectrum antibiotic inhibiting bacterial RNA polymerase, thereby disrupting RNA synthesis and impeding bacterial protein production (Goldstein, 2014; Koch et al., 2014; Tupin et al., 2010). Rifampin is effective against certain Rickettsia species from diverse groups (Table 1). Indeed, members of the Typhus group, such as R. prowazekii and R. typhi, are naturally sensitive to rifampin, as are most species of the Spotted Fever group (Table 1). However, rifampin susceptibility is not a universal feature within the Spotted Fever group, as experimental studies have identified at least four naturally rifampin-resistant species in this group: R. massiliae, R. rhipicephali, R. montanensis and R. aeschlimanii (Eremeeva et al., 2006; Rolain et al., 1998, 2002).
In bacteria, the most common mechanism underlying rifampin resistance involves missense mutations within the rpoB gene, which encodes the RNA polymerase β subunit (Goldstein, 2014; Koch et al., 2014; Tupin et al., 2010). In most bacterial genera, rifampin-resistant clinical isolates typically harbor mutations that map to the center of the rpoB gene sequence in three clusters (I, II, and III), at positions 500-700, corresponding to the enzyme's active center (Goldstein, 2014; Koch et al., 2014; Tupin et al., 2010). The majority of these mutations are located within a small region in cluster I dubbed the Rifampin Resistance Determining Region (RRDR). These mutations adversely impact the rifampin binding site, resulting in decreased affinity for the antibiotic (Goldstein, 2014; Koch et al., 2014; Tupin et al., 2010). Additionally, in the opportunistic pathogen Nocardia farcinica, the rifampin resistance mechanism also involves an rpoB paralog, which encodes a rifampin-refractory β subunit (Ishikawa et al., 2006). In a few pathogenic bacteria, other alternative resistance mechanisms include rifampin inactivation by specific enzymes (Hoshino et al., 2009; Liu et al., 2018; Spanogiannopoulos et al., 2014; Stogios et al., 2016; Tribuddharat & Fennewald, 1999) or excretion by efflux systems, whereby bacteria pump antibiotics out to the external environment using transporter proteins (Chandrasekaran & Lalithakumari, 1998; Hui et al., 1977; Louw et al., 2009).

In Rickettsia, rifampin resistance mechanisms have exclusively been associated with residue changes in the RNA polymerase β subunit, resulting from missense mutations in the rpoB gene (Drancourt & Raoult, 1999; Kim et al., 2019; Rachek et al., 1998; Troyer et al., 1998). Indeed, resistance associated with rpoB mutations has been artificially selected in the laboratory in three species that are primarily susceptible to rifampin: R. conorii, R. typhi, and R. prowazekii (Kim et al., 2019; Rachek et al., 1998; Troyer et al., 1998) (Table 1). For naturally rifampin-resistant species, a previous genetic investigation concluded that a single point rpoB mutation resulting in a phenylalanine-to-leucine change at position 973 (Phe-973→Leu-973) is the mechanism driving natural rifampin resistance (Drancourt & Raoult, 1999). However, this assertion was based on observations of Rickettsia species within the Spotted Fever group exclusively, and no further investigation has been conducted in other groups.

In this study, we investigate natural rifampin resistance patterns within the Rickettsia genus. We first assessed rifampin resistance in a previously untested Rickettsia species of the Transitional group, R. lusitaniae, for which no culture is currently available. To this aim, laboratory-reared Ornithodoros moubata ticks naturally infected by the R. lusitaniae R-Om strain (Duron et al., 2017, 2018) were subjected to rifampin treatment, and Rickettsia density was then monitored using specific qPCR assays. Subsequently, we compared the complete rpoB gene sequences of R.
lusitaniae R-Om with sequences of other Rickettsia species previously characterized as susceptible or resistant to rifampin and extended this analysis to include most other Rickettsia groups. We further explored available Rickettsia genomes for potential alternative resistance mechanisms and retraced the evolutionary emergence of natural rifampin resistance in the genus. As a whole, our observations refute the prevailing notion that the residue Leu-973 is the key driver of natural rifampin resistance in Rickettsia species.

| Ticks, housing conditions and antibiotic treatment
Ticks were from a laboratory colony of O. moubata sensu stricto (Neuchâtel strain), which was established from field specimens collected in Southern Africa (Duron et al., 2018). Around two-thirds of the specimens of this laboratory colony are naturally infected with the R. lusitaniae R-Om strain, which exhibits 100% nucleotide identity with the gltA gene sequence of the R. lusitaniae type strain (Duron et al., 2018), primarily identified in the tick Ornithodoros erraticus (Milhano et al., 2014). Ticks were maintained in the laboratory at 26°C with 80-90% relative humidity under complete darkness (Buysse et al., 2021; Duron et al., 2018). A blood meal made of heparinized cow blood was offered to ticks every 7 weeks using an artificial feeding system. Ticks were allowed to feed on blood through a parafilm membrane using a specific apparatus including: (i) a tick chamber closed on top by a nylon cloth to prevent tick escape and closed below by the parafilm membrane, (ii) a blood chamber containing a magnet, and (iii) a heated magnetic stirring device to mix and warm the blood at 38°C. After feeding, each batch of ticks was kept in a separate plastic container until the next feeding. To test the antibiotic resistance pattern of R. lusitaniae R-Om, a rifampin solution was added to the blood meal at a final concentration of 10 mg/ml (Duron et al., 2018). Twenty randomly sampled ticks were fed with rifampin-treated blood, while 20 other ticks were fed with nontreated blood as a control. The ticks obtained their initial blood meal at nymphal stage 1, followed by another blood meal at nymphal stage 2 (7 weeks later). They were then kept until molting to nymphal stage 3, at which point they were analyzed to quantify Rickettsia. No additional rifampin-treated blood could be provided afterwards because, following two rifampin-treated blood meals, most of the treated ticks ceased feeding. This was due to the antibiotic's elimination of their obligate nutritional endosymbiont, a Francisella-like endosymbiont required for their normal growth through B vitamin provisioning (Duron et al., 2018).

| Fluorescence in situ hybridization and imaging
Visualization of R. lusitaniae was conducted through fluorescence in situ hybridization (FISH) assays following a protocol modified from Manz et al. (1992). We focused on the Malpighian tubules of ticks since these organs typically host a high density of intracellular bacteria (Buysse et al., 2019; Duron & Gottlieb, 2020). The organs were hybridized with a Rickettsia-specific probe designed for this study for 1 h 30 min to 3 h maximum at 46°C, then washed for 20 min at 48°C in 200 µl of a washing buffer containing 20 mM Tris (pH 8), 70 mM NaCl, 5 mM EDTA (pH 8) and 0.01% SDS. Subsequently, the organs were gently rinsed in bi-distilled water in a Petri dish placed on a glass slide, dried, and embedded in CitiDAPI (DAPI 10 mg.ml−1; Citifluor AF1 antifading, Citifluor, England). Negative controls were obtained by incubating organs without Rickettsia-specific probes and by checking for tissue autofluorescence. Confocal images were acquired on an Olympus confocal laser-scanning microscope (Olympus IX81) with FV3000 2.0 software (Olympus), installed on an inverted microscope IX-83 (Olympus, Tokyo, Japan). Multiple fluorescence images were acquired sequentially with a 60x objective (UplanXAPO, water immersion, 1.42 NA; Olympus) or a 100x objective (UPLAPO, oil immersion, 1.5 NA; Olympus). Fluorescence was excited with the Helium-Neon laser line (543 nm, for Cy3) and a blue diode (405 nm, for DAPI), and the emitted fluorescence was detected through spectral detection channels between 430-470 and 570-670 nm, respectively.
| Quantitative PCR assay
We developed a real-time quantitative PCR assay (qPCR) to quantify the density of the R. lusitaniae R-Om strain in ticks. DNA was extracted from the tick's whole body using the DNeasy blood and tissue kit following the manufacturer's instructions (QIAGEN). qPCR was performed with a LightCycler 480 (Roche) using the SYBR Green Master Mix. Two qPCRs were performed for each tick: one specific for the Rickettsia R-Om gltA gene, and the other specific for the O. moubata OmAct2 gene (Table A1 in Appendix 1). Since both genes are present in a single copy per haploid genome of the tick and the bacterium, the ratio between gltA and OmAct2 concentrations provides the number of Rickettsia R-Om genomes relative to the number of O. moubata genomes, thus correcting for the quality of the DNA template. Each DNA template was analyzed in triplicate for gltA and OmAct2 quantifications. Standard curves were plotted using dilutions of a pEX-A2 vector (Eurofins) containing one copy of each of the gltA and OmAct2 gene fragments.

| Statistical analyses
All statistical analyses were carried out using R (https://www.r-project.org). We tested for the effect of rifampin treatment on R. lusitaniae through quantitative analyses. To determine whether rifampin modifies the R. lusitaniae density within each infected tick, qPCR results of the control and treated groups were compared using a Wilcoxon-Mann-Whitney test.

| Analysis of rpoB gene sequences
Neither genomic nor complete rpoB gene sequences of R. lusitaniae were available in public databases. Thus, we reexamined the raw reads from a prior metagenomics investigation of the O. moubata Neuchâtel laboratory colony, which specifically targeted the Francisella-like endosymbiont also present in this tick species (Duron et al., 2018). The ticks used for the rifampin test were sourced from the same laboratory cohort, collected during the same period, and reared in identical conditions to those used for the metagenomics sequencing. We further compared the complete rpoB gene sequence of the R. lusitaniae R-Om strain obtained in this study with the complete rpoB gene sequences of Rickettsia spp. either susceptible (n = 16) or naturally resistant (n = 5) to rifampin available in GenBank (Table 1). Alignments of nucleotides and amino acids were performed using ClustalO (Sievers et al., 2011). The Unipro UGENE software (Okonechnikov et al., 2012) was used to visualize mutations throughout the rpoB gene sequences, and the BioEdit software (Hall, 1999) to identify residues with analogous functions in proteins. The resistance clusters I, II, and III were identified along the Rickettsia rpoB gene sequences through alignment with the rpoB sequence of E. coli strain NCM3722 (GenBank CP011495; clusters I, II, and III at positions 507-533, 563-572, and 678, respectively). The final alignments of the rpoB gene sequences were then used to identify mutations specifically associated with natural rifampin resistance in the genus Rickettsia. Consensus sequence logos were created from aligned rpoB gene sequences of rifampin-resistant and susceptible Rickettsia species and strains (Gagniuc, 2021). We also examined all Rickettsia genomes for the presence of rpoB paralogs using tBLASTn (Altschul et al., 1990).

| Alternative rifampin resistance mechanisms in Rickettsia genomes
We further investigated the whole genomes of Rickettsia for potential alternative mechanisms associated with natural rifampin resistance that are not dependent on rpoB. We compared the whole genomic content of naturally rifampin-resistant (n = 3) and -susceptible (n = 15) Rickettsia species (Table A2 in Appendix 1) to identify orthogroups specific to the rifampin-resistant species. The lists of orthogroups specific to naturally rifampin-resistant Rickettsia species were obtained using OrthoFinder (v2.3.11) (Emms & Kelly, 2019) and Roary (Page et al., 2015). To control for false positives, we used tBLASTn to verify that orthogroups putatively associated with natural rifampin resistance were indeed absent in rifampin-susceptible species.
We used tBLASTn to check the whole genomes of naturally rifampin-resistant (n = 3) or susceptible (n = 15) Rickettsia species for the presence of genes known to encode rifampin-inactivating enzymes and rifampin efflux systems described in rifampin-resistant bacteria (Brandis et al., 2012; Comas et al., 2011; Song et al., 2014). In addition, we used the Resistance Gene Identifier (RGI) tool of the Comprehensive Antibiotic Resistance Database (CARD) (Jia et al., 2017). The RGI tool was parametrized to search for genes associated with rifamycin resistance (the class of antibiotics including rifampin) and to generate a list of putative rifamycin resistance genes in Rickettsia genomes.

| Inaccuracy of the rpoB Phe-973→Leu-973 mutation for rifampin resistance
Alignments of rpoB gene sequences showed that the mutation Phe-973→Leu-973, primarily associated with natural rifampin resistance, does not alone explain the pattern observed in Rickettsia species and strains. As expected, the residue Leu-973 is present in naturally rifampin-resistant Rickettsia species and strains and absent in susceptible species and strains of the Spotted Fever group (Figure 3a-c). However, the residue Leu-973 is also present in all rifampin-susceptible Rickettsia species belonging to the other Rickettsia groups (Figures 3b,c). Indeed, analysis of raw reads obtained from the O. moubata metagenome allowed us to reconstruct the complete rpoB gene sequence of the R. lusitaniae R-Om strain (length: 4123 nucleotides; 1373 amino acids). While the R. lusitaniae R-Om strain is susceptible to rifampin, the residue Leu-973 is present in its rpoB gene sequence. Similarly, the residue Leu-973 is consistently present in other rifampin-susceptible Rickettsia species of the Transitional group (R. australis, R. felis) and in rifampin-susceptible Rickettsia species of the Typhus group (R. typhi and R. prowazekii) (Figures 3b,c). As a result, the residue Leu-973 is not specific to naturally rifampin-resistant Rickettsia species.

Further comparisons with the rpoB gene sequences of additional Rickettsia strains and species for which the rifampin resistance profile is unknown showed that the residue Leu-973 is conserved across all Rickettsia groups (Figure 4a-c). Only the rifampin-susceptible species belonging to the Spotted Fever group do not harbor the residue Leu-973; they instead harbor Phe-973. Examination of Rickettsia genomes confirmed that rpoB is a single-copy gene, with no paralog present in the genus. Phylogenomic analyses revealed that the residue Leu-973 is ancestral to the Rickettsia genus. The residue Phe-973 is rather a derived trait, which has evolved from the ancestral residue Leu-973 only in a subclade of the Spotted Fever group (Figure 4a-c). In this context, the nomenclature Leu-973→Phe-973 is thus more appropriate.

Phylogenomic analyses also indicated that all naturally rifampin-resistant species cluster in a monophyletic subclade, termed Massiliae, of the Spotted Fever group (Figure 4c). Remarkably, the Massiliae subclade also includes a number of Rickettsia strains and species for which the rifampin resistance pattern is unknown:

| Distinct sets of mutations in rpoB are associated with naturally rifampin-resistant Rickettsia
The rifampin-resistance clusters I, II, and III exhibited low amino acid polymorphism in the genus Rickettsia, with only one amino acid variant identified in cluster I within the RRDR (Ser-524→Asn-524, Figure 3a). The residue Asn-524 is associated with some naturally rifampin-resistant species, but not all: R.
aeschlimannii strain MC16 harbors the residue Ser-524, which is shared with all the susceptible species, suggesting that residue Asn-524 is not involved in natural rifampin resistance. Outside of the three cluster regions, there is no single specific residue in the rpoB sequences associated with naturally rifampin-resistant species (Figures 3b,c). Notably, we identified a set of 10 residues (Ile-279, Ala/Val-409, Asn-641, Ile-890, Leu-973, Val-1010, Glu-1053, Ile-1180, Ser-1189, and Lys-1203) that are shared by naturally rifampin-resistant species but also by part of the susceptible species (Figure 5a). However, pairing specific residues at positions 409 or 973 with specific residues at positions 279, 641, 890, 1010, 1053, 1180, 1189, or 1203 fits the rifampin resistance pattern (Figures 5a,b). Indeed, the pairing of Leu-973 and Ser-1189 is specific to rifampin-resistant Rickettsia and is never found in other Rickettsia species and strains. A total of 16 potential pairings exist, each matching the resistance pattern (Figures 5a,b). None of these residues is located within the hypothetical rifampin binding site (Figure 5d-g). Although there are a few subtle variations between Rickettsia species, no noticeable differences are observed between the hypothetical rifampin binding sites of resistant and sensitive Rickettsia species (Figure 5d-g). Nevertheless, the RNA polymerase β subunit of naturally rifampin-resistant species tends to contain a higher proportion of residues with larger side chains than those of susceptible species. The resulting steric hindrance may potentially reduce its susceptibility to binding with rifampin.

Furthermore, the number of residue pairs putatively associated with resistance patterns can be reduced by considering exclusively non-analogous residues (i.e., amino acids with different chemical or functional properties) differing between rifampin-resistant and susceptible species. This approach led to the exclusion of residues 279, 890, and 1180 (Figure 5c). Indeed, at each of these three positions, the residues in resistant and susceptible species belong to the aliphatic hydrophobic group and share similar chemical properties. Hence, this limits the number of residue pairings putatively associated with rifampin resistance to 10, all of which are absent in susceptible species (Figure 5c).

| Putative other mechanisms of rifampin resistance
Examination of Rickettsia genomes reveals the presence of 18 additional genes potentially involved in natural rifampin resistance (Table 2, Table A3 and Figure A1 in Appendix 1). These candidate genes include eight genes potentially associated with rifampin metabolism and harboring specific mutation polymorphisms, and 11 other genes present in naturally resistant Rickettsia species but absent in susceptible species.
Together with rpoB, four other candidate genes are involved in the formation of the RNA polymerase complex: rpoA (α subunit), rpoC (β' subunit), rpoD (σ subunit), and rpoZ (ω subunit) (Table 2). Three of them (rpoA, rpoC, and rpoD) harbor at least one residue specific to rifampin-resistant strains (Figure A1 in Appendix 1). Although these RNA polymerase subunits are not the binding site of rifampin, they are physically organized all around the β subunit. Three other candidate genes are involved in antibiotic efflux pump systems, are consistently present in all Rickettsia genomes, and also harbor residues specific to rifampin-resistant strains: YajC (encoding a subunit of the Sec membrane complex), TolC (a porin of the outer membrane), and MsbA1 (a subunit of a multidrug efflux ABC transporter) (Table 2, Figure A1 in Appendix 1).

The 11 other candidate genes were identified through pangenomic analyses and found to be specifically present in naturally rifampin-resistant Rickettsia species and absent in susceptible species (Table 2, Table A3 in Appendix 1). These 11 candidate genes are present either in the main chromosome (n = 5) or in plasmids (n = 6) of naturally rifampin-resistant Rickettsia species. However, none of these 11 genes could be directly associated with antibiotic resistance. One gene is homologous to ParA, which encodes a plasmid stability protein driving the segregation and allocation of plasmids into daughter cells during cell division (Ebersbach & Gerdes, 2005). Another gene is homologous to CopG, which encodes a DNA-binding protein involved in the control of plasmid copy number (Gomis-Ruth, 1998). The nine other candidate genes encode short truncated protein fragments: one encodes a 97-amino-acid fragment of the surface antigen encoded by the ompA gene, three encode pseudogenized transposases, and five encode short hypothetical proteins (59-92 amino acids) of unknown function (Table 2, Table A3 in Appendix 1).

| DISCUSSION
Our analysis of current genetic data expands our understanding of the mechanisms and evolution of rifampin resistance within the Rickettsia genus. Indeed, we characterize R. lusitaniae as a species susceptible to rifampin although its rpoB gene sequence contains the residue Leu-973. We further observe that the residue Leu-973 is conserved in all Rickettsia groups, being present in all rifampin-susceptible species except those of the Spotted Fever group.
Consequently, the rpoB residue Leu-973 alone cannot be used to diagnose natural rifampin resistance for Rickettsia species and strains. Alternative resistance mechanisms thus exist, which could involve either mutations reducing access to the rifampin binding site through alterations in the structure of the RNA polymerase β subunit or related subunits, or genes detoxifying rifampin through efflux pumps. There is no genomic evidence suggesting that Rickettsia can inactivate rifampin by a specific enzymatic activity. Crucially, the observed resistance pattern across Rickettsia groups shows that natural rifampin resistance is restricted to a unique monophyletic subclade, and potentially to a few other related species, within the Spotted Fever group. This pattern reveals that the emergence of natural rifampin resistance was driven by a major phylogenetic constraint, resulting from ancient genomic features shared by a unique set of closely related Rickettsia species, rather than being a consequence of recent selection due to exposure to rifampin. Such a phylogenetic constraint implies that the mutations specific to rifampin-resistant Rickettsia species may be conserved due to shared evolutionary history rather than being directly related to rifampin resistance. Consequently, this phylogenetic constraint obscures the true genetic factors responsible for rifampin resistance in the genus Rickettsia.

The rifampin resistance mechanism of Rickettsia is distinct from the mechanisms observed in most other resistant bacteria. While an accumulation of missense mutations in the RRDR is typically observed in rifampin-resistant bacteria (Forrest & Tamura, 2010; Goldstein, 2014), this pattern is not observed in the RRDR of naturally resistant Rickettsia species. Residue polymorphism exists in other rpoB regions, and some residues, when combined two-by-two, perfectly match the natural rifampin resistance phenotype. These mutations induce no substantial structural changes in the RNA polymerase β subunit, but they most often code for amino acids with larger side chains than those found in susceptible Rickettsia species. This can result in crowding at the binding site, thereby inhibiting rifampin molecules from binding to the RNA polymerase. Similarly, the α, β', and σ subunits, all assembled in close proximity to the β subunit in the RNA polymerase, harbor missense mutations specific to naturally rifampin-resistant species. These peripheral structural changes can also prevent or limit rifampin access to its binding site on the RNA polymerase β subunit and thus confer resistance to Rickettsia species, as suggested in a few other bacteria (Brandis et al., 2012; Comas et al., 2011; Liu et al., 2018; Song et al., 2014). However, while changes in the β subunit typically result in high-level rifampin resistance, alterations in other RNA polymerase subunits lead to smaller, yet still substantial, reductions in susceptibility to rifampin (Brandis et al., 2012).

Efflux pumps are present in both rifampin-resistant and susceptible Rickettsia species, but three of their key genes, YajC, TolC, and MsbA1, harbor residues specific to naturally resistant species. The YajC and TolC genes are involved in the regulation of the main multidrug efflux machinery, imparting resistance to broad-spectrum antibiotics in bacteria (Du et al., 2015; Gill & Garcia, 2011; Jia et al., 2022; Okusu et al., 1996; Ramos et al., 2014). MsbA1 is a drug transporter gene that forms part of the inner membrane ATP-binding cassette (ABC) transporter, which can extrude antibiotics from the cell and induce resistance (Alexander et al., 2018; Díez-Aguilar et al., 2021; Jia et al., 2022; Reuter et al., 2003; Woebking et al., 2008). In addition, analysis of the pangenome of naturally rifampin-resistant Rickettsia species led to the identification of a specific duplication of the porin gene ompA. In bacteria, ompA plays a crucial role in regulating cellular permeability and can be associated with efflux systems in the inner membrane to facilitate the extrusion of antibiotics (Choi & Lee, 2019; Nie et al., 2020). However, the second copy of the ompA gene in naturally rifampin-resistant Rickettsia species is truncated and may be nonfunctional, precluding any definitive conclusion. Additional analyses are required to validate the role of these alternative mechanisms of natural rifampin resistance in Rickettsia species.
Analysis of the Rickettsia phylogeny reveals that the distribution of natural rifampin resistance is not random across species but is subject to a major phylogenetic constraint. Rifampin-susceptible species are scattered along the phylogeny, belonging to different groups, suggesting that rifampin susceptibility is an ancestral trait in the genus Rickettsia. Furthermore, the extended genome-wide analysis also revealed that some other Rickettsia species of the Spotted Fever group, untested for rifampin resistance, share key genetic features with species known to be naturally rifampin-resistant. These untested species include R. amblyommatis and R. raoultii, which also belong to the Massiliae subclade, suggesting that they may be additional naturally rifampin-resistant species. In addition, two other untested species, R. gravesii and R. kotlanii, share similar genetic features with species known to be naturally rifampin-resistant, including key residues in their RNA polymerase β subunit, suggesting that they too could be resistant. However, neither R. gravesii nor R. kotlanii belongs to the Massiliae subclade, although both belong to the Spotted Fever group. All susceptible species in the Spotted Fever group cluster in a distinct monophyletic subclade nested among the resistant or putatively resistant species, suggesting that they have a unique evolutionary origin. The emergence of rifampin resistance is placed at the root of the Spotted Fever group, as indicated by the phylogenetic partition of resistant (or putatively resistant) versus susceptible strains. Additionally, resistance has possibly reverted to susceptibility during the subsequent diversification of Spotted Fever species. Previous phylogenetic investigations have estimated the origin of the Spotted Fever group at 25 million years ago (Weinert et al., 2009; Weinert, 2015), providing unequivocal evidence that the emergence of natural rifampin resistance predates the use of antibiotics by humans.

To conclude, our study challenges previous assumptions regarding natural rifampin resistance in Rickettsia. Currently, the most reliable predictor of natural rifampin resistance is the phylogenetic pattern: this phenotype is inherently linked with the Massiliae subclade, although it is likely that other closely related species, such as R. gravesii and R. kotlanii, are also resistant. These observations emphasize the importance of ongoing surveillance and research to understand how rifampin interacts with rickettsial targets and how Rickettsia can evolve antibiotic resistance. This is particularly crucial considering that new Rickettsia strains, species, and groups are described each year, frequently without information on their antibiotic resistance status (Binetruy et al., 2020; Buysse & ...).
TABLE 2. List of candidate genes potentially associated with natural rifampin resistance. Genes present in all Rickettsia genomes but harboring residues specific to rifampin-resistant species: rpoB (MCC_01550 a), β subunit of RNA polymerase, 1373 aa; rpoA (MCC_06070 a), α subunit of RNA polymerase, 340 aa; rpoC (MCC_01555 a), β' subunit of RNA polymerase, 1372 aa; rpoD (MCC_00410 a), σ subunit of RNA polymerase, 634 aa; rpoZ (MCC_05500 a), ω subunit of RNA polymerase, 127 aa; YajC (MCC_05555 a), subunit of the Sec membrane complex. a Locus tags in the genome of Rickettsia rhipicephali strain 3-7-female6-CWPP (GenBank CP003342.1). b Putative additional genes present in rifampin-resistant Rickettsia species (without orthologs in susceptible species).

FISH confirmed the presence of R. lusitaniae R-Om in O. moubata and revealed a high concentration of Rickettsia in Malpighian tubules (Figure 1a-f). Real-time qPCR also showed that R. lusitaniae R-Om is present at high concentration in most ticks fed on untreated blood (control ticks) (Figure 2a,b). A bimodal distribution was observed for the controls, with seven out of 20 ticks having an estimated R. lusitaniae R-Om concentration close to 0. This infection pattern was expected since not all O. moubata specimens in this lab colony are infected by R. lusitaniae R-Om (Duron et al., 2018). However, the density of R. lusitaniae R-Om was 58x lower in the rifampin-treated group (average of gltA/OmAct2 ratios ± SE: 0.018 ± 0.020, n = 20) than in the control group (mean ± SE: 1.048 ± 1.386, n = 20) (Wilcoxon-Mann-Whitney test, two-sided, p = 0.002). This result indicates that R. lusitaniae R-Om is susceptible to rifampin.

FIGURE 1. Detection of Rickettsia lusitaniae R-Om using epifluorescent microscopy within Malpighian tubules of the tick O. moubata. (a, d) R. lusitaniae R-Om appears in purple owing to the co-localization between the FISH probe in red (greyscale: b, e) and the DAPI staining in blue (greyscale: c, f). The nuclei are labeled in blue with DAPI only (greyscale: c, f). Rickettsia lusitaniae R-Om tends to cluster near nuclei and is rod-shaped. Scale bars: 10 µm.

FIGURE 2. Effect of rifampin treatment on the density of Rickettsia lusitaniae R-Om in third instar nymphs of the tick Ornithodoros moubata. Boxplots show R. lusitaniae R-Om densities in control and rifampin-treated groups. Infection densities in ticks were quantified through qPCR as the ratio of the Rickettsia R-Om gltA gene per O. moubata OmAct2 gene.
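As a rough illustration of the density estimate and group comparison just described, the following R sketch computes per-tick gltA/OmAct2 ratios from triplicate measurements and applies the two-sided Wilcoxon-Mann-Whitney test; all object and column names are assumptions for illustration, and the numbers are simulated, not the study's data.

```r
# Toy qPCR table: 20 control + 20 rifampin-treated ticks, 3 technical
# replicates each; gltA and OmAct2 are illustrative copy-number estimates.
set.seed(1)
qpcr <- data.frame(
  tick   = rep(1:40, each = 3),
  group  = rep(c("control", "rifampin"), each = 60),
  gltA   = runif(120, 0, 2),
  OmAct2 = runif(120, 0.5, 1.5)
)

# Average the triplicates per tick, then take the gltA/OmAct2 ratio
dens <- aggregate(cbind(gltA, OmAct2) ~ tick + group, data = qpcr, FUN = mean)
dens$ratio <- dens$gltA / dens$OmAct2

# Two-sided Wilcoxon-Mann-Whitney test of the treatment effect
wilcox.test(ratio ~ group, data = dens)
```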
FIGURE 3. Alignment of 1373 amino acids of the rpoB gene for Rickettsia strains and species either susceptible or resistant to rifampin. (a) Highlights of residues variable along the rpoB gene sequence. The height of each peak indicates the proportion of Rickettsia strains and species harboring amino acid residues variable along the sequence. Only one polymorphic residue is located in one of the three clusters I, II, and III. The residues in contact with the hypothetical rifampin binding site are annotated in yellow. (b) Consensus sequence logos of amino acid variants in the rpoB gene sequences for rifampin-resistant (R) and susceptible (S) Rickettsia species and strains. (c) Details of variable residues in the rpoB gene sequences of the Rickettsia strains and species. Positions with conserved residues compared to R. conorii A-167, placed as reference, are depicted in black; positions with substitutions by analogous residues are shown in gray; positions with substitutions by non-analogous residues are represented in white. The red arrow points to the residues Leu-973 and Phe-973.

FIGURE 4. Whole-genome phylogenetic relationships in the genus Rickettsia. (a) Whole-genome phylogenetic relationships of the Rickettsia species and strains (including the two outgroups Candidatus Megaira strain MegaNEIS298 and Orientia tsutsugamushi strain Boryong) constructed from 229 SCOs (56,554 amino acids) (ML, JTT + I + G4 model). (b) Focus on the whole-genome phylogenetic relationships of species of the Rhyzobius and Torix groups, including outgroups (Orientia and Ca. Megaira species), based on 229 SCOs (56,554 amino acids) (ML, JTT + I + G4 model). (c) Focus on the whole-genome phylogenetic relationships of species of the Spotted Fever, Scapularis, Transitional, Typhus, Helvetica, Canadensis, Adalia, and Belli groups, constructed from 402 SCOs (99,416 amino acids) (ML, FLU + I + G4 model). R (in red), naturally rifampin-resistant Rickettsia species and strains; S (in black), rifampin-susceptible Rickettsia species and strains. The 10 residues associated with rifampin resistance are shown to the right of the trees. Positions with conserved residues are depicted in black; positions with substitutions by analogous residues are shown in gray; positions with substitutions by non-analogous residues are represented in white. Clade robustness was assessed by bootstrap analysis using 1000 replicates.

FIGURE 5. Residues in the rpoB gene sequences specific to rifampin-resistant and susceptible Rickettsia species and strains. (a) The 10 residues putatively associated with rifampin resistance. (b) The 16 residue pairings (including substitutions by analogous and non-analogous residues) associated with rifampin resistance. (c) The 10 residue pairings (including only non-analogous residues) associated with rifampin resistance. R (in red), naturally rifampin-resistant Rickettsia species and strains; S (in black), rifampin-susceptible Rickettsia species and strains. (d) 3D structure of the β subunit of R. conorii A-167 (rifampin-sensitive), (e) R. rickettsii R (rifampin-sensitive), (f) R. rhipicephali 3-7-female-6-CWPP (rifampin-resistant), and (g) R. aeschlimannii MC16 (rifampin-resistant).
Candidate residues for rifampin sensitivity/resistance are shown. The hypothetical rifampin binding site is indicated in yellow.

Accession numbers refer to the genomes of R. massiliae strain AZT80 (CP003319), R. rhipicephali strain 3-7-female6-CWPP (CP003342), and R. aeschlimannii (CCER01000003-15). a Provisional names.

FIGURE A1. Missense mutations identified in the gene sequences of rpoA, rpoC, rpoD, rpoZ, YajC, TolC, and MsbA1 specific to rifampin-resistant and susceptible Rickettsia species and strains. R (in red) denotes naturally rifampin-resistant Rickettsia species and strains, while S (in black) represents rifampin-susceptible Rickettsia species and strains. Positions with conserved residues are depicted in black; positions with substitutions by analogous residues are shown in gray; positions with substitutions by non-analogous residues are represented in white; positions with deletions are shown by hyphens.

TABLE 1. List of rifampin-resistant (R) and susceptible (S) Rickettsia species in the literature. b Unassembled reads. c Complete rpoB sequences; n.a., non-available.
Genome-Wide Co-Expression Distributions as a Metric to Prioritize Genes of Functional Importance

Genome-wide gene expression analyses are routinely used to gain a systems-level understanding of complex processes, including network connectivity. Network connectivity tends to be built on a small subset of extremely high co-expression signals that are deemed significant, but this overlooks the vast majority of pairwise signals. Here, we developed a computational pipeline that assigns every gene's genome-wide distribution of pairwise co-expression values to one of eight template distribution shapes varying between unimodal, bimodal, skewed, and symmetrical, representing different proportions of positive and negative correlations. We then used a hypergeometric test to determine whether specific genes (regulators versus non-regulators) and properties (differentially expressed or not) are associated with a particular distribution shape. We applied our methodology to five publicly available RNA sequencing (RNA-seq) datasets from four organisms in different physiological conditions and tissues. Our results suggest that genes can be assigned consistently to pre-defined distribution shapes, regarding the enrichment of differential expression and regulatory genes, in situations involving contrasting phenotypes, time-series, or physiological baseline data. There is indeed a striking additional biological signal present in the genome-wide distribution of co-expression values that would be overlooked by currently adopted approaches. Our method can be applied to extract further information from transcriptomic data and help uncover the molecular mechanisms involved in the regulation of complex biological processes and phenotypes.

Introduction
Uncovering the genetic architecture behind complex phenotypes involves analyzing a large variety of genes that interact with each other to respond to environmental stimuli [1]. Therefore, gene co-expression studies are becoming increasingly popular in the quest of going beyond differential expression (DE) and recovering functional information from relevant tissues [2]. A gene co-expression study requires the computation of the co-expression correlation coefficient between a given gene and all the other genes under scrutiny, potentially numbering in the thousands. However, defining which gene-level features are relevant is not a simple task, and main strategies include focusing

Algorithm
We aimed to cluster genes that share a density distribution of genome-wide correlation coefficients. For that, eight shapes were used as templates, varying between unimodal, bimodal, skewed, and symmetrical, representing different proportions of positive and negative correlations (Figure 1). Shapes were determined by assigning to each 0.25-bin a specific nominal proportion, chosen on an ad hoc basis, such that the eight bins add to 100 while producing the desired distribution shape in terms of symmetry and uni- or bi-modality. Starting with a normalized expression matrix, the Pearson correlation coefficient was computed for each possible gene pair across all the samples. For each gene, the number of correlations within the eight 0.25-bins of the distribution was recorded and summary statistics calculated (i.e., mean, standard deviation (SD), skewness, and kurtosis). Then, based on the proportion of correlations falling in each bin and the summary statistics, the Euclidean distance between each gene's distribution and each of the eight template distribution shapes was computed.
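As a minimal R sketch of this first stage (under the assumption of a genes × samples matrix named `expr`, which is not an object from the original pipeline), the per-gene bin proportions and summary statistics could be computed in base R as follows; skewness and kurtosis are written as plain moment ratios so that no extra package is needed.

```r
# Toy normalized expression matrix: 200 genes x 30 samples
set.seed(7)
expr <- matrix(rnorm(200 * 30), nrow = 200,
               dimnames = list(paste0("gene", 1:200), paste0("s", 1:30)))

cors <- cor(t(expr), method = "pearson")  # gene-by-gene Pearson correlations
diag(cors) <- NA                          # drop self-correlations

breaks <- seq(-1, 1, by = 0.25)           # the eight 0.25-bins

gene_summary <- function(r) {
  r <- r[!is.na(r)]
  props <- setNames(as.vector(table(cut(r, breaks, include.lowest = TRUE))) /
                      length(r), paste0("bin", 1:8))
  m <- mean(r); s <- sd(r)
  c(props,
    mean = m, sd = s,
    skew = mean((r - m)^3) / s^3,         # moment-based skewness
    kurt = mean((r - m)^4) / s^4)         # moment-based kurtosis
}

obs <- t(apply(cors, 1, gene_summary))    # one 12-number profile per gene
```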
We did this by comparing the observed values for a given gene to the expected values for each template shape listed in Supplementary File 1: Table S1. In algebraic terms, the distance of the i-th gene to the j-th template shape (D_{i,j}) was computed as follows:

$$D_{i,j} = \sqrt{\sum_{k=1}^{8}\left(OBP_{i,k} - EBP_{j,k}\right)^{2} + \left(OMN_{i} - EMN_{j}\right)^{2} + \left(OSD_{i} - ESD_{j}\right)^{2} + \left(OSK_{i} - ESK_{j}\right)^{2} + \left(OKU_{i} - EKU_{j}\right)^{2}}$$

where: subscripts i, j, and k indicate gene, template distribution shape, and 0.25-bin within a shape, respectively; OBP_{i,k} is the observed bin proportion of the i-th gene in the k-th bin, i.e., the proportion of all the co-expression correlation coefficients from the i-th gene that fall within the k-th 0.25-bin; EBP_{j,k} is the expected bin proportion of the k-th bin in the j-th template distribution shape; OMN_i and EMN_j are the observed and expected means of all the co-expression coefficients for the i-th gene and the j-th template shape, respectively; OSD_i and ESD_j are the corresponding observed and expected SDs; OSK_i and ESK_j are the observed and expected skewness values; and OKU_i and EKU_j are the observed and expected kurtosis values.

Distances were transformed into similarities (S_{i,j}) of a given gene belonging to each of the template distribution shapes as follows:

$$S_{i,j} = \frac{\max_{j}\{D_{i,j}\} - D_{i,j}}{\max_{j}\{D_{i,j}\} - \min_{j}\{D_{i,j}\}}$$

where min_j{D_{i,j}} and max_j{D_{i,j}} are the minimum and maximum D_{i,j} for a given gene, respectively.
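Continuing in the same spirit, here is a sketch of the distance, similarity, and probability transformations (including the probability normalization described next); `templates` is a made-up stand-in for the expected values of Supplementary Table S1, and `obs` is regenerated here so the block runs on its own.

```r
set.seed(7)
stat_names <- c(paste0("bin", 1:8), "mean", "sd", "skew", "kurt")
obs <- matrix(runif(200 * 12), nrow = 200,
              dimnames = list(paste0("gene", 1:200), stat_names))
templates <- matrix(runif(8 * 12), nrow = 8,
                    dimnames = list(paste0("shape", 1:8), stat_names))

# D[i, j]: Euclidean distance of gene i's profile to template shape j
D <- apply(templates, 1, function(tpl) sqrt(colSums((t(obs) - tpl)^2)))

# Min-max similarity per gene, then probabilities summing to one per gene
S <- t(apply(D, 1, function(d) (max(d) - d) / (max(d) - min(d))))
P <- S / rowSums(S)

# Assign each gene to the shape with the largest probability
shape <- colnames(P)[max.col(P)]
head(shape)
```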
Finally, similarities were transformed into probabilities (P_{i,j}) of a given gene belonging to each template distribution shape, so that the probabilities for a given gene sum to one:

$$P_{i,j} = \frac{S_{i,j}}{\sum_{j=1}^{8} S_{i,j}}$$

A gene was then assigned to the j-th template distribution shape if its P_{i,j} was the largest across all j's.

Assessing Biological Relevance
To gain insight into the biological drivers of gene co-expression distributions, genes were categorized according to their reported biological relevance (e.g., DE, transcription factors). A hypergeometric test was applied to identify enriched or depleted categories in each shape, using the function "phyper" in the R environment [7]. That is, we compared the within-shape proportion of genes in a given category to the proportion of overall genes in that category. To test the association between categories and types of distribution, a chi-square test of independence was applied. Results were considered significant if p-value ≤ 0.05.

To investigate the relationship between the number of connections per gene (degree) and distribution shapes, we used the same datasets as input to a Partial Correlation and Information Theory (PCIT) analysis [8]. The PCIT algorithm determines significant correlations (connections) between two genes after accounting for all the other genes under scrutiny. We then evaluated: (1) whether there was a relationship between distribution shape and the average number of connections (significant correlations) per gene, using a one-way ANOVA; and (2) whether the top and bottom 5% of genes based on degree were enriched in specific shapes.

Finally, we evaluated whether the different distribution shapes capture general biological processes. For that purpose, we used the online platform GOrilla (http://cbl-gorilla.cs.technion.ac.il/) to test the list of genes falling in each shape for each dataset against all genes considered for analysis in that dataset. GOrilla uses the hypergeometric test and false discovery rate (FDR) correction to determine significantly enriched gene ontology terms (Padj < 0.05). For this analysis, we focused on cell components.

Data Resources
We applied our methodology to five publicly available RNA sequencing (RNA-seq) datasets from four organisms in different physiological conditions, from different tissues. These five datasets focus on different biological questions and were chosen so that we could better explore the utility of the new metric. Supplementary File 1: Table S2 summarizes the characteristics of each dataset.

The Cattle Feed Efficiency dataset corresponds to References [9,10]. In brief, the data represent 11,662 genes with average log2(FPKM; fragments per kilobase of gene per million mapped reads) > 1 across all five tissues (adrenal gland, hypothalamus, liver, skeletal muscle, and pituitary) from 18 Nellore bulls from extremes of feed efficiency. We classified genes as DE (382 genes) and regulators (REG, 1072 genes) according to Reference [9].

The Cattle Puberty dataset comprises five tissues (hypothalamus, pituitary, ovary, uterus, and liver) from 6 pre- and 6 post-pubertal Brahman heifers, corresponding to References [11][12][13][14]. A total of 16,978 genes that presented average FPKM ≥ 0.2 in at least one tissue were used for analysis. Genes were classified as DE (2335 genes) based on the four aforementioned works, and as REG (1584 genes) as described before.

The Duck Subcutaneous Preadipocyte Differentiation dataset is available in Additional File 3 of the source article [15].
Data represent preadipocytes cultured in differentiation medium and collected at −48 h, 0 h, 12 h, 24 h, 48 h, and 72 h. We kept for analysis only genes presenting FPKM > 0 in all samples (13,322 genes), which were then log2-transformed. Genes were classified as DE (3321 genes) based on the list provided by the authors (Additional File 4 [15]) and as REG (675 genes) based on the Animal Transcription Factor Database 3.0 [16].

The Drosophila Embryogenesis dataset corresponds to Reference [17], a time-course experiment with 14 time points during Drosophila melanogaster embryogenesis. Data were averaged within each time point and log2-transformed prior to implementation. Genes were classified in the categories defined by Reference [17], based on pairwise comparison of genes up- or down-regulated (relative to the first time point, 0 h) in mRNA and protein data, respectively, namely: up/up (511 genes), down/up (1770 genes), down/down (1048 genes), and up/down (326 genes). Genes were also classified as regulators (791 genes) based on the list provided by [18], consisting of essential genes involved in replication and transcription, splicing, DNA repair, and cell division.

The Human dataset was downloaded from The Genotype-Tissue Expression Project V8 (https://www.gtexportal.org/), which contains data from non-diseased individuals [19]. We used liver RNA-seq data from 15 individuals, provided as TPM counts. We kept for analysis only genes presenting non-zero counts in all samples, which were then log2-transformed. Genes were classified as REG (1153 genes) and tissue enriched (TE, 231 genes) according to information provided by The Human Protein Atlas [20]. Moreover, genes were defined as DE (793 genes) if they were identified by [21] as having a high probability (>0.95) of being DE in any experiment, based on a meta-analysis of over 600 DE studies.

Overall Co-Expression Distribution
Although our methodology aims to evaluate co-expression distributions at the individual gene level, we did calculate all correlations across all genes for each of the five RNA-seq datasets we evaluated: Cattle Feed Efficiency, Cattle Puberty, Drosophila Embryogenesis, Duck Preadipocyte, and Human. The overall frequency distributions in each dataset give us an overview of co-expression patterns (Supplementary File 1: Figure S1). The higher number of positive correlations, even though discrete in some datasets, was expected and has been documented in previous research [5,22]. The number of positive correlations is especially elevated in the Cattle Feed Efficiency dataset. This can be due to the high inflammatory response found in the liver of those animals, which remains strong even when analyzing all five tissues together and results in a set of highly positively co-expressed genes [9]. On the other hand, among the five tissues analyzed in the Cattle Puberty dataset, only ovary and uterus showed great differences between pre- and post-puberty, and the effect of the coordinated mechanisms regulating those differences is not so strong in the overall frequency distribution. The Drosophila Embryogenesis and Duck Preadipocyte datasets, both representing developmental processes through time-series data, present similar shapes. They show strong bimodal positive and negative correlations because of the tightly coordinated processes the datasets represent. The Human dataset is the one with the frequency distribution closest to a bell shape, but still, the higher presence of positive correlations can be observed.
It is important to mention that, while the choice of datasets was somewhat arbitrary, we selected datasets that we were familiar with and, therefore, were confident about data generation and bioinformatics analysis. Both cattle datasets and the Drosophila dataset are associated with previous publications of the authors. The Human dataset was chosen based on the credibility of the Genotype-Tissue Expression Project and the possibility of including a physiological baseline dataset, not associated with any particular disease or phenotype, something not often found in animal studies. The duck dataset was chosen for being particularly well-designed and unique, giving us the possibility to draw comparisons among time-series datasets. We used the expression values reported in the original studies because we aimed to develop a method that could be incorporated into any pipeline and still produce consistent results.

Co-Expression Distribution in Datasets With Contrasting Phenotypes
The two cattle datasets represented contrasting phenotypes, i.e., high versus low feed efficiency, and pre- versus post-puberty. Considering co-expression distributions at the gene level, the proportion of genes falling in each distribution shape can be found in Figure 2. When comparing the proportion of each category of genes (DE, DE-REG, or REG) with that of all genes within individual distributions for the Cattle Feed Efficiency dataset, we identified an over-representation of REG in negatively skewed distributions (i.e., with an overabundance of positive co-expression; Shapes 4 and 8) and an under-representation of those genes in either null distributions (Shapes 1 and 2) or in a positively skewed distribution (Shape 3, Figure 2A). In contrast, DE genes were under-represented in Shape 4 and over-represented in null distributions (Shapes 1 and 2) and in bimodal skewed distributions (Shapes 7 and 8). The exact number of genes falling in each distribution shape for both cattle datasets and the significance of the enrichment analysis can be found in Supplementary File 1: Table S3.

Similarly, the results using the Cattle Puberty dataset showed an over-representation of REG in a bimodal negatively skewed distribution (Shape 8) and an under-representation of those genes in null distributions (Shapes 1 and 2), as well as in a positively skewed distribution (Shape 7, Figure 2B). DE genes also behaved similarly to the previous analysis, being over-represented in null distributions (Shapes 1 and 2) and in a bimodal positively skewed distribution (Shape 7).

Considering that several genes are expected to present different behavior according to the contrasting condition tested, we applied our pipeline again using the two cattle datasets split by phenotype. We then identified genes that were assigned to different shapes by comparing high to low feed efficiency and pre- to post-puberty. From the 11,662 genes tested using the Cattle Feed Efficiency dataset, 2032 genes were assigned to different shapes depending on the condition, among which 133 are DE and 158 are REG. Likewise, from the 16,978 genes tested using the Cattle Puberty dataset, 2740 were assigned to different shapes depending on the phenotype, among which 620 were DE and 216 were REG. The shift in the proportion of each class of gene depending on the phenotypic condition can be seen in Figure 3.
The enrichment analyses for DE and REG among the genes changing behavior according to the phenotype were again concordant, with REG being under-represented and DE being over-represented (p-value < 0.01) in both datasets. Most REG genes, such as transcription factors and co-factors, are expected to be central genes in the network, presenting co-expression with many genes as a consequence of their regulatory role in highly coordinated biological processes. This can be one reason why they are not particularly enriched among the genes changing co-expression distribution between conditions. Nevertheless, the few REG that do change co-expression distribution between conditions are definitely worth further exploration as potential key regulators. Conversely, genes identified as DE, being either central to a specific function or the final product of an altered pathway, are more prone to be condition-specific, not only regarding their expression level but also regarding their relationships with other genes. By identifying DE genes of central or peripheral function in the network based on their enrichment in null or non-null distributions, and exploring changes in their behavior, one can gain insight into the molecular dynamics behind phenotype regulation.
In general, genes changing behavior between conditions might be DE genes that fail the significance threshold in the DE analysis. They can also be genes that, although not DE between conditions, play different roles depending on the overall gene expression pattern, a feature already widely explored using differential connectivity measures [23]. The advantage of comparing distribution shapes is that it considers the direction (positive or negative) of the correlations and the proportion of correlations falling in each bin of the distribution, corresponding to the correlations' strength. One can explore all genes changing behavior or focus on specific distribution changes, for instance, from unimodal to bimodal or from symmetric to skewed. In our datasets, 38% of genes changing shapes in Feed Efficiency and 77% in Puberty represent a change from null to non-null distributions or vice-versa (which, in this case, also represents changes from symmetrical to skewed). Respectively, 37% and 76% of genes changing shapes are moving from unimodal to bimodal distributions, or vice-versa. Although the percentages are similar, different sets of genes are selected depending on the criteria; a sketch of this comparison step follows below. The exact numbers of genes falling in each distribution shape for the split datasets can be found in Supplementary File 1: Tables S4 and S5. Nevertheless, it is possible to observe a shift in the proportion of genes falling in each shape, particularly from bimodal to unimodal shapes, implying a loss/gain of genes presenting both positive and negative correlations at the same time.
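Here is that sketch of the condition-contrast step, under the assumption of two vectors of per-gene shape labels (`shape_a` and `shape_b`, hypothetical names) obtained by running the pipeline separately on each phenotype; the assignments are simulated for illustration.

```r
# Toy shape assignments for the same genes under two conditions
set.seed(3)
genes   <- paste0("gene", 1:200)
shape_a <- setNames(sample(1:8, 200, replace = TRUE), genes)
shape_b <- setNames(sample(1:8, 200, replace = TRUE), genes)

changed <- genes[shape_a != shape_b]      # genes assigned to different shapes

# Classify each change against the dichotomies used in the text
is_null    <- function(s) s %in% 1:2      # null distributions (Shapes 1-2)
is_bimodal <- function(s) s %in% 5:8      # bimodal distributions (Shapes 5-8)

null_switch     <- is_null(shape_a[changed])    != is_null(shape_b[changed])
modality_switch <- is_bimodal(shape_a[changed]) != is_bimodal(shape_b[changed])

mean(null_switch)       # fraction of changers moving null <-> non-null
mean(modality_switch)   # fraction moving unimodal <-> bimodal
```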
Co-Expression Distribution in Time-Series Datasets
Both time-series datasets, Duck Fat differentiation and Drosophila Embryogenesis, follow the same pattern identified in the cattle datasets (Figure 4), with REG being under-represented in a null distribution (Shape 1) and over-represented in a bimodal skewed distribution (Shape 7 or 8; Supplementary File 1: Table S6). Although, in the Drosophila dataset, the DE genes were subdivided into different classes according to their concordance between mRNA and protein expression data, it is possible to observe in both datasets the enrichment of DE genes being split between null and non-null distributions. Quite remarkably, considering the non-null distributions in the Drosophila dataset, genes consistently down-regulated (down/down) are enriched in the negatively skewed bimodal distribution (Shape 8), while genes consistently up-regulated (up/up) are enriched in the opposite shape, the positively skewed bimodal distribution (Shape 7, Figure 4A). Moreover, those two classes are under-represented in the symmetrical bimodal distribution (Shape 6), where the up/down and down/up classes are both enriched. Another curious observation is that, while both cattle datasets representing contrasting phenotypes present no genes in Shape 6, both time-series datasets not only present genes falling in this shape but also show an enrichment of DE genes there. It is important to notice at this point the impact of species, tissue, and even filtering criteria on the proportion of genes assigned to each shape in each dataset. Although some patterns can be identified, each dataset has its idiosyncrasies, and different aspects may be worth investigating.

Co-Expression Distribution in a Physiological Baseline Dataset
In contrast to the other datasets, the Human dataset consists of several data points all representing a single "non-disease" baseline state, and this fact is reflected in the results (Figure 5). The REG, found in the other datasets to be over-represented in skewed distributions, are over-represented in a null distribution (Shape 2) and under-represented in skewed bimodal distributions (Shapes 7 and 8; Supplementary File 1: Table S7). In the previous datasets, DE genes already showed enrichment in null distributions, but they also showed enrichment in other, non-null shapes.
Without the effect of contrasting conditions, genes with a high probability of being DE were enriched in null distributions (Shapes 1 and 2) and in Shape 3, although very few genes fall in this last shape (43 genes, 0.2% of the total). Tissue-enriched genes (TE) were the only category enriched in a non-null distribution (Shape 7). That is probably because genes particular to a tissue's specific activities tend to be tightly correlated, as they need to be expressed in a coordinated pattern to maintain the tissue's functions across physiological conditions. This behavior can be clearly observed in studies that constructed co-expression networks using multiple-tissue transcriptomic data, where tissue-specific genes push genes to cluster by tissue [9,24].

Relationship between Gene Categories and Distribution Shapes
Considering the similarities observed between the first four datasets and the fact that, without contrasting phenotypes, genes with regulatory potential appear enriched in null distributions (i.e., Shapes 1 and 2), we evaluated the dependence between DE or REG genes and null (Shapes 1 and 2) versus non-null (Shapes 3 to 8), unimodal (Shapes 1 to 4) versus bimodal (Shapes 5 to 8), and symmetric (Shapes 1, 2, 5, and 6) versus skewed (Shapes 3, 4, 7, and 8) distributions. Again, we found a consistent pattern among most of the datasets, with DE genes being found more frequently than expected (p-value < 0.05) in null, unimodal, and symmetrical distributions, and REG being found more frequently than expected (p-value < 0.05) in non-null, bimodal, and skewed distributions (Supplementary File 1: Figure S2).
In contrast, for the Human dataset, both DE and REG were found more frequently than expected (p-value < 0.05) in null, unimodal, and symmetrical distributions.

Relationship between Gene Degree and Distribution Shapes
Because genes that are expected to be highly connected to others (e.g., REG and TE) appeared enriched in non-null distributions, we explored the relationship between the number of significant correlations per gene and the distribution shapes. For all datasets, there was a significant difference in the average number of connections per gene in each distribution shape (p-value < 2 × 10⁻¹⁶; Supplementary File 1: Table S8). As anticipated, the bottom 5% of genes regarding degree (the least connected genes) were enriched in both null distributions (Shapes 1 and 2) for all datasets (Supplementary File 1: Table S9). The few enrichments outside those two shapes were due to the low number of genes assigned to the shape in question. Conversely, bottom genes were depleted in non-null distributions, particularly in bimodal shapes. The top 5% of genes (those with the highest number of significant correlations) were enriched in different shapes depending on the dataset, but exclusively in non-null distributions. Except for the Human dataset, those enrichments were found in shapes that were also enriched for functionally important categories of genes (DE or REG).
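The over-/under-representation tests reported throughout this section can be reproduced with a standard 2x2 contingency test. The sketch below uses Fisher's exact test from SciPy; the function name and the toy gene lists are our illustrative assumptions, since the paper does not specify its test implementation.

```python
from scipy.stats import fisher_exact

def shape_enrichment(genes_in_shape, category_genes, all_genes):
    """2x2 Fisher's exact test for over-/under-representation of a gene
    category (e.g., DE or REG) among the genes assigned to one shape."""
    in_shape = set(genes_in_shape)
    cat = set(category_genes)
    a = len(in_shape & cat)                    # category genes in the shape
    b = len(in_shape - cat)                    # other genes in the shape
    c = len(cat - in_shape)                    # category genes elsewhere
    d = len(set(all_genes) - in_shape - cat)   # other genes elsewhere
    odds, p = fisher_exact([[a, b], [c, d]], alternative="two-sided")
    direction = "over-represented" if odds > 1 else "under-represented"
    return odds, p, direction

# Hypothetical usage with toy gene identifiers:
all_genes = [f"g{i}" for i in range(1000)]
shape7 = all_genes[:120]                        # genes assigned to one shape
reg = all_genes[:40] + all_genes[500:520]       # regulatory genes
print(shape_enrichment(shape7, reg, all_genes))
```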
This overlap between highly connected genes and functionally important categories, particularly involving transcription factors, has been reported before [5]. For all datasets, hub genes were depleted in null distributions. These results reinforce that genes related to tightly coordinated processes are more often found in non-null distributions.

Functional Enrichment within Distribution Shapes
The functional enrichment of genes falling in each shape, although not consistent among the different datasets, demonstrates strong biological signals (Table 1), with some of the enrichments as significant as an FDR of 1.51 × 10⁻¹⁵³ (nucleoplasm). This result demonstrates that genes grouped according to the distribution of their co-expression correlations can capture specific gene functions. Here, across all datasets, we only reported enrichment results at the Cellular Component level. That is because the focus of this work is not to discuss the particular biological mechanisms behind each dataset, but rather to offer a proof of principle that coherent biological signals are contained in the distribution data, something clearly illustrated by the Cellular Component enrichment statistics. Nevertheless, we also found shape-specific enrichment for Biological Process and Molecular Function terms in all our test datasets. As an example, for the Cattle Feed Efficiency dataset, we found significant enrichment for genes falling in Shapes 1 to 4 (Supplementary Files 2 to 5), with each shape capturing specific Biological Processes. The animals in the feed efficiency dataset have already been extensively characterized [9,10,25-28], with low feed efficiency being associated with inflammatory/immune response and altered lipid metabolism in the liver. Interestingly, Shape 4, the one presenting the highest number of genes and reflecting a higher number of positive correlations, is not enriched specifically for inflammatory/immune response, but for transcription-related terms, such as mRNA processing (FDR = 7.13 × 10⁻¹⁹) and mRNA splicing (FDR = 4.54 × 10⁻¹⁵). Acute inflammatory response (FDR = 2.0 × 10⁻¹³), regulation of humoral immune response (FDR = 1.35 × 10⁻⁸), and other related terms were identified in Shape 3, together with terms such as lipid catabolic process (FDR = 3.97 × 10⁻¹⁸), lipid transport (FDR = 6.88 × 10⁻⁸), and lipid homeostasis (FDR = 5.34 × 10⁻⁷). The fact that both biological processes (immune response and lipid metabolism) were enriched in the same distribution shape, particularly one with a higher number of negative correlations, indicates a possible negative regulation supported by the literature in humans [29-31] and is worth further investigation. These high significance levels (up to FDR = 2.91 × 10⁻³⁰, Supplementary File 4) were only possible because we considered all expressed genes, as opposed to a limited list of differentially expressed genes. This more holistic approach reflects the paradigm shift in biological research introduced by high-throughput technologies, in which one understands that the whole is greater than the sum of its parts, and information processing and knowledge-ordering strategies focus on assessing molecular phenotypes more comprehensively first and then determining which aspects are important to focus on [32].

Conclusions
We started with the premise that, in a co-expression network, different genes present different co-expression distributions and can be grouped according to those distributions.
Our underlying null hypothesis is that a random gene, presenting no key role in the biological questions being examined, will present a null distribution, with most of its co-expression correlations around zero and only a few extremes, likely false positives, at either boundary, ±1. On the other hand, a gene of relevance will reveal a distribution skewed towards the extremes, reflecting the (higher than average number of) genes with which it significantly interacts. There is no benchmark dataset for this type of analysis, which makes it difficult to compare our proposed approach to existing gene clustering methods. Indeed, considering the five vastly different datasets we analyzed, genes were assigned consistently to the pre-defined distribution shapes, regarding the enrichment of DE and regulatory genes, in situations involving contrasting phenotypes, time-series, or physiological baseline data. Admittedly, there is some subjectivity in the creation of the proposed 8 template shapes. However, we believe 8 is the minimum number of shapes required to capture in a balanced way the symmetry versus skewness contrast in one dimension, as well as the uni- versus multi-modality contrast in another dimension. Similarly, the use of more or fewer bins within distributions (e.g., having 10 0.10-width bins instead of 8 0.25-width bins) is worthy of further research. Indeed, across the five datasets analyzed, no gene was allocated to Shape 5. Similarly, caution should be taken when drawing general rules, because the five datasets selected differ vastly not only in biological aspects, such as organism, tissue, and phenotype, but also in numerical intricacies, including data filtering criteria, transformation, and normalization methods. Conversely, this diversity could be a test of the robustness of the proposed approach. Despite these potential limitations, the results clearly highlight that the distribution shape of correlation coefficients can be used as a novel metric to prioritize genes of functional importance and to further explore topological characteristics of gene networks. By considering that highly connected genes will be assigned to particular distribution shapes according to the experimental design underlying the gene co-expression networks, regulatory genes can even be identified in datasets that do not represent physiological contrasts or time-series.

Supplementary Materials: Supplementary File 1 contains tables (Tables S1-S9) and figures (Figures S1 and S2). Supplementary Files 2 to 5 (SupplementaryFile2_FE_shape1.html, SupplementaryFile3_FE_shape2.html, SupplementaryFile4_FE_shape3.html, SupplementaryFile5_FE_shape4.html) contain the functional enrichment for genes in the cattle feed efficiency dataset falling in Shapes 1 to 4. Supplementary File 6 (SupplementaryFile6.zip) contains the FORTRAN95 code, README.txt, and example data.
8,570.8
2020-01-17T00:00:00.000
[ "Biology", "Computer Science" ]
Exciton-assisted optomechanics with suspended carbon nanotubes
We propose a framework for inducing strong optomechanical effects in a suspended carbon nanotube based on deformation-potential exciton-phonon coupling. The excitons are confined using an inhomogeneous axial electric field which generates optically active quantum dots with a level spacing in the milli-electronvolt range and a characteristic size in the 10-nanometer range. A transverse field induces a tunable parametric coupling between the quantum dot and the flexural modes of the nanotube mediated by electron-phonon interactions. We derive the corresponding excitonic deformation potentials and show that this interaction enables efficient optical ground-state cooling of the fundamental mode and could allow us to realise the strong and ultra-strong coupling regimes of the Jaynes-Cummings and Rabi models. We analyze a framework for optical manipulation of the motional state of a suspended carbon nanotube based on deformation-potential exciton-phonon coupling. The excitons are confined using an inhomogeneous axial electric field which generates optically active quantum dots with a level spacing in the milli-electronvolt range. A transverse field induces a tunable parametric coupling between the quantum dot and the flexural modes of the nanotube. We show that this interaction enables efficient optical ground-state cooling of the fundamental mode and will allow access to quantum signatures in its motion. Optical transducers underpin a host of high-precision measurement techniques, and recent developments in optomechanics suggest that they may enable quantum-limited control of a macroscopic mechanical degree of freedom [1]. Given the versatility of mechanical non-linearities, this would provide an alternative to atomic systems for fundamental tests of quantum mechanics and the development of quantum technologies [2]. Paradigmatic goals in this direction are the preparation of a mechanical resonator in its quantum ground state [3,4,5,6] and the demonstration of quantum signatures in its dynamics [7,8]. These endeavors are seriously hampered by the mechanical quality of typical materials. In this respect, suspended single-walled carbon nanotubes (CNTs) [9] are emerging as a unique candidate. Indeed, recent transport experiments in these systems have demonstrated strong coupling of charge to vibrational resonances [10,11] and ultra-high mechanical frequency-quality factor (fQ) products [12]. These developments raise the question of what the prospects are for optical manipulation of motional degrees of freedom in CNTs. The standard paradigm in optomechanics is based on an optical cavity whose frequency is modulated, via radiation-pressure effects, by the motion of one of its mirrors or of a dielectric object inside it [1]. However, this approach becomes inefficient for resonators with deep-subwavelength dimensions and low polarizabilities, like CNTs. Here we propose a solution to this conundrum based on an alternative way of inducing coherent optomechanical transduction, which exploits the unique properties of excitons in semiconducting CNTs [13,14,15,16]. The role of the optical cavity is played by an excitonic resonance of the CNT that couples parametrically to the motion via deformation-potential electron-phonon interactions [17]. Homodyne detection of the output field of the two-level emitter afforded by the excitonic resonance then allows one to perform a continuous measurement of the mechanical amplitude.
This procedure, which could be implemented using the differential transmission technique [30], is analogous to cavity-assisted schemes [1] and equivalent to ion-trap measurements. We envisage a suspended CNT where the center of mass (CM) of the exciton is localized via the spatial modulation of the Stark shift induced by a static inhomogeneous electric field. We analyze a tip-electrode configuration that effectively engineers a pair of tunable optically active nanotube quantum dots (NTQDs) with excitonic level spacing in the meV range, corresponding to a confinement length below 10 nm. The quantum confinement is induced by the inhomogeneity in the field component along the CNT axis, E_∥. In turn, the normal component E_⊥ can be used to induce a tunable parametric coupling between the exciton and the flexural motion of the CNT. This allows for optical ground-state cooling of the fundamental mode at ambient temperatures in the Kelvin range. A major advantage of this mechanical resonator-NTQD system with respect to prior scenarios [4] is the possibility of realizing a mechanical analogue of the strong-coupling regime of cavity QED [18], with an "optomechanical coupling" in the 100 MHz range. Furthermore, this coherent coupling can be switched on and off on demand, which offers rich possibilities for deploying quantum-optical schemes to demonstrate quantum signatures in the motion [19]. The electronic structure of a semiconducting CNT can be understood in terms of graphene rolled into a cylinder [20]. In the absence of a magnetic field there is a single bright level: the singlet bonding direct exciton |KK*⟩ + |K′K′*⟩ (the conjugated wavefunctions correspond to the hole) [21]. Threading a small Aharonov-Bohm flux φ_AB renders the antibonding state |KK*⟩ − |K′K′*⟩ weakly allowed, so that its spontaneous emission rate Γ can be tuned. Thus we focus on the E_11 direct excitons |KK*⟩ ± |K′K′*⟩, whose zero-field splitting lies in the meV range [15], and consider their deformation potential (DP) coupling to the low-frequency phonons corresponding to the compressional (stretching) and flexural (bending) branches for φ_AB = 0 [33]. To obtain a tractable model for the excitonic wavefunction, suitable for analyzing exciton CM confinement and the exciton-soft-phonon coupling, we adopt: (i) the k·p graphene zone-folded scheme following Ref. 21 but neglecting intersubband transitions [22], and (ii) an envelope function approximation within each subband. For the latter we adopt the parameterization developed in Ref. 22 but take the Bloch function at K, K′ as determined by (i) and the assumption of electron-hole symmetry. This leads to the singlet direct exciton wavefunctions, where the envelope functions |F_nm⟩ satisfy F_nm(z_e, z_h) = F_nm(z_h, z_e); note that the inter-valley mixing preserves the total momentum and angular momentum. The indices n, m correspond to the quantization of the single-particle azimuthal momenta (n = 0, ±1, ±2, ...), so that n = m = 0 for E_11 excitons. The associated subband electronic 1D Bloch functions are given by |K_{n,±}⟩. The symmetric tip-electrode configuration sketched in Fig. 1, with voltages V_1 and V_2, allows independent tuning of E_⊥ and E_∥, as the reflection symmetries imply that they are determined, respectively, by (V_1 − V_2)/2 and (V_1 + V_2)/2.
For the parameters that allow confinement of the exciton's center of mass (CM), E_⊥(z) and E_∥(z) can be regarded as constant across the CNT's cross section, and the length scale over which they vary appreciably is much larger than the excitonic Bohr radius. It follows that for sufficiently weak magnitudes (see below): (i) the effect of E_∥ is dominated by intrasubband virtual transitions, whose effect on the CM motion can be treated adiabatically, while (ii) E_⊥ leads to a weak perturbation ∝ e^{±iφ} that only induces intersubband virtual transitions. More precisely, within each pair of subbands n, m, we consider E_∥(z) much weaker than the critical field needed to ionize the exciton, so that ⟨ψ_nm±|Ĥ_int|ψ_nm±⟩ is much smaller than the binding energy. The latter allows one to derive an effective Hamiltonian for the ground-state manifold of the quasi-1D hydrogenic series associated to n, m by adiabatic elimination of the corresponding excited manifolds. This effective Hamiltonian for the CM motion has a potential part whose leading contribution is second order in Ĥ_int and yields the effective confining potential V_conf(z_CM). In a classical picture, the field polarizes the exciton, which thus experiences a force proportional to the gradient of the squared field. The characteristic level spacing of the hydrogenic series constrains the excitonic polarizability α; here a_B is the exciton Bohr radius and ε_{n−m}(q) an appropriate dielectric function. In particular, ε_0(0) ≈ 7 corresponds to the intrinsic permittivity along the CNT [24]. As shown in Fig. 1 (which plots the effective potential in arbitrary units), we have calculated the field generated by this electrode configuration for typical parameters using FEM. For a (9,4) tube [22], this results in a zero-point motion σ_CM(z_CM), which can be taken to be Gaussian for the ground state (henceforth |ψ_00±⟩) [22]. We now consider the effect of E_⊥ on |ψ_00±⟩ to lowest order in perturbation theory. In principle, the linear correction |ψ^(1)_00±⟩ involves contributions from all four excitonic manifolds for which |n| = 1, m = 0 or n = 0, |m| = 1, namely E_12, E_21, E_13, E_31. We find that the contributions from E_12 and E_21 vanish identically and obtain |ψ^(1)_00±⟩ in terms of the E_13 and E_31 manifolds, where l labels a complete set of envelope functions for these manifolds [36] and ε_⊥ ≈ 1.6 denotes the intrinsic relative permittivity normal to the CNT axis [24]. The Hamiltonian describing the interaction between electrons and low-frequency phonons has two distinct terms: (i) a true DP contribution, diagonal in sublattice space, corresponding to an energy shift of the Dirac point, and (ii) a bond-length-change contribution, off-diagonal in sublattice space [17]. For the aforementioned exciton states we find that the electron-hole and K-K′ symmetries imply that finite couplings only arise from (ii). Here τ̂_i and σ̂_i are, respectively, Pauli matrices in valley (K-K′) space and in sublattice (A-B) space, and g_2 is the off-diagonal DP. Given that the relevant phonon wavelengths λ are much larger than the CNT radius R, we can adopt a continuum shell model [17], with û_ij(r) the corresponding Lagrangian strain, and use the lowest orders in R/λ [thin-rod elasticity (TRE)]. Both compressional and flexural deformations have the structure of a local stretching, so that the strain components satisfy u_φz = 0, with u_zz ∝ ∂²φ_f/∂z² for flexural modes and u_zz = ∂φ_c/∂z for compressional modes, where φ_{f/c} are the corresponding 1D fields [25] and σ ≈ 0.2 is the CNT Poisson ratio [26].
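For reference, the thin-rod-elasticity strains just described can be collected in one display. The transverse offset x from the neutral axis in the flexural case is our assumption of the standard Euler-Bernoulli form; the text above only fixes the proportionality.

```latex
% Thin-rod-elasticity strains (standard Euler--Bernoulli forms; the factor
% -x, the distance from the neutral axis, is assumed, not stated in the text):
\begin{equation}
  u_{\varphi z} = 0, \qquad
  u_{zz}^{(\mathrm{flex})} = -\,x\,\frac{\partial^2 \phi_f}{\partial z^2}, \qquad
  u_{zz}^{(\mathrm{comp})} = \frac{\partial \phi_c}{\partial z}.
\end{equation}
```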
Then Eq. (1), the aforementioned approximation for |ψ^(1)_00±⟩, and the single-particle Hamiltonian (2) allow us to obtain the lowest-order contributions in the electric field to the interaction Hamiltonian H_QD-ph between the exciton states |ψ_00±⟩ and the low-frequency phonons, where we have exploited the completeness of {|F_{-ν0,l}⟩}. Henceforth, we consider parameters for which the flexural and compressional branches are expected to present resonances with a free spectral range larger than the optical linewidth of the zero-phonon line (ZPL) of the transition associated with the state |ψ_00−⟩. We focus on laser excitation of the latter, near resonant with the ensuing lowest-frequency flexural-phonon red sideband. We consider a bridge geometry (cf. Fig. 1) with a length L short enough that the relative strength of these phonon sidebands is weak [cf. Eq. (5)]. Hereafter, we use the formalism developed in Ref. 25 and adopt a resonator-bath representation, with the resonator mode (annihilation operator b_0, angular frequency ω_0, and quality factor Q) corresponding to the fundamental in-plane flexural resonance that we intend to manipulate and laser cool to the ground state. In turn, the bath modes include the 3D substrate that supports the CNT, coupled to the other nanotube vibrational resonances. Hence we insert into the effective field operators φ̂_{f/c} in Eq. (3) the resonator-bath mode decomposition. Here φ_0(z) is the normalized resonator 1D eigenmode and µ is the linear mass density of the CNT, while u_{x,q}(z) [u_{z,q}(z)] is the x [z] component of the CM displacement of the CNT cross section at z for the bath mode corresponding to the scattering eigenmode q, and ρ_s is the substrate's density. Thus, in a polaronic (shifted) representation [4], the Hamiltonian for the laser-driven NTQD coupled to the resonator mode, to the phonon bath (annihilation operators b_q), and to the radiation field (annihilation operators a_k and couplings g_k) reads H = H_sys + H_int + H_B with (ħ = 1). Here B̂ ≡ e^{η(b_0 − b_0†)}, δ is the laser detuning from the ZPL, and Ω the Rabi frequency; we have introduced Pauli-matrix notation for the optical pseudospin (σ_z = 1 corresponds to |ψ_00−⟩ and σ_z = −1 to the empty NTQD), applied a shift to the phonon modes q, and adopted a rotating frame at the laser frequency ω_L. The parameter η, which characterizes the strength of the exciton-resonator coupling (e^{−η²/2} is the Franck-Condon factor), is given in terms of the effective field Ē_⊥ ≡ (√L/q_0²) ⟨F_00| (∂²φ_0/∂z²)(ẑ_e) E_⊥(ẑ_e) |F_00⟩, where q_0 is the TRE phonon wavevector for the resonator mode, h = 0.66 Å is the effective thickness for the continuum shell model [26], E = 1 TPa is the CNT Young modulus, and σ_G = 7.7 × 10⁻⁷ kg m⁻² is the mass density of graphene. We focus on parameters such that η < 0.2. Its favorable scaling as √L is a direct consequence of the quadratic flexural dispersion. Note that the perturbative treatment of E_⊥ underpinning Eq. (5) requires ξĒ_⊥ ≪ 1. Finally, the couplings ζ_q and λ_q to the bath modes lead, respectively, to the resonator mode's phonon-tunneling dissipation and to pure dephasing of the NTQD. The RWA for the ζ_q is justified given their weakness and η ≪ 1. These conditions and the anharmonicity of the flexural spectrum also imply that the flexural λ_q can be neglected, and the pure dephasing is dominated by the compressional branch [25].
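The displayed form of H_sys did not survive extraction. For orientation, a hedged sketch consistent with the operators defined above (a driven two-level emitter with an exponential displacement coupling, in the polaron frame of Ref. 4) would read, without claiming to be the paper's verbatim equation:

```latex
% Polaron-frame system Hamiltonian for the driven NTQD coupled to the
% resonator mode (\hbar = 1); \hat B = e^{\eta(b_0 - b_0^\dagger)} displaces
% the resonator conditioned on the excitonic state.
\begin{equation}
  H_{\mathrm{sys}} = -\delta\,\sigma_+\sigma_- + \omega_0\, b_0^\dagger b_0
  + \frac{\Omega}{2}\left(\sigma_+ \hat B + \hat B^\dagger \sigma_-\right).
\end{equation}
```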
We find that for all environmental couplings the Born-Markov approximation is valid, and after eliminating the bath phonon modes and the radiation field we obtain a master equation for the NTQD coupled to the resonator, with a Hamiltonian contribution given by H_sys and a dissipative contribution of Lindblad form with collapse operators √Γ σ₋, √(γ_D/2) σ_z, √(ω_0 n̄(ω_0)/Q) b_0†, and √(ω_0 [n̄(ω_0) + 1]/Q) b_0. Here n̄(ω_0) is the thermal equilibrium occupancy at the ambient temperature and γ_D is the phonon-induced dephasing rate. Other relevant sources of dissipation beyond those considered in Hamiltonian (4) can be incorporated by adopting modified values of Q [12] and Γ [37]. The dephasing rate γ_D is determined by the low-frequency behavior of the phonon spectral density J(ω) = π Σ_q |λ_q|² δ(ω − ω_q) (with q in the compressional branch). For a bridge geometry, the scattering modes derived in Ref. 25 result in an Ohmic spectral density J(ω) = 2πα_con ω for ω much smaller than the fundamental compressional resonance ω_c ≫ ω_0, which naturally leads to γ_D = 2πα_con k_B T/ħ. In turn, it is straightforward to determine that the "confined" dimensionless dissipation parameter satisfies α_con = α/πQ_c, where Q_c is the clamping-loss-limited Q-value of the fundamental compressional resonance [25] and α the dissipation parameter that would result for an infinite length. The latter can be calculated using Eq. (3) and reads α = g_2² √σ_G (1 + σ)² cos²(3θ) / [2π² R (Eh)^{3/2}]. It depends on the chirality and may approach unity for small-radius zigzag tubes. Thus, when the exciton linewidth is dominated by electron-phonon interactions [27], the phonon confinement in our structure will reduce it by at least the factor πQ_c ≫ 1. Finally, α_con ≪ 1 warrants the Born-Markov approximation in the treatment of the pure dephasing in the relevant regime γ_D ≪ Γ/2. In complete analogy with the Lamb-Dicke limit, we expand the translation operators B̂ up to second order and adiabatically eliminate the NTQD to obtain a rate equation for the populations of the resonator's Fock states. This incorporates both the mechanical dissipation and the dissipative effects induced by the scattering of laser light. The latter result in cooling and heating with rates η²A∓ that read the same as in Ref. 4, with the quantum-dot Liouvillian L_QD now including the pure dephasing γ_D. As γ_D → 0, the steady-state occupancy for Q → ∞, i.e., the quantum backaction limit A₊/(A₋ − A₊), becomes independent of Ω (in stark contrast to atomic laser cooling) and reduces to the same expression valid for cavity-assisted backaction cooling [5], with the cavity decay rate 1/τ replaced by the spontaneous emission rate. In the resolved-sideband regime and for the optimal detuning δ = −ω_0, this fundamental limit yields (Γ/4ω_0)². In the un-shifted representation, for δ = 0 and after a π/2 rotation of the pseudospin around ŷ, H_sys reduces to the Jaynes-Cummings model, with the spin degree of freedom afforded by the NTQD states dressed by the laser field. The spin-oscillator coupling and resonance condition are given, respectively, by g = ηω_0/2 and Ω = ω_0. Thus, given 1/Q ≪ η, reaching the strong-coupling regime depends on satisfying ηω_0 > Γ/2. This regime is akin to the parametric normal-mode splitting in cavity optomechanics [28] and offers a wide range of possibilities for the demonstration of quantum signatures in the motion.
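Collecting the quantities quoted above into a single display (the Jaynes-Cummings form is written in standard notation, with the tilde marking the laser-dressed pseudospin; only g = ηω₀/2, Ω = ω₀, the strong-coupling condition, and the backaction limit are taken from the text):

```latex
% Dressed-state Jaynes-Cummings model and the stated operating conditions:
% coupling g = \eta\omega_0/2, resonance \Omega = \omega_0, strong coupling
% \eta\omega_0 > \Gamma/2, and the resolved-sideband backaction limit.
\begin{gather}
  H_{\mathrm{JC}} = \frac{\Omega}{2}\,\tilde\sigma_z + \omega_0\, b_0^\dagger b_0
    + g\left(\tilde\sigma_+ b_0 + b_0^\dagger \tilde\sigma_-\right),
    \qquad g = \frac{\eta\omega_0}{2},\quad \Omega = \omega_0,\\
  n_{\min} = \frac{A_+}{A_- - A_+}
    \;\xrightarrow[\ \delta=-\omega_0,\ \Gamma\ll\omega_0\ ]{}\;
    \left(\frac{\Gamma}{4\omega_0}\right)^{2}.
\end{gather}
```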
In particular, a judicious modulation of η, locked to pulsed laser excitation, allows one to emulate the adiabatic-passage scheme used in Ref. 19 for performing QND measurements of the oscillator's energy. This would enable the observation of motional quantum jumps. In conclusion, we set forth a scheme for optomechanical manipulation of nanotube resonators via the deformation-potential exciton-phonon interaction. This provides a high-performance alternative to radiation-pressure-based schemes [1] for an ultra-low-mass, high-frequency nanoscale resonator, leading to large backaction-cooling factors and opening a direct route to the quantum behavior of a "macroscopic" mechanical degree of freedom [2]. Most importantly, these breakthroughs rely on a lifetime-limited zero-phonon line much narrower than the smallest CNT linewidths reported so far [13]. Indeed, the envisaged NTQDs will allow suppression of the two most likely linewidth-broadening mechanisms, namely inhomogeneous broadening and phonon-induced dephasing [27], by providing a controlled electrostatic environment and strong confinement of low-frequency phonons. Furthermore, a doped version of these NTQDs will enable a tunable spin-photon interface [29]. IWR acknowledges helpful discussions with N. Qureshi and A. Bachtold.
4,372.2
2009-11-06T00:00:00.000
[ "Physics" ]
Optical Projection Tomography Using a Commercial Microfluidic System
Optical projection tomography (OPT) is the direct optical equivalent of X-ray computed tomography (CT). To obtain a larger depth of field, traditional OPT usually decreases the numerical aperture (NA) of the objective lens, which in turn decreases the resolution of the image. So, there is a trade-off between sample size and resolution. Commercial microfluidic systems make it possible to observe a sample in flow mode. In this paper, an OPT instrument is constructed to observe samples. The OPT instrument is combined with a commercial microfluidic system to obtain a three-dimensional and time (3D + T)/four-dimensional (4D) video of the sample. "Focal plane scanning" is also used to increase the images' depth of field. A series of two-dimensional (2D) images in different focal planes was observed and compared with images simulated using our program. Our work enables dynamic monitoring of 3D OPT images. Commercial microfluidic systems simulate blood flow, which has potential application in blood monitoring and intelligent drug delivery platforms. We design an OPT adaptor to perform OPT on a commercial wide-field inverted microscope (Olympus IX81). Images in different focal planes are observed and analyzed. Using a commercial microfluidic system, a video is also acquired to record motion pictures of samples at different flow rates. To our knowledge, this is the first time an OPT setup has been combined with a microfluidic system.

Introduction
Three-dimensional imaging has become an effective tool for biomedical research. However, a gap between macroscopic imaging technology and microscopic imaging technology led to an inability to observe samples of certain sizes. This gap was filled by optical projection tomography (OPT) technology. OPT technology enables three-dimensional imaging of samples that are 1-10 mm in size. Samples of this size are too large for confocal imaging and too small for magnetic resonance imaging (MRI), but most vertebrate embryos are in this size range [1,2]. OPT microscopy is especially suitable for imaging samples whose size lies between 0.5 mm and 10 mm: it is difficult to use confocal microscopy to produce high-quality images at depths greater than 0.5 mm, and the resolution of nuclear magnetic resonance (NMR) imaging is too low to observe all tissues and organs. OPT is also capable of utilizing many colored and fluorescent dyes that were developed for tissue-specific or gene-specific staining, which is important for three-dimensional observations of specific tissues because it allows a computer to automatically determine the three-dimensional structure of the target tissue. Many universities [3-11] have adopted OPT systems for biomedical research. To improve OPT's imaging quality and imaging speed, many universities and research institutions have focused on basic research on imaging technology, and the technology has been under continuous development, including improvements to the system itself, improvements to algorithms, and extensions to other new imaging modalities. Trull, van der Horst et al. [12] applied the point transfer function of the lens to an iterative reconstruction algorithm and proposed a new optical tomography reconstruction technique with filtered back projection. Correia, Lockwood et al.
[13] used an iterative algorithm to reconstruct a sparsely sampled OPT dataset, significantly reducing the minimum acquisition time and light dose while maintaining image quality. To image non-transparent tissue in vivo, Marcos-Vidal, Ancora et al. [14] applied the near-infrared band (1300-1400 nm) to OPT imaging. Compared with visible light, near-infrared light can increase the penetration depth and reduce the effects of autofluorescence and scattered light; lasers in different wavelength bands were used as light sources to evaluate the imaging characteristics and advantages of each band. Several other studies [15-22] focus on improvements to OPT systems. In recent years, many studies have been conducted to improve the performance of OPT [23-25]. However, most of these studies do not consider the effects of stray light, and their main work focuses on transparent objects or in vitro imaging after optical clearing. To meet the needs of OPT imaging of live samples (non-transparent tissue, dynamic imaging), OPT's resolution and imaging speed must be improved. Living tissue samples cannot be pre-treated, so a higher resolution is required compared with in vitro imaging, and the effects of stray light must be minimized to improve image quality. To image live samples, improving the resolution is a significant challenge. However, in order to obtain projection images, the depth of field (DOF) needs to cover at least half of the specimen [26]. As a result, there is a trade-off between image resolution and DOF [27]. For large samples, a low-numerical-aperture (NA) lens is used to obtain a large DOF while sacrificing resolution. The DOF can also be extended by axially scanning the focal plane of the objective lens through the sample [28]. Using this method, a high-NA objective lens can be used to simultaneously obtain a large DOF and a high resolution. In this paper, we designed an OPT adaptor to perform optical projection tomography on a wide-field inverted microscope. A commercial microfluidic system was used to observe the sample in flow mode. A series of images in different focal planes was observed and analyzed. An algorithm was applied to simulate defocused images at different positions. A video of the spheres was recorded at a specific flow rate to illustrate the dynamic motion of the samples. The advantage of the system is that it uses a commercial microfluidic system to enable the observation of images at different flow rates.

Experimental Setup
Our work was performed using an inverted microscope (Olympus IX81). A high-speed camera (ImagEM X2 EM-CCD camera C9100-23B) was used in our microscopy system. The number of effective pixels was 512 (H) × 512 (V), and the pixel size was 16 µm (H) × 16 µm (V). The camera is characterized by a fast imaging speed, extremely high quantum efficiency in the effective wavelength range (up to 90% or more), and an excellent signal-to-noise ratio under deep-cooling conditions. The commercial microfluidic system has three components: the pump can be used to adjust the flow rate; the fluidic channel is etched and consists of three parallel aisles; and the microunit chip holder is designed to fit the stage and hold the channels. The microfluidic system was modified to connect to a stepper motor so that the channels can rotate while the sample flows. We designed an OPT plate.
The OPT adaptor was designed to fit into the aperture of a common 160 × 110 mm microscope stage. It consists of three main components that were fabricated from aluminum. To provide controlled adjustment of the tilt angle, we separated the two plates. An aluminum dowel is seated in grooves on each plate at one end. At the other end, a fine-thread adjustment screw (P25SB100L, Thorlabs Inc., Newton, NJ, USA) allows the distance between the two plates to be adjusted. The stepper motor is mounted directly onto the side of the sample chamber, with its axle connecting to the channels in the microfluidic system. An aluminum mounting port is attached to the motor's axle to allow the microfluidic channels that contain the sample to be easily mounted in the chamber. When we rotate the motor, the microfluidic channels rotate together, so we can take pictures of every aspect of our sample. Using images taken from different angles of the sample, we can reconstruct three-dimensional (3D) images of the sample; these images show every detail of the sample (e.g., a colloidal particle). With the high-precision stepper motor (an NM08AS-T4-MC04-HSM8 stepper motor with a home sensor, NEMA size 08 × 33 mm, single shaft), we can rotate the sample and obtain two-dimensional (2D) projection information at different angles. Then, the 2D projection information is used to reconstruct 3D information about the sample. The filtered back projection algorithm is used in the reconstruction process (a minimal sketch of this step is given at the end of this section). As Figure 1 illustrates, we provide an overview of the experimental optical projection tomography system. We combined our microscope platform with a commercial microfluidic system so that we could observe samples in flow mode. A pump was used to control the flow rate. First, a solution of magnetic polystyrene microspheres in ethanol (at different concentrations) was prepared as a sample. The diameter of the microspheres was 19 µm. The microfluidic channels were etched so that we could observe the flow of the microspheres. The polystyrene microspheres, at different flow rates and solution concentrations, were observed using pump-controlled flow rates. Magnetic fields of different strengths were generated by an alternating-current power source and a self-made copper wire coil to guide the flow of the polystyrene microspheres.
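As referenced above, the reconstruction step can be sketched with scikit-image's Radon-transform utilities. The function choice and the evenly spaced projection angles are our assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from skimage.transform import iradon

# sinogram: one detector row per projection angle, shape (n_pixels, n_angles).
# Here we assume projections evenly spaced over 360 degrees of rotation.
def reconstruct_slice(sinogram):
    """Filtered back projection of one transverse slice from OPT projections."""
    n_angles = sinogram.shape[1]
    theta = np.linspace(0.0, 360.0, n_angles, endpoint=False)
    # 'ramp' is the classic FBP filter; 'hann' or 'shepp-logan' trade
    # resolution for noise suppression.
    return iradon(sinogram, theta=theta, filter_name="ramp", circle=True)

# A 3D volume follows by stacking reconstructed slices along the rotation axis:
# volume = np.stack([reconstruct_slice(s) for s in sinogram_stack], axis=0)
```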
Simulated Method
In this part, we introduce the program used to compute an image. In the following, bold letters represent two-dimensional vectors. T(m) is the spectrum of the specimen, and t(x) is the transmission of the specimen. P_o(ξ) is the pupil function of the objective back focal plane (BFP), and P_c(ξ) corresponds to part of the condenser's front focal plane (FFP). We use F to denote the Fourier transform. P_c is the intensity of the illumination pupil, and P_o indicates the amplitude of the objective pupil. P_o(ξ) filters the diffraction orders and hence acts as a low-pass filter. The filtered power spectrum |T(m)P_o(m)|² denotes the intensity distribution in the objective back focal plane. The recorded intensity, I = |F⁻¹[T(m)P_o(m)]|², is the squared magnitude of the image's amplitude. Therefore, the total image intensity [29,30] is given by I(x) = ∫ S(ξ) I_ξ(x) dξ, where I_ξ(x) is the coherent partial image produced by the source point ξ. With quasi-monochromatic, partially coherent illumination, the 2D image recorded by a microscope, according to the sum-over-source algorithm [29], is I(x,y) = Σ_{ξ,η} S(ξ,η) |F⁻¹[T(f_x − ξ, f_y − η) P(f_x, f_y)]|². We use I(x,y) to indicate the image intensity and S(ξ,η) to denote the source intensity distribution; T(f_x, f_y) indicates the spectrum of the object, and P(f_x, f_y) corresponds to the amplitude of the imaging pupil. Given the distribution of the effective refractive index n(x,y,z), the optical path difference profile [24] is OPD(x,y) = (2π/λ) ∫ [n(x,y,z) − n₀] dz, and the specimen transmission function is t(x,y) = exp[i·OPD(x,y)].

Experimental Procedure
We combined our microscope platform with a commercial microfluidic system so that we could observe samples in flow mode. Through experimental observations, images of polystyrene microspheres in different focal planes were obtained and compared with the simulated images. Images and videos of microfluids at different flow rates were also obtained. Under bright-field illumination, the apparent dimensions of the same polystyrene microsphere were different in different focal planes. As illustrated in Figure 2, when the focal plane was located at the centre of the sphere, we obtained a clear image (i.e., the image is in focus). When the focal plane was located above the sphere, the size of the image (L2) was different from that of the in-focus image (L1). A similar situation occurred when the focal plane was located below the sphere. This analysis shows the relationship between projection size and focal-plane position. When the focal plane was located at the bottom, the image with the largest projection size was obtained. As the focal plane gradually moved up, the projection size in the image was gradually reduced.
Experimental Results
The focal-plane scanning technique can be used to obtain information at different depths (Z-axis positions) by moving the position of the focal plane through a sample. The range of depths over which a clear image can be obtained is called the depth of field (DOF), which can be effectively increased by reducing the numerical aperture of the objective lens. This method can be applied to traditional OPT imaging by placing an aperture directly behind the objective lens, which sacrifices optical resolution to achieve a greater depth of field. The focal-plane scanning method can extend the DOF without sacrificing the optical resolution of OPT imaging. In our experiment, a total of 125 images was obtained. The Z-axis increment between images was 0.47 µm, the number of effective pixels was 512 (H) × 512 (V), the pixel size was 16 µm (H) × 16 µm (V), the magnification was 32×, and the real-space interval between pixels was 500 nm, so the spatial field of view of the entire picture was 256 µm × 256 µm. As Figure 2 illustrates, the focal-plane scanning results for the polystyrene microspheres were analyzed. Figures 3 and 4 show maximally out-of-focus images of the polystyrene microspheres and their corresponding histograms. Figure 3 shows that the apparent size of the microspheres is larger when the focal plane is at the bottom; this phenomenon was analyzed in Figure 2, and the experimental results are consistent with the theoretical analysis. Comparing Figure 3 with Figure 4, we can see that the image is clearer when the focal plane is at the top, and the respective histograms show why: the pixel distribution has a higher concentration when the focal plane is at the top, and a more concentrated pixel distribution provides better image contrast. Figure 5 shows a clear image obtained when the focal plane was located at the centre of the sphere.
From the above experimental data, it can be seen that the imaged size of the spheres was significantly larger than the actual size when the focal plane was located above the small sphere, and smaller than the actual size when the focal plane was located below it. This result is consistent with the previous analysis. The microspheres were also observed in flow mode, with the flow rate controlled using a pump. At a flow rate of 1 µL/min, video stream data on the polystyrene microspheres were obtained using the modified system. The diameter of the microspheres was 19 µm, and a total of 4432 frames was obtained in our video. As shown in Figure 6, four frames were selected to illustrate the motion of the microspheres.
As shown in Figure 6, the modified system can be used to observe the dynamic changes of the microspheres. OPT requires the DOF of the lens to cover at least half of the sample, so there is a trade-off between obtaining a high resolution with a high-NA lens and obtaining a large DOF with a low-NA lens. The DOF of a high-NA objective lens can be extended by scanning its focal plane through the sample. We call this extended-DOF image a "pseudoprojection". Images reconstructed from these pseudoprojections have an isometric resolution, which may be identical to the lateral resolution of the high-NA objective lens. Scanning the focal plane through the sample thus overcomes the constraint on conventional OPT, which requires a low-NA objective lens in order to obtain a large DOF. The focal plane gradually moves up, so we set the distance to 0 µm when the focal plane was located at the bottom. The distance from the focal plane to the bottom is marked in the upper-left corner of each image in Figure 7. A total of 125 images was obtained.
The focal plane increment between images was 0.47 µm, the number of effective pixels was 512 (H) × 512 (V), the pixel size was 16 µm (H) × 16 µm (V), the magnification was 32×, and the interval between pixels was 500 nm, so the spatial field of view of the entire picture was 256 µm × 256 µm. As illustrated in Figure 7, the images acquired at different focal planes of a single sample are different: images at different focal planes reflect the cross sections of the same object at different depths. In Table 1, 'NA' represents the numerical aperture of the objective lens. We used a 4× objective lens with an NA of 0.16 (Olympus). Although the DOF of this objective lens is only about 16.27 µm, our method applies "focal plane scanning" to the OPT system. In contrast, for conventional OPT, a lens with an NA of about 0.025 would be needed to achieve a DOF of 1 mm. The "focal plane scanning" method thus extends the imaging depth of field without sacrificing resolution.

Simulation Results

Figure 8 provides an overview of the simulation program; a MATLAB program was used to compute the images. The first step was to set the refractive indices and the wavelength of the light source. The refractive index of the polystyrene microspheres was set to 1.55. The alcohol solution of the polystyrene microspheres flowed between two glass plates; the refractive index of the glass was set to 1.515, and the refractive index of the alcohol was 1.36. The illumination wavelength of the microscope was 0.577 µm, and the diameter of the spheres was 19 µm. These parameters were used to calculate the optical path difference OPD(x,y). The specimen's transmission function is given by t(x,y) = exp[iOPD(x,y)], so the image can be computed from the specified transmission. The compute-image function accepts a 2D matrix (representing the specimen's transmission) as input and computes the final intensity image, which is a 3D matrix if the grid axis along the axial direction is a vector. The specific steps of the simulation are as follows: (1) set the parameters of the microscope and the object; (2) choose a small simulation region for a reasonable runtime; (3) set the parameters of the bright-field microscope and the image; and (4) compare the simulated images with the originals.
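As a rough illustration of the quantities just described, the following Python sketch computes OPD(x, y) for a transparent sphere immersed in alcohol and the corresponding transmission function. The grid extent and sampling are arbitrary illustrative choices, the phase is taken as 2πOPD/λ, and this is only a minimal stand-in for the MATLAB program used in the paper.

```python
import numpy as np

# Sketch (assumptions): OPD of a transparent sphere in alcohol, using the
# parameter values quoted in the text; grid size and sampling are illustrative.
n_sphere, n_alcohol = 1.55, 1.36   # refractive indices from the text
wavelength_um = 0.577              # illumination wavelength
radius_um = 19.0 / 2               # sphere diameter 19 um

# Sample-plane grid with 0.5 um pixel pitch, as in the experiment
coords = np.arange(-20.0, 20.0, 0.5)
x, y = np.meshgrid(coords, coords)

# Geometric path length through the sphere at each (x, y)
r2 = x**2 + y**2
thickness = np.where(r2 <= radius_um**2,
                     2.0 * np.sqrt(np.maximum(radius_um**2 - r2, 0.0)),
                     0.0)

# Optical path difference and specimen transmission t(x, y) = exp[i * phase],
# with phase = 2*pi*OPD/lambda (assumed phase convention)
opd = (n_sphere - n_alcohol) * thickness
t = np.exp(1j * 2.0 * np.pi * opd / wavelength_um)
```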
Images in a "defocused state" were simulated using this algorithm and compared with the real images acquired during our experiment, with the focal plane position marked in terms of the distance to the bottom. As shown in Figure 9, the algorithm was able to compute the images for focal planes located at different positions. The results show that our program can reliably compute and simulate images on different focal planes.

Conclusions

In this study, an OPT system was constructed using an inverted microscope. The combination of a commercial microfluidic system and our microscope platform can be used to observe samples in flow mode. OPT requires the imaging depth of field to cover at least half of the sample; however, in the traditional method, the numerical aperture of the objective lens must be reduced by placing a pinhole, which significantly lowers the resolution of the image. To optimize this trade-off, the "focal plane scanning" technique was applied to increase the imaging depth of field, and a series of focal plane scanning images was obtained and analyzed. Simulations and image calculations were performed on defocused images at different Z-axis positions, and the computed images were compared with real images. As a 3D imaging tool, OPT plays a crucial role in many applications, and a microfluidic system can be used to observe microsamples in flow mode. The combination of OPT and microfluidics enables 3D dynamic monitoring of microsamples. Using this technology, we can obtain three-dimensional plus time (3D + T), i.e., four-dimensional (4D), images of samples of a suitable size. This method is innovative and, in the future, may help us to observe the complex changes that occur in the microworld. In this study, images taken on different focal planes were observed and analyzed, and the simulated images were compared with real images.
A video was recorded to show the dynamic changes of the microspheres. This work is significant for improving the resolution of OPT technology and for the dynamic monitoring of microenvironments.
7,341.6
2020-03-01T00:00:00.000
[ "Engineering", "Physics" ]
An RGD-Containing Peptide Derived from Wild Silkworm Silk Fibroin Promotes Cell Adhesion and Spreading The Arginine-Glycine-Aspartate (RGD) tripeptide can promote cell adhesion when present in the amino acid sequence of proteins such as fibronectin. In order to demonstrate the bioactivity of an RGD-containing silk protein, a gene encoding the RGD motif-containing peptide GSGAGGRGDGGYGSGSS (-RGD-) derived from nonmulberry silk was designed and cloned, then multimerised and inserted into a commercial pGEX expression vector for recombinant expression of (-RGD-)n peptides. Herein, we focus on two glutathione-S-transferase (GST)-tagged fusion proteins, GST-(-RGD-)4 and GST-(-RGD-)8, which were expressed in Escherichia coli BL21, purified by GST affinity chromatography, and analyzed with sodium dodecyl sulphate-polyacrylamide gel electrophoresis (SDS-PAGE) and mass spectrometry (MS). The target peptides (-RGD-)4 and (-RGD-)8 (6.03 and 11.5 kDa) were cleaved from the GST-tag by thrombin digestion, as verified with MS and SDS-PAGE. Isoelectric point analysis confirmed that the target peptides were expressed and released in accordance with the original design. The target peptides self-assembled into a mainly α-helical structure, as determined by circular dichroism spectroscopy. Furthermore, (-RGD-)4- and (-RGD-)8-modified mulberry silk fibroin films were more effective for the rapid cell adhesion, spreading, and proliferative activity of L929 cells than films modified with some chemically synthesized RGD peptides and than mulberry silk fibroin lacking the RGD motif. The RGD tripeptide is considered a recognition sequence for promoting cell adhesion, and was originally found in fibronectin, laminin, vitronectin, fibrin, and collagen, where it binds specifically to the cell surface [14]. The RGD tripeptides in A. yamamai and A. pernyi silk fibroins have up to 14 and 12 repeats, respectively [8,9], and these silk fibroins display even higher cell affinity and hold greater promise for biomaterial applications than B. mori silk fibroin. Only a few studies have reported that regenerated RGD tripeptide-containing silk fibroin materials from wild silkworm species are potentially useful biomaterials, with satisfactory cytocompatibility and the ability to promote tissue remodeling [15][16][17]. However, the behaviors of wild silkworm species render them unsuitable for domestication, resulting in low production and limited scope for biomaterial applications. To provide a theoretical foundation for increasing the applicability of nonmulberry silk fibroins to particular cell lines and tissue engineering, the RGD-containing multimers (-RGD-)4 and (-RGD-)8, based on the monomer GSGAGGRGDGGYGSGSS derived from A. pernyi or A. yamamai silk fibroins, were recombinantly produced in Escherichia coli BL21. Their effects on cell behavior were preliminarily evaluated using L929 cells following grafting onto mulberry (B. mori) silk fibroin films.

Protein Purification

Fusion proteins were purified using a GST affinity purification system (Novagen, Billerica, MA, USA) as previously described [20]. Briefly, the cell pellet was suspended in GST-bind/wash buffer and sonicated on ice. The lysate was centrifuged at 4 °C and the supernatant was loaded onto a GST affinity column and washed with GST-wash buffer. Finally, the fusion protein was eluted with GST-elution buffer, then loaded onto a Sephadex G-15 zeolite column (Solarbio, Beijing, China) to remove glutathione and salt.
Determination of Expression Yield

The yield of purified fusion protein was determined using a Smartspec Plus UV/visible spectrophotometer (Bio-Rad, Hercules, CA, USA) by measuring the absorbance at 260 and 280 nm. Protein concentration was calculated using the formula C (mg/mL) = (1.45 × A280) − (0.74 × A260), then converted to the amount per litre of bacterial cell culture [21].

Cleavage of Fusion Proteins

The (-RGD-)4 and (-RGD-)8 peptides were obtained from the fusion proteins as described previously [20]. Briefly, purified fusion protein was digested with thrombin (Novagen) at 20 °C for 16 h, and the reaction mixture was loaded onto a GST affinity column to remove the GST-tag. Finally, samples were freeze-dried and stored at 4 °C.

Molecular Weight Determination

Molecular weight was determined with sodium dodecyl sulphate-polyacrylamide gel electrophoresis (SDS-PAGE) and mass spectrometry (MS) as described previously [20]. Briefly, cell lysate or purified fusion protein was mixed with loading buffer and boiled for 3-5 min, then loaded onto a 10% (w/v) polyacrylamide gel (Sigma, St. Louis, MO, USA) and stained using Coomassie Brilliant Blue. Molecular weight was qualitatively analysed by reference to protein molecular weight standards. Quantitative analysis of molecular weight was performed using a 4800 MALDI-TOF/TOF mass spectrometer (AB SCIEX, Foster City, CA, USA).

Charge Assay

The ζ-potential of fusion proteins or liberated peptides was measured using a ZS90 Zetasizer Nano (Malvern Instruments, Malvern, UK) in 5 mM sodium phosphate buffer with 5 mM NaCl at 25 °C. The pH of the buffer was adjusted to 5.0, 6.0, 7.0, 8.0, 9.0, and 10.0 using NaOH or HCl.

Circular Dichroism (CD) Assay

The ellipticity of 0.1 mg/mL purified (-RGD-)4 and (-RGD-)8 peptide solutions was measured using a J-815 CD spectrometer (Jasco, Tokyo, Japan) with a 1.0 mm path-length cell at 25 °C, an accumulation time of 4 s, and a scanning rate of 100 nm/min. A blank solution was measured under the same conditions and subtracted from the sample spectra.

Cell Adhesion Assay

L929 fibroblasts were cultured in Dulbecco's modified Eagle's medium (Gibco, Carlsbad, CA, USA) containing 10% (v/v) fetal bovine serum (Gibco, Carlsbad, CA, USA) and 1% (v/v) antibiotics (100 U/mL penicillin and 100 µg/mL streptomycin) at 37 °C in a 5% CO2 incubator. During the logarithmic growth phase, cells were trypsinized using 0.25% trypsin (Sigma, St. Louis, MO, USA) and resuspended at a density of 1.0 × 10^5 cells/mL. A 1 mL sample of the L929 cell suspension (1.0 × 10^5 cells, N1) was added to each well pre-coated with (-RGD-)4- or (-RGD-)8-modified mulberry silk fibroin films and incubated at 37 °C in 5% CO2. After seeding for 1, 2, or 3 h, loosely adhered or unattached cells were removed and the films were carefully washed twice with PBS (pH 7.4). The cell number (N2) in the residual liquid was counted using a haemocytometer and an inverted microscope (TH4-200, Olympus, Tokyo, Japan), and converted into a cell adhesion rate, calculated as cell adhesion ratio (%) = (N1 − N2)/N1 × 100.

Cell Viability Evaluation

Cells (2.5 × 10^4 cells/well) were added to 24-well tissue culture plates coated with peptide-modified films and incubated at 37 °C in 5% CO2. After culturing for 1 or 3 days, cell morphology was observed using the TH4-200 inverted microscope. On day 3, the cell count was determined using the haemocytometer. The cell proliferation ratio was calculated as described in Section 2.9. Half of the medium was replaced every other day.
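The two calculations described above can be summarized in a short sketch. The absorbance and cell-count values below are illustrative only, and the adhesion formula is the standard (N1 − N2)/N1 form implied by the definitions of N1 and N2, stated here as an assumption since the displayed equation was lost in extraction.

```python
# Sketch: the yield and adhesion calculations described above; the input
# values below are illustrative, not measured data from the study.
def protein_conc_mg_per_ml(a280: float, a260: float) -> float:
    # Formula quoted in the text: C (mg/mL) = 1.45*A280 - 0.74*A260
    return 1.45 * a280 - 0.74 * a260

def adhesion_ratio_percent(n_seeded: int, n_unattached: int) -> float:
    # Assumed standard form: (N1 - N2)/N1 * 100, with N1 cells seeded
    # and N2 unattached cells recovered in the residual liquid
    return (n_seeded - n_unattached) / n_seeded * 100.0

print(protein_conc_mg_per_ml(0.80, 0.40))       # 0.864 mg/mL
print(adhesion_ratio_percent(100_000, 35_000))  # 65.0 %
```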
SDS-PAGE Analysis of Total Protein from E. coli BL21 Cells

Genes were expressed under the control of the Ptac promoter with a translation-enhancing sequence (g10) and a ribosome-binding site for regulation of the translation level. A GST-tag was used to purify the expression products, and a protease (thrombin) recognition site (Leu-Val-Pro-Arg-Gly-Ser) was inserted to enable target peptide release from the GST-tag by cleaving the amide linkage between Arg and Gly [20]. The fusion proteins GST-(-RGD-)4 and GST-(-RGD-)8 from crude cell extracts were analyzed with SDS-PAGE (Figure 1), and bands of a size close to the expected molecular weights (32.1 and 37.7 kDa, respectively) were observed (lanes 3 and 4). The molecular weights were further confirmed by MS. Expression levels of the fusion proteins were optimized by regulating the IPTG concentration, induction time, and initial cell density (Figure 2). When the initial cell density reached OD600 = 0.6 AU, protein expression was induced with 0.1-1.0 mM IPTG and culturing continued for up to 6 h at 37 °C with shaking. Bands corresponding to the fusion proteins were clearly visible following induction with 0.2 mM IPTG, and the optimal IPTG concentration was 0.4 mM for GST-(-RGD-)4 and 0.4-0.6 mM for GST-(-RGD-)8. Expression of the target proteins was minimal without IPTG induction, but some E. coli proteins displayed high expression. Using 0.4 mM IPTG and an initial OD600 of 0.6 AU, expression of both fusion protein variants was enhanced by extending the induction time. Following a 1 h induction, the GST-(-RGD-)4 expression level increased markedly, with no subsequent change until 5 h post-induction; after that, expression of E. coli proteins increased while the GST-(-RGD-)4 expression level gradually decreased. GST-(-RGD-)8 expression levels were already appreciable following a 1 h induction and peaked after 3-4 h, but expression of E. coli proteins increased markedly after a 6 h induction. The optimal pre-induction density was OD600 = 1.5 AU for GST-(-RGD-)4 expression and OD600 = 0.9 AU for GST-(-RGD-)8 expression.
SDS-PAGE and MS Analysis of Purified Fusion Proteins

Protein purification was expedited by the GST-tag encoded in the pGEX-KG vector. SDS-PAGE analysis confirmed the high purity of the fusion proteins following affinity chromatography (Figure 3). Both fusion proteins were highly stable with no obvious degradation, and the molecular weights were similar to the predicted values of 32.1 and 37.7 kDa. Accurate molecular weights of GST-(-RGD-)4 and GST-(-RGD-)8 were further determined by MS.

SDS-PAGE and MS Analysis of Released Peptides

The fusion proteins GST-(-RGD-)4 and GST-(-RGD-)8 were incubated with thrombin and purified using GST affinity chromatography to obtain the target peptides (-RGD-)4 and (-RGD-)8. As shown in Figure 4, GST-(-RGD-)4 and GST-(-RGD-)8 (lane 1) were efficiently cleaved into two fragments, GST (lane 2) and the target peptide (lane 3). The bands appearing in lane 3 qualitatively indicated the successful expression of (-RGD-)4 and (-RGD-)8 according to protein molecular weight standards, and the molecular weights were confirmed using MS as 5.726 and 11.498 kDa for (-RGD-)4 and (-RGD-)8, respectively, consistent with the predicted values of 6.03 and 11.5 kDa. The results of SDS-PAGE and MS confirmed that the target peptides were correctly expressed, highly purified, and stable.
CD Analysis of Released Peptides

The functions of bioactive macromolecules depend on their structures. The molecular conformation of a protein presents characteristic absorption peaks in the far-UV region (170-250 nm) of CD spectra. A strong negative Cotton effect peak at 195-202 nm is characteristic of random coil structure. A strong positive Cotton effect peak at 185-200 nm and a broad negative peak around 217 nm are characteristic of β-sheet structure. The characteristic peaks of α-helical structure include a positive Cotton effect peak around 192 nm, a negative Cotton effect peak at 207-208 nm, and/or a negative peak around 222 nm. Negative Cotton effect peaks around 192.5 nm and 227 nm, together with a positive Cotton effect peak at 200-205 nm, indicate β-turn structure. Figure 5 shows the spectrum of (-RGD-)4, with strong negative Cotton effect peaks around 200 nm assigned to random coil and a 224 nm peak assigned to α-helical or transitional β-turn structures. The spectrum of (-RGD-)8 included the characteristic peaks of α-helical structure at 210 nm (negative), 222 nm (negative), and 193 nm (positive). Thus, the CD spectra suggest that the molecular conformation changed as the molecular chain was lengthened. The peptide GSGAGGRGDGGYGSGSS (-RGD-) contains the hydrophilic side groups -OH, -NH2, and -COOH; hence, it is easier for peptides with a greater number of -RGD- repeats to form intramolecular hydrogen bonds, leading to a change in molecular conformation towards the more stable α-helical structure observed in (-RGD-)8. Many bioactive macromolecules include a high proportion of α-helical structure, and this is often correlated with their physicochemical properties [23,24].
Charge Analysis of Released Peptides

Amphoterism is an important characteristic of a protein. The fusion proteins in this study exhibited a negative ζ-potential in neutral aqueous solution, and the measured isoelectric point (pI) for both GST-(-RGD-)4 and GST-(-RGD-)8 was between 6.2 and 6.5 (Figure 6), consistent with the predicted value of 6.61. After digestion, the peptides released from the fusion proteins exhibited a positive ζ-potential in neutral aqueous solution, and the measured pI values for (-RGD-)4 and (-RGD-)8 were 8.7 and 8.5, respectively, very close to the predicted values of 8.72 and 8.55.

Cell Adhesion and Proliferation

The cell adhesion and cell proliferation activities of (-RGD-)4- and (-RGD-)8-modified mulberry silk fibroin films were evaluated by seeding L929 cells. At 1 h after seeding, more than 60% of the L929 cells adhered stably to all materials (Figure 7A). Pure mulberry silk fibroin film was the most unfavorable for cell adhesion; the cell adhesion rate on RGE-10-modified mulberry silk fibroin film was close to that on the unmodified film. After modification with GRGDS and RGD-10, the cell adhesion rate increased relative to that on unmodified mulberry silk fibroin film, and films modified with (-RGD-)4 or (-RGD-)8 were even more favourable for cell adhesion (Figure 7B); the increase in the cell adhesion rate was concentration-dependent. Maximal cell adhesion rates on silk fibroin films were detected at a 0.01 µmol/cm² dose of (-RGD-)4 and a 0.005 µmol/cm² dose of (-RGD-)8 (Figure 7C,D), and the differences compared to unmodified mulberry silk fibroin film were significant.
Furthermore, L929 cells spread better, into spindle, triangular, or polygonal shapes, on (-RGD-)4-, (-RGD-)8-, and RGD-10-modified mulberry silk fibroin films compared with the other samples on day 1, and spindle cells were particularly prevalent in the (-RGD-)4 and (-RGD-)8 samples. The cells showed satisfactory proliferation activity and fully covered the samples, with many new cells emerging, at 3 days after seeding (Figure 8A). The cell proliferation activity on mulberry silk fibroin films modified with (-RGD-)4 (p < 0.01) or (-RGD-)8 (p < 0.01) was higher than that on the unmodified film, the cell culture plate controls, and the RGE-10 sample, and was also higher than on any other modified material (Figure 8B). However, there was no significant difference in the cell proliferation rate on (-RGD-)4- or (-RGD-)8-modified mulberry silk fibroin films compared with film modified with the chemosynthetic peptide GRGDS. After modification with (-RGD-)4 or (-RGD-)8 at the same dose of 0.015 µmol/cm², the cell proliferation rate on mulberry silk fibroin films increased significantly (Figure 8C,D), but a further increase in the dose did not increase cell proliferation any further.
Discussion

RGD repeat motifs are present in fibronectin, a well-characterized extracellular glycoprotein that interacts strongly with other extracellular matrix molecules to promote cell adhesion and spreading, and this tripeptide motif is also present in some other extracellular matrix proteins. RGD peptides have been used to modify synthetic polymers to manipulate cell behavior, promote cell adhesion, proliferation, and spreading, and induce stem cell differentiation [25][26][27][28][29]. When encapsulated within three-dimensional hydrogel systems, RGD peptide presentation can guide the motility of the encapsulated cells [30][31][32]. In recent oncotherapy research, RGD-decorated nanoparticles were shown to enhance cell targeting and uptake, leading to more effective anti-tumor effects and demonstrating high potential for the targeted chemotherapy of cancer cells [33][34][35][36]. Not all RGD tripeptide-containing peptides or proteins can promote cell adhesion. For example, Arg-Gly-Asp-Thr-Gly-Ala-Thr-Gly-Arg (derived from type I collagen) promotes cell adhesion, while Glu-Gly-Ile-Arg-Gly-Asp-Lys-Gly-Glu-Pro and Gly-Ser-Arg-Gly-Asp-Hyp-Gly-Thr-Hyp (derived from collagens of different genetic types) do not [37]. RGD tripeptide-containing nonmulberry silk fibroins are potential biomaterials that may possess better cell-binding ability than mulberry silk fibroins lacking this motif, although their RGD sequence had not previously been verified to promote cell adhesion. Herein, the GSGAGGRGDGGYGSGSS peptide (-RGD-, derived entirely from nonmulberry silk fibroin) and its multimers (-RGD-)n, reported in previous studies [19], were produced at lower cost by efficient recombinant expression in E. coli. Our aim was to determine whether nonmulberry silk fibroins could affect cell responses, and therefore have potential for applications in stem cell differentiation, tissue repair, or oncotherapy.
The structure and properties of all proteins are dependent on the nature and distribution of their amino acid side chains. As shown in Figure 7, mulberry silk fibroin was no more favorable for the attachment of L929 cells than the bare cell culture plate. After grafting of the GRGDS and RGD-10 peptides, the adherence rate of L929 cells increased, but the effect was not pronounced. By contrast, the adhesion ability was improved significantly when (-RGD-)4 or (-RGD-)8 was grafted onto the mulberry silk fibroin surfaces, to levels higher than on bare cell culture plates. Meanwhile, RGE-10, in which the aspartate residue is substituted by glutamate, displayed no cell adhesion-promoting activity, similar to unmodified mulberry silk fibroin. In our experiments, all peptides were grafted onto mulberry silk fibroin through amide bonds formed by the -COOH groups of silk fibroin and the -NH2 groups of the peptide Arg residues (Figure 9). The cell adhesion-promoting activity was increased after coupling the GRGDS and RGD-10 peptides to mulberry silk fibroin through their only -NH2 groups, compared with ungrafted mulberry silk fibroin; however, the cell adhesion rates remained lower than those of the cell culture plate controls. Interestingly, because there are four or eight repeats of the RGD tripeptide in the peptide chains of (-RGD-)4 and (-RGD-)8, some -NH2 groups remained unoccupied when these peptides were grafted to mulberry silk fibroin, resulting in significantly improved cell adhesion activity compared with ungrafted mulberry silk fibroin, above that of the bare cell culture plate. These results are consistent with a previous report [37] and suggest that only the highly conserved RGD tripeptide sequence can promote cell adhesion activity, since cell binding activity was lost or decreased when the arginine or aspartate residues were replaced. RGD-containing peptides have been widely used to modify polymers by covalent binding via reaction with the -COOH groups of Asp and/or the -NH2 groups of Arg. However, our results showed that the RGD tripeptide and the free -COOH and -NH2 side groups had a profound impact on cell adhesion.
In order to exploit RGD-containing fibroin proteins from wild silkworms more efficiently, new pathways should be explored to investigate their bioactivity and cell responses. RGD-containing peptides have distinct effects on cells from different species. Metastatic cells attach preferentially to type IV collagen, and laminin can increase both the rate and number of metastatic cells attaching to type IV collagen, while fibronectin has no such effect [38]. By contrast, fibronectin can promote the cell adhesion activity of fibroblasts, while laminin has no such effect [39]. As shown in Figures 7 and 8, the adhesion rates on mulberry silk fibroin films grafted with the RGD-10 decapeptide were somewhat higher than on those grafted with GRGDS, but this did not enhance the cell proliferation rate; there were significant differences in the cell proliferation rate on (-RGD-)4- or (-RGD-)8-modified mulberry silk fibroin films compared to the cell culture plate, but not in the cell adhesion rate.
This suggests that the cellular response mechanisms are complex and related to specific structural domains, and that the sequences flanking the Arg-Gly-Asp(-Ser) sequence also affect activity [37].

Conclusions

Herein, the RGD motif-containing peptide GSGAGGRGDGGYGSGSS (-RGD-) derived from A. pernyi and A. yamamai was recombinantly expressed and purified, and confirmed by MS, amino acid composition analysis, and SDS-PAGE. The resulting (-RGD-)4 and (-RGD-)8 target peptides promoted cell adhesion to the materials and cell spreading, but had no greater effect on cell proliferation than chemically synthesized RGD-containing peptides. We preliminarily evaluated the cytocompatibility of the recombinant RGD-containing peptides with L929 cells, and in future work we intend to investigate cell behavioral responses to RGD-containing peptides derived from nonmulberry silk fibroin using a variety of systems probing the adhesion, proliferation, and migration of various cell types, the differentiation of stem cells, and the targeted uptake by tumor cells.
7,683.2
2018-10-26T00:00:00.000
[ "Materials Science", "Biology" ]
Nonparametric Copula Density Estimation Methodologies This paper proposes several methodologies whose objective consists of securing copula density estimates. More specifically, this aim will be achieved by differentiating bivariate least-squares polynomials fitted to Deheuvels' empirical copulas; by making use of Bernstein's approximating polynomials of appropriately selected orders; by differentiating linearized distribution functions evaluated at optimally spaced grid points; and by implementing the kernel density estimation technique in conjunction with a repositioning of the pseudo-observations and a certain criterion for determining suitable bandwidths. Smoother representations of such density estimates can further be secured by approximating them by means of moment-based bivariate polynomials. The various copula density estimation techniques advocated herein are successfully applied to an actual dataset as well as to a random sample generated from a known distribution.

Introduction and Preliminary Considerations

Copulas are principally utilized for modeling dependency features in multivariate distributions. Thus far, they have found applications in numerous fields of scientific investigation, including finance, reliability theory, machine learning, signal processing, geodesy, hydrology, and biostatistics. Of note, they are increasingly used in several areas of forecasting such as portfolio optimization, water systems management, values at risk, irradiation effects, and stock price projections. Such applications are discussed in the following recent papers, among others: Quintero et al. [1], Kim et al. [2], Sreekumar et al. [3], Wang et al. [4], Karmakar and Khadotra [5], Müller and Reuber [6], Sahamkhadam and Stephan [7], and Wang et al. [8]. As well, a chapter of the monograph authored by Patton [9] is devoted to their use in connection with the forecasting of multiple time series.

Copulas enable one to represent the joint distribution of two or more random variables in terms of the marginal distributions and a specific correlation structure, so that the effect of the dependence between the variables can be separated from the contribution of each of the marginals. This paper addresses the two-dimensional case, which is not overly restrictive, as will be explained in Section 4. Certain definitions and results that will be needed in the sequel are reviewed next.

The following result, which was introduced by Sklar (1959) [10], constitutes a seminal contribution to the theory of copulas and its applications.

Result 1 (Sklar's Theorem). Let H(x, y) be the joint cumulative distribution function of the random variables X and Y whose continuous marginal distribution functions are denoted by F(x) and G(y). Then, there exists a unique bivariate copula C(•, •) : [0, 1]^2 → [0, 1] such that

H(x, y) = C(F(x), G(y)), (1)

where C(•, •) is a joint cumulative distribution function having uniform marginals. Conversely, for any continuous cumulative distribution functions F(x) and G(y) and any copula C(•, •), the function H(•, •), as specified in Equation (1), constitutes a joint distribution function whose marginal distribution functions are F(•) and G(•).
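As a minimal illustration of the converse construction that the next passage formalizes, the following Python sketch builds a copula from a known joint distribution via C(u, v) = H(F^{-1}(u), G^{-1}(v)). The bivariate normal with correlation 0.6 is an arbitrary illustrative choice, not an example taken from the paper.

```python
import numpy as np
from scipy import stats

# Sketch: constructing a copula from a known joint cdf H via
# C(u, v) = H(F^{-1}(u), G^{-1}(v)); illustrative Gaussian case.
rho = 0.6
H = stats.multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])

def gaussian_copula_cdf(u: float, v: float) -> float:
    x = stats.norm.ppf(u)  # F^{-1}(u), quasi-inverse of the standard normal cdf
    y = stats.norm.ppf(v)  # G^{-1}(v)
    return H.cdf([x, y])

print(gaussian_copula_cdf(0.3, 0.7))  # copula value at (0.3, 0.7)
```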
This result provides a technique for constructing copulas. Indeed, the function

C(u, v) = H(F^{-1}(u), G^{-1}(v)) (2)

is a bivariate copula, where the quasi-inverses F^{-1}(•) and G^{-1}(•) are given by

F^{-1}(u) = inf{x : F(x) ≥ u} (3)

and

G^{-1}(v) = inf{y : G(y) ≥ v}. (4)

Copulas are invariant with respect to strictly increasing transformations. More specifically, letting X and Y be two continuous random variables whose associated copula is C(•, •), if α(•) and β(•) are two strictly increasing functions and C_{α,β}(•, •) is the copula obtained from α(X) and β(Y), then, for all (u, v) ∈ [0, 1]^2, C_{α,β}(u, v) = C(u, v).

We shall denote the probability density function (pdf) corresponding to the copula C(u, v) by

c(u, v) = ∂^2 C(u, v)/(∂u ∂v). (5)

The following relationship between the joint density function of X and Y, denoted by h(•, •), and the associated copula density function c(•, •) can be readily obtained by differentiating the right-hand side of Equation (1) with respect to x and y:

h(x, y) = f(x) g(y) c(F(x), G(y)), (6)

where f(x) and g(y) respectively denote the marginal density functions of X and Y. Accordingly, the copula density function can be expressed as follows:

c(u, v) = h(F^{-1}(u), G^{-1}(v)) / [f(F^{-1}(u)) g(G^{-1}(v))]. (7)

Given a random sample (x_1, y_1), . . ., (x_n, y_n) generated from the distributions of the continuous random variables X and Y, let

(u_i, v_i) = (F(x_i), G(y_i)), i = 1, . . ., n, (8)

where F(•) and G(•) are the usually unknown marginal cumulative distribution functions (cdf's) of X and Y. Throughout this paper, X and Y are assumed to be continuous random variables, and n will denote the sample size. For the estimation of copulas having discrete marginals, the reader is referred to Genest and Neslehová [11]. Now, since the underlying distributions of the variables are herein assumed to be continuous, the x_i's are, in theory, all distinct, and so are the y_i's. Should a dataset happen to contain replicates due to rounding, for instance, the observations could be randomly perturbed in a minimal way, which would ensure that the ranks associated with each variable are distinct.

The pseudo-observations (û_i, v̂_i), i = 1, . . ., n, are then defined in terms of the empirical marginal cdf's denoted by F̂(•) and Ĝ(•), that is,

(û_i, v̂_i) = (F̂(x_i), Ĝ(y_i)), (9)

where the empirical cdf's (ecdf's) are given by

F̂(x) = (1/n) Σ_{i=1}^{n} I(x_i ≤ x) and Ĝ(y) = (1/n) Σ_{i=1}^{n} I(y_i ≤ y), (10)

with I(A) denoting the indicator function, which is equal to 1 if condition A is verified and 0 otherwise. Equivalently, one has

(û_i, v̂_i) = (r_i/n, ρ_i/n), (11)

where r_i is the rank of x_i among {x_1, . . ., x_n} and ρ_i is the rank of y_i among {y_1, . . ., y_n}. It is explained in the next subsection that it can prove advantageous to reposition the pseudo-observations.

Repositioning the Pseudo-Observations

Given a random sample consisting of n bivariate observations, it is propounded that the favored positioning of the pseudo-observations ought to be at the center of the cell they occupy in an n × n grid of the unit square. Thus, the corresponding centered pseudo-observations, that is, (û*_i, v̂*_i), are obtained by subtracting 1/(2n) from each coordinate of (û_i, v̂_i), i = 1, . . ., n. An approach that is suggested in the literature for mitigating the edge effects consists of multiplying the pseudo-observations by n/(n + 1), which, with n = 4, will produce the points {(1/5, 3/5), (2/5, 4/5), (3/5, 1/5), (4/5, 2/5)}. These modified pseudo-observations are plotted in Figure 4. As can be seen from this graph, these points occupy haphazard positions within the corresponding grid cells, and their uneven distribution will result in a copula density that is less concentrated in the vicinity of both ends of the unit intervals than it would be with centered pseudo-observations, whose marginal probabilities are 1/n at the points (2i − 1)/(2n), i = 1, . . ., n, for each variable.
The empirical copulas as determined from the pmf of the pseudo-observations, which is equal to 1/4 at the points shown in Figure 2, and from the pmf of their centered counterparts shown in Figure 3, which is equal to 1/4 at those points, are respectively plotted in Figures 5 and 6. The marginals are manifestly closer to being uniformly distributed in the latter case. Wang and Fang [18] and Pérez et al. ([19], p. 100) discussed the following measure of divergence of a sample S = {x_1, x_2, . . ., x_n} with respect to the distribution function F(x), which is referred to as the F-discrepancy:

D_F(S) = sup_{x ∈ ℜ} |F_n(x) − F(x)|,

where F_n(x) denotes the empirical distribution function as determined from S, and ℜ denotes the set of real numbers. We observe that D_F(S) is in fact the Kolmogorov-Smirnov statistic for assessing goodness-of-fit with respect to F(x). It was established that, in one dimension, {F^{-1}((2i − 1)/(2n)), i = 1, 2, . . ., n} is the set of points having the lowest F-discrepancy. In that sense, these n points form the most representative sample with respect to the distribution specified by F(x). Thus, when F(•) is the distribution function of a uniform distribution on the unit interval, the sample of size n having the lowest F-discrepancy is {1/(2n), 3/(2n), . . ., (2n − 1)/(2n)}, which is precisely the support of each of the marginals of the distribution of the empirical copula pmf when the centered pseudo-observations are utilized. Referring to the previous example, if one makes use of four cuboidal kernels whose height is 4 and whose bases are squares of dimension 1/4 × 1/4 centered at the centered pseudo-observations (û*_i, v̂*_i), i = 1, 2, 3, 4, as defined at the beginning of this subsection, one obtains continuous uniform marginals on the unit interval from the resulting joint density function, which can be clearly observed by inspecting the copula density function appearing in Figure 7 and the corresponding bona fide copula plotted in Figure 8, the latter obtained via integration of the joint density function shown in Figure 7. This clearly would not be the case with any other repositioning of the pseudo-observations. Uniform marginals could similarly be secured for bivariate samples of size n, in which case the cuboidal kernels would be of dimension 1/n × 1/n × n. As there are n distinct ranks with respect to each coordinate, each row and each column of the n × n grid of the unit square will contain exactly one pseudo-observation, whether centered or not. Since the centered points do not lie on the boundary of the support of the copula, the edge issues encountered in the context of kernel density estimation are ipso facto attenuated. Accordingly, we shall make use of the centered pseudo-observations whenever kernel density estimates (kde's) are sought.
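A minimal sketch of the pseudo-observations and their centered counterparts, computed from ranks as defined above; the sample below is synthetic and merely illustrative.

```python
import numpy as np

# Sketch: pseudo-observations u_i = r_i/n from ranks, and their centered
# counterparts obtained by subtracting 1/(2n) from each coordinate.
def pseudo_observations(x: np.ndarray, y: np.ndarray, center: bool = True):
    n = len(x)
    # ranks r_i in {1, ..., n}; ties are assumed absent for continuous data
    u = (np.argsort(np.argsort(x)) + 1) / n
    v = (np.argsort(np.argsort(y)) + 1) / n
    if center:
        u, v = u - 1.0 / (2 * n), v - 1.0 / (2 * n)  # points (2i-1)/(2n)
    return u, v

rng = np.random.default_rng(1)
x, y = rng.normal(size=50), rng.normal(size=50)
u_star, v_star = pseudo_observations(x, y)  # centered pseudo-observations
```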
Moment-Based Polynomial Approximation Methodology

Once a copula density is determined by means of a nonparametric technique, it can be approximated or smoothed by a function consisting of the product of a bivariate base density function and a bivariate polynomial whose coefficients are determined from the joint moments of the copula distribution. The proposed procedure for achieving this is described in the next result, which extends to two variables a proposition stated in Provost [20]. Essentially, once the joint moments of the target distribution, as defined in Equation (13), be they exact or empirical, are secured, and those associated with an initial bivariate density approximation, ψ_Y(y_1, y_2), as specified in Equation (15), are determined, the density function of the target distribution, namely f_Y(y_1, y_2), can be approximated by taking the product of ψ_Y(y_1, y_2) and a bivariate polynomial whose coefficients ξ_{i,j} are obtained by solving the linear system (17). The methodology is described in the following result.

Result 2 (Moment-Based Bivariate Polynomial Approximations). Let f_Y(y_1, y_2) be the density function of a bivariate continuous random variable Y defined in the rectangle (l_1, u_1) × (l_2, u_2). The joint moments of orders i and j obtained from f_Y(y_1, y_2) are denoted as

µ_Y(i, j) = ∫_{l_1}^{u_1} ∫_{l_2}^{u_2} y_1^i y_2^j f_Y(y_1, y_2) dy_2 dy_1. (13)

Let ψ_Y(y_1, y_2) be a base density function whose distributional features are analogous to those of f_Y(y_1, y_2); in the case of a copula, a uniformly distributed base density is generally suitable. The joint moments of orders i and j associated with ψ_Y(y_1, y_2) are denoted as

µ_ψ(i, j) = ∫_{l_1}^{u_1} ∫_{l_2}^{u_2} y_1^i y_2^j ψ_Y(y_1, y_2) dy_2 dy_1. (15)

Assuming that the sequence µ_Y(i, j), i = 0, 1, 2, . . ., j = 0, 1, 2, . . ., uniquely defines the distribution of Y, the density function of Y can be approximated by

f_n(y_1, y_2) = ψ_Y(y_1, y_2) Σ_{i=0}^{n} Σ_{j=0}^{n} ξ_{i,j} y_1^i y_2^j,

where the polynomial coefficients ξ_{i,j} can be specified by solving the following system of equations:

∫_{l_1}^{u_1} ∫_{l_2}^{u_2} y_1^h y_2^g f_n(y_1, y_2) dy_2 dy_1 = µ_Y(h, g), h = 0, 1, . . ., n; g = 0, 1, . . ., n, (16)

which can be re-expressed as

Σ_{i=0}^{n} Σ_{j=0}^{n} ξ_{i,j} µ_ψ(h + i, g + j) = µ_Y(h, g), h = 0, 1, . . ., n; g = 0, 1, . . ., n. (17)

Thus, given the joint moments associated with f_Y(•, •) and ψ_Y(•, •), one can determine the polynomial coefficients ξ_{i,j} of f_n(y_1, y_2) by solving the system of linear equations specified by (17). The resulting polynomial function will be referred to as a moment-based bivariate polynomial approximation of degree n (in each variable).

It should be noted that this result can readily be extended to accommodate approximations of differing degrees in each variable. Such approximating polynomials can be utilized to express a copula estimate in a convenient form and, as the case may be, to smooth it. The base function ψ_Y(y_1, y_2) can be a uniform density function or some other density function selected on the basis of the distributional features of the copula density. Whenever the copula density estimate to be approximated appears to exhibit an irregular pattern that cannot be related to a familiar copula density function, as is frequently the case, a uniform density function whose support area slightly exceeds that of the copula ought to be taken as the base density. Accordingly, unless specified otherwise, we will utilize such a base density for the purpose of approximating or smoothing copula density functions.
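A minimal sketch of the linear system (17) for a uniform base density on the unit square, for which µ_ψ(a, b) = 1/((a + 1)(b + 1)). The routine assumes that the (empirical or exact) joint moments µ_Y(h, g) of the target copula are supplied in an array; it illustrates the methodology only and is not the authors' implementation.

```python
import numpy as np

# Sketch of Result 2 with a uniform base density on [0, 1]^2, whose joint
# moments are mu_psi(a, b) = 1/((a+1)(b+1)). mu_Y[h, g] must hold the joint
# moments of the target copula for h, g = 0, ..., n.
def polynomial_coefficients(mu_Y: np.ndarray, n: int) -> np.ndarray:
    idx = [(i, j) for i in range(n + 1) for j in range(n + 1)]
    A = np.empty((len(idx), len(idx)))
    b = np.empty(len(idx))
    for row, (h, g) in enumerate(idx):
        b[row] = mu_Y[h, g]  # target joint moment of orders (h, g)
        for col, (i, j) in enumerate(idx):
            # uniform-base moment mu_psi(h + i, g + j), as in system (17)
            A[row, col] = 1.0 / ((h + i + 1) * (g + j + 1))
    xi = np.linalg.solve(A, b).reshape(n + 1, n + 1)
    # f_n(y1, y2) = sum_{i,j} xi[i, j] * y1**i * y2**j (base density = 1)
    return xi
```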
The degree n used in the polynomial adjustment should be selected so that f_n(•, •) provides an accurate approximation to the copula density estimate. In order to compare a copula density or distribution function estimate to a reference copula density or distribution function, we will make use of the integrated squared difference (ISD), which is equal to the integral of the square of their difference over the domain of interest. When the density estimates fluctuate erratically near the boundary, a subset of the unit square, namely [0.1, 0.9] × [0.1, 0.9], will be utilized for comparison purposes. Moreover, in order to ensure that the resulting density functions be bona fide within the unit square, the final approximations will be taken to be c f(y_1, y_2) or, whenever f(y_1, y_2) happens to take on negative values, as could possibly be the case with a polynomial approximation, c (f(y_1, y_2) + |f(y_1, y_2)|)/2, with c denoting the normalizing constant.

To ensure that the polynomial approximations be positive only within a certain neighborhood of the set of pseudo-observations and zero elsewhere, we next introduce a technique for obtaining a suitable distributional support.

Determining the Support of a Copula Density Function

When a copula density function is directly estimated by a polynomial, as is the case for the differentiated least-squares copula estimates introduced in Section 2.1, or is approximated by means of a moment-based bivariate polynomial, as is the case in Section 2.4.2, some fluctuations may occur in certain areas located away from the pseudo-observations. To address this issue, a technique is proposed for determining a distributional support, denoted by S, outside of which the polynomial density estimates or approximants will be equal to zero.

The support is taken to be the union of all the points lying within a certain distance c of the centered pseudo-observations. Thus, denoting the centered pseudo-observations by (û*_i, v̂*_i), i = 1, . . ., n, the support of the copula density is defined as

S = {(u, v) ∈ [0, 1]^2 : (u − û*_i)^2 + (v − v̂*_i)^2 ≤ c^2 for some i ∈ {1, . . ., n}},

where c, the radius of the circular neighborhood around each point, can be set equal to 1/10 or another value that allows the density estimate to nearly reach zero on the boundary. A bona fide copula density function is then obtained by multiplying its polynomial representation by the indicator function of S and normalizing the resulting function.

Consider the Old Faithful geyser eruption data, which will be used throughout Section 2 for illustrative purposes. Scatter plots of the bivariate observations and of the set of centered pseudo-observations are respectively shown in Figures 9 and 10. The support of the distribution, that is, S, as determined by letting c = 1/10, is plotted in orange in Figure 11. The polynomial copula density estimate appearing in Figure 12 was obtained by applying the differentiated least-squares technique introduced in Section 2.1. The bona fide copula density function, shown in Figure 13, was secured by restricting the original density estimate to the support S and normalizing the resulting function.
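A minimal sketch of the support construction: a mask that is True wherever a grid point lies within distance c of at least one centered pseudo-observation, which can then be used to zero the density outside S before renormalizing.

```python
import numpy as np

# Sketch: membership in the support S, the union of radius-c disks around
# the centered pseudo-observations; density values outside S are set to
# zero and the result is renormalized.
def in_support(points_uv: np.ndarray, centered_pseudo: np.ndarray,
               c: float = 0.1) -> np.ndarray:
    """Boolean mask over points_uv (shape (N, 2)): True where a point lies
    within distance c of at least one centered pseudo-observation."""
    d2 = ((points_uv[:, None, :] - centered_pseudo[None, :, :]) ** 2).sum(-1)
    return (d2 <= c * c).any(axis=1)
```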
Structure of the Paper

The remainder of this paper is organized as follows. Section 2 proposes four nonparametric approaches for securing copula density estimates and specifies criteria for determining their associated tuning parameters. Additionally, Section 2.5 illustrates that a joint density estimate can be secured from a copula density estimate. The proposed copula density estimation techniques are then applied to a sample generated from a bivariate Student's t distribution in Section 3. Several concluding remarks are offered in the last section.

Methodologies for Estimating Copula Densities

Let (x_1, y_1), ..., (x_n, y_n) denote the dataset at hand and Ĉ(u, v) be the associated empirical copula as specified in (12). A least-squares approximating polynomial of degree t + 1 in each variable, which is denoted by P^{LS}_{t+1}(u, v), is fitted to the n² points (j/n, k/n, Ĉ(j/n, k/n)), j, k = 1, 2, ..., n. The resulting polynomial is then differentiated with respect to u and v and normalized to obtain a copula density estimate denoted by ĉ^{LS}_t(u, v), whose domain is the unit square. For a derivation of bivariate least-squares regression polynomials, the reader is referred to Fox ([21], Section 5.2.1).

On plotting the density estimates ĉ^{LS}_t(u, v) for t = 10, 15, 20, ..., one will notice that several successive plots turn out to be quite similar and that, past a certain value of t, higher-degree polynomials exhibit noticeably larger fluctuations. That several graphs show nearly identical features over such a wide range of degrees provides a clear indication that the copula density functions so obtained are representative of the underlying distribution. Among these density estimates, the experimenter could select the one that possesses the desired smoothness level, or the polynomial of lowest degree for the sake of parsimony. A suitable degree for ĉ^{LS}_t(u, v) could also be determined more precisely by evaluating the integrated squared differences between copula estimates of degrees t and t + 5 and choosing the value of t beyond which the ISD's no longer decrease markedly. Once normalized, the copula density estimate of the selected degree will be a bona fide density estimate, which could then be utilized as a yardstick to calibrate the tuning parameters of density functions resulting from the application of alternative methodologies.

An Illustrative Example

For comparison purposes, all the proposed copula density estimation techniques will be applied to the Old Faithful geyser eruption data, which consist of 272 bivariate observations whose first component represents the duration of an eruption in minutes and the second, the waiting time to the next eruption in minutes. As can be seen from recently published articles such as Howlett [22] and Keller et al. [23], this dataset, as well as related ones, remains of current interest, as such data are required to understand the subsurface systems that give rise to the geysers. We selected this hydrogeological dataset, noting that its empirical copula is not as typical as those generally associated with datasets arising, for instance, in financial modeling, environmetrics, or epidemiology. The four nonparametric copula density estimation methodologies advocated in this and the next three subsections, which are also shown to be successful in modeling a challenging distribution in Section 3, would presumably apply to sets of observations originating from a variety of disciplines.
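A minimal sketch of the differentiated least-squares estimate just described; it assumes the empirical copula values are supplied on the grid (j/n, k/n), and the names and grid handling are our own illustration. Normalization over the unit square would be carried out numerically afterwards.

```python
import numpy as np

def diff_ls_copula_density(emp_copula, t):
    """Fit a degree-(t+1) bivariate polynomial to the empirical copula
    values emp_copula[j, k] = C_hat((j+1)/n, (k+1)/n) by least squares,
    then differentiate once in each variable."""
    n = emp_copula.shape[0]
    u = np.arange(1, n + 1) / n
    U, V = np.meshgrid(u, u, indexing="ij")
    deg = t + 2  # powers 0, ..., t+1 in each variable
    # Design matrix with columns u^a v^b.
    X = np.stack([(U ** a * V ** b).ravel()
                  for a in range(deg) for b in range(deg)], axis=1)
    coef, *_ = np.linalg.lstsq(X, emp_copula.ravel(), rcond=None)
    coef = coef.reshape(deg, deg)
    # d^2/(du dv) of sum c_{ab} u^a v^b has coefficients a*b*c_{ab}.
    a = np.arange(deg)[:, None]
    b = np.arange(deg)[None, :]
    dcoef = (coef * a * b)[1:, 1:]  # degree t in each variable

    def density(uu, vv):  # scalar evaluation
        pu = uu ** np.arange(t + 1)
        pv = vv ** np.arange(t + 1)
        return pu @ dcoef @ pv
    return density
```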
A kernel density estimate of the joint distribution is shown in Figure 14. The points Ĉ(j/272, k/272), j, k = 1, ..., 272, as defined in Equation (12), are plotted in Figure 15. A bivariate least-squares approximating polynomial of degree t + 1 (in each variable) is fitted to the empirical copula points plotted in Figure 15 for t = 5, 10, ..., 40, and differentiated with respect to each variable as explained in Section 2.1.1; finally, the resulting bivariate polynomial of degree t in each variable is normalized over the unit square. The copula density estimates so determined are plotted in Figures 16-22. By mere visual inspection, one can observe that the copula density estimates of degrees 20, 25, 30, and 35 are analogous, while the estimate of degree 40 attains its maximum at a discernibly higher value than the previous estimates. Such a stable distributional behavior over an ample range of degrees (which, incidentally, are significantly lower than those required by the Bernstein polynomial approximations discussed in the next subsection) explains why the differentiated least-squares approach is discussed first and justifies employing the selected copula density estimate resulting from its use as the initial reference density. If smoothness is a key consideration, one ought to select the copula density of degree 20 in each of the variables as a yardstick for this copula distribution. This choice can be mathematically corroborated by noting that the integrated squared differences between successive copula estimates indicate that there is little to be gained by selecting copulas of degrees greater than 20, which can be inferred from the ISD's listed in Table 1 and the graph shown in Figure 23. In actuality, the stability of the density estimates over such a wide array of degrees beyond 20 is indicative of their reliability. In this example, least-squares polynomial estimates are underfitting when t < 20, as they do not adequately capture the distinctive distributional features of the copula, whereas estimates of degrees that are at least 20 in each variable turn out to be comparable up to degree 35. Beyond that degree, the estimates exhibit signs of overfitting. Thus, once normalized, the differentiated least-squares polynomial of degree 20 in each variable is deemed to be a copula density estimate that is representative of the underlying copula distribution. A bona fide reference copula estimate can then be secured via integration.

Bernstein's Copula Density and Degree Selection

2.2.1. Introduction

This section initially presents relevant background information on Bernstein's empirical copula. A copula density function will be obtained by differentiating Bernstein's polynomial approximation of Deheuvels' empirical copula, and a criterion for determining a suitable degree for such an approximant will be proposed. Leblanc [24] made use of Bernstein's polynomials to estimate distribution functions that are defined on closed intervals, establishing their pointwise convergence. He also showed that such estimators are free of boundary bias.
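The ISD-based degree selection can be carried out with simple grid quadrature; a sketch, assuming the two density estimates are vectorized functions of (u, v), with names and grid size of our own choosing.

```python
import numpy as np

def integrated_squared_difference(f, g, lo=0.0, hi=1.0, m=200):
    """Grid approximation of the ISD between two bivariate densities:
    the integral of (f - g)^2 over [lo, hi]^2, via the trapezoidal rule.
    f and g are assumed to accept array arguments."""
    x = np.linspace(lo, hi, m)
    X, Y = np.meshgrid(x, x, indexing="ij")
    diff2 = (f(X, Y) - g(X, Y)) ** 2
    return np.trapz(np.trapz(diff2, x, axis=1), x)

# Degree selection: compute the ISD between estimates of degrees t and
# t + 5 and retain the smallest t beyond which the ISD's level off.
```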
First, we define Bernstein's polynomials and describe some of their properties. A Bernstein polynomial of order k is of the form

B_k(x) = Σ_{v=0}^{k} β_v b_{v,k}(x),    b_{v,k}(x) = (k choose v) x^v (1 − x)^{k−v},

where the β_v's are called the Bernstein coefficients and b_{v,k} is the Bernstein basis polynomial of degree k, which is also a binomial probability mass function when x ∈ [0, 1]. The Bernstein basis polynomials are nonnegative on [0, 1] and sum to one. Moreover, their derivatives can be written as a combination of two basis polynomials of a lower degree:

b′_{v,k}(x) = k ( b_{v−1,k−1}(x) − b_{v,k−1}(x) ).

The Bernstein approximating polynomial of a continuous function f on the interval [0, 1] is given by

B(f, k; x) = Σ_{v=0}^{k} f(v/k) b_{v,k}(x).

It can be established that lim_{k→∞} B(f, k; x) = f(x), uniformly on [0, 1]. This approximation approach can be generalized as follows to d dimensions. Letting g(x_1, ..., x_d) be a continuous function on [0, 1]^d, g(x_1, ..., x_d) can be approximated by the following Bernstein polynomial of order k in each variable:

B(g, k; x_1, ..., x_d) = Σ_{v_1=0}^{k} ··· Σ_{v_d=0}^{k} g(v_1/k, ..., v_d/k) Π_{j=1}^{d} b_{v_j,k}(x_j).

Bernstein's empirical copula was first introduced and investigated by Sancetta and Satchell [25] for identically and independently distributed (i.i.d.) data. Bernstein's approximation of order k, k > 0, of a copula function C, the so-called Bernstein copula function, is defined as

B_k(u) = Σ_{v_1=0}^{k} ··· Σ_{v_d=0}^{k} C(v_1/k, ..., v_d/k) Π_{j=1}^{d} b_{v_j,k}(u_j)

for u = (u_1, ..., u_d) ∈ [0, 1]^d, where k plays the role of a bandwidth parameter and b_{v_j,k}(u_j) is the binomial probability mass function, b_{v_j,k}(u_j) = (k choose v_j) u_j^{v_j} (1 − u_j)^{k−v_j}. It has been shown that B_k converges to C as k → ∞. In addition, under the conditions specified in Theorem 1 of Sancetta and Satchell [25], it was established that B_k in (22) is itself a copula. Thus, in order to estimate the copula function C(·), they proposed the following estimator, referred to as Bernstein's empirical copula:

B_k(C_n)(u) = Σ_{v_1=0}^{k} ··· Σ_{v_d=0}^{k} C_n(v_1/k, ..., v_d/k) Π_{j=1}^{d} b_{v_j,k}(u_j),

where C_n denotes the standard empirical copula estimator, built from the empirical cumulative distribution functions F_{j;n} of the components X_j, and n is the sample size. Janssen et al. [26] demonstrated that Bernstein's copula estimator outperforms the classical empirical copula estimator. Whenever it exists, the copula density, denoted by c(u), is obtained from the copula function C(u) as follows:

c(u) = ∂^d C(u) / (∂u_1 ··· ∂u_d).

Since Bernstein's copula function as specified in Equation (22) is absolutely continuous, Bernstein's copula density can then be defined as the corresponding mixed partial derivative,

c_B(u) = Σ_{v_1=0}^{k} ··· Σ_{v_d=0}^{k} C(v_1/k, ..., v_d/k) Π_{j=1}^{d} P′_{v_j,k}(u_j),

where P′_{v_j,k}(u_j) is the derivative of the binomial probability mass function P_{v_j,k} with respect to u_j. Accordingly, Sancetta and Satchell [25] proposed as an estimator of Bernstein's copula density the same expression with C replaced by the empirical copula C_n. Later, Bouezmarni et al. [27] made use of Bernstein's copula density to estimate the copula density in the presence of dependent data. More recently, Janssen et al. [28] established the asymptotic normality of this estimator for independently and identically distributed data.

In bivariate applications, the copula density given in (28) with dimension d equal to two will be utilized, and the degree k of the copula density estimate will be taken to be such that there is no significant advantage in opting for a higher degree when compared to the selected least-squares copula.

An Illustrative Example

We now estimate the copula density associated with the Old Faithful geyser eruption dataset by making use of Bernstein's polynomial approximation technique.
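In the bivariate case, the derivative identity for the basis polynomials turns the mixed partial of the Bernstein copula into double forward differences of the empirical copula on the k-grid. A sketch of one standard way to evaluate the resulting estimator (the function names and the finite-difference organization are ours):

```python
import numpy as np
from scipy.stats import binom

def bernstein_copula_density(emp_copula_fn, k):
    """Bernstein copula density estimate of order k in each variable.

    emp_copula_fn(u, v) should return the empirical copula C_n(u, v).
    Differentiating the Bernstein copula once in each variable gives
    k^2 times the double forward differences of C_n over the k-grid,
    weighted by Bernstein bases of degree k - 1.
    """
    grid = np.arange(k + 1) / k
    C = np.array([[emp_copula_fn(ui, vj) for vj in grid] for ui in grid])
    # Double forward difference over each cell of the k x k grid.
    D = C[1:, 1:] - C[1:, :-1] - C[:-1, 1:] + C[:-1, :-1]

    def density(u, v):
        bu = binom.pmf(np.arange(k), k - 1, u)  # b_{i,k-1}(u)
        bv = binom.pmf(np.arange(k), k - 1, v)
        return k * k * bu @ D @ bv
    return density
```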
The reference copula density function of degree 20 obtained in the previous subsection is shown in Figure 24, and Bernstein's copula densities of degrees 25, 50, 75, 100, 125, 150, and 200 are plotted in Figures 25-31. The integrated squared differences (ISD's) between Bernstein's copula approximants that are twenty-five degrees apart and the reference copula are included in Table 2. We observe that the ISD's keep decreasing as the degrees of Bernstein's copulas increase from 25 to 175. However, beyond degree 125, the ISD's with respect to the selected least-squares copula turn out to be of the same order before they start increasing.

Accordingly, for the sake of parsimony, we may decide that Bernstein's copula density of degree 125 in each variable is suitable, which is in agreement with the assessment resulting from a visual comparison with the selected least-squares density estimate. This density estimate will be utilized as the reference copula density in connection with the approaches that are presented in the two subsequent subsections. It should be noted that the Bernstein polynomial approximation technique readily produces bona fide copula density estimates.

Introductory Considerations

Given a bivariate sample x_1, ..., x_n arising from a distribution whose density function is f(·), a kernel density estimate is given by

f̂(x) = (1/n) Σ_{i=1}^{n} K_V(x − x_i),

where x = (x_1, x_2)′; x_i = (x_{i1}, x_{i2})′; V is a 2 × 2 bandwidth matrix assumed to be symmetric and positive definite; and K_V(x) = |V|^{−1/2} K(V^{−1/2} x), with the kernel K(x) being a bivariate density function such as the standard bivariate Gaussian density function. For additional considerations on bivariate kernel density estimation, the reader is referred to Duong and Hazelton [29], Sheather and Jones [30], and Wand and Jones [31], among others. For instance, Li and Silvapulle [32], Geenens et al. [33], and Wen and Wu [34] employed kernel density estimates (kde's) in the context of copula density estimation. Since the support of copulas is finite, kde's can produce what is referred to as 'boundary bias'. Gijbels and Mielniczuk [35] attempted to address this drawback by making use of a certain mirror reflection methodology. It will be explained that boundary effects can also be alleviated by repositioning the usual pseudo-observations and making use of kernels having a finite support.
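A minimal sketch of such a kernel estimate with a diagonal bandwidth matrix V = diag(h1², h2²) and a product bi-weight kernel (the finite-support kernel used in the example that follows); the function names are illustrative.

```python
import numpy as np

def biweight(x):
    # Univariate bi-weight kernel: (15/16)(1 - x^2)^2 on |x| <= 1.
    return np.where(np.abs(x) <= 1, (15.0 / 16.0) * (1 - x ** 2) ** 2, 0.0)

def kde2(points, data, h1, h2):
    """Bivariate kde with product bi-weight kernels; 'data' would hold
    the centered pseudo-observations when estimating a copula density.
    'points' is an (m, 2) array of evaluation points."""
    u = (points[:, None, 0] - data[None, :, 0]) / h1
    v = (points[:, None, 1] - data[None, :, 1]) / h2
    return (biweight(u) * biweight(v)).mean(axis=1) / (h1 * h2)
```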
For illustrative purposes, let the pseudo-observations be {(1/4, 3/4), (1/2, 1), (3/4, 1/4), (1, 1/2)}. It will be shown that, as explained in Section 1.3, centering them within grid cells can significantly alleviate the boundary issues associated with the original pseudo-observations in the context of kernel density estimation. Bi-weight kernels, whose density function is K(x) = (15/16)(1 − x²)² 1{|x| ≤ 1}, are utilized in this example. The resulting kde's of the copula density, as secured from the original and centered pseudo-observations, are plotted in Figures 32 and 33. The corresponding copulas, which were obtained via integration, are shown in Figures 34 and 35. It is seen that two of the kernels centered at the original pseudo-observations are truncated. Moreover, as the graph of the cumulative distribution function indicates, the resulting kde integrates to less than 0.8 over the unit square whereas, in this case, the cumulative distribution function tends to one when the centered pseudo-observations are utilized. Actually, a kde will never integrate to one within the unit square when the selected kernel is centered at each of the usual pseudo-observations or is defined on an infinite support. In the current context, it is thus advisable to make use of finite-support kernels whose modes occur at the centered pseudo-observations.

Kernel Bandwidth Selection

A mathematical criterion for selecting an appropriate kernel bandwidth is proposed in this subsection. As centered pseudo-observations yield improved copula density functions, kde's of various bandwidths, centered at those points, are initially obtained and then compared to a reliable reference copula density, such as the selected Bernstein or least-squares copula density functions.

Once again, we rely on the Old Faithful geyser eruption observations for illustrative purposes. In this instance, the selection criterion is based on the integrated squared difference between the selected Bernstein copula density shown in Figure 36 and Epanechnikov kde's of bandwidths 0.045, 0.040, 0.035, 0.030, and 0.025, which are plotted in Figures 37-41. It is seen from the ISD's listed in Table 3 that the smallest ISD corresponds to a bandwidth of 0.035. Accordingly, the copula kde having this bandwidth is selected as the most suitable one, a conclusion that, incidentally, could also have been reached via visual inspection.

A novel approach to copula density estimation is described in this subsection. Deheuvels' empirical copula is first determined for the dataset at hand by making use of Equation (12). Next, the empirical copula is evaluated at grid points of the unit square whose associated spacing along both directions is denoted by c. Then, linear interpolation is applied to those points within each grid cell, and the resulting surface is differentiated, which yields an approximate density function. As the resulting copula density is obtained by differentiating a linearized copula, it will be referred to as a DL copula density. The spacing parameter c is chosen in such a way that the DL copula density function and a reference copula density (for instance, the selected Bernstein polynomial approximation) share similar distributional features. Mathematically, c is taken to be the minimizer of the integrated squared difference between the chosen reference copula density and the differentiated linearized copula densities resulting from various values of the spacing parameter.
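Since the mixed derivative of a bilinear interpolant is constant on each grid cell, the DL density reduces to scaled double differences of the empirical copula; a sketch, assuming the empirical copula is available as a function (names are ours):

```python
import numpy as np

def dl_copula_density(emp_copula_fn, c):
    """DL copula density: evaluate the empirical copula at grid points
    spaced c apart, interpolate linearly within each cell, and
    differentiate; the result equals the double difference over c^2,
    constant on each cell."""
    m = int(round(1.0 / c))
    grid = np.arange(m + 1) * c
    C = np.array([[emp_copula_fn(u, v) for v in grid] for u in grid])
    cell = (C[1:, 1:] - C[1:, :-1] - C[:-1, 1:] + C[:-1, :-1]) / c ** 2

    def density(u, v):
        i = min(int(u / c), m - 1)  # index of the cell containing (u, v)
        j = min(int(v / c), m - 1)
        return cell[i, j]
    return density
```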
The grid points of the empirical copula as determined from the Old Faithful geyser dataset, which are plotted in Figure 42, are c = 1/12 apart. Linear interpolation was applied within each grid cell. The resulting linearized copula and the DL copula density obtained via differentiation are respectively plotted in Figures 43 and 44. As Table 4 indicates, the appropriate spacing parameter c is 1/12 in this case. For comparison purposes, the DL copula density functions are also plotted for c = 1/11 and c = 1/13 in Figures 45 and 46. The selected DL copula density that is plotted in Figure 44 was smoothed by approximating it with an eleventh-degree moment-based bivariate polynomial, as defined in Result 2. The resulting bona fide density function that was obtained after normalization is shown in Figure 47. In this instance, the base density was taken to be the uniform distribution.

2.5. Estimating Joint Density Functions by Means of Copula Density Estimates

2.5.1. Introduction

The following formula, which can be deduced from Result 1 (Sklar's theorem), expresses a joint density function estimate in terms of estimates of the marginal density and distribution functions and a copula density estimate denoted by ĉ(·, ·):

ĥ(x, y) ≈ f̂(x) ĝ(y) ĉ(F̂(x), Ĝ(y)).    (29)

Thus, once a copula density estimate has been secured, it is a rather simple matter to obtain a joint density estimate. More specifically, one would proceed as follows: First, the marginal density functions f(x) and g(y) associated with the random variables X and Y are estimated and their respective distribution functions are obtained via integration; then, a copula density estimate is determined by implementing one of the proposed methodologies, and a joint density estimate is secured by making use of the representation given in Equation (29). This alternative approach to determining joint density function estimates allows for more flexibility than the direct approach. For instance, one then has the option of relying on some prior information for selecting appropriate tuning parameters (such as degrees or bandwidths) for each of the marginal density functions and for assigning a suitable degree of smoothness to the copula density estimate.

An Illustrative Example

Consider once again the Old Faithful geyser eruption data. A kde of the copula density, whose suitable bandwidth was determined to be 0.035, is plotted in Figure 48. Kernel density estimates of the marginal density functions are superimposed on histograms of the observations on each of the variables in Figures 49 and 50. It is seen that the bivariate kde shown in Figure 51, which was secured directly from the data, and the estimated joint density obtained from Equation (29), which appears in Figure 52, exhibit similar features.

Introduction

The four density estimation techniques introduced in Section 2 are applied to a random sample of size 2000 that was generated from a distribution whose associated copula is distributed as a bivariate Student's t on only one degree of freedom, the marginal distributions being respectively standard normal and uniform on the interval [0, 2]. It should be noted that the selected copula proves challenging to model, as its density function tends to plus infinity at each of the four vertices of the unit square.
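Such a sample can be generated by drawing from a bivariate t distribution with one degree of freedom, mapping the coordinates to the copula scale, and then applying the stated marginal quantile functions. A sketch under assumed settings: the correlation parameter rho and the seed are our own illustrative choices, not values reported in the study.

```python
import numpy as np
from scipy.stats import norm, t

rng = np.random.default_rng(7)

def sample_t1_copula_data(n=2000, rho=0.5):
    """Draw (X, Y) whose copula is bivariate Student's t with 1 degree
    of freedom and whose marginals are standard normal and uniform on
    [0, 2]."""
    # Bivariate t_1 via the normal / sqrt(chi-square / df) representation.
    L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
    z = rng.standard_normal((n, 2)) @ L.T
    w = z / np.sqrt(rng.chisquare(1, size=(n, 1)) / 1.0)
    u = t.cdf(w, df=1)  # copula-scale observations in (0, 1)^2
    return np.column_stack([norm.ppf(u[:, 0]), 2.0 * u[:, 1]])
```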
Moreover, as pointed out in Quintero et al. [1], heavy-tailed distributions are generally more difficult to model. The exact joint and copula density functions are respectively plotted in Figures 53 and 54.

Application of the Proposed Methodologies

Proceeding as explained in Section 2.1, it was determined that a suitable degree for the differentiated least-squares bivariate polynomial approximation is 30. The resulting copula density estimate is plotted in Figure 55. On following the methodology advocated in Section 2.3.2, it was found that the kde-based estimate having 0.025 as its bandwidth, shown in Figure 56, is appropriate. Referring to Section 2.2, it was determined that an appropriate degree for Bernstein's copula density estimate is 100. This density function is plotted in Figure 57. Now, proceeding as explained in Section 2.4, the proper spacing for the DL copula density was determined to be c = 1/12. This copula density is shown in Figure 58. All of these density estimates exhibit distributional features that are consistent with those of the underlying distribution, which supports the validity of the various methodologies advocated in this paper.

Identification of the Underlying Distribution

For illustration purposes, we assess whether the distribution of a previously determined Student's t copula density estimate can be correctly identified when compared to several parametric copula density functions by making use of the Hellinger distance measure, which constitutes an alternative to the Kullback-Leibler divergence for comparing density or distribution functions. Another distance measure is studied in Fournier and Guillin [36].

If we denote the probability density functions of two bivariate distributions by f(·, ·) and g(·, ·), the square of the Hellinger distance between them is given by

H²(f, g) = (1/2) ∫∫ ( √f(x, y) − √g(x, y) )² dy dx.    (30)

The Hellinger distances between Bernstein's copula density approximation of degree 100, which is plotted in Figure 57, and the following copula density functions were evaluated: bivariate Student's t on 1, 3, and 10 degrees of freedom; bivariate Gaussian; Farlie-Gumbel-Morgenstern; Ali-Mikhail-Haq; Gumbel-Hougaard; Frank; and Clayton-Pareto. In this instance, the lower and upper bounds of integration in Equation (30) are zero and one.

As anticipated, the Hellinger distance between the estimated copula density and the bivariate t copula density on one degree of freedom turned out to be the smallest.

Concluding Remarks

Four types of nonparametric copula density estimates were considered, and criteria for selecting their tuning parameters were proposed. Bernstein's polynomial density estimates enjoy the advantage of not having to be normalized. However, given the high orders that they necessitate, they require longer computing times than alternative techniques. The differentiated least-squares density estimates, which turn out to be consistently of much lower degree, are actually easier to determine, as are kernel density estimates. As illustrated in Section 2.4.2, moment-based polynomial approximations of even lower degrees can also adequately serve as density estimates when applied, for instance, to differentiated linearized copula density estimates. Additionally, on the basis of a random sample arising from a known but atypical distribution, each one of the density estimation techniques advocated in this paper yielded rather accurate copula density estimates.
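A grid-quadrature sketch of the squared Hellinger distance, using the common convention with the leading factor 1/2 (so that the distance lies in [0, 1]); both densities are assumed vectorized, and the names are illustrative.

```python
import numpy as np

def squared_hellinger(f, g, lo=0.0, hi=1.0, m=200):
    """Squared Hellinger distance between two bivariate densities:
    (1/2) times the integral of (sqrt(f) - sqrt(g))^2 over [lo, hi]^2,
    computed by the trapezoidal rule (zero and one for copula
    densities)."""
    x = np.linspace(lo, hi, m)
    X, Y = np.meshgrid(x, x, indexing="ij")
    d2 = (np.sqrt(f(X, Y)) - np.sqrt(g(X, Y))) ** 2
    return 0.5 * np.trapz(np.trapz(d2, x, axis=1), x)
```

Only the ranking of the candidate copulas matters for identification, so the choice of the 1/2 convention does not affect the conclusion.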
Although distinct in nature, these methodologies were found to produce analogous density estimates. They can, in fact, be extended to estimate the distribution of multivariate copulas, in which case they would rely on multivariate kernel density estimation, polynomial interpolation in several variables, or multivariate Bernstein approximating polynomials. We note that the bivariate case is of particular relevance in connection with vine copulas which, as explained in Joe [16], constitute a flexible tool for modeling multivariate distributions.

In actuality, this work also constitutes an informative introduction to the theory of copulas and its applications. All of the calculations were carried out with the symbolic computational package Mathematica (version 10.3; Wolfram, Champaign, IL, USA), with the code being available upon request.

Figure 1. The four data points.
Figure 5. Empirical copula as evaluated from the pseudo-observations.
Figure 6. Empirical copula as evaluated from the centered pseudo-observations.
Figure 8. Copula resulting from the cuboidal kernel density estimate.
Figure 9. Scatter plot of the data.
Figure 10. Scatter plot of the centered pseudo-observations.
Figure 11. S, the support of the copula density.
Figure 32. kde obtained from the original pseudo-observations.
Figure 33. kde obtained from the centered pseudo-observations.
Figure 34. Copula resulting from the original pseudo-observations.
Figure 37. kde with a bandwidth of 0.045.
Figure 38. kde with a bandwidth of 0.040.
Figure 39. kde with a bandwidth of 0.035.
Figure 40. kde with a bandwidth of 0.030.
Figure 41. kde with a bandwidth of 0.025.
Figure 47. Bivariate polynomial approximation of the selected DL copula density.
Figure 48. Copula kde with a bandwidth of 0.035.
Figure 49. The estimated marginal density of the first variable and histogram.
Figure 50. The estimated marginal density of the second variable and histogram.
Figure 51. Bivariate kde obtained directly from the observations.
Figure 54. The bivariate Student's t copula density on one degree of freedom.
Table 1. ISD's between successive density estimates that are five degrees apart.
Table 3. ISD's between the reference copula density and kde's of various bandwidths.
Table 4. ISD's between the reference copula density and certain DL copula densities.
9,107
2024-01-26T00:00:00.000
[ "Mathematics" ]
Structural Basis of Pharmacological Chaperoning for Human β-Galactosidase* Background: Pharmacological chaperone (PC) therapy has been proposed for lysosomal storage diseases. Results: Wild type and mutant β-galactosidases exhibit similar enzymological properties. The recognition mechanism of glycomimetic PC candidates involves both sugar-like and substituent moieties. Conclusion: Crystal structures reveal the molecular basis for the high binding potency of PC compounds. Significance: Enzymological properties, binding affinities, and recognition modes are biophysically and structurally characterized. GM1 gangliosidosis and Morquio B disease are autosomal recessive diseases caused by a defect in the lysosomal β-galactosidase (β-Gal), frequently related to misfolding and subsequent endoplasmic reticulum-associated degradation. Pharmacological chaperone (PC) therapy is a newly developed molecular therapeutic approach using small molecule ligands of the mutant enzyme that are able to promote correct folding, prevent endoplasmic reticulum-associated degradation, and promote trafficking to the lysosome. In this report, we describe the enzymological properties of purified recombinant human β-Gal WT and of two representative mutants found in Japanese GM1 gangliosidosis patients, β-Gal R201C and β-Gal I51T. We have also evaluated the PC effect of two competitive inhibitors of β-Gal. Moreover, we provide a detailed atomic view of the recognition mechanism of these compounds in comparison with two structurally related analogues. All compounds bind to the active site of β-Gal with the sugar-mimicking moiety making hydrogen bonds to active site residues. Moreover, the binding affinity, the enzyme selectivity, and the PC potential are strongly affected by the mono- or bicyclic structure of the core as well as by the orientation, nature, and length of the exocyclic substituent. These results provide an understanding of the mechanism of action of selective β-Gal chaperoning by newly developed PC compounds. Human β-D-galactosidase (EC 3.2.1.23, β-Gal) is a lysosomal hydrolase that catalyzes removal of terminal β-linked galactose in GM1 ganglioside and keratan sulfate (1-3). In humans, deficiency of the β-Gal enzyme causes GM1 gangliosidosis and Morquio B disease, two lysosomal storage diseases (LSDs) characterized by the progressive accumulation of metabolites in the cell (4-6). GM1 gangliosidosis is a severe neurodegenerative disease that is classified into three types, infantile, juvenile, and adult, depending on the onset and severity (7). Morquio B disease is a rare bone disease without central nervous system involvement (7). Currently, more than 160 mutations in the human β-Gal gene have been identified as causative of its deficiency (8,9). Two principal treatment strategies are currently approved or in clinical trials for LSDs. The first is enzyme replacement therapy, where the deficient enzyme is supplied by regular injection of purified recombinant human enzyme (10-12). However, little or no improvement has been observed in the central nervous system manifestations in LSD patients, because the enzyme cannot cross the blood-brain barrier. The second is substrate reduction therapy, which uses an orally available small molecule that inhibits the biosynthesis of the glycosphingolipid (13). However, this treatment is nonspecific, leading to serious side effects.
An alternative treatment, pharmacological chaperone (PC) therapy, has been proposed for GM1 gangliosidosis, Morquio B disease, and other LSDs (14). This therapy uses a small molecule ligand that can bind to the mutant protein and stabilize the correct conformation of the protein at neutral pH in the endoplasmic reticulum, allowing it to be transported to the lysosome, where the ligand dissociates at acidic pH and in the presence of excess substrate. Galactose, the catalytic product of α- or β-Gal, the iminosugar 1-deoxygalactonojirimycin (DGJ; Fig. 1), a mimic of galactose, and some derivatives have been well studied as PCs for these LSDs (15,16). However, DGJ shows promiscuity toward a number of galactopyranoside-processing isoenzymes, which may hamper clinical development. We previously reported the crystal structure of human β-Gal (24). Obtaining more structural information on how this set of compounds interacts with human β-Gal and establishing PC·enzyme complex structure-activity relationships provides useful information about the features governing chaperone-enzyme interactions at the molecular level and about how they could be implemented in the design of new generations of PC drug candidates for GM1 gangliosidosis. In this study, we examined the enzymological properties of recombinant human β-Gal WT and of two representative mutants found in Japanese patients, β-Gal R201C and β-Gal I51T. To date, the chaperone ability of candidate compounds has been evaluated against lysates from cultured human fibroblasts or transiently transfected cells. In this report, all assays were performed using purified recombinant human β-Gal, so that the chaperone effect can be estimated without any interference. We also determined crystal structures of the complexes of the strong β-Gal inhibitors, NOEV and 6S-NBI-DGJ, and of the much weaker ligands 6S-NBI-GJ and NBT-DGJ, bound to human β-Gal WT. β-Gal I51T mutant structures complexed with galactose or 6S-NBI-DGJ were also determined. Cloning, Expression, and Purification of Human β-Gal-The details of cloning, expression, and purification of β-Gal were reported previously (27). Mutations were introduced with PrimeSTAR Max DNA polymerase (Takara) using oligonucleotide primers, following the manufacturer's protocol. Briefly, β-Gal WT and the two mutant proteins, β-Gal R201C and β-Gal I51T (residues 24-677), fused to an N-terminal hexahistidine and FLAG tag, were expressed in the yeast Pichia pastoris KM71 and purified to homogeneity on a nickel-Sepharose column. In the course of purification, polysaccharide moieties attached to the protein were trimmed off by endoglycosidase Hf treatment. For crystallization, β-Gals were subjected to limited proteolysis with bovine trypsin and further purified by cation-exchange column chromatography and a Superdex 200 size exclusion column (GE Healthcare) equilibrated with buffer A (0.01 M MES, pH 6.0, 0.1 M NaCl). Finally, purified β-Gals were concentrated to 10 mg/ml in buffer A with or without 10 mM galactose. CD Spectrum-Circular dichroism (CD) spectra were recorded at 20°C on a Jasco J-720W spectropolarimeter equipped with a Julabo F25-ED temperature controller. The wild type and two mutant β-Gal proteins were diluted to 0.1 mg/ml in 20 mM sodium acetate, pH 5.0, with or without galactose (50 mM). CD spectra were collected over the 200-280 nm wavelength range with a resolution of 0.1 nm, a bandwidth of 1 nm, and a response time of 1 s. Final spectra were the sum of 16 scans accumulated at a speed of 50 nm/min.
Isothermal Titration Calorimetry-A MicroCal iTC200 isothermal titration calorimeter was employed to determine the affinities between β-Gal and the two PCs, NOEV and 6S-NBI-DGJ, at 25°C. The calorimetric cell was filled with 50 µM (for NOEV) or 100 µM (for 6S-NBI-DGJ) β-Gal, and the PCs (0.5 mM NOEV and 1 mM 6S-NBI-DGJ) were injected into the cell with a 60-µl syringe. The released heat was measured by integrating the calorimetric output curves. Data were fit with the model for one set of binding sites in Origin (OriginLab). Diffraction datasets were collected at beamlines BL-17A and AR-NE3A at the Photon Factory (Tsukuba, Japan). Prior to data collection, the crystals of each PC compound·β-Gal complex were soaked for a few seconds in the reservoir solution supplemented with the corresponding PC compound (galactose, 10 mM; NOEV and 6S-NBI-DGJ, 1 mM; 6S-NBI-GJ and NBT-DGJ, 2 mM) and 15% ethylene glycol, and flash-cooled to 95 K. The datasets were processed with the HKL2000 package (Table 2) (28). Structure Determination and Crystallographic Refinement-Structures of β-Gal WT complexed with the PC compounds (NOEV, 6S-NBI-DGJ, 6S-NBI-GJ, or NBT-DGJ) and of β-Gal I51T complexed with galactose or 6S-NBI-DGJ were determined by the molecular replacement method using the β-Gal WT-galactose complex structure (PDB code 3THC) as a search model and the program Molrep implemented in the CCP4 suite (29). Model building and adjustment were carried out using the program COOT (30). Crystallographic refinement was performed using the program REFMAC (31) implemented in the CCP4 package until the R factor converged (Table 2). The atomic coordinates and structure factors have been deposited in the Protein Data Bank.

TABLE 1. Enzymological parameters of the wild-type and mutant β-Gals, and inhibitory constant values of PC compounds. The enzyme activity of β-Gal was fluorometrically determined with 4-methylumbelliferyl-β-D-galactopyranoside as a substrate.

RESULTS

Enzymological Properties of β-Gal-The Michaelis-Menten parameters of the purified recombinant β-Gals were determined (Table 1). The V_max values of β-Gal WT, β-Gal R201C, and β-Gal I51T were 1.8, 1.7, and 1.8 µM/min, and the K_m values were 0.5, 0.4, and 0.6 mM, respectively. These results suggest that the wild type and mutant proteins show similar enzyme activity and substrate affinity. These observations further support the promise of PCs capable of rescuing the mutant protein from endoplasmic reticulum-associated degradation and promoting trafficking to the lysosome for the treatment of GM1 gangliosidosis. The corresponding inhibition constant (K_i) values are presented in Table 1. Data for D-galactose and DGJ were collected for comparative purposes. Galactose is a very weak inhibitor, whereas DGJ shows 100-fold stronger inhibition than galactose. NOEV exhibited the most potent inhibition of β-Gal among the series, with a K_i close to 1 µM. 6S-NBI-DGJ inhibits β-Gal to the same degree as DGJ, whereas the structurally related derivatives 6S-NBI-GJ and NBT-DGJ showed much weaker inhibitory properties. Consistent with the degree of inhibition, an isothermal titration calorimetry experiment demonstrated that NOEV exhibited a strong binding affinity, with a submicromolar dissociation constant, whereas 6S-NBI-DGJ exhibited a ~100-fold reduced affinity (supplemental Fig. S1). It should be noted that all the compounds inhibited WT and mutant β-Gal with virtually identical potencies.
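Since the PC candidates are competitive inhibitors, the reported K_m, V_max, and K_i values can be combined in the standard competitive-inhibition rate law. A minimal sketch using the wild-type Table 1 values and a K_i of about 1 µM (the order reported for NOEV); this is our own illustration, not code from the study.

```python
def velocity(s_mM, i_uM=0.0, vmax=1.8, km_mM=0.5, ki_uM=1.0):
    """Michaelis-Menten rate with a competitive inhibitor:
    v = Vmax * [S] / (Km * (1 + [I]/Ki) + [S]),
    with v in the same units as Vmax (here, uM/min)."""
    return vmax * s_mM / (km_mM * (1.0 + i_uM / ki_uM) + s_mM)

# Sanity check: half-maximal velocity at [S] = Km without inhibitor.
assert abs(velocity(0.5) - 0.9) < 1e-9
```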
The Stability of WT and Mutant β-Gal under Various pH Conditions-The activity of WT and mutant β-Gal after heat treatment (48°C, 40 min) was examined under various pH conditions (Fig. 2). As previously reported (32), each β-Gal was more active at acidic pH (4.5-5.5), whereas the catalytic activity was greatly reduced under strongly acidic (pH < 3.5) and basic (pH > 8.0) conditions. When comparing the wild type and mutant enzymes, it was found that β-Gal I51T was only slightly more sensitive to heat-induced inactivation over the whole range of pH values from 3.5 to 8.0, whereas the activity of β-Gal R201C was significantly reduced under various pH conditions compared with β-Gal WT. As a conclusion from these results, β-Gal mutations associated with GM1 gangliosidosis lead to a reduction of enzyme stability but not to a loss of catalytic activity, suggesting that both β-Gal R201C and β-Gal I51T could process GM1 ganglioside and prevent or reduce its accumulation, provided they are properly transferred to the lysosome with the help of PCs. Effects of the Pharmacological Chaperones on the Stability of β-Gal-The ability of a ligand to prevent heat-induced inactivation of a given glycosidase has previously been used as an indication of its pharmacological chaperone potential (33). In our case, galactose, DGJ, NOEV, and 6S-NBI-DGJ were added to β-Gal at neutral pH in buffers, and the remaining enzyme activities were then examined after heat treatment (Fig. 3). All compounds were able to increase the residual activities of β-Gal in a dose-dependent manner. β-Gal WT activity decreased to ~20% after 40 min of incubation at 48°C, but galactose-treated β-Gal retained ~70% activity. In accordance with the inhibitory activity, NOEV-treated β-Gal, and DGJ- and 6S-NBI-DGJ-treated β-Gal, reached a similar activity improvement at ~10,000- and ~100-fold lower concentrations, respectively, compared with galactose, suggesting that these compounds bear potential as PCs. Little chaperone effect was observed under acidic conditions (supplemental Fig. S2). Structural Basis of Pharmacological Chaperone Interaction with Human β-Gal-To gain atomic insight into the binding of the PC compounds to β-Gal, we determined crystal structures of β-Gal WT complexed with NOEV, 6S-NBI-DGJ, 6S-NBI-GJ, and NBT-DGJ. The overall structures of β-Gal remained largely unchanged among these four complexes, and all structures showed a strong overall agreement with galactose-bound β-Gal WT (PDB code 3THC) (24), with root mean square deviations ranging from 0.17 to 0.26 Å. β-Gal is folded into three domains: the TIM barrel domain, β-domain 1, and β-domain 2 (supplemental Fig. S3). The galactose mimetics were embedded in the same ligand binding pocket of the TIM barrel domain of β-Gal, with the alkyl chain at the exocyclic nitrogen atom oriented toward the entrance of the active site pocket (Fig. 4, A-D). In the case of NOEV, all hydroxyl groups made direct hydrogen bonds with β-Gal in a manner similar to galactose. In addition, a hydrogen bond was formed between the exocyclic N atom of NOEV and Glu-188 of β-Gal, stabilizing the orientation of the octyl chain. The hydrophobic tail extended along the protein surface consisting of Tyr-485, Trp-273, Leu-274, His-276, and Asn-321 (Fig. 4A). The terminal methyl group (C16 carbon atom) was in contact with the CE1 atom of His-276 and the CB and CG atoms of Asn-321 (Fig. 5A). Similar to NOEV, hydroxyls OH2, OH3, and OH4 of the sp²-iminosugar 6S-NBI-DGJ made direct hydrogen bonds with β-Gal (Fig.
4B). The exocyclic N atom of 6S-NBI-DGJ also made a hydrogen bond with Glu-188 of β-Gal, which orients the butyl chain toward the hydrophobic pocket flanked by Tyr-485 and Trp-273 (Fig. 5B). Contrary to NOEV, for which the endocyclic double bond imposes a half-chair conformation on the valienamine ring, the 6-membered ring of 6S-NBI-DGJ exhibits an almost ideal chair conformation, close to that encountered for galactose in the galactose·β-Gal complex. The 5-membered ring is fused to the 6-membered ring at an angle of 110° (supplemental Fig. S4). Because 6S-NBI-DGJ lacks the primary hydroxyl equivalent to the OH6 group of galactose, the side chain orientation of Tyr-333 is shifted so as to fill the corresponding space. This feature is unique to the 6S-NBI-DGJ·β-Gal complex structure. The presence of a pseudoanomeric hydroxyl group, OH1, in the galactonojirimycin analogue 6S-NBI-GJ led to a very significant decrease in the binding affinity toward β-Gal as compared with the DGJ congener 6S-NBI-DGJ. Actually, this compound is a rather selective inhibitor of β-glucosidase (21). Whereas 6S-NBI-GJ has been shown to exist exclusively in the ⁴C₁ chair conformation in water solution, with OH1 axially oriented in the α-configuration (34,35), in the corresponding complex with β-Gal the opposite β-configuration was encountered, with the 6-membered ring and the 5-membered ring in the same plane (supplemental Fig. S4). The β-oriented OH1 group is now involved in hydrogen bonding with Glu-188 (Fig. 4C), with the butyl chain extended toward Tyr-485 and Trp-273 (Fig. 5B). The monocyclic derivative NBT-DGJ recovered the hydrogen bonding interaction between the primary hydroxyl OH6 and Tyr-333 (Fig. 4D). Actually, all hydroxyl groups of NBT-DGJ made direct hydrogen bonds with β-Gal, as in the NOEV·β-Gal complex. However, the exocyclic N and S atoms had no interactions with β-Gal, which is in agreement with the weak binding affinity observed in the kinetic inhibition studies. We also determined crystal structures of β-Gal I51T complexed with galactose or 6S-NBI-DGJ (Fig. 6). The quality of the electron density maps is sufficiently high to allow modeling of the mutated residue (Fig. 6B). These overall structures are essentially identical to those of β-Gal WT bound to the corresponding ligand. In fact, the main chain atoms in each complex could be superimposed on those of the galactose or 6S-NBI-DGJ complex with root mean square deviations of 0.16 and 0.17 Å, respectively. CD spectra demonstrated that the structures of the wild type and mutant proteins are essentially the same regardless of the presence or absence of the PC compounds (supplemental Fig. S5). DISCUSSION In this study, we determined the enzymological properties of purified recombinant human β-Gal (β-Gal WT) and of two representative mutant proteins, β-Gal R201C and β-Gal I51T. Moreover, we determined the crystal structures of β-Gal WT complexed with four ligand compounds, two of which, NOEV and 6S-NBI-DGJ, have shown pharmacological chaperone activity for several GM1 gangliosidosis-associated mutations, whereas the other two, 6S-NBI-GJ and NBT-DGJ, showed only weak affinity for the enzyme despite having structures closely related to that of 6S-NBI-DGJ. Crystal structures of β-Gal I51T complexed with galactose or 6S-NBI-DGJ have also been determined.
Although NOEV and 6S-NBI-DGJ exhibit micromolar and tens-of-micromolar K_i values, respectively, these compounds were shown to significantly enhance β-Gal activities (up to 6-fold) in GM1 fibroblasts, demonstrating their efficacy in vitro. It was also demonstrated that the PC compounds ameliorated the accumulation of GM1 ganglioside in a mouse model (human β-Gal R201C), including in the brain, after oral administration (8,18,20). The Ile-51 mutation is located in the inner region of β-Gal, whereas the Arg-201 mutation is located on the lateral face of the TIM barrel domain, exposed to the solvent. In both the Arg-201 and Ile-51 mutant proteins, the mutated amino acid is far from the active site, indicating that the mutations are unlikely to affect the active site directly. Accordingly, β-Gal I51T and β-Gal R201C showed enzymological parameters similar to those of β-Gal WT (Table 1). This coincides with the fact that no significant conformational change in β-Gal I51T was observed in the crystal structure of the mutant protein as compared with the wild type enzyme (Fig. 6A). By contrast, β-Gal R201C was more unstable than β-Gal WT and β-Gal I51T under various pH conditions (Fig. 2). Arg-201 forms a salt bridge, and the loss of this salt bridge probably affects the stability of the protein, increasing its propensity to denature. These enzymological properties, obtained using recombinant β-Gal, were consistent with previously reported data (32). The structural study provides an atomic basis for the binding mechanism of active site-directed pharmacological chaperones to β-Gal, by highlighting the importance of the sugar-like moiety, the nature of the exocyclic substituent, and the conformational properties of the ligand. NOEV and 6S-NBI-DGJ were recognized in a similar manner; however, NOEV was a tight binding inhibitor, whereas 6S-NBI-DGJ binds >60-fold more weakly (Table 1). The high affinity of NOEV results from the recognition of the exocyclic N atom, the length and orientation of the extended hydrophobic tail, and the half-chair conformation imposed by the double bond. The sp²-iminosugar type inhibitor 6S-NBI-DGJ is characterized by a rigid bicyclic core derived from DGJ. 6S-NBI-DGJ lacks OH6, exhibits a different length and orientation of the hydrophobic tail, and adopts the chair conformation, resulting in the weaker affinity. As a consequence, the corresponding hydrogen bond interaction with Tyr-333 is missing, which is expected to have a detrimental impact on complex stability. Noteworthy, this scenario leads to a shift of Tyr-333 in the 6S-NBI-DGJ·β-Gal complex, occupying the space where OH6 is located in the corresponding complex with NOEV, revealing a certain degree of flexibility in this region of β-Gal. The shortened alkyl substituent of 6S-NBI-DGJ as compared with NOEV, butyl instead of octyl, is also expected to affect the binding affinity. In fact, N-alkyl-4-epi-β-valienamine derivatives with longer alkyl chains than NOEV have been shown to exhibit higher affinity for bovine β-Gal (36), whereas N-hexyl-4-epi-β-valienamine had 3-fold weaker affinity than NOEV (14). Consistently, the N′-octyl analogue of 6S-NBI-DGJ was also found to be a 4-fold stronger inhibitor of bovine β-Gal than 6S-NBI-DGJ, but it was discarded for chaperone studies with the human lysosomal enzyme due to toxicity issues (17).
Probably the longer alkyl chain interacts with the neutral groove, formed by His-276, Asn-321, Pro-323, and Ala-325, over a wider surface area than the butyl chain of 6S-NBI-DGJ, thereby increasing the binding affinity (Fig. 5). Unlike the octyl chain of NOEV, the butyl chain does not extend over the neutral groove at the entrance of the active site of β-Gal (Fig. 5B), which in part is responsible for its moderate binding affinity to β-Gal. The present study revealed that the stability of the mutant proteins (R201C, I51T) coincides with the severity of the disease, and the PC effects toward β-Gal WT and β-Gal R201C are well consistent with the previous data (8,20). However, NOEV has been shown to exhibit differences in its PC effects toward some β-Gal mutant proteins in vitro (20). This behavior does not correlate, however, with the present data on the stabilization effect against heat-induced inactivation of purified recombinant β-Gal, where NOEV proved more efficient than 6S-NBI-DGJ irrespective of the mutation. Rescue of a mutant enzyme by PCs is a complex process that also involves stabilization of the already folded enzyme to promote trafficking and maturation. How close the folded states are is probably strongly mutation-dependent, which may explain the above discrepancy between the in vitro and in cellulo results. Further investigation will be required. The hydrogen bond interactions involving OH2, OH3, and OH4 of the sugar-like moiety and two glutamic acid residues (Glu-129 and Glu-268) at the active site of β-Gal are critical for binding in both the NOEV and 6S-NBI-DGJ complexes and likely define the D-galacto configurational selectivity of the enzyme. A third glutamic acid, Glu-188, interacts with the exocyclic basic nitrogen atom. This hydrogen bond contributes substantially to the β-Gal binding affinity, and its absence is likely at the origin of the much lower inhibitory potencies of the structurally related sp²-iminosugars 6S-NBI-GJ and NBT-DGJ, as well as of the monosaccharide galactose. The weakening of the β-Gal binding affinity for NOEV and 6S-NBI-DGJ observed at acidic pH (8,17) can be rationalized in terms of the expected decrease in the strength of the key hydrogen bonds after protonation of the glutamic acid residues in the protein and of the basic nitrogen functionality in the chaperone. The sharp decrease in β-Gal binding affinity for 6S-NBI-GJ and NBT-DGJ as compared with 6S-NBI-DGJ further illustrates the strong dependence of the inhibitory/chaperone activity of sp²-iminosugar glycomimetics on subtle chemical modifications. The findings presented here may be particularly useful for the rational design of second generation PCs for the treatment of the mutant β-Gal-associated LSDs GM1 gangliosidosis and Morquio B disease. Thus, according to the x-ray data, structural changes at the exocyclic substituent in the 6S-NBI-GJ scaffold are expected to be tolerated by the enzyme, offering potential for drug optimization. In addition, the incorporation of an appropriate substituent at the five-membered ring methylene carbon might lead to favorable interactions with Tyr-333, which could be exploited in the design of higher affinity ligands. Research in that direction is currently being pursued in our laboratories.
5,052.6
2014-04-15T00:00:00.000
[ "Biology", "Chemistry" ]
T-duality diagram for a weakly curved background In one of our previous papers we generalized the Buscher T-dualization procedure. Here we investigate the application of this procedure to the theory of a bosonic string moving in the weakly curved background. We obtain the complete T-dualization diagram, connecting the theories which result from T-dualizations over all possible choices of the coordinates. We distinguish three forms of T-dual theories: the initial theory, the theories obtained by T-dualizing some of the coordinates of the initial theory, and the theory obtained by T-dualizing all of the initial coordinates. While the initial theory is geometric, all the other theories are non-geometric and, in addition, nonlocal. We find the T-dual coordinate transformation laws connecting these theories and show that the set of all T-dualizations forms an Abelian group. Introduction T-duality is a property of string theory that was not encountered in any point particle theory [1,2,3,4]. Its discovery was surprising, because it implies that there exist theories, defined for essentially different geometries of the compactified dimensions, which are physically equivalent. The origin of T-duality is seen in the possibility that, unlike a point particle, the string can wrap around compactified dimensions. But no matter whether one dimension is compactified on a circle of radius R or on a circle of radius l_s²/R, where l_s is the fundamental string length scale, the theory will describe a string with the same physical properties. The investigation of T-duality does not cease to provide interesting new physical implications. The prescription for obtaining the equivalent T-dual theories is given by the Buscher T-dualization procedure [5,6]. The procedure is applicable along isometry directions, which allows the investigation of backgrounds that do not depend on some of the coordinates. It turns out that T-duality transforms geometric backgrounds into non-geometric backgrounds with Q flux, which are locally well defined, and these into a different type of non-geometric backgrounds, backgrounds with R flux, which are not well defined even locally [7,8]. A similar prescription can be used to obtain fermionic T-duality [9]. It has been argued that a better understanding of T-duality should be sought by doubling the coordinates, investigating theories in which the background fields depend on both the usual space-time coordinates and their doubles [10,11,12,13], which would make T-duality a manifest symmetry. T-duality enables the investigation of closed string non-commutativity. The coordinates of the closed string are commutative when the string moves in a constant background. In a three-dimensional space with a Kalb-Ramond field depending on one of the coordinates, successive T-dualizations along isometry directions lead to a theory with Q flux and non-commutative coordinates [14,15,16]. The novelty in this research is the generalized T-dualization procedure, realized in [17], addressing the bosonic string moving in the weakly curved background: constant gravitational field and coordinate dependent Kalb-Ramond field with infinitesimal field strength. The non-commutativity characteristics of a closed string moving in the weakly curved background were considered in [18]. The generalized procedure is applicable to all the space-time coordinates on which the string backgrounds depend. In Ref.
[17], it was first applied to all initial coordinates, which produces the T-dual theory; it was then applied to all the T-dual coordinates, and the initial theory was obtained. In this paper, we will investigate the application of the generalized T-dualization procedure to an arbitrary set of coordinates. Let us mark the T-dualization along a direction x^µ by T_µ and the T-dualization along a dual direction y_µ by T^µ. Choosing d arbitrary directions, we denote the corresponding compositions by (1) and (2), where µ_n ∈ {0, 1, ..., D − 1}, n = 1, ..., d, and ∘ marks the composition of T-dualizations. We will apply the T-dualizations (1) to the initial theory, and the T-dualizations (2) to its completely T-dual theory (obtained in [17]). We will prove the composition laws (3), where 1 marks the identity transformation (no T-dualization performed). So, the elements 1, T_a and T^a, with d = 1, ..., D, form an Abelian group. We will find the explicit form of the resulting theories and the corresponding T-dual coordinate transformation laws. These results complete the T-dualization diagram connecting all the theories T-dual to the initial theory. Because the Kalb-Ramond field depends on all the coordinates, all T-dual theories except the initial one are non-geometric and nonlocal, unlike the non-geometric theories with Q flux, which have a local geometric description. To all of these theories there corresponds a flux of the same type as the R flux. The obtained relations are a generalization of the T-dualization chain presented in Refs. [14,15,16]. Putting D = 3 and d = 1, 2, with B_{µν} depending on x³ = Z, we reproduce the T-duality chain of Refs. [14,15,16]. Because the Kalb-Ramond field depends only on the third coordinate, a T-dualization along one of the first two coordinates leads to a geometric theory with f flux, while T-dualizing both isometry directions one obtains a non-geometric theory with Q flux. Both theories have a local geometric description. Once T-dualization along all three coordinates is performed, the nonlocal and non-geometric theory with R flux is obtained. The generalized T-dualization procedure originates from the Buscher T-dualization procedure. The first rule in the prescription is to replace the ordinary derivatives with covariant derivatives. The new point in the prescription is the replacement of the coordinates in the background fields argument with the invariant coordinates. The invariant coordinates are defined as line integrals of the covariant derivatives of the original coordinates. Both the covariant derivatives and the invariant coordinates are defined using gauge fields. These fields should be nonphysical, so one requires that their field strength vanish. This is realized by adding the corresponding Lagrange multiplier terms. As a consequence of the translational symmetry, one can fix the coordinates along which the T-dualization is performed and obtain a gauge fixed action. An important branch point in the T-dualization procedure is determined by the equations of motion of the gauge fixed action. Two equations of motion obtained by varying this action are used to direct the procedure either back to the initial action or forward to the T-dual action. For the equation of motion obtained by varying the action over the Lagrange multipliers, the gauge fixed action reduces to the initial action. For the equation of motion obtained by varying the action over the gauge fields, one obtains the T-dual theory. Comparing the solutions for the gauge fields in these two directions, one obtains the T-dual coordinate transformation laws.
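The claimed Abelian group structure can be illustrated combinatorially: acting with a T-dualization along a set of directions toggles whether each of those directions is currently dualized, i.e. it acts as a symmetric difference of index sets. The following is only a bookkeeping sketch of the composition laws, not the field-theoretic proof given in the text.

```python
from itertools import combinations

def dualize(state, directions):
    """Compose T-dualizations: the state is the frozenset of currently
    dualized directions; acting along 'directions' toggles each of them
    (T-dualizing the same direction twice returns it to the original)."""
    return state ^ frozenset(directions)

D = 3
identity = frozenset()
for d in range(1, D + 1):
    for dirs in combinations(range(D), d):
        once = dualize(identity, dirs)
        # Composition law: dualizing the same set twice is the identity.
        assert dualize(once, dirs) == identity
# Commutativity: the order of T-dualizations is irrelevant.
assert dualize(dualize(identity, (0,)), (1, 2)) == \
       dualize(dualize(identity, (1, 2)), (0,))
```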
Weakly curved background The requirement of quantum conformal invariance of the world-sheet theory results in the space-time equations of motion for the background fields. In the lowest order in the slope parameter α′, these equations are given by (5). Here B_{µνρ} = ∂_µ B_{νρ} + ∂_ν B_{ρµ} + ∂_ρ B_{µν} is the field strength of the field B_{µν}, and R_{µν} and D_µ are the Ricci tensor and the covariant derivative with respect to the space-time metric. We will consider one of the simplest coordinate dependent solutions of (5), the weakly curved background. This background was considered in Refs. [19,20,21], where the influence of the boundary conditions on the non-commutativity of the open bosonic string was investigated. The same approximation was considered in [15,18] in the context of closed string non-commutativity. The weakly curved background is defined by (6), with b_{µν}, B_{µνρ} = const. This background is a solution of the space-time equations of motion if the constant B_{µνρ} is taken to be infinitesimal and all the calculations are done in the first order in B_{µνρ}, so that the curvature R_{µν} can be neglected as an infinitesimal of the second order. The assumption that B_{µνρ} is infinitesimal means that we consider the D-dimensional torus to be so large that the condition (7) holds for any choice of indices [15], where R_µ are the radii of the torus. In this paper we will investigate the T-dualization properties of the action (4), describing the closed string moving in the weakly curved background. Taking the conformal gauge g_{αβ} = e^{2F} η_{αβ}, the action (4) becomes (8), with the background field composition (9) and the light-cone coordinates (10). Complete T-dualization The T-dualization of the closed string theory in the weakly curved background was presented in [17]. The procedure is related to a global symmetry of the theory, (11). The symmetry still exists in the presence of the nontrivial Kalb-Ramond field (6), but only in the case of a trivial mapping of the world-sheet into the space-time, because in that case the variation of the action (8) equals zero. The T-dual picture of the theory, obtained by applying the T-dualization procedure to all the coordinates, is given by (13), with the field composition (14) expressed in terms of the effective metric and the non-commutativity parameter (15), in the Seiberg-Witten terminology of the open bosonic string theory [22]. The T-dual background fields are expressed in terms of the field composition Θ^{µν}_±, and their argument in terms of the dual variables and their doubles. Here Θ^{µν}_{0±} is the zeroth-order value of the field composition Θ^{µν}_± defined in (14), while g_{µν} = G_{µν} − 4b²_{µν} and θ^{µν}_0 = −(2/κ)(g^{−1}bG^{−1})^{µν} are the zeroth-order values of the effective fields (15). The variable ∆ỹ_µ is the double of the dual variable ∆y_µ = y_µ(ξ) − y_µ(ξ_0), defined as a line integral taken along a path P from the point ξ_0 to the point ξ. The initial theory (8) and its completely T-dual theory (13) are connected by the T-dual coordinate transformation laws (Eq. (42) of Ref. [17]) and their inverse (Eq. (66) of Ref. [17]). T-dualization along an arbitrary set of coordinates In this section, we will learn what theory is obtained if one chooses to apply the T-dualization procedure to the action (8) along arbitrary d coordinates x^a. The closed string action in the weakly curved background (6) has a global symmetry (11).
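Before the symmetry is gauged, it may help to collect the conventions the subsequent formulas rely on. The displayed equations referenced above were lost in extraction; the forms below are assumed here, following the standard conventions used in this line of work, and should be checked against the original numbering.

```latex
% Assumed conventions (standard in related work on the weakly curved
% background), not a verbatim reproduction of Eqs. (6), (9) and (10):
\begin{align}
  G_{\mu\nu} &= \mathrm{const}, &
  B_{\mu\nu}(x) &= b_{\mu\nu} + \tfrac{1}{3} B_{\mu\nu\rho}\, x^{\rho}, \\
  \Pi_{\pm\mu\nu} &= B_{\mu\nu} \pm \tfrac{1}{2} G_{\mu\nu}, &
  \xi^{\pm} &= \tfrac{1}{2}(\tau \pm \sigma), \quad
  \partial_{\pm} = \partial_{\tau} \pm \partial_{\sigma}.
\end{align}
```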
One localizes the symmetry for the coordinates x^a by introducing the gauge fields v^a_α and substituting the ordinary derivatives with the covariant derivatives

D_± x^a = ∂_± x^a + v^a_±.

The covariant derivatives are invariant under the standard gauge transformations δx^a = λ^a, δv^a_± = −∂_± λ^a. In the case of the weakly curved background, in order to obtain the gauge invariant action one should additionally substitute the coordinates x^a in the argument of the background fields with their invariant extension, defined by

x^a_inv = ∫_P (dξ^+ D_+ x^a + dξ^− D_− x^a) = ∆x^a + ∆V^a,

where

∆V^a = ∫_P (dξ^+ v^a_+ + dξ^− v^a_−).   (26)

To preserve the physical equivalence between the gauged and the original theory, one introduces the Lagrange multipliers y_a and adds the term (1/2) y_a F^a_{+−} to the Lagrangian, which forces the field strength F^a_{01} to vanish. In this way the gauge invariant action is obtained, where the last term is equal to (1/2) y_a F^a_{+−} up to a total divergence. Now we can fix the gauge by taking x^a(ξ) = x^a(ξ_0) and obtain the gauge-fixed action (28). This action reduces to the initial one for the equations of motion obtained by varying over the Lagrange multipliers; the T-dual action is obtained for the equations of motion for the gauge fields.

Regaining the initial action

Varying the gauge-fixed action (28) over the Lagrange multipliers y_a, one obtains equations of motion whose solution is v^a_± = ∂_± x^a. On this solution the background fields argument ∆V^a defined in (26) is path independent and reduces to ∆V^a = x^a(ξ) − x^a(ξ_0). The gauge-fixed action (28) reduces to the initial action (8), but with the background fields argument ∆V^a instead of x^a. However, the action (8) is invariant under the constant shift of coordinates, so shifting the coordinates by x^a(ξ_0) one obtains the exact form of the initial action.

The T-dual action

Using the equations of motion for the gauge fields, we eliminate them and obtain the T-dual action. The equations of motion obtained by varying the gauge-fixed action (28) over the gauge fields v^a_± are (32), where β^±_a is the contribution from the background fields argument ∆V^a, defined by (33) in the same way as in Ref. [17]. Multiplying the equations (32) by 2κΘ^{ab}_∓, defined in (126), the inverse of the background fields composition Π_{±ab}, one obtains the solution (34) for the gauge fields. Substituting (34) into the action (28), we obtain the T-dual action (35), with the dual background field composition defined in (36). In order to find the explicit value of the background fields argument ∆V^a(x^i, y_a), it is enough to consider the zeroth order of the equations of motion for the gauge fields. Here Θ^{ab}_{0±} and Π_{0∓bi} stand for the zeroth-order values of Θ^{ab}_± and Π_{∓bi}, and they are defined in (130). Substituting (37) into (26) we obtain (38). Here the double variables x̃^i and ỹ_a, defined in (39), are the variables T-dual to the coordinates y_a and x^i in the zeroth order in B_µνρ and for b_µν = 0, which is why we call them the double variables. So, we obtain the explicit form of the T-dual action and conclude that it is given in terms of the original coordinates x^i and the dual coordinates y_a originating from the Lagrange multipliers. However, the background fields argument depends not only on these variables but on their doubles as well. Because of this the theory is nonlocal, as the double variables x̃^i and ỹ_a are defined as line integrals. The action (35) can be obtained from the initial action (8) under substitutions of the coordinate derivatives and of the background fields, where the dual background fields are given in terms of Π_{+ij}, Π_{+µν} and Θ^{ab}_−, defined in (36), (9) and (126). The argument of all T-dual background fields is [x^i, V^a(x^i, y_a)]. According to (26) and (38), it is nonlocal and consequently non-geometric.
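The step of multiplying the gauge-field equations of motion by 2κΘ_∓ relies on Θ_∓ being the inverse of 2κΠ_±. Under the definitions quoted above (Π_± = B ± G/2, effective metric G_E = G − 4BG⁻¹B, Θ_± = −(2/κ) G_E⁻¹ Π_± G⁻¹), this inversion can be checked numerically; the sketch below is my own illustration with random fields, not code from the paper.

```python
# Numeric sanity check that 2*kappa*Theta_(-s) is the inverse of Pi_(s),
# under Pi_{+-} = B +- G/2, G_E = G - 4 B G^{-1} B and
# Theta_{+-} = -(2/kappa) G_E^{-1} Pi_{+-} G^{-1}.
import numpy as np

rng = np.random.default_rng(0)
D, kappa = 4, 1.0

A = rng.normal(size=(D, D))
G = A @ A.T + D * np.eye(D)                         # symmetric, positive-definite metric
B = rng.normal(size=(D, D)); B = 0.5 * (B - B.T)    # antisymmetric Kalb-Ramond field

Ginv = np.linalg.inv(G)
Pi = {s: B + s * 0.5 * G for s in (+1, -1)}
GE = G - 4.0 * B @ Ginv @ B                          # effective metric
Theta = {s: -(2.0 / kappa) * np.linalg.inv(GE) @ Pi[s] @ Ginv for s in (+1, -1)}

for s in (+1, -1):
    assert np.allclose(2.0 * kappa * Theta[-s] @ Pi[s], np.eye(D))
print("2*kappa*Theta_(-s) inverts Pi_(s) for both chiralities")
```

The key identity used is Π_∓ G⁻¹ Π_± = −G_E/4, which holds for any antisymmetric B and invertible symmetric G, so the inversion is purely algebraic and does not depend on the weakly curved approximation.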
Calculating the symmetric and antisymmetric parts of the T-dual field compositions (42), we obtain the T-dual metric and Kalb-Ramond field, where G̃_{Eab} and θ̃_{ab} are defined in (125) and (129).

The inverse T-dualization

In this section we will show that T-dualization of the action S[x^i, y_a], given by (35), along the already treated directions y_a leads back to the original action, T^a : S[x^i, y_a] → S[x]. So, let us localize the global symmetry δy_a = λ_a of the coordinates y_a of the action (35). Note that this is a symmetry, despite the coordinate dependence of the metric (43), due to the invariance of the background fields argument [17]. Following the T-dualization procedure, we substitute the ordinary derivatives with the covariant ones,

D_± y_a = ∂_± y_a + u_{±a},

where u_{±a} are gauge fields which transform as δu_{±a} = −∂_± λ_a. We also substitute the coordinates y_a in the background fields argument with the invariant coordinates

y^inv_a = ∫_P (dξ^+ D_+ y_a + dξ^− D_− y_a) = ∆y_a + ∆U_a,

where

∆U_a = ∫_P (dξ^+ u_{+a} + dξ^− u_{−a}).   (48)

In this way, adding the Lagrange multiplier term which makes the introduced gauge fields nonphysical, we obtain the gauge invariant action, which after fixing the gauge by y_a(ξ) = y_a(ξ_0) becomes the gauge-fixed action (50), where ∆V^a is defined in (38) and ∆U_a in (48).

Regaining the T-dual action

The equations of motion obtained by varying the gauge-fixed action (50) over the Lagrange multipliers z_a have the solution u_{±a} = ∂_± y_a (52). On this solution the variable ∆U_a defined by (48) is path independent and reduces to ∆U_a(ξ) = y_a(ξ) − y_a(ξ_0), and the gauge-fixed action (50) reduces to the action (35).

Regaining the initial action

The equations of motion obtained by varying the gauge-fixed action (50) over the gauge fields u_{±a} are (54), where the terms Θ^{ab}_{0∓} β^±_b are the contribution from the variation over the background field argument. Here β^±_a is of the same form as (33) and Θ^{ab}_{0∓} is defined in (130). Let us show that for the equations of motion (54) the gauge-fixed action (50) reduces to the initial action (8). Using the fact that Θ^{ab}_∓ is inverse to 2κΠ_{±ab}, these equations of motion can be rewritten as (56). Substituting (56) into (50), using the definition (36) and the first relation in (141), one obtains the action (57). The explicit form of the argument of the background fields is obtained by substituting the zeroth order of the equations (56) into (48). Consequently, the argument of the background fields ∆V^a, defined in (38), is just ∆z^a = z^a(ξ) − z^a(ξ_0). So, the action (57) is equal to the initial action (8), with the Lagrange multipliers z^a playing the role of the coordinates x^a. Comparing the solutions for the gauge fields (52) and (56), we obtain the T-dual transformation law (60). Substituting ∂_∓ y_a into (44), with the help of (59) one finds ∂_± x^a = ∂_± z^a. So, (60) is the transformation inverse to (44), which confirms the relation T^a • T_a = 1.

T-dualization along all undualized coordinates

In this section we will T-dualize the action (35), applying the T-dualization procedure to the undualized coordinates x^i. Substituting the ordinary derivatives ∂_± x^i with the covariant derivatives

D_± x^i = ∂_± x^i + w^i_±,

where the gauge fields w^i_± transform as δw^i_± = −∂_± λ^i, substituting the coordinates x^i in the background field arguments by their invariant extensions, and adding the Lagrange multiplier term, we obtain the gauge invariant action. Substituting the gauge-fixing condition x^i(ξ) = x^i(ξ_0), one obtains the gauge-fixed action (64), where ∆W^µ = [∆W^i, ∆V^a(∆W^i, y_a)], with

∆W^i = ∫_P (dξ^+ w^i_+ + dξ^− w^i_−),   (65)

and ∆V^a = ∆V^a(∆W^i, y_a) is defined in (38), with the argument x^i replaced by ∆W^i.
Regaining the T-dual action

The equations of motion for the Lagrange multipliers y_i have the solution w^i_± = ∂_± x^i (67). On this solution the background field argument ∆W^i defined in (65) reduces to ∆x^i = x^i(ξ) − x^i(ξ_0), so that the argument ∆V^a becomes ∆V^a(∆x^i, y_a), and therefore the gauge-fixed action (64) reduces to the action (35).

From the gauge-fixed action to the completely T-dual action

The equations of motion obtained by varying the gauge-fixed action (64) over w^i_± are (70), where the terms Π_{±ij} Θ^{jµ}_∓ β^±_µ(W) are the contribution from the background fields argument, defined by (72) and calculated using (134), (135) and (38). Using the fact that the background field composition Π_{±ij} is inverse to 2κΘ^{ij}_∓, defined by (141), we can rewrite the equations of motion (70) by expressing the gauge fields as (74), and using the second relation in (142) we simplify this expression. Substituting (74) into the gauge-fixed action (64), we obtain the action (75); using (141), (146) and (148) one can rewrite this action as (76). In order to find the background fields argument ∆W^i, we consider the zeroth order of the equations (74). Using (147) and (142), we obtain that ∆V^a(∆W^i, y_a) defined in (38) equals ∆V^a(∆W^i, y_a) = −κ θ^{aµ}_0 ∆y_µ + (g^{−1})^{aµ} ∆ỹ_µ. Therefore, we conclude that the background fields argument is equal to (17), so that the action (76) is the completely T-dual action (13), in agreement with Ref. [17]. Comparing the solutions for the gauge fields (67) and (74), we obtain the T-dual transformation law (79). One can verify that the two successive T-duality transformations (44) and (79) correspond to the total T-duality transformation (19). Indeed, the relation (79) is just the i-th component of this transformation. Substituting ∂_± x^i from (79) into (44), using (144) and (148), we obtain (80), which is just the a-th component of the complete T-duality transformation. So, we confirm that T_a • T_i = T.

6 Inverse T-dualization along an arbitrary subset of the dual coordinates

Finally, in this section we will show that the T-dualization of the completely T-dual action (13) along an arbitrary subset of the dual coordinates y_i leads to the T-dual action (35). So, let us start with the T-dual action (13), which is globally invariant under the constant shift of coordinates, δy_µ = λ_µ. We localize this symmetry for the coordinates y_i and obtain the locally invariant action (82), where D_± y_i = ∂_± y_i + u_{±i} are the covariant derivatives. The gauge fields u_{±i} transform as δu_{±i} = −∂_± λ_i, and the invariant coordinates are defined by y^inv_i = ∫_P (dξ^+ D_+ y_i + dξ^− D_− y_i). After fixing the gauge by y_i(ξ) = y_i(ξ_0), the action becomes the gauge-fixed action (83), where ∆U_i = ∫_P (dξ^+ u_{+i} + dξ^− u_{−i}).

Regaining the T-dual action

The equations of motion obtained by varying the gauge-fixed action (83) over the Lagrange multipliers have the solution u_{±i} = ∂_± y_i (85). On this solution the variable ∆U_i reduces to ∆y_i = y_i(ξ) − y_i(ξ_0), and therefore ∆V^µ(∆U_i, y_a) = ∆V^µ(y).

Obtaining the T-dual action

The equations of motion obtained by varying the action (83) over u_{±i} are (88), where β^±_µ are given by (71). The terms with the beta functions come from the variation over the argument ∆U_i and are calculated using (134) and (17). Using the fact that 2κΠ_{∓ij} is the inverse of Θ^{ij}_±, the equation (88) can be rewritten as (90). Substituting (90) into the gauge-fixed action (83) and using (144), we obtain the action (91), which with the help of (148) becomes (92). In order to find the argument of the background fields ∆V(∆U_i, y_a), one considers the zeroth order of the equations (90) and obtains (93), where the double variables are defined in analogy with (39).
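To make the double variables less abstract, here is a toy zeroth-order illustration; it is my own example (flat background, closed-string zero modes only, invented symbol names), not a computation from the paper. For x(τ, σ) = x_0 + pτ + wσ, the line integral of the τ and σ derivatives that defines the double exchanges the momentum and winding coefficients, which is exactly the sense in which the doubles are T-dual variables.

```python
# For the zero-mode solution x = x0 + p*tau + w*sigma, the double
# xtilde = int_P (dxi^+ d_+x - dxi^- d_-x), with d_± = d_tau ± d_sigma and
# xi^± = (tau ± sigma)/2, evaluates to w*tau + p*sigma: momentum <-> winding.
import sympy as sp

tau, sigma, t = sp.symbols('tau sigma t')
x0, p, w = sp.symbols('x0 p w')
x = x0 + p * tau + w * sigma                     # zero-mode solution

d_plus = sp.diff(x, tau) + sp.diff(x, sigma)     # d_+ x = p + w
d_minus = sp.diff(x, tau) - sp.diff(x, sigma)    # d_- x = p - w

# straight path from (0, 0) to (tau, sigma), parametrized by t in [0, 1]
dxi_plus = (tau + sigma) / 2                     # d(xi^+)/dt along the path
dxi_minus = (tau - sigma) / 2                    # d(xi^-)/dt along the path
xtilde = sp.integrate(dxi_plus * d_plus - dxi_minus * d_minus, (t, 0, 1))

assert sp.expand(xtilde - (w * tau + p * sigma)) == 0
print("xtilde =", sp.expand(xtilde))             # w*tau + p*sigma
```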
Substituting (93) into (17), we obtain (94) and the corresponding expression for ∆V^a(∆U_i, y_a), which is exactly (38) with z^i = x^i. So, we can conclude that the action (92) is equal to the T-dual action (35). Comparing the solutions for the gauge fields (85) and (90), we obtain the T-dual transformation law (96). These transformations are inverse to (79), so that T^i • T_i = 1. Successively applying (96) and (60), using (148) and (144), we obtain the i-th component of the inverse law of the total T-dualization (20). Its a-th component is (60), so we confirm that T^a • T^i = T̄.

Group of the T-dual transformation laws

In this section we recapitulate the coordinate transformation laws between the theories considered. In section 3, we performed the T-dualization procedure along the coordinates x^a and obtained the coordinate transformation law (44), rewritten here as (98), where V^a and β^±_a are given by (38) and (33). In the zeroth order this law implies (99). In section 4, starting from the action S[x^i, y_a], we performed the T-dualization procedure along the coordinates y_a and obtained the transformation law (60), which is the law inverse to (98); in the zeroth order it implies (101). Multiplying the transformation law (98) from the left by Π_{±ca}(x) ≅ Π_{±ca}[x^i, ∆V^a(x^i, y_a)] and using (99), we obtain the transformation law (101). So, we confirm that T^a • T_a = 1. In section 5, starting once again from the action S[x^i, y_a], we performed the T-dualization procedure along the undualized coordinates x^i and obtained the coordinate transformation law (79), where V^µ and β^±_µ are given by (17) and (71). In the zeroth order it gives (105). Two successive T-duality transformations (98) and (104) give the complete transformation (19), so that T_a • T_i = T. In section 6, starting from the completely T-dual action S[y], we performed the T-dualization procedure along the coordinates y_i and obtained (96), with V^a, U_i and β^±_µ given by (78), (93) and (71). In the zeroth order this law implies (107). Multiplying (107) from the left by the appropriate background field composition and using (105), we obtain the transformation law (104), so that T^i • T_i = 1. Successively applying (107) and (101), using (148) and (144), we obtain the i-th component of the inverse law of the complete T-dualization (20). Its a-th component is (101), so we confirm that T^a • T^i = T̄. We can conclude that the elements 1, T_a and T^a, with d = 1, . . . , D, form an Abelian group. The element T^a is the inverse of the element T_a.

Comparison with the existing facts

In this section we compare our results with the T-dualization chain of Ref. [15]. The coordinates of the D = 3 dimensional torus will be denoted by x^1, x^2, x^3. Because of the different notation, the background fields considered in this paper are related by a simple identification to those considered in [15], which will be marked by Ḡ and B̄. The nontrivial components of the background considered in Ref. [15] correspond, in our notation, to a constant metric G_µν = δ_µν and the Kalb-Ramond component B_12 = −(1/2) H x^3. Let us first compare the results in the case d = 1, corresponding to the transition T_1 : torus with H flux → twisted torus (f flux). Our result is in agreement with that of Ref. [15]. Now let us make the comparison in the case d = 2, which corresponds to the transition T_1 • T_2 : torus with H flux → Q-flux non-geometry. Instead of performing the T_2 dualization from the twisted torus to the Q-flux non-geometry, as in [15], we start from the initial background with H flux and perform T-dualizations along x^1 and x^2: T_1 • T_2 : S[x] → S[y_1, y_2, x^3]. The indices take the values a, b ∈ {1, 2} and i, j ∈ {3}.
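As a cross-check of this d = 2 comparison, the following sketch evaluates the effective metric and the non-commutativity parameter for the H-flux background under the definitions of (15) and confirms, to first order in the infinitesimal H, the values quoted in the next paragraph. This is my own computation, not code from the paper.

```python
# With G_ab = delta_ab and the only nonzero Kalb-Ramond component
# B_12 = -H x3 / 2, the effective fields to first order in H should be
# G_E = delta_ab and theta^{12} = H x3 / kappa.
import sympy as sp

H, x3, kappa = sp.symbols('H x3 kappa', positive=True)
G = sp.eye(2)
B = sp.Matrix([[0, -H * x3 / 2], [H * x3 / 2, 0]])

GE = G - 4 * B * G.inv() * B                    # effective metric
theta = -(2 / kappa) * GE.inv() * B * G.inv()   # non-commutativity parameter

# keep only the terms linear in the infinitesimal field strength H
GE_lin = GE.applyfunc(lambda e: sp.series(e, H, 0, 2).removeO())
theta_lin = theta.applyfunc(lambda e: sp.series(e, H, 0, 2).removeO())

assert GE_lin == sp.eye(2)
assert sp.simplify(theta_lin[0, 1] - H * x3 / kappa) == 0
print("G_E = delta_ab and theta^12 = H x3 / kappa at first order in H")
```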
Because the only nontrivial contribution to the Kalb-Ramond field B_ab is B_12 = −(1/2) H x^3, the effective background fields are G̃_{Eab} = δ_ab, Ḡ_{Eij} = δ_ij, and the only nonzero component of θ̃^{ab} is θ̃^{12} = (1/κ) H x^3. The T-dual background fields linear in H follow directly, and consequently the results of this paper and of [15] coincide in this case.

Conclusion

In this paper we considered the closed string propagating in the weakly curved background (6), composed of a constant metric G_µν and a linearly coordinate-dependent Kalb-Ramond field B_µν with an infinitesimal field strength. We investigated the application of the generalized T-dualization procedure to an arbitrary set of coordinates and obtained the corresponding T-duality diagram. Let us stress that the generalized T-dualization procedure enables T-dualization along arbitrary directions, even if the background fields depend on these directions. The consequence of this procedure is that the arguments of the background fields, such as ∆V^a, are nonlocal. They are nonlocal by definition, as they are line integrals of the gauge fields. Once the explicit form is obtained, the nonlocality is seen in the fact that they depend on the double coordinates x̃ and ỹ, which are line integrals of the τ and σ derivatives of the original coordinates. To all the theories considered, except the initial theory, there corresponds a non-geometric, nonlocal flux. The generalized T-dualization procedure was first applied along d arbitrary coordinates x^a = {x^{µ_1}, . . . , x^{µ_d}}, d = 1, . . . , D − 1. We obtained the T-dual action S[x^i, y_a], given by eq. (35), with the corresponding dual background fields. The argument of all background fields, [x^i, V^a(x^i, y_a)], depends nonlinearly on the coordinates x^i, y_a through their doubles x̃^i, ỹ_a (see (38) and (39)). All actions S[x^i, y_a] are physically equivalent, but they are described by the coordinates x^i = {x^{µ_{d+1}}, . . . , x^{µ_D}} for the untreated directions and the dual coordinates y_a = {y_{µ_1}, . . . , y_{µ_d}} for the dualized directions. The case d = D corresponds to the completely T-dual action with the T-dual fields (κ/2) Θ^{µν}_−[V(y)], and the case d = 0 to the initial action with the background Π_{+µν}(x). Applying the procedure to the T-dual action along the dual directions y_a = {y_{µ_1}, . . . , y_{µ_d}} we obtained the initial theory, and applying it to the untreated directions x^i = {x^{µ_{d+1}}, . . . , x^{µ_D}} we obtained the completely T-dual theory. All these derivations confirmed that the set of all T-dualizations forms an Abelian group. The neutral element of the group is the unexecuted T-dualization, while the T-dualization along some subset of the original directions, T_a, is inverse to the T-dualization along the set of the corresponding dual directions, T^a. Finally, we collect the relations between the block components of the background field compositions used throughout the paper:

Π_{∓ib} = −2κ Π_{∓ij} Θ^{ja}_± Π_{∓ab},   Π_{∓aj} = −2κ Π_{∓ab} Θ^{bi}_± Π_{∓ij}.

Let us derive some useful relations between these quantities. The relation (124), for µ = a, ν = i and for µ = i, ν = a, gives one pair of relations, while taking µ = a, ν = b and µ = i, ν = j we obtain another. Multiplying the relation (146) from the left with Θ^{ca}_∓ and from the right with Π_{∓ik}, we get one useful relation, while multiplying the relation (147) from the right with Θ^{ki}_∓ and from the left with Π_{±ac}, we obtain the last of the relations used above.
6,969.2
2014-06-20T00:00:00.000
[ "Physics" ]
Production of α-glucosidase Inhibitor in the Intestines by Bacillus licheniformis Alpha-glucosidase (EC 3.2.1.20) is involved in the absorption of monosaccharides in the small intestine of animals. We aimed to find a microorganism capable of proliferating in the intestine and producing an α-glucosidase inhibitor (AGI). A spore-forming strain isolated from dry grass and capable of growing in an anaerobic environment was selected and identified as Bacillus licheniformis. When spores of this strain were mixed into a high-fat or high-carbohydrate diet, weight gain was significantly lower than in the corresponding high-calorie diet groups without spores. Furthermore, B. licheniformis administered as spores efficiently proliferated in the intestine and consistently produced AGI, as shown by the constant amounts of the strain and of AGI recovered in the feces after a certain period. This study demonstrates an efficient process in which a microorganism capable of proliferating in the intestine directly produces and supplies a specific secondary metabolite in the intestine. AGI originating from aerobic microorganisms such as Streptomyces and Bacillus can be obtained through culture, with the advantage that AGI can be produced in large quantities in a short period through the breeding of high-producing strains and optimization of the culture process (Zhang et al. 2019; Lee et al. 2018). However, AGI of microbial origin is difficult to use as-is because of factors such as fermentation odor and the low concentration of the effective substances, so purification is required (Zhu et al. 2013; Zhu et al. 2008). Intestinal microbiota such as lactic acid bacteria proliferate in the intestine and exhibit various physiological effects: they modulate the inflammatory response of the host, produce useful substances such as secondary metabolites, and generate energy for use by the host through their metabolic processes (Wang et al. 2019; Riedl et al. 2017; Nieuwdorp et al. 2014; Sweeney & Morton 2013). This means that if microorganisms capable of growing in the anaerobic part of the intestine produce large amounts of secondary metabolites, the intestine itself can serve as a production site for specific secondary metabolites. Several microorganisms, such as Bacillus and Streptomyces, are known to produce large amounts of AGI in an aerobic environment (Onose et al. 2013; Paek et al. 1997; Hardick et al. 1992); however, since they proliferate poorly in the anaerobic environment of the intestine, they cannot readily supply AGI there. In this study, we searched for a new strain capable of proliferating in the intestine and of producing large amounts of AGI in an anaerobic environment such as the intestine, and we examined its intestinal proliferation, intestinal AGI production, and the resulting weight-loss effect.

Results and Discussion

Strain screening

About 400 samples of rice straw and hay collected from several countries, including Korea, China, Japan and the United States, were used to isolate microorganisms. A small amount of sterile saline was added to each sample to make a suspension, which was heat-treated in an 80°C water bath for 20 minutes; the resulting spore solution was plated on LB medium containing 2% agar (1% tryptone, 0.2% sucrose, 0.5% yeast extract and 0.5% NaCl, pH 7.0) and cultured anaerobically in an incubator at 55°C
for two days to isolate colony-forming microorganisms. Single colonies obtained from the hay samples were inoculated into 5 mL of 5% soy flour suspension medium and cultured with shaking at 37°C for 24 hours, and 60 strains with high AGI activity in the supernatant were then selected. The selected 60 strains were capable of anaerobic growth at 50°C; among them, three strains with high AGI activity that were also capable of using propionic acid were selected. The three selected strains were identified as Bacillus licheniformis by classification of their taxonomic characteristics. The microorganism with the highest AGI production capacity was finally selected. To further increase the AGI production capacity of this strain, mutation was induced by treatment with NTG (100 µg/mL) to a 99.9% kill rate, and the treated cells were spread at 200-300 colonies per plate on L-broth plate medium and incubated anaerobically for two days at 37°C. The resulting colonies were randomly inoculated into 5% soybean flour medium and cultured with shaking at 40°C for two days, and the AGI activity of the centrifuged supernatant was then measured. By repeating the mutagenesis twice in the same way, a strain with high AGI activity was selected and named B. licheniformis NY1505. Analysis of the 16S rRNA nucleotide sequence of the B. licheniformis NY1505 strain showed high homology with B. licheniformis. The dendrogram shows that B. licheniformis NY1505 (accession number KCTC13021BP) is closely allied with the B. licheniformis type strain (Fig. 1).

Analysis of AGI

5 × 10^5 spores of NY1505 were inoculated onto 500 g of steamed soybeans, covered with film, and incubated at 37°C for 24 hours; the material was then extracted twice with 2.5 L of 70% (v/v) ethanol, and the extract was evaporated under reduced pressure. To 150 mL of the concentrated extract, 300 mL each of hexane, dichloromethane, and ethyl acetate were sequentially added twice; each mixture was stirred for 2 hours, allowed to stand for 2 hours, and fractionated to wash the aqueous layer. After drying, the aqueous layer was dissolved in 100 mL of 90% ethanol and subjected to silica gel column (100 mL) chromatography. The mobile phase was a stepwise gradient from a 1:1 to a 3:2 acetonitrile:methanol solution (1000 mL in total). The AGI activity of each fraction was measured, yielding two active fractions, AGI 1 and AGI 2. AGI 1 and AGI 2 each appeared as a single spot in thin-layer chromatography, and their structures were determined by NMR analysis. The chemical shifts of AGI 1 are as follows. AGI 1 is a triterpene-type substance in which five rings are connected, with a structure similar to betulinic acid. The chemical structure was determined to be 3-oxo-11α-hydroxy-lup-20(29)-en-28-oic acid. The inhibition pattern of AGI 1, examined through the Lineweaver-Burk plot, was confirmed to be non-competitive inhibition (Fig. 2). AGI 2 was presumed to be 1-deoxynojirimycin (DNJ), and its NMR spectrum was confirmed to be consistent with that of DNJ. The chemical shifts of AGI 2 are as follows. AGI 2 showed competitive inhibition (Fig. 3).

Animal experiments

High-carbohydrate diet

After the diets were started, the weight of each group was measured every week; the lowest and highest values were excluded before statistical processing. In addition, more than 3 g of fresh feces was collected from each cage to count the microorganisms. The high-carbohydrate diet group showed a weight gain of about 130% relative to the standard diet group at week 4, whereas the high-carbohydrate diet group given spores of the B.
licheniformis NY1505 strain showed a weight gain of about 90% relative to the standard diet group (Fig. 4).

High-fat diet

The high-fat diet group showed a weight gain of about 150% relative to the standard diet group at week 6, whereas the high-fat diet group given spores of the B. licheniformis NY1505 strain showed a weight gain of about 120% relative to the standard diet group (Fig. 5). Comparing the results in Figs. 4 and 5, the high-fat diet group was more sensitive than the high-carbohydrate group to the administration of B. licheniformis NY1505 spores at weeks 3 and 4. In particular, it has been reported that DNJ, the AGI 2 compound, when administered for a long period of 12 weeks or more, activates β-oxidation, which degrades fatty acids in the mitochondria, and inhibits hepatic fat formation (Tsuduki et al. 2013; Tsuduki et al. 2009). The observed effect is therefore expected to arise because B. licheniformis NY1505 proliferates in the intestine and produces AGI that activates β-oxidation, the catabolic pathway of fatty acids. It is known that betulinic acid, which is structurally similar to AGI 1, also inhibits adipogenesis by suppressing differentiation during adipocyte growth. Fecal excretion of NY1505 cells is shown in Fig. 6: spores were administered for seven weeks, and the number of cells decreased rapidly at week 8, after administration had been stopped for one week. It can be judged that the strain inhabits the intestine temporarily, without adhering to it.

The amount of AGI and of NY1505 cells excreted in feces

The B. licheniformis NY1505 strain proliferates vigorously in the intestine and produces AGI. From week 2, when the B. licheniformis NY1505 strain that had proliferated in the intestine began to be detected in the feces, the AGI produced in the intestine was likewise excreted into the feces (Fig. 7). As shown in Fig. 6, the amount of B. licheniformis NY1505 excreted stabilized, and the amount of AGI excreted was also constant from week 4 to week 7. In other words, AGI was continuously produced in the intestine, its concentration was maintained, and a constant amount of the produced AGI was excreted. Considering that the AGI activity of natto produced with the B. licheniformis NY1505 strain is 90-95 units/g (data not shown), a significant amount of AGI is evidently produced in the intestine, of which some is excreted.

Conclusions

Gut microbiota proliferate in the intestine and produce secondary metabolites (Kopp-Hoolihan 2001). These metabolites may affect the host, depending on the amount. In other words, various physiological activities can be expected by intentionally administering gut microbiota that produce large amounts of useful secondary metabolites (Parvez et al. 2006; Kopp-Hoolihan 2001). In this study, we investigated the possibility of intestinal production of a physiologically active substance, AGI, which is involved in sugar absorption in the digestive tract, by microbiota capable of proliferating in the intestine. A strain intended to produce physiologically active substances in the intestine should be safe, able to proliferate in the anaerobic intestinal environment, able to form spores so as to reach the intestine efficiently, and highly productive of the active substance under anaerobic conditions (Parvez et al. 2006; Kopp-Hoolihan 2001). Therefore, strains were screened from nature, and B. licheniformis NY1505 was obtained. B.
licheniformis NY1505 efficiently reaches the intestine in the form of spores, then proliferates, produces a large amount of AGI, and exerts a physiological effect on the host that slows the rate of weight gain (Figs. 4, 5). On the other hand, how long the administered microbiota should adhere to and inhabit the intestine depends on how long the secondary metabolite needs to be supplied. In other words, it is preferable that the strain inhabit the intestine and produce the secondary metabolite only for the necessary period. As shown in Fig. 6, the number of B. licheniformis NY1505 cells excreted from the body decreases after administration of the strain is finished. B. licheniformis NY1505 resides transiently and is excreted in a relatively short time rather than permanently colonizing the intestine, so it can be applied for just the necessary period. Extracting, purifying, and drying physiologically active substances from fermented products of animals, plants, or microorganisms takes considerable time and cost (Zhu et al.). In this study, we demonstrated an efficient process of producing and supplying secondary metabolites directly in the intestine by administering a strain capable of proliferating there. AGI is a compound that acts inside the intestine, where this process is expected to work well. This study suggests that, by oral administration of microbiota capable of intestinal proliferation, the intestinal environment can be used as a factory to produce secondary metabolites. This can be a new supply and intake route for physiologically active substances.

Methods

Materials

Reagents such as N-methyl-N'-nitro-N-nitrosoguanidine (NTG), p-nitrophenyl α-D-glucopyranoside (pNPG), sodium propionate, potassium phosphate, and sodium carbonate were of guaranteed-reagent grade. The enzyme reaction was stopped by adding 100 µL of 0.1 M sodium carbonate, and the inhibition rate was obtained by substituting the absorbance at 405 nm into the following equation (Kim et al. 2005): inhibition rate (%) = (A − B)/A × 100, where A is the absorbance of the control and B is the absorbance of the sample containing the inhibitor. One unit of inhibition was defined as the amount of inhibitor giving 100% inhibition of the α-glucosidase used in the assay.

Animals

Experimental animals were bred in research facilities. Three-week-old ICR mice were used; after an acclimation period before the experiment, they were distributed equally by weight into groups of ten animals, with five per cage. Lighting was switched on a 12-hour cycle, and the temperature of the breeding room was kept at 21°C. The feed composition of each group is shown in Table 1; the high-carbohydrate and high-fat groups were each divided into spore-administered and non-administered subgroups. The spore-administration diet was prepared by directly mixing 10^8 spores per 1 kg of feed. During the breeding period, body weight was measured, feces were collected, and the number of microorganisms and the AGI activity excreted in the feces were measured at regular intervals. This study was approved by the Animal Care and Use Committee of Kangwon National University (permit no. KW-190103-11).

Statistical analysis

The results of each experiment are expressed as means with standard deviations (±SD).
A one-way analysis of variance (ANOVA) with Bonferroni correction (SPSS v.32 for Windows) was performed to compare the group means. Values were considered significant at P ≤ 0.05.

Figure 2 Chemical structure of AGI 1 and the enzyme inhibition type of AGI 1. (a) Chemical structure of AGI 1. (b) The inhibition type of AGI 1 is non-competitive inhibition by the Lineweaver-Burk plot. Symbols show the amount of AGI 1: 1140 µg, 570 µg (□), 228 µg (△); ○, no inhibitor.

Figure 3 Chemical structure of AGI 2 and the enzyme inhibition type of AGI 2. (a) Chemical structure of AGI 2. (b) The inhibition type of AGI 2 is competitive inhibition by the Lineweaver-Burk plot. Symbols show the amount of AGI 2: 48.9 µg, 32.6 µg (□), 16.3 µg (△); ○, no inhibitor.

Figure 4 Comparison of weight change between the high-carbohydrate diet groups and the standard diet group. □, standard diet group; ○, high-carbohydrate diet group; •, high-carbohydrate diet with NY1505 spores. Values were considered significant (*) at P ≤ 0.05; * indicates a significant difference between the high-carbohydrate diet groups with and without NY1505.

Figure 5 Comparison of weight change between the high-fat diet groups and the standard diet group. □, standard diet group; ○, high-fat diet group; •, high-fat diet with NY1505 spores. Values were considered significant (*) at P ≤ 0.05; * indicates a significant difference between the high-fat diet groups with and without NY1505.

Figure 6 Amount of NY1505 excreted in feces (CFU per 1 g). ○, high-carbohydrate diet with NY1505 spore group; •, high-fat diet with NY1505 spore group.
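Two short illustrations of the assay arithmetic and the inhibition-type diagnostics described above may be helpful. First, the published inhibition-rate formula can be wrapped in a small helper; this is my own sketch of that formula, and the absorbance numbers in the example are invented.

```python
# Percent inhibition in the pNPG-based alpha-glucosidase assay, from the
# 405 nm absorbances of the control (A) and the inhibitor-containing sample (B).
def inhibition_rate(a_control: float, b_sample: float) -> float:
    """Inhibition rate (%) = (A - B) / A * 100."""
    if a_control <= 0:
        raise ValueError("control absorbance must be positive")
    return (a_control - b_sample) / a_control * 100.0

# Example: control A405 = 0.82, sample A405 = 0.31  ->  ~62% inhibition
print(f"{inhibition_rate(0.82, 0.31):.1f} % inhibition")
```

Second, the two inhibition types reported for AGI 1 and AGI 2 have distinct double-reciprocal signatures, which the following toy computation illustrates (my own sketch with invented kinetic constants, not the paper's data): a competitive inhibitor, like AGI 2 (DNJ), leaves the 1/v intercept (1/Vmax) common to all inhibitor amounts, while a non-competitive inhibitor, like AGI 1, leaves the 1/[S] intercept (−1/Km) common.

```python
# Lineweaver-Burk signatures of competitive vs non-competitive inhibition.
import numpy as np

Vmax, Km, Ki = 1.0, 2.0, 0.5          # arbitrary illustrative constants
S = np.linspace(0.5, 10.0, 50)        # substrate concentrations

def v_competitive(S, I):
    return Vmax * S / (Km * (1 + I / Ki) + S)

def v_noncompetitive(S, I):
    return Vmax * S / ((1 + I / Ki) * (Km + S))

for name, v in [("competitive", v_competitive), ("non-competitive", v_noncompetitive)]:
    icpt_1v, icpt_1s = [], []
    for I in (0.0, 0.5, 1.0):         # inhibitor amounts
        slope, icpt = np.polyfit(1.0 / S, 1.0 / v(S, I), 1)  # double-reciprocal fit
        icpt_1v.append(icpt)          # 1/v-axis intercept = 1/Vmax_app
        icpt_1s.append(-icpt / slope) # 1/[S]-axis intercept = -1/Km_app
    print(name,
          "| common 1/v intercept:", np.allclose(icpt_1v, icpt_1v[0]),
          "| common 1/[S] intercept:", np.allclose(icpt_1s, icpt_1s[0]))
```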
3,644.2
2021-08-05T00:00:00.000
[ "Biology" ]
Study of neutrino oscillation parameters at the INO-ICAL detector using event-by-event reconstruction We present the reach of the proposed INO-ICAL in measuring the atmospheric-neutrino-oscillation parameters θ_23 and Δm²_32 using full event-by-event reconstruction for the first time. We also study the fluctuations arising from the low event statistics and their effect on the precision measurements and the mass-hierarchy analysis for a 5-year exposure of the 50 kton ICAL detector. We find a mean resolution of Δχ² ≈ 2.9, which rules out the wrong mass hierarchy of the neutrinos with a significance of approximately 1.7σ. These results on the mass hierarchy are similar to those presented in earlier studies that approximated the performance of the ICAL detector.

Introduction

In the Standard Model (SM), neutrinos are massless fermions which interact only via the weak interaction, through the exchange of W± or Z0 bosons. A series of experiments dedicated to neutrinos [1-9] have proved the existence of neutrino flavour oscillations, which implies that neutrinos are massive. The neutrino flavour states produced along with charged leptons are linear superpositions of the mass eigenstates. Due to the difference in phase between the wave packets of each of the mass eigenstates, neutrino oscillations occur [10,11]. The mixing angle θ_13 is found to be non-zero from reactor [6,8,9] and accelerator [15,16] neutrino oscillation experiments. The relatively large value of θ_13 ≅ 8.6° has intensified the search for CP-violation effects in neutrino oscillations, as well as the determination of the sign of Δm²_32 via matter effects [17,18]. Matter plays an important role in enhancing the effect of sin θ_13 via resonance, which is sensitive to the sign of Δm²_32 and is different for neutrinos and antineutrinos [19,20]. Determination of the sign of Δm²_32 would tell us the correct mass hierarchy (MH) of the neutrinos, i.e., whether the MH is normal (NH: m_1 < m_2 < m_3) or inverted (IH: m_3 < m_1 < m_2). A series of experiments with complementary approaches have been proposed using accelerator, reactor and atmospheric neutrinos to determine the MH [21,22]. The intermediate- and long-baseline, off-axis accelerator neutrino experiments T2K [5] and NOνA [23,24] search for the appearance of ν_e in an intense beam of ν_µ, wherein the appearance probability depends on the MH of the neutrino states.
Recent results from NOνA [25] have disfavoured the IH at more than 93% C.L. in the lower octant for all values of the CP phase δ, in good agreement with the T2K result [26], which gives a near-maximal value of θ_23 and a weak preference for NH. Liquid scintillator detectors proposed at RENO-50 [8] and JUNO [27] could unravel the MH using reactor neutrinos. Atmospheric neutrino experiments using water or ice Cherenkov detectors, such as Hyper-K [28,29], MEMPHYS [30], ORCA and PINGU [31,32], make use of the different cross-sections and the different ν and ν̄ fluxes to study the MH. The proposed magnetized Iron Calorimeter (ICAL), to be built at the India-based Neutrino Observatory (INO) [33], will study interactions involving primarily atmospheric muon neutrinos and antineutrinos. It will consist of three identical modules, each of dimension 16 m × 16 m × 14.5 m (length × width × height), placed in a line and separated by small gaps of 20 cm. Each module will consist of 151 layers of 5.6 cm thick iron plates interleaved with 4 cm air gaps containing the active detector elements, glass resistive plate chambers (RPCs). This huge magnetized detector, 48 m × 16 m × 14.5 m in size with a mass of 50 kton, provides enough target nuclei to achieve a statistically significant number of neutrino interactions within a reasonable time frame. One of the main goals of INO is to study the MH via Earth-matter effects, and to determine the octant of θ_23. ICAL is designed to have a very good muon detection efficiency of greater than 85% for muons above 2 GeV (with zenith angle cos θ_z ≥ 0.4), combined with excellent angular resolution. The ICAL will prominently observe the matter effects in the upward-going muon neutrinos, using the magnetic field to distinguish between ν_µ and ν̄_µ events by determining the charge of the final-state muon. The oscillation probabilities are different for ν_µ and ν̄_µ in the presence of matter, and depend on the MH (the sign of Δm²_32). The observed relative difference between the normal and inverted hierarchies is 3.9% (−5%) for ν_µ (ν̄_µ) for cos θ_z > 0.2, assuming NH to be the true hierarchy (where cos θ_z = +1 is the up-going direction). Hence the ICAL can study the MH by observing Earth-matter effects independently in ν and ν̄. This paper shows the precision reach of ICAL in the sin²θ_23 − |Δm²_32| plane for a 5-year run of ICAL, using event-by-event reconstruction and including the fluctuations arising from low event statistics, with an analysis technique that will be suitable for the actual data. In the previous studies [33], a lookup table for the reconstruction efficiencies and resolutions was obtained from studies based on single muons of fixed energy and direction; the generated data were then folded with the detector efficiencies and smeared by the resolution functions from the lookup table. Hence, the previous methods used parameterizations of the efficiency and resolution that do not reflect the tails of these distributions. Moreover, those methods used very large sample sizes to negate the effect of low statistics. We also apply a few event selection criteria, as presented in Ref. [34], but within the framework of low event statistics, and present their effect on the outcome of this analysis. This paper also compares the results with unfluctuated data, by simulating data sets corresponding to 5 years of data. This paper is organized as follows. In Sect.
2, we outline the methodology, describing the event detection in the INO detector and the software framework used to simulate and reconstruct the events in the detector. In Sect. 3, we describe the event generation and discuss the fluctuations in the data. We describe how the events are reconstructed, as well as the event selection criteria applied to obtain a sample of events. We also describe the oscillation analysis, including the Earth-matter effects, and discuss how the data collected by the ICAL are sensitive to the MH. We also describe the χ² analysis and the binning scheme used, and discuss the types of systematic uncertainties used in this analysis. In Sect. 4, we present the results of our simulated analysis, showing the reach of the ICAL for the atmospheric oscillation parameters θ_23 and Δm²_32. Event selection reduces the event statistics; hence we also present the results with and without event selection to see its effect. We also discuss the effect of fluctuations on the precision measurements and show the different possible outcomes resulting from the low event statistics. We also discuss the results on the MH of the neutrinos and the effects of fluctuations in determining it. Finally, in Sect. 5, we present the summary of our results and conclusions.

Methodology

The NUANCE [35] neutrino event generator, along with the Honda neutrino flux [36] at the Kamioka site, is used to generate neutrino interactions within the ICAL detector. The proposed ICAL geometry, containing mainly the iron and glass components of the detector, is given as input to NUANCE. It generates secondary particles from interactions with these materials, and calculates event rates integrated over the weighted flux and cross sections of all charged-current (CC) and neutral-current (NC) interactions, at each neutrino energy and angle. The output from NUANCE contains the vertex and timing information, as well as the energy and momentum of all initial- and final-state particles in each event. In the ICAL, atmospheric neutrinos will interact with iron nuclei, undergoing NC and CC interactions. The main CC interactions taking place in the detector are quasi-elastic (QE) and resonance (RS) scattering at low energies and deep-inelastic scattering (DIS) at higher energies. All neutrinos interacting via CC interactions produce an associated charged lepton. The DIS events produce a number of hadrons along with the lepton, while RS interactions produce at most one hadron. In the QE process, no hadrons are produced and the final-state lepton takes away most of the energy of the incident neutrino. A C++ code developed by the collaboration using the GEANT4-based [37] simulation toolkit, containing the full ICAL detector geometry, magnetic field map and RPC characteristics, is used to propagate the secondary particles. The output from GEANT4 contains the information on the energy loss and momentum of the particles all along their paths, which is then digitized and reconstructed to obtain the event observables; the details are discussed in Sect. 3.2. This paper is based only on the CC neutrino events, for which we consider the muon information alone. The information on the energy and direction of the muons is used to study the sensitivity to the atmospheric neutrino oscillation parameters θ_23 and Δm²_32 at INO-ICAL and to resolve the MH. Including hadron information is beyond the scope of this paper.

Analysis procedure

The first step in the procedure is event generation.
The generated events are reconstructed in the GEANT4-simulated ICAL, and oscillations are applied event by event after event selection. The oscillated events are binned and used in the χ² analysis to determine the oscillation parameters. Each of these procedures is described in detail in the subsections below.

Event generation

In this analysis, NUANCE data for an exposure of 50 kton × 1000 years are generated, out of which sub-samples corresponding to 5 years of data are used as the experimentally simulated sample, while the remaining 995 years of data are used to construct the probability distribution functions (PDFs) that are used in the χ² fit. Hence the data are uncorrelated with the PDFs that are used to fit them. This paper is based only on the CC neutrino events with energies less than 50 GeV, which corresponds to 98.6% of the sample. The idealized case, where the NUANCE data are folded with detector efficiencies and smeared by the resolution functions obtained from GEANT-based studies of single muons of fixed direction and energy, has been presented previously [38]. In the earlier analysis, although the data were analysed for an exposure of 5 or 10 years, they were scaled down from the 1000-year sample. Hence, the reconstructed central value was always practically the same as the input value. Here we examine in detail the more realistic case, where the data size and the central value are both subject to fluctuations.

Event reconstruction

The generated NUANCE data are simulated in the GEANT4-based detector environment. The energy deposited by the charged particles in the RPCs is converted to signals, which are detected by the mutually orthogonal copper strips (along the x and y directions, parallel to the global detector coordinates described in Sect. 3.3) on the RPCs. The measured data are thus digitized to form the (x, z) or (y, z) positions and the time t of the signal, referred to as hits. Here the z position is given by the layer number of the RPC. A recursive optimal state estimator, the Kalman filter [39,40], uses the local geometry and magnetic field information to fit the muon hits; muons passing through a minimum of three layers are fitted to form a track. The direction and the momentum of the muon are obtained from the best-fit values of the track. Also, the timing information from the RPCs, with a resolution of approximately 1 ns, enables the distinction between upward- and downward-going particles. More details can be found in Ref. [40]. Hence, for the first time, we have performed this analysis with event-by-event reconstruction, where each event is simulated through the detector and reconstructed to obtain the observables. Therefore, the tails of the resolution functions, which were approximated by single Gaussian and Vavilov [41] functions in the previous studies [33], are also taken into account in this analysis. The µ± leave one or two hits per layer on average, forming a well-defined track, whereas the hadrons leave several hits per layer, forming a shower of hits. Rarely (less than 1% of the time), a pion may also leave a well-defined track in the ICAL and be misidentified as a muon; in this case the longest track is identified as the muon. The iron plates will be magnetized to produce a field of up to 1.5 T, which will be used in the ICAL to probe the charge and momentum of the muon. The direction and the curvature of the muon trajectory, as it propagates through the magnetized detector, give its charge and momentum, respectively.
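The curvature-to-momentum step can be illustrated with the textbook relation p_T [GeV/c] ≈ 0.3 B[T] R[m] for a unit-charge track; the following is a generic back-of-the-envelope sketch of that relation, not the ICAL reconstruction code, and the numbers in the example are invented.

```python
# Transverse momentum of a unit-charge track bending with radius R in field B.
def pt_from_curvature(radius_m: float, b_tesla: float) -> float:
    """p_T [GeV/c] ~= 0.3 * B [T] * R [m] for charge |q| = e."""
    return 0.3 * b_tesla * radius_m

# e.g. a track of 4.4 m bending radius in a ~1.5 T field:
print(f"p_T ~ {pt_from_curvature(4.4, 1.5):.2f} GeV/c")   # ~2 GeV/c
```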
Figure 1 shows the zenith angle (θ_z) distribution before (true) and after reconstruction. Note that in the current analysis cos θ_z = +1 is the up-going direction. The energy of the hadrons is obtained by calibrating the number of hits not associated with the muon track in the event [42]. The incident neutrino energy (E_ν) can be reconstructed from the energies of the muons and hadrons produced in the detector. The poor energy resolution of the hadrons [42], however, affects the reconstruction of the incident neutrino. Hence, for ICAL physics analyses, the hadron and muon energies are used separately, so as not to lose the good energy and angular resolutions of the muons [33,38,43,44].

Event selection

The reconstruction of muons is badly affected by the non-uniform magnetic field and by dead spaces such as coil slots and support structures. Also, the horizontal events, which pass through very few layers and give very few hits, are reconstructed poorly. Partially contained events, where the µ± leaves the detector volume, are typically harder to reconstruct, as most of the time they leave only a short track within the detector. To remove these badly reconstructed events and obtain a better-reconstructed sample of data, we investigated applying selection cuts as used in the previous Monte Carlo (MC) studies [34]. Events with χ²/ndf < 10 are used in the analysis, where χ² is the chi-square estimate of the track fit obtained from the Kalman filter and ndf is the number of degrees of freedom. Here ndf = 2N_hits − 5, where the Kalman filter fits five parameters to form the track and N_hits is the number of hits associated with the track, each hit having two degrees of freedom since it is either an (x, z) or a (y, z) coordinate pair. The badly reconstructed horizontal events are removed by applying a cut of |cos θ_z| ≥ 0.35. Also, to keep a check on the events leaving from the top and bottom of the detector, a cut on the z position of the event vertex is applied: events with vertices lying below z = 6 m and above z = −6 m are selected from the up-going and down-going events, respectively. The entire ICAL detector was divided into three regions, depending on the magnitude of the magnetic field. Considering three modules of size 16 m × 16 m × 14.4 m each and choosing the origin at the centre of the central module, the ICAL conventionally extends 24 m, 8 m and 7.2 m on either side of the origin along the x, y and z directions, respectively. The region |x| ≤ 20 m, |y| ≤ 4 m, with z unconstrained, is defined to be the central region. Here the magnetic field is highest and uniform in magnitude (with ≈ 12% coefficient of variation), despite the fact that the direction of the magnetic field flips along y in the regions |x| < 4 m, 4 m ≤ |x| < 12 m and 12 m ≤ |x| < 20 m. In contrast, the region |y| > 4 m, termed the peripheral region, has a maximally varying magnetic field in both magnitude (with ≈ 28% coefficient of variation) and direction. Finally, the third region, |x| > 20 m and |y| ≤ 4 m, termed the side region, has a magnetic field smaller by ≈ 11% and opposite in direction to that of the central region. The side region has the most uniform magnetic field of the three regions (with less than 5% coefficient of variation). All the events with interaction vertices in the central region, and with N_hits > 0, are selected, as they are either contained within the detector or can form a reasonable length of track to identify the direction and momentum.
The rest of the events, in the peripheral and side regions, are classified into partially contained (PC) and fully contained (FC) events. The response of ICAL can be quantified in terms of the reconstruction efficiency (ε_rec), the relative charge identification (CID) efficiency (ε_cid), and the energy (σ_Eµ) and angular (σ_cosθz) resolutions. Figure 2a shows the comparison of these quantities with (WS) and without (WOS) event selection, as a function of the true muon energy E^true_µ. The reconstruction efficiency increases with muon energy, as energetic muons pass through a large number of layers; at higher energies it becomes almost constant. The resolution improves considerably at low and high energies with the selection of events: an overall improvement of 23% and 19% is seen in the energy and zenith-angle resolutions of the muons (averaged over muon energy, zenith and azimuthal angles) after the event selection. The CID efficiency, i.e., the fraction of events identified with the correct muon charge among the total reconstructed events, shows a ∼6% to 10% improvement at all muon energies after the event selection. The previous studies, without event-by-event reconstruction, obtained an angular resolution of less than 1° for near-horizontal events (cos θ_z = −0.35) with E_µ > 5 GeV [38], whereas it deteriorates to less than 8° after realistic reconstruction. Similarly, the CID efficiency was calculated to be above 95% for all events, whereas it deteriorates to between 60% and 80% after event-by-event reconstruction. The resolutions and efficiencies at low energies (E_µ < 5 GeV) are relatively worse, whereas the near-vertical events show good agreement with the previous results. The reconstruction efficiency decreases with event selection, as expected; approximately 40% of the reconstructed events are lost with event selection. Figure 2b shows the reconstructed cos θ_z distribution and compares the reconstructed events with (WS) and without (WOS) the selection criteria. The dip at cos θ_z = 0 results from the difficulty of reconstructing horizontal events. This paper also studies the effect of the applied event selection on the sensitivity to the oscillation parameters in ICAL (δ_CP is assumed to be zero). Hence, the parameter sensitivity is studied with and without applying any selection criteria.

Applying oscillations

The muon signal in the ICAL will have contributions from the component of the ν_e flux (Φ_νe) that has oscillated to ν_µ and from the component of the ν_µ flux (Φ_νµ) that has survived. As the 5-year pseudo-data would have a negligible contribution from Φ_νe compared to Φ_νµ, we have used only the Φ_νµ events in our analysis. Hence, neglecting oscillations from Φ_νe, the total number of events appearing in the detector for an exposure time T is obtained from

N = N_D T ∫ dE d(cos θ_z) Φ_νµ(E, cos θ_z) σ_CC(E) P_µµ(E, cos θ_z),

where N_D is the number of targets in the detector, σ_CC is the CC cross section and P_µµ is the ν_µ survival probability. Oscillation probabilities are calculated by numerically evolving the neutrino flavour eigenstates [19] using the equation

i (d/dt) [ν_α] = [ (1/2E) U M² U† + A ] [ν_α],

where [ν_α] denotes the vector of flavour eigenstates ν_α, α = e, µ, τ, U is the PMNS mixing matrix, and M² is the mass-squared matrix. Here A is the diagonal matrix diag(A, 0, 0), with the matter term A given by

A = ± 2√2 G_F n_e E,

where the sign is positive for ν and negative for ν̄. Here, G_F is the Fermi coupling constant, E is the neutrino energy in GeV, and n_e is the electron number density, which is related to the matter density ρ in g cm⁻³.
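To get a feel for the size of the matter term, the conventional numerical form A ≈ 7.63 × 10⁻⁵ ρ[g/cm³] E[GeV] eV² (valid for electrically neutral, isoscalar matter) can be compared with Δm²_31 cos 2θ_13; the sketch below and its resonance energies are my own illustrative numbers, not results of this analysis.

```python
# Where the MSW resonance condition A ~ dm2_31 * cos(2*theta13) is met for
# crust-, mantle- and core-like densities.
import math

def matter_term_ev2(rho_g_cm3: float, e_gev: float) -> float:
    """A [eV^2] ~= 7.63e-5 * rho [g/cm^3] * E [GeV] for neutral, isoscalar matter."""
    return 7.63e-5 * rho_g_cm3 * e_gev

dm2_31 = 2.4e-3                 # eV^2, illustrative
theta13 = math.radians(8.6)

for rho in (3.0, 4.5, 11.0):    # crust / mantle / core-like densities, g/cm^3
    e_res = dm2_31 * math.cos(2 * theta13) / (7.63e-5 * rho)
    print(f"rho = {rho:5.1f} g/cm^3 -> resonance near E ~ {e_res:4.1f} GeV")
```

The mantle-like density gives a resonance energy of a few GeV, which is exactly the range in which atmospheric-neutrino detectors such as ICAL are most sensitive to the sign of Δm²_32.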
The density profile of the Earth's matter, given by the Preliminary Reference Earth Model (PREM) [45], is used to calculate the oscillation probabilities for ν and ν̄. The difference in the sign of A for ν and ν̄ leads to differing oscillation probabilities, which in turn are sensitive to the sign of Δm²_32. The ICAL has an advantage here, because it can differentiate between ν and ν̄ events and observe the matter effects separately. Oscillations are applied to the 5-year data sample using the accept-or-reject method. First, the survival probability P_µµ is calculated for each ν or ν̄ with a given energy and direction. To decide whether an unoscillated ν_µ survives oscillations to be detected as a ν_µ, a uniform random number r is generated between 0 and 1. If P_µµ > r, the event is accepted to have survived the oscillations; otherwise, it is considered to have oscillated into another flavour and is rejected. Here we have used the true values of the oscillation parameters from Ref. [46]; see Table 2. The zenith angle distribution of muons before and after applying oscillations via the accept-or-reject method is shown in Fig. 3, where the effect of the change in the sign of Δm²_32 (MH) is shown for ν̄_µ (Fig. 3a, c) and ν_µ (Fig. 3b, d) events. It also compares the zenith angle distributions with (WS) and without (WOS) event selection. Note that the oscillation signatures are different in ν̄_µ and ν_µ events for the normal and inverted hierarchies. This difference is solely due to the matter effects and depends on the sign of Δm²_32, as we have assumed no CP violation. (It has been clearly established that CC µ events in the ICAL are insensitive to δ_CP [33].) Hence the ν_µ events are separated from the ν̄_µ events while binning, to have the maximum sensitivity to the MH.

Binning scheme

During reconstruction, the positively charged particles are tagged with positive momentum and the negatively charged particles with negative momentum, by convention. The reconstructed muons of positive and negative charge are binned separately in Q_µE_µ and cos θ_z bins after applying oscillations, where Q_µ = ±1 for µ+/µ−. The events with negative and positive Q_µE_µ are those identified as ν_µ and ν̄_µ events, respectively. The atmospheric neutrino flux falls rapidly at higher energies; hence, wider bins were chosen in those energy regions to ensure adequate statistics. Also, within the framework of low event statistics, increasing the number of high-energy bins is not feasible. Table 3 summarises the binning scheme used in the current analysis. The effect of finer binning was studied previously [38] for energies less than 11 GeV and is known to marginally improve the precision in both sin²θ_23 and |Δm²_32|. Also, increasing the range of energies beyond 11 GeV is known to improve the result [47], and increasing the number of high-energy bins can improve the precision. The optimization of the bin widths at higher energies will be part of future work; the current analysis focuses on the effects of fluctuations arising from the low event statistics.

The χ² analysis and systematics

The pull approach [48] is used in defining the χ², such that the systematic uncertainties are incorporated. The pull approach is equivalent to the covariance approach, but is computationally much faster.
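The accept-or-reject assignment described earlier in this section is easy to sketch. The following toy is my own illustration: a two-flavor vacuum P_µµ stands in for the full three-flavor matter probability, and the event kinematics are invented, but the per-event logic (one uniform random number compared against P_µµ) is the one quoted above.

```python
# Per-event accept-or-reject application of oscillations.
import math
import random

def p_mumu_vacuum(e_gev: float, l_km: float,
                  sin2_2theta23: float = 1.0, dm2_ev2: float = 2.32e-3) -> float:
    """Toy two-flavor vacuum survival probability (stand-in for P_mumu)."""
    return 1.0 - sin2_2theta23 * math.sin(1.267 * dm2_ev2 * l_km / e_gev) ** 2

random.seed(42)
# invented (energy [GeV], baseline [km]) pairs for 100k unoscillated nu_mu events
events = [(random.uniform(1, 20), random.uniform(500, 12000)) for _ in range(100_000)]

survived = [(e, l) for (e, l) in events if p_mumu_vacuum(e, l) > random.random()]
print(f"{len(survived)} of {len(events)} nu_mu events survive oscillations")
```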
The χ² analysis and systematics

The pull approach [48] is used in defining the χ² so that systematic uncertainties are incorporated; it is equivalent to the covariance approach but computationally much faster. After binning the oscillated events, the 5-year simulated data set is fit by defining the pulled χ² of Refs. [38,49], in which N^data_ij and N^pdf_ij are the observed and expected numbers of muon events in a given (cos θz_i, Eμ_j) bin, while n_cosθz and n_Eμ are the total numbers of cos θz and Eμ bins respectively. N^data_ij is calculated for the true values of the oscillation parameters, summarised in Table 2, whereas N^pdf_ij is obtained by combining the νμ and ν̄μ PDFs as in Eq. 5, where T^ν_ij and T^ν̄_ij are the ν and ν̄ PDFs respectively, normalized from the 995-year sample; f is a free parameter describing the relative fraction of ν̄μ and νμ in the sample, and R is the normalization factor in the fit which scales the PDF to 5 years. The ν̄μ and νμ PDFs with (WS) and without (WOS) the selection criterion are shown in Fig. 4a, b. The systematic errors and the theoretical uncertainties are parametrized in terms of variables {ξ_k} called pulls. The value ξ_k = 0 corresponds to the expected value, and the variation ξ_k = ±1 corresponds to one standard deviation for each source of systematics, resulting in an uncertainty π^k_ij for the k-th source. In this analysis we have considered two systematic uncertainties: a 5% uncertainty on the zenith angle dependence of the flux and another 5% on the energy-dependent tilt error [49], parametrized by ξ^flux_zenith and ξ^flux_tilt respectively. There is no systematic uncertainty related to the flux normalization, as R and f are fit parameters which fix the overall and relative flux normalizations. To calculate the energy tilt error, i.e., the possible deviation of the energy dependence of the atmospheric fluxes from a power law, we use the standard procedure given, for example, in Ref. [49]: neglecting the effect of oscillations, the expected number of events Φ0(E) is calculated for each (ij)-th bin; we then compute the tilted flux Φδ(E) = Φ0(E)(E/E0)^δ, where δ is the 1σ tilt error taken to be 5% and E0 = 2 GeV, and find the relative change in flux to obtain the coupling π^tilt_ij. The coupling π^zenith_ij in each bin is calculated in proportion to the zenith angle value of that particular bin. The parameters in the fit are marginalized as given in Table 4, where f and R are always marginalized over the given ranges. The parameters sin²θ13, sin²θ12 and Δm²21 had minimal effect when marginalized and hence were kept constant in the fit, without any prior constraint.

Parameter determination

The fluctuated pseudo-data set is first fit to determine sin²θ23, marginalizing over |Δm²32|, for an input value of sin²θ23 = 0.5. The comparison of Δχ² with (WS) and without (WOS) event selection is shown as a function of sin²θ23 in Fig. 5a, where the octant degeneracy in sin²θ23, which stems from the leading term sin²2θ23 in the oscillation probability, is broken due to the relatively large value of θ13 = 8.5°. Hence, the asymmetrical curve in sin²θ23 shows the effect of matter oscillations in breaking the octant degeneracy. The significance of the fit, i.e., how far the observed (best-fit) value lies from the parameter's true (input) value, is quantified by comparing Δχ²_input and Δχ²_min, the Δχ² values at the true and observed values of the parameter respectively.
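A condensed sketch of the pulled χ² and the tilt coupling is given below. The Gaussian form of the χ², the power-law tilt relation and the use of a generic numerical minimiser are assumptions consistent with the standard procedure of Ref. [49], not the paper's exact expressions; all names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

E0_GEV, DELTA = 2.0, 0.05            # reference energy and 1-sigma tilt (5%)

def pi_tilt(E):
    # Tilted flux Phi_delta(E) = Phi_0(E) * (E/E0)**delta, so the relative
    # change (the per-bin coupling) is (E/E0)**delta - 1.
    return (E / E0_GEV) ** DELTA - 1.0

def chi2_pulled(N_data, N_pdf, couplings):
    """Minimise over the pulls xi_k:
    sum_ij (N_pdf_ij*(1 + sum_k pi_k_ij*xi_k) - N_data_ij)^2 / N_data_ij
      + sum_k xi_k^2.
    couplings: list of per-bin arrays pi_k_ij, one per systematic."""
    def objective(xi):
        shift = sum(x * p for x, p in zip(xi, couplings))
        N_exp = N_pdf * (1.0 + shift)
        return np.sum((N_exp - N_data) ** 2 / N_data) + np.dot(xi, xi)
    res = minimize(objective, np.zeros(len(couplings)), method="Nelder-Mead")
    return res.fun
```

Because the pulls are profiled out inside the χ², adding a systematic costs one extra minimisation dimension rather than a full covariance inversion, which is why the pull approach is much faster.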
The fit to sin²θ23 without event selection converges to a value of 0.586 (+0.060/−0.093) with a significance of 0.86, i.e., within 1σ of the input value, whereas the fit after event selection converges to 0.676 (+0.063/−0.072), within 2σ of the input value. The fit to sin²θ23 with event selection shows relatively larger uncertainty in the 2σ and 3σ ranges. The comparison of Δχ² with and without event selection is shown as a function of |Δm²32| in Fig. 5b, where the data are fit to determine |Δm²32|, marginalizing over sin²θ23, for an input value of Δm²32 = 2.32 × 10⁻³ eV². The fit without event selection converges to a value of (2.38 +0.11/−0.39) × 10⁻³ eV², within 1σ of the input value with a significance of 0.51, whereas the fit after event selection converges to (2.184 +0.23/−0.37) × 10⁻³ eV², also within 1σ of the input value. The fit to |Δm²32| likewise shows relatively larger uncertainty in the 2σ and 3σ ranges after applying event selection. The multiple local minima in the Δχ² function are due to the statistical uncertainty of the PDF, and they are observed to diminish when fitting PDFs constructed from larger MC samples. The free parameter f converges to 0.26 ± 0.01 and 0.27 ± 0.01 before and after applying the selection, whereas the parameter R converges to 5901 ± 79 and 3628 ± 62 respectively, i.e., within 2% uncertainty. Adding prior constraints on sin²θ13 and Δm²21 was observed to make no difference to the fit results or to the coverage in the sin²θ23-Δm²32 plane. The fit with event selection shows larger coverage at 99% CL. Note that the data sets are fluctuated; hence many different sets were also studied, and all showed larger coverage for the fits with event selection. To decouple the effect of fluctuations from the effect of event selection, a 1000-year sample was scaled to the required 5-year size to suppress the fluctuations. Figures 7a, b show the comparison of Δχ² obtained with and without event selection as a function of sin²θ23 and |Δm²32| respectively for the fit without fluctuations. The fits converge near the input value and show relatively worse precision after the event selection, as expected. The deterioration of precision in determining sin²θ23 and Δm²32 after the event selection is due to the larger statistical uncertainty, as the sample size was reduced by 40%. Hence, the rest of the analysis in this paper mainly focuses on the fits without event selection and on the effects of fluctuations.

Effect of fluctuations

Earlier analyses [33] scaled the 1000-year sample to a size corresponding to 5 years to generate the pseudo-data set. The process of scaling nullifies the effect of fluctuations, so that the best fit is always close to the input value; the parameter sensitivities are then to be understood as median values averaged over a large number of randomly generated samples. In order to see this, we generate an unfluctuated 5-year sample by scaling the 1000-year set, and a similar analysis is performed as in Sect. 3. The comparison of Δχ² with (WF) and without (WOF) fluctuations is shown as a function of sin²θ23 and |Δm²32| in Fig. 8a, b. (Figure 9 caption: Significance of convergence obtained from sixty different data sets. The solid (blue) and broken (orange) lines represent the significance of the fits to sin²θ23 and Δm²32 respectively, and the dotted (black) line shows the significance obtained for the simultaneous fit to sin²θ23 and Δm²32.)
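The contrast between unfluctuated (scaled) and fluctuated pseudo-data can be sketched as below. Drawing Poisson counts per bin is a simplification: the paper fluctuates at event level through independent 5-year samples; the function name is illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def pseudo_data(binned_1000yr_counts, scale=5.0 / 1000.0, fluctuate=True):
    """Build a 5-year pseudo-data set from 1000-year MC bin counts.

    Scaling alone gives the unfluctuated (WOF) set, in which the best fit
    sits near the input value by construction; Poisson-drawing each bin
    mimics the statistical fluctuations (WF) studied here.
    """
    mean = np.asarray(binned_1000yr_counts, dtype=float) * scale
    return rng.poisson(mean) if fluctuate else mean
```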
Note that three fits with fluctuations (WF:1, WF:2 and WF:3) from three independent fluctuated data sets are used in the comparison, and each of them differs in the parameter sensitivities and the best-fit values. The fit to sin²θ23 for the data without fluctuations gives a smaller uncertainty in the lower octant, and the fluctuations in the data lead to fluctuations in the octant sensitivity (see Fig. 8a). The uncertainties in the parameter determination also change, along with the significance of convergence, from one independent fluctuated pseudo-data set to another. The correlated precision is likewise observed to change with each fluctuated data set, with different coverages in the sin²θ23-Δm²32 plane. The analysis was repeated for sixty different fluctuated data sets, performing separate (one-parameter) and simultaneous (two-parameter) fits to determine sin²θ23 and |Δm²32|. Figure 9 shows the significance of convergence in terms of standard deviations σ. Almost 68% of the time, the fit to sin²θ23 converges within 1σ of the input value sin²θ23 = 0.5. A similar trend is observed in the fit to Δm²32, where 59% of the time it converges within 1σ of the input value Δm²32 = 2.32 × 10⁻³ eV². Also, 95% of the time the fit converges within 2σ, which evidently shows the Gaussian nature of the fit. The simultaneous fit to sin²θ23 and Δm²32 shows similar behaviour in the significance. Figure 10 shows the average coverage area at 99% CL in the sin²θ23-Δm²32 plane, obtained by averaging the coverages from the simultaneous fits to fifty different pseudo-data sets. The orange band signifies the 1σ uncertainty in calculating the average, where the asymmetrical widths from the best-fit point of each data set were used. The precision reach for the fit without fluctuations is within 1σ of the calculated average coverage area. Previous studies [33,47] obtained a better precision in sin²θ23 and |Δm²32|, and quantified the precision on these parameters as precision = (P_max − P_min)/(P_max + P_min), where P_max and P_min are the maximum and minimum values of the concerned parameter determined at the given C.L. Figure 11 compares the Δχ² obtained from the previous [47] and the current analysis methods. It is to be noted that we used the same binning scheme and the same set of NUANCE data in both analysis methods for comparison, but the previous method incorporates smearing with resolution functions whereas the current method incorporates event-by-event reconstruction. The precision in sin²θ23 for the previous method is 19.4% at 1σ, whereas it is worse for the current method (see Fig. 11a). The parameter |Δm²32| shows similar behaviour, where the precision deteriorates from 5.9% to 12.9% at 1σ for the current method (see Fig. 11b). The drop in precision for the current method is more pronounced in |Δm²32| and is clearly seen in a 30% difference in precision at 3σ. The noted difference in precision is due to the realistic approach of the event-by-event reconstruction, in which the tails of the resolution functions, approximated in the previous studies, have been included. In the previous methods, the NUANCE data were folded with the detector efficiencies and smeared with the resolution functions obtained from GEANT-based studies of single muons of fixed energy and direction [38].
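With the precision definition reconstructed above, a 1σ precision can be read off a scanned Δχ² curve as in this sketch; the dense grid scan and the Δχ² ≤ 1 criterion for a one-parameter interval are assumptions.

```python
import numpy as np

def precision_percent(param_grid, delta_chi2, level=1.0):
    """Precision = 100*(P_max - P_min)/(P_max + P_min), where P_max/P_min
    bound the region with Delta-chi^2 <= level (1.0 for 1 sigma, one
    parameter).  Assumes a dense scan of the Delta-chi^2 curve."""
    grid = np.asarray(param_grid, dtype=float)
    inside = grid[np.asarray(delta_chi2) <= level]
    return 100.0 * (inside.max() - inside.min()) / (inside.max() + inside.min())
```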
Mass hierarchy determination

The 5-year pseudo-data set is oscillated via the accept-or-reject method assuming NH (IH), and is then fit with both the true NH (IH) and the false IH (NH) PDFs. The parameters in the fit are marginalized as given in Table 4, and the Δχ² resolution that differentiates the correct hierarchy from the wrong one is defined as Δχ²MH = χ²_false − χ²_true, where χ²_true and χ²_false are the minimum values of χ² from the true and false fits respectively. Figure 12a shows the true and false hierarchical fits to sin²θ23 for a particular pseudo-data set with fluctuations, wherein the resolution Δχ²MH = 7.2 rules out the wrong hierarchy with a significance greater than 2σ for this set. The procedure was repeated for sixty independent 5-year fluctuated data sets to see the effect of fluctuations on the mass hierarchy significance. Figure 12b shows the distribution of Δχ²MH obtained from the fits to the sixty sets. The mean resolution Δχ²MH = 2.9 rules out the wrong hierarchy with a significance of ≈1.7σ for a 5-year run of the 50 kton ICAL detector. The large uncertainty in Δχ²MH is due to the fluctuations in the data, and negative values signify identification of the wrong mass hierarchy. Note that an earlier analysis that excluded the effect of fluctuations [47] gave a value Δχ²MH = 2.7, and our mean value is compatible with this, as expected, given minor differences in the analysis procedures. A 13-year exposure of ICAL would give a 3σ separation of the correct MH. The classical statistical analysis used here is the same as that in the earlier works by the INO collaboration, hence allowing a direct comparison; however, using a Bayesian approach [50] could improve the hierarchy sensitivity.

Discussion and summary

One of the main aims of the proposed ICAL at INO is to measure the atmospheric neutrino oscillation parameters sin²θ23 and |Δm²32|, and also to measure the mass hierarchy (MH) of neutrinos. The moderately large value of θ13 and the ability of the magnetised ICAL to distinguish a neutrino event from an antineutrino event allow the observation of Earth matter effects separately in ν and ν̄ and help identify the MH of neutrinos. In this analysis we focus on the precision measurements and the mass hierarchy resolution that ICAL could attain within a period of 5 years. Incorporating a realistic analysis procedure, we have for the first time applied event-by-event reconstruction and considered the tails of the angular and energy resolutions, which were approximated by single Gaussian and Vavilov functions in previous such studies [33]. We show that incorporating non-Gaussian resolutions affects the parameter sensitivities: a reduction in precision of 4% and 7% is obtained for sin²θ23 and Δm²32 respectively at 1σ. Also for the first time, we study the effect of low event statistics on the precision and MH measurements by introducing fluctuations in the data. For the first time within the framework of low event statistics, we also show the effect of the event selection criterion on the parameter sensitivities, and show that all reconstructed muons can be included to obtain better parameter sensitivity. Hence, within the framework of low event statistics, we show that the fit without any selection criterion (WOS), in which we include all the reconstructed events, is the baseline for obtaining better constraints on the parameters. We start with a 5-year fluctuated pseudo-data set and apply oscillations via the accept-or-reject method.
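The ensemble treatment of the hierarchy test can be sketched as below. Quoting √(mean Δχ²MH) as an approximate significance and counting negative values as wrong-hierarchy outcomes reproduces the numbers quoted in the text (2.9 → ≈1.7σ, roughly 15% wrong-MH fraction), but the helper itself is illustrative.

```python
import numpy as np

def mh_significance(chi2_true_list, chi2_false_list):
    """Delta-chi^2_MH = chi^2_false - chi^2_true per pseudo-experiment;
    the ensemble mean is quoted as the MH resolution, sqrt(mean) as an
    approximate significance in sigma, and the fraction of negative values
    as the probability of picking the wrong hierarchy."""
    d = np.asarray(chi2_false_list) - np.asarray(chi2_true_list)
    mean = d.mean()
    sigma = np.sqrt(max(mean, 0.0))
    wrong_mh_fraction = np.mean(d < 0.0)   # negative => wrong MH preferred
    return mean, sigma, wrong_mh_fraction
```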
The oscillated data are binned in Eμ and cos θz for the χ² analysis, where we have used the energy and direction information of the reconstructed muons. The constraints on sin²θ23 and Δm²32 are compared with and without the event selection criterion. Statistically, we lose 40% of the events after selection; hence we find larger uncertainties in the parameter determination after applying event selection. We use an ensemble of independent fluctuated data sets to study the effect of low event statistics on the precision measurements of the oscillation parameters. The constraints on sin²θ23 and Δm²32 are compared with and without fluctuations, and we find reasonable agreement between the unfluctuated precision reach and the average fluctuated precision reach in the sin²θ23-Δm²32 plane. As far as the mass hierarchy of the neutrinos is concerned, we find a mean resolution of Δχ²MH = 2.9 from an ensemble of sixty experiments. This rules out the wrong hierarchy with a significance of ≈1.7σ, consistent with the earlier analysis obtained without considering fluctuations. We also find a significant spread around the mean value of Δχ²MH, and roughly a 15% probability of obtaining the wrong hierarchy due to the fluctuations in the data. In the near future, combining the CP-independent measurements at ICAL with the measurements from NOνA, T2K and other reactor experiments will help to determine the correct MH [22,51]. This paper presents an analysis procedure which can be used on the real ICAL data, where the fluctuations are inbuilt, as the PDFs are uncorrelated. However, in this analysis we have only used muon information from CC νμ events and we have ignored the small contribution from νe to νμ oscillated events, which is known to slightly dilute the sensitivities [19]. The ICAL can also measure the hadron energy via proper calibration of hits, and including the hadron energy information in CC events is expected to improve the sensitivity of the detector to the oscillation parameters and also to improve the MH significance [44]. Note that there are CC νe as well as neutral current (NC) events in the detector. Separation of νμ CC events from the others is quite robust for Eμ ≳ 1 GeV and has been discussed elsewhere [33]. Separation of low-energy CC νμ events from CC νe and NC events is an ongoing effort of the INO-ICAL collaboration. A combined analysis including all the CC events along with the hadron information will give the maximum sensitivity the ICAL can attain, and is likely to improve the results presented in this paper.
9,779
2019-04-01T00:00:00.000
[ "Physics" ]
Study of Noether symmetry analysis for a cosmological model with variable G and Λ gravity theory

In the present model we discuss the solution of the matter-dominated cosmological equations using the expression Λ = Λ(G). Using Noether symmetry analysis we analyze both the classical and the quantum cosmology. We make a suitable point transformation in order to render one variable cyclic, so that the evolution equations become simpler to solve. The Wheeler–DeWitt (WD) equation is constructed for this cosmological model, and using the conserved charge we find its solution. Finally, the solutions are analyzed from the cosmological point of view.

Introduction

Based on the observational data of the last fifteen to twenty years, cosmologists have reached a broad consensus on the concordance cosmological model [1] - a cosmological paradigm based on general relativity with a cosmological constant Λ. This model describes not only the early formation of large-scale structures but also the present era of accelerated expansion. At present, two different types of unknown matter are well established from the observational point of view, and together they constitute more than 95% of the matter-energy around us. These components are termed dark matter (DM) and dark energy (DE). Dark matter is invisible but, like visible matter, attractive in nature. It is speculated to be present in and between the galaxies, and it successfully describes the rotation curves of spiral galaxies [2] (for an alternative viewpoint see Ref. [3]). Dark energy (Λ being the simplest choice), on the other hand, is supposed to be the major ingredient of the cosmic matter [4], accounting for the present accelerating era. Although there are many physical dark energy models in the literature, none of them is fully satisfactory from both the theoretical and the observational point of view. As there is no strong basis (theoretical or experimental) for these hidden matter components, several alternative ways of accommodating the above cosmological and astrophysical issues exist. One such possible physical theory deals with a variable cosmological constant together with a variable gravitational coupling. Such a theory describes the cosmological dynamics by analyzing the renormalization-group-induced quantum version [5-13] of the theory, with nonperturbative renormalization due to non-Gaussianity - quantum Einstein gravity [14]. In the cosmological context, the inherent infrared divergence of quantum Einstein gravity implies the dynamical nature of the cosmological constant [15]. The basic aim of the present work is to determine analytic solutions of the above variable-G, variable-Λ cosmological model using the Noether symmetries of the field equations. The idea of applying Noether symmetry to cosmological models is not at all new; there is a considerable body of work in the literature [16,17]. The key idea of this approach is geometric [18,19]: in particular, the homothetic vectors of the kinetic metric of a first-order Lagrangian are the Noether point symmetries, i.e., the determination of Noether point symmetries reduces to a problem of differential geometry. Mathematically, the first integral associated with a Noether symmetry can also be considered a tool to simplify a system of differential equations or to determine the integrability of the system.
In the context of quantum cosmology, symmetry analysis plays a major role in determining solutions of the Wheeler-DeWitt (WD) equation. Noether symmetries act as a bridge between quantum cosmology and the classically observable Universe. In fact, Noether symmetries provide a subset of the general solutions of the WD equation, giving oscillatory behaviours with suitable physical meaning [20-22]. Moreover, the criterion due to Hartle is associated with Noether symmetries to identify typical classical trajectories satisfying the cosmological evolution equations [22,23]. The plan of the paper is as follows: the basic equations are presented in Sect. 2; Noether point symmetry is applied to the present model and classical cosmological solutions are obtained in Sect. 3; the quantum cosmology is formulated, and the solution of the WD equation evaluated, in Sect. 4; finally, the paper ends with a brief summary of the whole work in Sect. 5.

Basic equations of the model

In quantum Einstein gravity with a homogeneous and isotropic space-time geometry, if G is treated as an independent dynamical variable then at the classical level the theory reduces to metric-scalar gravity. On the other hand, taking both G and Λ to be independent variables leads to a pathological situation: the momentum conjugate to Λ vanishes. The preservation of this primary constraint then leads to a vanishing lapse function, i.e., a collapse of the space-time geometry. Hence it is reasonable to assume a generic functional dependence Λ = Λ(G). In the present model the point-like Lagrangian has the explicit form (1) of Ref. [24], where an overdot indicates the derivative with respect to the cosmic time t, G is a function of t and Λ = Λ(G), while μ is a non-vanishing interaction parameter. L_m, the Lagrangian for the matter field, is chosen for a perfect fluid as −D a^{−3(γ−1)}, where γ (the equation-of-state index) is a constant and D is a suitable integration constant. Here the lapse function is N = 1 and the shift vector N^i = 0. The Euler-Lagrange equations for a and G then follow, together with the Hamiltonian constraint, which is equivalent to the constraint on the energy function associated with the Lagrangian. The kinetic metric corresponding to the Lagrangian (1), with its effective potential, can be read off directly. It is to be noted that the Noether point symmetries of the system are generated by the elements of the homothetic group of this kinetic metric, since the Lagrangian is of point-particle form. Further, the kinetic metric can be rewritten in a form conformal to the flat 2D Minkowskian geometry, and hence possesses a four-dimensional homothetic Lie algebra with elements (a) the gradient homothetic vector l1 = a∂a and (b) three Killing vectors which span the E(2) group.
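The derivation of Euler-Lagrange equations from a point-like Lagrangian of this kind can be mechanised symbolically. The sketch below uses a stand-in Lagrangian (FRW-type kinetic term, a G-kinetic coupling weighted by μ, the Λ(G) potential, and the perfect-fluid term −D a^{−3(γ−1)}); this is NOT the paper's Eq. (1), whose explicit coefficients are not reproduced in the text, and every coefficient here is an assumption.

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
a, G = sp.Function('a')(t), sp.Function('G')(t)
mu, D, gamma = sp.symbols('mu D gamma', positive=True)
Lam = sp.Function('Lambda')          # Lambda = Lambda(G), kept generic

# Stand-in point-like Lagrangian (placeholder for the paper's Eq. (1))
L = (a * sp.diff(a, t)**2 / G
     + mu * a**2 * sp.diff(a, t) * sp.diff(G, t) / G**2
     - a**3 * Lam(G) / G
     - D * a**(-3 * (gamma - 1)))

# Euler-Lagrange equations for a(t) and G(t), cf. Eqs. (2)-(3)
for eq in euler_equations(L, [a, G], t):
    sp.pprint(eq)
```

The same machinery applies to the true Lagrangian once its coefficients are inserted; only the resulting equations, not the method, would change.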
Noether symmetry and classical cosmological solutions

Noether's first theorem states that every differentiable symmetry of the action of a physical system with conservative forces has a corresponding conservation law. For solving the field equations we use the Noether symmetry approach. In the present model the point-like Lagrangian is given by (1). The existence of a Noether symmetry demands that there exist a vector-valued function F(t, a, G) such that condition (8) holds, where X^[1] is the first prolongation (9) of the vector field X defined in (10), and D_t, the total derivative operator, is given by (11). From (8), using (1) and (9)-(11), we obtain a set of partial differential equations. Now, if one considers ζ as a function of t only, then ζ takes the form ζ(t) = c1 t + c2, and α, β are independent of t. To solve this set of differential equations we use the method of separation of variables, where α0 and β0 are arbitrary constants. We also obtain μ = 6, and the value of γ is either 1 or 0. Putting γ = 1 in (12) we get a solution in which Λ0 is a strictly positive integration constant; similarly, putting γ = 0 in (12) we get a solution with a positive integration constant.

Now we want to make a point transformation (a, G) → (u, v) such that u becomes a cyclic coordinate, so the symmetry vector X should satisfy the corresponding conditions, where i_X is the inner product operator of X. Solving these equations we obtain u, and v = ln aG (26).

Case I: γ = 1. The transformed Lagrangian takes the form (27). Using the Euler-Lagrange equations we get Eqs. (28) and (29); solving them we obtain u and v, where A, B, C and E are arbitrary constants. Hence the explicit cosmological solutions take the form (32)-(33), where a1, t1, c1 and d1 are (arbitrary) constants constructed out of the integration constants A, B, C and E. Here t = t1 is the big-bang singularity (assuming c1 t1 + d1 > 0). So the above solution describes an early era of evolution, and it is supported by the choice γ = 1, i.e., a stiff fluid. Figures 1, 2 and 3 show the evolution history graphically. From the figures it is found that the scale factor gradually increases from the big-bang singularity and becomes enormously large as t → ∞. The Hubble parameter gradually decreases, and the Universe is in an accelerated era of expansion throughout the evolution. The gravitational parameter blows up both at the big-bang and as t → ∞, reaching a finite minimum at some finite time. Eq. (21) shows that the variable cosmological constant Λ vanishes at the big-bang, then increases to a maximum and finally approaches zero again. However, throughout the evolution ΛG² remains a finite constant.

Case II: γ = 0. The transformed Lagrangian takes the form (34). Using the Euler-Lagrange equations we obtain solutions in which A′, B′, C′ and E′ are arbitrary constants. As γ = 0 corresponds to the dust era of evolution, by choosing the above constants suitably one may write the explicit cosmological solution in a form where the evolution starts from a time t > −t0 (with −c0 t0 + d0 > 0) with a finite value of the scale factor, and the scale factor blows up as t → ∞. Figures 4, 5 and 6 show the evolution of the Universe diagrammatically. The solution represents the evolution of the Universe from the matter-dominated era to the present late-time accelerated evolution; thus the present model may be considered an alternative to the dark energy model. Due to Eq. (22), both Λ and G show similar behaviour: both have a finite value at the beginning and then gradually blow up to infinity. Further, the above expressions for the scale factor show that the improper integral in the expression for the particle horizon does not converge; hence the particle horizon does not exist for the present model, for either γ = 0 or γ = 1.

The minisuperspace approach in quantum cosmology

Usually, in superspace the symmetries are characterized by the metric and the matter fields. Restrictions of the geometrodynamics of superspace, on the other hand, are termed minisuperspaces. In the cosmological context, both physically relevant and interesting models are normally defined over minisuperspace.
The simplest minisuperspace models obey the cosmological principle: one has homogeneous and isotropic metrics with homogeneous matter fields. As a consequence, the lapse function is a function of t alone and the shift vector vanishes. Using this 3+1 decomposition, the Einstein-Hilbert action takes the explicit form (42), where k_ab stands for the extrinsic curvature, k = k_ab q^ab is its trace, (3)R is the usual 3-space curvature and Λ represents the cosmological constant. Now, due to the homogeneity of the three-space, the metric q_ab corresponds to a finite number (n) of functions q^α(t), α = 0, 1, ..., n − 1, and consequently (42) reduces to an action with μ_αβ the metric on the minisuperspace. This action is that of a relativistic point particle with a self-interacting potential V(q), moving in an n-dimensional spacetime with metric μ_αβ. Thus the equation of motion of the particle (obtained by variation of the metric f_αβ) is a second-order differential equation, and the motion of the point particle is constrained (through variation of the action with respect to the lapse function) by the differential equation

(1/2N²) f_αβ q̇^α q̇^β + V(q) = 0. (45)

(Note that the particle motion is described by 2n − 1 free parameters.) In the Hamiltonian formulation, the canonical momenta corresponding to the dynamical variables are defined in the usual way, so the canonical Hamiltonian can be written down; writing the constraint equation (45) in terms of the momentum variables gives the Hamiltonian constraint (48). In the course of canonical quantization one makes the transformation p_α → −i ∂/∂q^α, so that the Hamiltonian constraint (48) becomes the Wheeler-DeWitt equation. The time-independent function ψ(q^α) is termed the wave function of the Universe. In the construction of the WD equation there is an ambiguity related to factor ordering, which may be avoided by demanding that the quantization in minisuperspace be covariant (i.e., invariant under transformations q^α → q′^α(q^α)). As the WD equation is a second-order hyperbolic partial differential equation, for the probability measure in quantum cosmology one may consider the conserved probability current, and the appropriate probability measure on the minisuperspace then follows, with dV a volume element on minisuperspace. Further, using the WKB approximation one may write the wave function as ψ(x^k) ∼ e^{iS(x^k)}, whereupon the WD equation transforms into a first-order Hamilton-Jacobi equation. In the following we consider the two cases (γ = 0, 1) separately.

Case I: γ = 1. From (27), the momenta associated with the variables u and v can be written down, the Hamiltonian follows, and the WD equation takes the corresponding form. The operator version of the conserved momentum separates the u-dependence, with ψ0 a constant of integration; the WD equation then reduces to Eq. (57), with k1 = 3D/(i4π p0) and k2 = 3Λ0/(i32π² p0). Solving Eq. (57), we obtain the wave function, with ψ01 a constant. From the solutions (32) and (33) we see that, as the Universe approaches the big-bang singularity, a/G → 0 while aG approaches a finite non-zero constant. So near the big-bang singularity the above wave function is purely oscillatory, with finite amplitude and frequency. Thus there is a finite probability for the big-bang singularity in the present model with a stiff fluid. The graphical representation of the wave function is shown in Fig. 9.
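The oscillatory behaviour can be checked numerically. Since the reduced equation (57) is not reproduced above, the sketch below integrates a generic oscillator-type stand-in for the v-equation obtained after separating the cyclic u-dependence with the conserved momentum; p0, k1 and k2 are placeholders, not the paper's values.

```python
import numpy as np
from scipy.integrate import solve_ivp

p0, k1, k2 = 1.0, 0.8, 0.5   # conserved momentum and constants: placeholders

# After the separation psi(u, v) = exp(i*p0*u) * phi(v) suggested by the
# conserved charge, the WD equation reduces to an ODE in v alone.  A
# stand-in of the form  phi'' + (k1 + k2*exp(2*v)) * phi = 0  is used here;
# for v -> -infinity it oscillates with bounded amplitude, matching the
# near-singularity behaviour described in the text.
def rhs(v, y):
    phi, dphi = y
    return [dphi, -(k1 + k2 * np.exp(2.0 * v)) * phi]

sol = solve_ivp(rhs, (-6.0, 2.0), [1.0, 0.0], dense_output=True, rtol=1e-8)
v = np.linspace(-6.0, 2.0, 9)
print(np.round(sol.sol(v)[0], 4))   # oscillatory, finite amplitude at small v
```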
As the probability is constant near the big-bang singularity, the no-boundary proposal is not valid there. Further, it is to be noted that there is no (curvature) divergence even at the singularity, due to the running of Newton's constant.

Case II: γ = 0. From (34), the momenta can be written down, the Hamiltonian follows, and the WD equation takes the corresponding form. The operator version of the conserved momentum again separates the cyclic dependence, with φ0 a constant of integration; the WD equation then reduces to Eq. (65), with k3 a constant analogous to k2 above. Solving Eq. (65) we obtain the wave function, up to a further constant of integration. The wave function of the Universe has a form similar to that of the previous case. As this case does not correspond to the early epoch of evolution, its quantum cosmology is of less interest.

Brief summary

The present work deals with a complicated cosmological model in which the Newtonian gravitational coupling is no longer a constant and the cosmological constant is a function of this coupling parameter. Due to the non-linear coupled field equations it is hard to infer any physical prediction from the model directly; as a result, Noether symmetry analysis has been applied. By obtaining the Noether symmetry vector, it is possible to make a point transformation in the augmented space so that one variable becomes cyclic; in the resulting solution Λ vanishes at small t while G diverges, but the product ΛG² approaches a constant. Further, the Noether symmetry analysis plays a crucial role in studying the quantum cosmology: the conserved charge associated with the Noether symmetry identifies the oscillatory part of the wave function, and consequently the WD equation simplifies to a great extent. The graphical representation of the wave function (shown in Fig. 9) indicates that the quantum cosmological description favours the occurrence of the big-bang singularity at the beginning.

Data Availability Statement: This manuscript has no associated data, or the data will not be deposited. [Authors' comment: Not applicable for this manuscript.]
3,997
2022-12-13T00:00:00.000
[ "Physics" ]
Tilde’s Parallel Corpus Filtering Methods for WMT 2018

The paper describes parallel corpus filtering methods that allow reducing the noise of noisy “parallel” corpora from a level where the corpora are not usable for neural machine translation training (i.e., the resulting systems fail to achieve reasonable translation quality, scoring well below 10 BLEU points) to a level where the trained systems show decent quality (over 20 BLEU points on a 10 million word dataset and up to 30 BLEU points on a 100 million word dataset). The paper also documents Tilde’s submissions to the WMT 2018 shared task on parallel corpus filtering.

Introduction

Parallel data filtering for statistical machine translation (SMT) has proven to be a challenging task. Stricter filtering does not always yield positive results (Zariņa et al., 2015). This phenomenon can be explained by the higher robustness of SMT systems to noise: it does not harm the model if there are some incorrect translation candidates for a word or a phrase, as long as the majority are still correct. However, there are also positive examples where data filtering allows improving SMT translation quality (Xu and Koehn, 2017). Neural machine translation (NMT), on the other hand, is much more sensitive to noise present in parallel data (Khayrallah and Koehn, 2018). From our own experience (and as shown by the experiments below), stricter filtering allows NMT models to train faster and reach higher overall translation quality. In this paper, we describe Tilde's methods for parallel data filtering for NMT system development and Tilde's submissions to the WMT 2018 shared task on parallel data filtering. The paper is further structured as follows: Section 2 describes the data used in the filtering experiments, Section 3 provides details on the filtering methods that were applied to filter the parallel corpus of the shared task, Section 4 describes the NMT experiments performed to evaluate the different filtering methods, Section 5 discusses the evaluation results, and Section 6 concludes the paper.

Data

The parallel data filtering experiments were performed on a German-English corpus provided by the WMT 2018 organisers. The corpus was a raw deduplicated subset of the German-English ParaCrawl corpus. It consists of one billion words and 104,002,521 sentence pairs. For filtering, we require source-to-target and target-to-source probabilistic dictionaries. The dictionaries for the WMT 2018 experiments were acquired by 1) performing word alignment of the parallel corpora from the WMT 2018 shared task on news translation (excluding the filtered ParaCrawl corpus) using fast_align (Dyer et al., 2013), and 2) filtering the raw probabilistic dictionaries using the transliteration-based probabilistic dictionary filtering method of Aker et al. (2014).

Filtering Methods

Although the shared task required scoring sentence pairs rather than filtering invalid sentence pairs out of the dataset, we start by filtering sentence pairs out of the raw corpus, after which we score each sentence pair and produce the scored output for submission. In order to filter the rather noisy "parallel" corpus, we use a combination of pre-existing parallel data filtering methods from the Tilde MT platform (Pinnis et al., 2018) and methods specifically developed to address the noisy nature of the ParaCrawl corpus. Some of the filtering methods feature hyperparameters, which were set empirically in parallel corpus filtering experiments.
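The probabilistic dictionaries described in Section 2 pair each source word with translation probabilities estimated from word alignments. A minimal sketch of that estimation step follows; the 'i-j' pair format is fast_align's actual output convention, but the helper itself is hypothetical, uses plain relative frequencies, and omits the transliteration-based filtering of Aker et al. (2014).

```python
from collections import Counter, defaultdict

def probabilistic_dictionary(bitext, alignments):
    """Estimate p(target | source) by relative frequency from word
    alignments ('i-j' = 0-based source/target token index pairs)."""
    pair_counts = defaultdict(Counter)
    src_counts = Counter()
    for (src, tgt), align in zip(bitext, alignments):
        s_toks, t_toks = src.split(), tgt.split()
        for link in align.split():
            i, j = map(int, link.split('-'))
            pair_counts[s_toks[i]][t_toks[j]] += 1
            src_counts[s_toks[i]] += 1
    return {s: {t: c / src_counts[s] for t, c in counts.items()}
            for s, counts in pair_counts.items()}

# Example: one aligned sentence pair
print(probabilistic_dictionary([("das haus", "the house")], ["0-0 1-1"]))
# {'das': {'the': 1.0}, 'haus': {'house': 1.0}}
```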
The first group of filters was originally developed to increase SMT system quality. The filters are applied in the following order (for statistics of each individual filtering step, refer to Table 1):

1. Identical source and target sentence filter - validates that the source sentence and the target sentence of a sentence pair are not identical. Although a sentence may well translate into the same sentence, identity is also a strong indicator of non-translated sentence pairs.
2. Sentence length ratio filter - validates that the longest sentence (in terms of characters) is less than three times longer than the shortest sentence. This filter is meant to identify partially translated sentences. Note, however, that it has been tested only for language pairs with Latin-based, Cyrillic-based and Greek alphabets.
3. Maximum sentence length filter - validates that neither the source nor the target sentence is longer than 1000 characters.
4. Maximum word length filter - validates that neither the source nor the target sentence contains tokens that are longer than 50 characters and do not contain directory separator characters. When extracting data from, e.g., PDF or image files, word boundaries may not be captured correctly, which results in overly long words; this filter removes such sentence pairs.
5. Maximum word count filter - validates that neither the source nor the target sentence contains more than 400 tokens.
6. Unique sentence pair filter - validates that a sentence pair is unique. The shared task organisers claimed that deduplication had been performed; however, this filter removes all white-space and punctuation marks, replaces all digit sequences with a numeral placeholder, and lowercases the sentence before validating the uniqueness of a sentence pair, and is therefore able to identify more redundant data.
7. Foreign word filter - validates that the source sentence contains only words written in the alphabet of the source language and that the target sentence contains only words written in the alphabet of the target language.

The filtering steps that had originally been developed for SMT systems removed a total of 48,887,553 sentence pairs, leaving 55,114,968 sentence pairs in the corpus. As NMT systems have been shown to be more sensitive to noise (Khayrallah and Koehn, 2018), the Tilde MT platform implements additional filtering steps that are stricter than the previous filters. Together with the parallel data noise, these filters may also remove valid sentence pairs; however, as shown by the results in Section 5, the amount of parallel data is less important than its quality. The following additional filtering steps are used when preparing data for NMT systems (a condensed code sketch of several of the rule-based checks follows the filter lists below):

1. Empty sentence filter - validates that neither the source nor the target sentence is empty (or contains only white-space characters) after decoding HTML entities.
2. Token count ratio filter - validates that the token count ratio of the shortest and the longest sentence is greater than or equal to 0.3 (in other words, if one sentence has more than three times as many tokens as the other, the sentence pair is considered invalid).
3. Corrupt symbol filter - validates that neither the source nor the target sentence contains words with question marks between letters (e.g., 'flie?en' instead of 'fließen', 'gr??ere' instead of 'größere', etc.). Such words indicate encoding corruption in the data; therefore, sentences containing them are deleted.
4. Digit mismatch filter - validates that all digits found in the source sentence can also be found in the target sentence (and vice versa). Although this filter removes all sentence pairs where numbers written in digits have been translated into numbers written in words, it is effective for 1) identifying sentence breaking issues caused by incorrect handling of punctuation marks (e.g., cardinal numbers in some languages are written with the full stop character), and 2) identifying non-parallel content. By ensuring numeral writing consistency in the parallel data, we also ensure that the NMT systems will always translate digits as digits and numbers written in words as words.
5. Invalid character filter - validates that neither the source nor the target sentence contains characters that have been shown to indicate encoding corruption. As most potentially corrupted sentence pairs are captured by the foreign word filter and the corrupt symbol filter, this filter provides just a minor addition - the list of invalid characters not included in valid alphabets consists of just four characters. Nevertheless, this minor addition invalidates over 600 thousand sentence pairs.
6. Invalid language filter - validates that the source sentence is written in the source language and the target sentence in the target language, using a language detection tool (Shuyo, 2010). As language detection tools tend not to work well on shorter segments, this filter is applied only if the content overlap score (see below) between the source and target sentences is below a trustworthy content alignment threshold (set to 0.3 in the experiments) and the longest (source or target) sentence is at most two times longer than the shortest.
7. Stricter sentence length ratio filter - validates that the longest sentence (in terms of characters) is less than two times longer than the shortest sentence.
8. Low content overlap filter - validates that the content overlap according to the cross-lingual alignment tool MPAligner (Pinnis, 2013) is above a threshold. Because the content overlap metric produced by MPAligner represents the level of parallelism, it is used to score sentence pairs; therefore, the threshold was set to a low value (0.01).

Thus far, a total of 74,809,736 sentence pairs had been removed from the corpus, leaving 29,192,785 sentence pairs. When training NMT systems on the subsampled datasets, we identified frequent (wrong) many-to-many alignments left in the corpus even after filtering. We also found that the corpus contained many entries with text in both languages on one side (i.e., a translation where some of the source words are translated but the majority is simply copied over from the source segment and left untranslated), which contributes to parallel data noise. Therefore, we introduced two additional filters that address these issues:

1. Non-translated sentence filter - validates that more than half of the source words have been translated (i.e., are not present in the target sentence).
2. Maximum alignment filter - keeps only those sentence pairs where the target sentence is the highest-scored target sentence for the source sentence (according to the content overlap scores) and vice versa.
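The promised condensed sketch of several of the rule-based checks is below. Thresholds (1000 characters, 50-character tokens, 400 tokens, the stricter 2:1 character ratio, the half-translated criterion) are the ones quoted in the text; keep_pair is a hypothetical helper, and alphabet tables, language detection and MPAligner scoring are omitted.

```python
import re

def keep_pair(src, tgt):
    """Return True if the pair passes a simplified subset of the filters."""
    if not src.strip() or not tgt.strip() or src == tgt:
        return False                                    # empty / identical
    lo, hi = sorted((len(src), len(tgt)))
    if hi > 1000 or hi >= 2 * lo:                       # length / strict ratio
        return False
    s_toks, t_toks = src.split(), tgt.split()
    if max(len(s_toks), len(t_toks)) > 400:             # maximum word count
        return False
    if any(len(w) > 50 for w in s_toks + t_toks):       # maximum word length
        return False
    if re.search(r'\w\?\w', src) or re.search(r'\w\?\w', tgt):
        return False                                    # corrupt symbols
    if sorted(re.findall(r'\d', src)) != sorted(re.findall(r'\d', tgt)):
        return False                                    # digit mismatch
    copied = sum(w in t_toks for w in s_toks) / len(s_toks)
    return copied < 0.5                                 # non-translated filter
```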
After all filtering steps, 13,748,432 sentence pairs were left in the Max Filtered+ corpus. In order to compare whether the full filtering workflow produces better results than a part of the workflow, we also prepared the following intermediate datasets:

1. Filtered - the corpus filtered up to and including the low content overlap filter.
4. Max Filtered+ Rescored - the corpus filtered using all filters and rescored by ranking sentences with a Round-robin-based method according to source sentence length. That is, all sentence pairs were separated into different lists according to sentence length and sorted by content overlap score in descending order. Sentences were then ranked by assigning the highest score to the best-scored unigram sentence, the second-highest score to the best-scored bigram sentence, etc. We performed such rescoring because the filtering assigned higher scores to shorter segments, thereby skewing the sentence length statistics towards shorter sentences. The dataset consists of 16,529,684 sentence pairs.

In each of the datasets (except for the Max Filtered+ Rescored dataset), sentence pairs were scored using the content overlap metric produced by MPAligner. In order to create scores for the raw dataset (i.e., to create submissions for the shared task), we scored each sentence pair in the raw dataset as follows: if a sentence pair was found in a particular filtered dataset, it was scored using the score produced by MPAligner (or the rescoring method); otherwise the sentence pair received the score '0'. This means that all sentence pairs filtered out by any of the filtering steps received the score '0'.
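The Round-robin rescoring used for the Max Filtered+ Rescored dataset, described above, can be sketched as follows. The bucketing by source token count and the descending-score buckets follow the description in the text, while the concrete scoring scheme (consecutive integer ranks) is an assumption.

```python
from collections import defaultdict
from itertools import zip_longest

def round_robin_rescore(pairs):
    """pairs: iterable of (src, tgt, overlap_score).  Returns (src, tgt,
    new_score) so that every length class contributes its best pair before
    any class contributes its second-best."""
    buckets = defaultdict(list)
    for src, tgt, score in pairs:
        buckets[len(src.split())].append((score, src, tgt))
    ordered = [sorted(buckets[k], reverse=True) for k in sorted(buckets)]
    out, next_score = [], len(list_len := [p for b in ordered for p in b])
    for row in zip_longest(*ordered):        # one "round" per iteration
        for item in row:
            if item is not None:
                out.append((item[1], item[2], next_score))
                next_score -= 1
    return out
```

This deliberately counteracts the short-sentence bias of the content overlap scores: within every round, each sentence-length bucket gets exactly one slot.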
Trained Systems

To evaluate which of the datasets allows achieving higher translation quality, we subsampled the filtered datasets into 10 million and 100 million word datasets. For this, we used the subselect.perl script provided by the organisers in the dev-tools package. Then, we trained attention-based NMT systems with gated recurrent units in the recurrent layers using the Marian toolkit (Junczys-Dowmunt et al., 2018). All systems were trained until convergence using the configuration provided in the same package. In addition to the filtered dataset systems, we trained four baseline systems. The first two baseline systems were trained on datasets subsampled using the Hunalign (Varga et al., 2007) scores provided by the organisers; for the other two systems, data subsampling was performed on randomly assigned scores. The NMT system training progress (in terms of BLEU scores on the raw tokenised development set) is depicted in Figure 1. The figure shows that, for the small dataset systems, only the systems with the non-translated sentence filter were able to achieve results of over 20 BLEU points. All other systems show rather poor performance, indicating the necessity of careful data cleaning. It is also evident that the Filtered and Max Filtered datasets contain too much noise among the highest-scored sentence pairs. This is because the content overlap filter (by design) does not check whether a sentence pair is a reciprocal translation: it tries to identify, just like a word alignment tool, which words in the source sentence correspond to which words in the target sentence, and non-translated words can be paired easily. Although for the large dataset systems the Filtered and Max Filtered datasets contain higher levels of noise (compared to the more heavily filtered datasets), they show comparable (though lower) results; the fact that these datasets are approximately 10 times larger than the smaller ones allowed higher-quality sentence pairs to be included in the data sub-selected for NMT system training. The figure also shows an interesting tendency for the Max Filtered+ Rescored dataset: in both experiments (10 million and 100 million word systems) the quality increases at the beginning but then starts to drop - very noticeably for the small system and slightly for the large system.

Results

Automatic evaluation results in terms of BLEU (Papineni et al., 2002) scores are provided in Table 2. For all systems, we used the 'test.sh' script provided by the organisers to translate the test set and evaluate each model's translation quality. The evaluation results show the same dataset rankings as the training progress chart; the best results are achieved using the Max Filtered+ dataset. We were also interested in whether the filtering methods (by improving parallel data quality) also improve the out-of-vocabulary (OOV) word rates on the development set. It is evident from Table 2 that the OOV rate decreases as more filtering steps are added. However, there is one exception: the translation quality of the NMT systems trained on the Max Filtered+ Rescored dataset decreases although the OOV rate drops (especially when calculated over unique tokens). There may be multiple explanations for this quality decrease. For instance, for the smaller (10 million word) dataset, the rescoring introduced a higher percentage of lower-quality sentence pairs because the frequency of longer sentences is naturally lower than that of shorter sentences. E.g., there are 746,480 English sentences of five tokens, compared to just 2673 sentences of 80 tokens, in the Max Filtered+ dataset (which was used to acquire the rescored dataset). This means that the rescoring method was forced to select lower-quality longer sentence pairs simply because there were insufficient sentence pairs to select from. For the larger dataset, the results also show that the running OOV rate is slightly larger than the unique-token OOV rate. The issue with the limited number of longer sentences also affected the larger system, as the sub-sampled dataset included all sentence pairs of 42 or more tokens regardless of their quality. For future work, it could be beneficial to investigate whether a fixed content overlap threshold would allow the rescoring method to perform better. For the WMT 2018 shared task, we submitted the following three datasets:

1. tilde-isolated (Filtered+) - this dataset represents isolated sentence filtering, where only individual sentence pairs are passed to the filtering method.
2. tilde-max (Max Filtered+) - this dataset represents full corpus filtering, where (in addition to the filtering results for a particular sentence pair) information about other sentence pairs is also used to decide whether to keep a sentence pair.
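For clarity, the two OOV variants compared in Table 2 can be computed as in the small helper below; computing them over whitespace tokens is an assumption, since the paper does not state the tokenisation used for the OOV counts.

```python
def oov_rates(train_tokens, dev_tokens):
    """Running-token vs unique-token OOV rates of a dev set w.r.t. the
    training vocabulary (illustrative helper, not the paper's tooling)."""
    vocab = set(train_tokens)
    oov_running = sum(t not in vocab for t in dev_tokens) / len(dev_tokens)
    dev_types = set(dev_tokens)
    oov_unique = len(dev_types - vocab) / len(dev_types)
    return oov_running, oov_unique
```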
Conclusion

The paper presented parallel corpus filtering methods that allow reducing the noise in noisy "parallel" corpora to a level where the corpus is usable in neural machine translation system development. Most of the filtering methods are simple (except for the low content overlap filter) and do not require any machine learning methods to be implemented (except for the invalid language filter). We showed that applying stricter filtering methods increases NMT system quality. For the WMT 2018 shared task on corpus filtering, we submitted three scored datasets that represent isolated sentence filtering (Filtered+), full corpus filtering (Max Filtered+), and (a rather simple method for) full corpus filtering with data selection (Max Filtered+ Rescored). The filtering methods are integrated into the Tilde MT platform and serve its users when they require SMT and NMT system training. For future work, it may be beneficial to perform ablation experiments to identify which of the individual filtering methods contributes most to acquiring a higher-quality parallel corpus.
4,011
2018-10-01T00:00:00.000
[ "Computer Science" ]
A new detector for sub-millisecond EXAFS spectroscopy at the European Synchrotron Radiation Facility

The design and performance of the new sub-millisecond detector for time-resolved X-ray absorption spectroscopy at ID24 at the ESRF are described.

Introduction

The development of third-generation synchrotrons, with their insertion devices and X-ray optics, is constantly improving the characteristics of the X-ray beams available for research in terms of photon flux, emittance, coherence and spectral properties. Various types of time-resolved studies are becoming more and more common in modern synchrotron-based science. They can be divided into those using several repetitions of reproducible events and those recording a single event. As an example of the first type of experiment one can mention X-ray diffraction in a pulsed laser-heated diamond-anvil cell (Goncharov et al., 2010). For this type of experiment the detector does not necessarily need a high repetition rate; however, the exposure time must be short enough to obtain a high time resolution. Furthermore, there is no limitation on the signal intensity, since the required statistics can be obtained by summing multiple repeated measurements. In many cases a high repetition rate is also important: for example, in experiments in pulsed magnetic fields (Strohm et al., 2012), where measurements at different times during the field pulse are desired. An example of the second type of experiment is the shock-compression study of matter, where the sample is destroyed after a single shock event (Gupta et al., 2012). For this type of measurement, both a high repetition rate and a short detector exposure time are required in order to follow the changes in the sample with time. In addition, the detector sensitivity and the photon flux should be high enough to provide sufficient statistics for each individual spectrum or measurement in a sequence. In many situations the limitations in time-resolved studies are no longer imposed by the photon flux, but rather by the detector (which can be limited by its sensitivity, readout or data-transfer time). The recently upgraded X-ray absorption beamline ID24 at the ESRF is dedicated to fast time-resolved and extreme-conditions X-ray absorption spectroscopy (XAS) studies and demands a fast, low-noise, high-dynamic-range X-ray position-sensitive detector. X-ray absorption spectroscopy is a powerful tool for studying the electronic and magnetic properties, local structure and valence state of a specific element in ordered (crystalline) as well as disordered (amorphous or liquid) and heterogeneous systems. The energy-dispersive spectrometer on ID24 has a huge advantage for fast spectral collection, since the whole spectrum is collected at once, without the need for scanning or moving any X-ray optical element, providing very high spatial and temporal beam stability (Pascarelli et al., 2006). For the needs of ID24 and other ESRF beamlines, a special fast-readout low-noise (FReLoN) detector has been developed. The first incarnation of the FReLoN camera for fast time-resolved studies was presented more than five years ago (Labiche et al., 2007) and was based on a two-dimensional 2048 × 2048 pixel CCD chip. Here we present a new, improved version of the FReLoN detector, based on a linear CCD array, which has a much faster readout and higher repetition rate and can also accept a wider X-ray beam, allowing full exploitation of the improved capabilities of the new ID24 beamline, as described below.
Geometrical constraints for detection using an energy-dispersive spectrometer

In the energy-dispersive scheme used at ID24, as illustrated in Fig. 1, a curved crystal focuses a polychromatic fan of X-ray radiation onto the sample position, introducing at the same time a correlation between the photon energy and the direction of propagation; p and q are, respectively, the source-crystal and crystal-focal-spot distances. The angle covered by the polychromatic fan impinging on the sample is Δθ_fan = L/q, where L is the horizontal dimension of the beam intercepted by the polychromator (L ≃ 50 mm on ID24). The full spectral range diffracted by the crystal, ΔE, is proportional to the variation of the Bragg angle along the beam footprint on the crystal, Δθ (equation 1), where E0 is the central energy. On ID24, the maximum value of the ratio ΔE/E0 ranges from 8% at the lowest energy (5 keV) to 20% at high energies (>12 keV). The transmitted beam is detected on a position-sensitive detector where energy is correlated with position. The angular acceptance of the detector, Δθ_acc = A/d, where A is the horizontal aperture and d the detector-to-focal-plane distance, determines the portion of the total energy range that is detected: ΔE_detected/ΔE = Δθ_acc/Δθ_fan (equation 2). It is particularly important to detect the full spectral range (ΔE_detected/ΔE = 1) at low energies, where the energy bandwidth of the polychromator is less than 10%. On the upgraded version of ID24, efforts were made to reduce q as much as possible in order to increase ΔE [increasing Δθ, from equation (1)], leading to a maximum Δθ_fan = 70 mrad. This implies that, for the detector to intercept the full diffracted polychromatic fan, Δθ_acc also needs to reach a maximum of 70 mrad. The point spread function Δr (a function of the horizontal pixel size), together with the distance d of the detector from the focal point, determines the angular acceptance of each pixel, δθ_detector = Δr/d (equation 3), which is proportional to the detector contribution δE_detector to the energy resolution of the measured spectrum. Equations (2) and (3) show that, in order to cover the required energy range ΔE_detected without compromising energy resolution, it is important to have a detector with a large horizontal aperture A. The criteria for the choice of A and d were based on keeping δE_detector smaller than the other contributions to the total energy resolution: the intrinsic energy resolution of the polychromator (determined by the Darwin width) and the contribution from the finite source size. At low energies, the dominant contribution to the energy resolution derives from the intrinsic energy resolution of the chosen polychromator. For example, when working at the Fe K-edge (7.1 keV), the energy resolution given by the intrinsic polychromator resolution is about 0.7 eV, while the energy resolution defined by the pixel size is about 0.2-0.3 eV. The δE_detector contribution to the energy resolution is highest at energies around 12-13 keV; however, it is always smaller than the intrinsic polychromator resolution. Because of the horizontal X-ray beam dispersion, the detected spectral range ΔE_detected, the contribution δE_detector to the energy resolution and the number of photons per pixel can all be adjusted by varying d, i.e., by moving the whole detector along the beam propagation direction, closer to or further from the focal spot.
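To make the geometry concrete, the sketch below evaluates the quantities in equations (1)-(3) for one plausible operating point. All input numbers are assumptions chosen inside the ranges quoted in the text, and the per-pixel energy width simply rescales the crystal bandwidth by the pixel's share of the fan.

```python
# Illustrative numbers only (assumed, within the quoted ranges)
L, q, d = 50e-3, 0.8, 1.6            # crystal footprint and distances [m]
A, pixel = 100e-3, 14e-6             # detector aperture and pixel width [m]
E0, bandwidth = 7.1e3, 0.08          # central energy [eV], crystal dE/E0

dtheta_fan = L / q                   # polychromatic fan angle
dtheta_acc = A / d                   # detector angular acceptance, Eq. (2)
frac = min(1.0, dtheta_acc / dtheta_fan)            # fraction of dE detected

dtheta_pix = pixel / d               # per-pixel acceptance, Eq. (3)
dE_pixel = E0 * bandwidth * dtheta_pix / dtheta_fan  # energy width per pixel

print(f"fan = {1e3*dtheta_fan:.1f} mrad, acceptance = {1e3*dtheta_acc:.1f} mrad")
print(f"detected fraction = {100*frac:.0f}%, dE per pixel = {dE_pixel:.2f} eV")
```

With these inputs the per-pixel energy width comes out below 0.1 eV, consistent in order of magnitude with the 0.2-0.3 eV pixel contribution quoted for the Fe K-edge.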
This distribution is defined by the emission spectra of the undulators, and by the throughput of all X-ray optics (mirrors and polychromator) and windows of the beamline. By changing the undulator gaps one can adjust the intensity distribution along the spectrum. Typically, the intensity varies by 30-40% over the whole energy range. Optical design of the new FReLoN The X-ray photons are detected indirectly: first, the X-rays are absorbed in a scintillator screen and converted to visible light (Fig. 2). The visible light is then detected by a charge-coupled linear array device. This scheme presents several advantages: the scintillator screen material can be chosen depending on the specific X-ray energy range and the required time resolution, as different scintillator materials have different sensitivities and after-glow characteristics; the screen can be easily replaced if deteriorated by the X-ray beam; the silicon CCD camera operates at its maximum quantum efficiency (visible photon range); and the camera chip is protected from possible damage through direct X-ray beam exposure. The optical scheme of the FReLoN detector is shown in Fig. 2. This new version of the detector with newly designed optics can accept a fan of X-rays (1) with a horizontal width up to 100 mm. X-rays enter the detector from a vacuum beam flight-tube attached to the entrance flange (2). The scintillator screen (3) is mounted at 45° with respect to the plane of the incident X-ray fan. A metallic mask (4) with a slot placed just in front of the scintillator screen prevents scattered X-rays from exciting unwanted fluorescence. Visible light (5) is collected by a demagnifying tandem lens system (6) consisting of a large-field custom objective on the scintillator side and a wide-aperture Zeiss Planar 1.4/85 ZF objective on the camera side, providing an ultra-sharp and undistorted image over the entire field of view. [Figure 1 caption: The optical scheme of the energy-dispersive XAS spectrometer. On beamline ID24 the source is a demagnified image of the undulator source (i.e. a secondary source). The source-to-crystal distance p is equal to 22 or 31 m (depending on the chosen beamline branch), q = 0.6-1.2 m and d = 1.2-2.4 m.] A motorized iris aperture (7) allows one to adjust the amount of light transmitted to the camera. Since each X-ray photon can excite several visible photons, it is sometimes important to reduce the number of visible photons to achieve a total quantum efficiency of the system close to unity, to prevent oversaturation and to control the exposure time precisely. The FReLoN camera (8) is connected on the top, with easy access to all the interface cable connectors and the water-cooling tubes for the Peltier element and the thermal stabilization of the electronic boards inside (see Labiche et al., 2007, for more details). For precise remote detector alignment on the beamline, two additional motorized movements (using stepper motors) are implemented in the detector: a tilt of the CCD camera around its optical axis for exact matching of the camera orientation to the image on the screen, and a motorized focus of the FReLoN objective. The modular design makes it very simple to change either the scintillator screen or the camera, if necessary, within a few minutes. Linear CCD FReLoN camera: a multi-kilohertz linear detector The basic FReLoN platform is made of a camera head and a data acquisition board, both linked by a serial-line fiber-optic cable and a power supply unit.
To meet the objectives above, a new linear CCD image sensor S11156 from Hamamatsu, consisting of an array of 2048 pixels of 14 µm × 1000 µm, has been integrated into the well known FReLoN camera platform previously developed at the ESRF (Fig. 3). A signal processing technique has been implemented based on the digital correlated double sampling (DCDS) method (see below). The CCD is integrated into a vacuum chamber containing the thermoelectric cooler, which maintains the CCD temperature at 256 K. The dark current is then reduced to a few electrons per pixel per second, allowing exposures from microseconds to a few seconds without significant excess noise. Extreme care has been taken to ensure the shortest possible wires between the CCD chip and the clock drivers and signal preamplifiers. This was a prerequisite to cope with both fast rise-time, high-current clocks and low-level wide-bandwidth signals from the CCD output. The S11156-2048 chip utilizes a resistive gate structure that allows high-speed transfer with an 'on chip' electronic shutter function, offering significantly reduced image lag (less than 0.1%), even though the pixel height is large. With their back-thinned structure, these CCDs also offer a high sensitivity (>80% quantum efficiency) from the UV to the near-IR region of the spectrum. The FReLoN camera is always ready for a new exposure, without any dead time before starting a new integration. The jitter time of the synchronization is ±12 ns and the accuracy of the integration width is ±0.5 ns. This low level of jitter (in ns) compared with the short exposure time (in µs) allows accurate acquisition time windows even in single-shot data acquisition. [Figure 3 caption: The S11156 Hamamatsu sensor integrated in the new FReLoN camera.] [Figure 2 caption: The optical scheme of the new position-sensitive detector (partially cut view): 1 is the flat horizontally diverging X-ray beam, 2 is the vacuum tube connection flange, 3 is the fluorescence screen for the conversion of X-rays to visible photons, 4 is the metallic slot mask for the fluorescence screen, 5 is the visible-light cone accepted by the wide-angle optics (6), 7 is the motorized iris aperture assembly, and 8 is the FReLoN camera housing.] The plot in Fig. 4 shows EXAFS spectra collected on a pure Cu foil; the noise in these spectra is dominated not by the counting statistics but rather by the fact that I_0 (the intensity distribution in the incoming beam) and I_1 (the intensity distribution after the sample) are not recorded simultaneously, while the X-ray beam structure changes slightly at high frequency. These intensity variations can be averaged out by taking more spectral accumulations. In the uniform filling mode of the storage ring, the photon flux exceeds 10^6 photons per pixel in a single exposure time of 200 µs. Characterization and results 4.1.1. Data signal processing. The timing scheme of the CCD camera at the highest frame rate is shown in Fig. 5 (top panel). With a dead time of 30 µs and an exposure time of 200 µs, the camera is capable of continuous acquisition at 4.3 kHz. The data signal processing is done through DCDS. This technique proves to be a versatile method to optimize the performance of each sensor. The charge stored inside each pixel is converted by an on-chip charge preamplifier giving a floating voltage output, which is digitized by a fast and accurate analog-to-digital converter and sent to a field-programmable gate array. In this way, eight different sample readings are measured consecutively for each pixel. Each sample is then associated with a weighted coefficient (Fig. 5, lower panel).
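The DCDS reading can be summarized in a few lines of code. The sketch below is a minimal illustration, assuming eight samples per pixel period split evenly between a reference (D) zone and a charge (S) zone, with each zone's coefficients summing to unity as stated in the text; the specific coefficient and signal values are invented for illustration.

```python
import numpy as np

# Hypothetical weighting coefficients: four samples on the reference (D) zone
# and four on the signal (S) zone; each set sums to unity, as required.
c_D = np.array([0.25, 0.25, 0.25, 0.25, 0.0, 0.0, 0.0, 0.0])
c_S = np.array([0.0, 0.0, 0.0, 0.0, 0.25, 0.25, 0.25, 0.25])

def dcds_read(samples):
    """Digital correlated double sampling: weighted D-zone (reference) average
    minus weighted S-zone (charge) average for one pixel period."""
    x = np.asarray(samples, dtype=float)
    return np.dot(c_D, x) - np.dot(c_S, x)

# Example: reference level ~1350 ADU, level after charge dump ~1000 ADU, plus noise
rng = np.random.default_rng(0)
samples = np.concatenate([1350 + rng.normal(0, 2, 4),
                          1000 + rng.normal(0, 2, 4)])
print(f"pixel value Y = {dcds_read(samples):.1f} ADU")  # ~350 ADU
```

Averaging several samples per zone before differencing is what suppresses the wide-band noise relative to a single analog correlated double sampling.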
The value of the charge is obtained by performing a digital difference between the reference signal (the D-zone floating-diode level) and the charge signal (the S-zone signal). At the same time, an integral nonlinearity correction is applied by using a look-up table. The sum of the D coefficients (D-zone floating diode) is equal to unity, as is the sum of the S coefficients (S-zone charge signal). The final reading of the charge Y for each pixel is calculated as

Y = Σ_i c_D(i) X(i) − Σ_i c_S(i) X(i),

where X(i) are the digitized samples and c_D(i), c_S(i) the corresponding weighting coefficients. For the case shown in Fig. 5, the calculated value follows directly from applying this weighted difference to the samples of the pixel period. 4.1.2. Photon transfer curves. To characterize this camera we have used the photon transfer curve method (Janesick, 2001), which is the most explicit characterization of the CCD parameters. A photon transfer curve is obtained by taking a series of pairs of uniformly illuminated exposures, with a varying number of photons for each pair. By analyzing the difference between each pair of images one can calculate the noise level and the photonic variance σ². From the photon transfer curve one can obtain all the characteristics of the camera, such as readout noise, dark-current generation, full-well capacity, linearity, sensitivity and dynamic range. More details on this technique can be found elsewhere (Janesick, 2001). The plot of the variance (Fig. 6) gives the value of the full-well saturation (FW), the true dynamic range and the gain k in electrons per analog-to-digital unit (ADU), k = mean signal/photonic variance. The values for FW and k are 297000 electrons and 5.4 electrons/ADU, respectively. [Figure 4 caption: Quality of data collected on a pure Cu foil on ID24. The curves from bottom to top represent EXAFS spectra obtained from one, ten, 100 and 1000 accumulations of 200 µs each. The inset shows the extracted k²-weighted EXAFS oscillations. All EXAFS features, even at the highest k values, are already clearly visible on the single-shot spectrum.] [Figure 5 caption: The upper panel shows the timing scheme of the camera exposure and readout. The lower panel shows the timing scheme of a single pixel period, explaining the principle of the DCDS: the video signal of the pixel period above is sampled via a fast ADC (analog-to-digital converter) and each sample X(i) is weighted with a coefficient (see text for explanation).] To verify a nonlinearity below 1%, a specific plotting technique is used, defined through the linearity residuals (LR), of the form

LR = 100 × [S/(S_m t/t_m) − 1] %,

where S_m and t_m are the signal and time at mid-scale, and S is the signal at time t. The LR value does not exceed ±0.25% over the full scale (Fig. 7). Main performance characteristics of the FReLoN camera. An overview of the main parameters of the new FReLoN camera is given in Table 1. The parameters are compared with those of the S11165 system, which is a driver circuit designed for this particular linear CCD array by the original manufacturer, Hamamatsu. The R&D performed on the FReLoN system pushes the performance in two directions: the effective frame rate is improved by a factor of three and the true dynamic range by a factor of three. The electronic noise and dark-current values are also significantly reduced. Application example: in situ oxidation of iron above 1000 K As a demonstration of the ability of the new detector to perform full time-resolved kinetic studies in the sub-millisecond domain, we have studied the oxidation of metallic iron in air at high temperature. The chemical reaction 4Fe + 3O_2 = 2Fe_2O_3 is responsible for iron corrosion at atmospheric conditions and is one that humans have dealt with since the Iron Age.
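Returning briefly to the photon transfer characterization of section 4.1.2: the procedure can be illustrated with simulated data, as in the sketch below, which recovers the gain k from the mean-variance relation of pairs of uniformly illuminated frames. The simulated read noise is an arbitrary illustrative value, not a FReLoN specification.

```python
import numpy as np

rng = np.random.default_rng(1)
k_true = 5.4          # electrons per ADU (value taken from the text)
read_noise_e = 20.0   # assumed readout noise in electrons (illustrative)

means, variances = [], []
for n_e in np.logspace(2, 5, 20):        # mean illumination in electrons
    # Two frames at the same illumination: Poisson photon noise + read noise
    f1 = (rng.poisson(n_e, 100000) + rng.normal(0, read_noise_e, 100000)) / k_true
    f2 = (rng.poisson(n_e, 100000) + rng.normal(0, read_noise_e, 100000)) / k_true
    means.append(0.5 * (f1.mean() + f2.mean()))
    # Differencing the pair removes fixed-pattern noise; var(f1 - f2) = 2*var
    variances.append(0.5 * np.var(f1 - f2))

means, variances = np.array(means), np.array(variances)
# In the shot-noise-limited regime, variance = mean / k, so the slope gives 1/k
slope = np.polyfit(means, variances, 1)[0]
print(f"recovered gain k = {1.0/slope:.2f} e-/ADU (true: {k_true})")
```

The intercept of the same fit gives the read-noise floor, and the departure from linearity at high signal marks the full-well saturation, exactly as read off Fig. 6.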
Even the famous brown color of ancient Greek pottery appeared due to this reaction (Hofmann, 1962). Iron is a multi-valence metal and forms different oxides depending on the oxygen activity (fugacity) and temperature: wüstite Fe_xO (a non-stoichiometric oxide with predominantly Fe²⁺), magnetite Fe_3O_4 (with one third Fe²⁺ and two thirds Fe³⁺) and hematite Fe_2O_3 (all Fe³⁺). At normal atmospheric conditions [log(fO_2) = −0.7] hematite is the only stable iron oxide phase (Ghiorso & Sack, 1991). A pure Fe foil (99.85%, from Goodfellow) of 5 µm thickness exposed to the normal atmosphere was heated with a highly defocused (focal spot of ~1 mm) 5 W IR laser hitting the foil at an angle of 25°. Heating was synchronized with detection via a TTL signal. The polychromatic X-ray beam was focused down to about 5 µm × 4 µm on the sample in order to minimize the probed thermal gradients. Fig. 8 shows the time and energy dependence of the normalized X-ray absorption around the K-edge of iron (7.112 keV), recorded every 230 µs, as a color map. Several time-stamps are drawn on the map as small horizontal arrows (see figure caption). Some periodic horizontal bands, related to electron-beam instabilities in the storage ring, can be seen in Fig. 8. [Figure 7 caption: The linearity residuals of the new FReLoN linear CCD detector show a linearity better than ±0.25% of the full scale.] [Table 1 caption: Performance of the different cameras. Column 2 is for the old FReLoN camera based on a two-dimensional CCD chip, column 3 is for the new FReLoN camera equipped with the Hamamatsu linear array, and column 4 is for the S11165 system produced by the original manufacturer Hamamatsu.] [Figure 6 caption: The photon transfer curve for the new FReLoN camera, showing the photonic variance (square of the noise) as a function of the mean camera output signal. The inverse of the slope gives the camera gain k, and the saturation level defines the full-well capacity FW (see Table 1).] More than 400 spectra were collected in 0.1 s. They were then normalized and analyzed with a standard linear combination fit technique in order to extract the relative fractions of the principal components. The pure-component spectra were extracted from the experimental sequence. During the heating of the α phase of iron (with the body-centered cubic, b.c.c., structure), significant thermal damping of the EXAFS occurs. Therefore, we include two different components for α-Fe: a cold b.c.c. and a hot b.c.c. phase. The 'hot b.c.c.' spectrum was taken just before the onset of the phase transformation to the high-temperature γ-Fe phase (with the face-centered cubic, f.c.c., structure) (Fig. 9a). Temperature was not measured directly in this experiment; however, we were able to estimate the temperature of the sample from the data analysis using a simple model. Sixteen individual EXAFS spectra of α-Fe were acquired between the moment the laser was turned on and the onset of the transformation to the γ phase. We performed a standard EXAFS analysis using the FEFFIT code (Newville, 2001). In the b.c.c. structure the first two coordination shells (eight nearest neighbors at R1 and six next-nearest neighbors at R2) overlap closely and cannot be separated in the Fourier transform, so we treat them simultaneously, putting a geometrical constraint on the R1 and R2 distances [R1 = (3/4)^(1/2) R2], and using a Debye model for the thermal factors.
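The linear combination step mentioned above amounts to decomposing each measured spectrum over the pure-component spectra. The following sketch illustrates the idea with non-negative least squares on synthetic data; the component shapes are invented stand-ins and this is not the actual analysis pipeline used at ID24.

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic stand-ins for the pure-component XAS spectra on a common energy grid:
# cold alpha-Fe, hot alpha-Fe, gamma-Fe and Fe2O3 (shapes are invented).
E = np.linspace(7.10, 7.25, 300)  # keV
components = np.stack([
    np.tanh((E - 7.112) * 200) + 0.05 * np.sin(80 * E),
    np.tanh((E - 7.112) * 180) + 0.03 * np.sin(78 * E),
    np.tanh((E - 7.113) * 190) + 0.04 * np.sin(70 * E),
    np.tanh((E - 7.117) * 150) + 0.06 * np.sin(60 * E),
], axis=1)                        # shape (n_energies, 4)

# A "measured" spectrum: 70% hot alpha-Fe + 30% hematite, plus noise
true_frac = np.array([0.0, 0.7, 0.0, 0.3])
measured = components @ true_frac + np.random.default_rng(2).normal(0, 0.01, E.size)

# Non-negative least squares keeps the fractions physical; normalize to unity
frac, _ = nnls(components, measured)
frac /= frac.sum()
print("recovered fractions:", np.round(frac, 3))
```

Repeating the decomposition for each of the >400 spectra in the sequence yields the phase fractions as a function of time, as plotted in Fig. 10.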
With this Debye-constrained model, all spectra were fitted simultaneously, sharing a common effective Debye temperature (although this assumption is not strictly true, it was good enough for the present case owing to the limited spectral k range and therefore large σ² uncertainties). The individual temperatures were left as free parameters. The results of this fit are shown in Fig. 9(b). The highest temperature values obtained for the pure α phase are ~1200 (100) K, matching within the uncertainty the α-γ transition temperature of iron (1185 K). Therefore, we believe that this estimation is quite reasonable, and the observed trend seems to reach a plateau at T ≈ 1300 K. We assume that the temperature remains relatively stable during the oxidation reaction. Fig. 10 shows the resulting relative fractions of the principal components representing the XAS spectra. [Figure 9 caption: (a) XAS spectra of the pure components used in the linear decomposition fit (shifted vertically for clarity). From bottom to top: blue, 'cold b.c.c.' (room-temperature) α-Fe, corresponding to time-stamp 1 in Fig. 8; pink, 'hot b.c.c.' (~1200 K) α-Fe (time-stamp 2 in Fig. 8); black, γ-Fe (time-stamp 3 in Fig. 8); red, Fe_2O_3 hematite (time-stamp 5 in Fig. 8). (b) Temperature evolution of the α-Fe phase estimated from the EXAFS fits. Zero on the timescale indicates the moment at which the laser is switched on.] [Figure 8 caption: Normalized X-ray absorption near the Fe K-edge as a function of time. Small black arrows indicate the times of the following events: 1, heating laser turned on; 2, sample temperature reaches ~1200 K and a phase transition from α-Fe to γ-Fe starts; 3, the phase transition is finished and only the pure γ-Fe phase is observed; 4, the first traces of iron oxide appear in the spectra; 5, the oxidation is complete and the sample is fully converted to Fe_2O_3 hematite.] The transformation of iron to iron oxide requires diffusion of oxygen into the metal foil, and this reaction occurs significantly more slowly than the α- to γ-iron transition, which is limited only by nucleation and growth of the new phase and occurs within 2 ms. We would like to point out that we did not observe any metastable phases of iron oxide other than hematite in our experiment. No simple general theory describing the kinetics of solid-state chemical reactions exists. Diffusional transformations are controlled by a number of processes, such as interatomic diffusion, interface migration, the propagation of crystal defects, etc. However, the simplified formula x(t) = 1 − exp(−kt^n) is often used to describe the growth of a new phase, where x is the fraction of the new phase, t is time, and n and k are constants related to the geometrical conditions of the grain boundaries and the rate of precipitation (Helgason et al., 1999). We obtained the values 316 (14) and 1.43 (1) for k and n, respectively (green dashed curve in Fig. 10). Chemical kinetics in solid-gas systems has its own specifics. For example, it has been observed that, due to the formation of a surface layer of reaction products on top of a solid, the kinetics typically has two regimes: a fast linear growth at the beginning and a slower parabolic region afterwards (Lee & Rapp, 1984). In the present study, the deviation from linear kinetics starts above ~20% conversion, which corresponds to an approximately 0.75 µm-thick layer of hematite on both sides of the iron foil. This value is quite close to 0.5 µm, reported as a typical layer thickness for intermetallics and silicides (Dybkov, 2002) at which the reaction kinetics changes from the linear to the parabolic regime.
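The Avrami-type fit quoted above can be reproduced schematically as follows; the synthetic conversion data are generated from the reported parameters k = 316 and n = 1.43 (with t in seconds) purely to illustrate the fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def avrami(t, k, n):
    """Fraction of the new phase: x(t) = 1 - exp(-k * t**n)."""
    return 1.0 - np.exp(-k * t**n)

# Synthetic conversion curve built from the reported parameters, plus noise
t = np.linspace(1e-3, 0.1, 400)               # seconds, spanning the 0.1 s sequence
x_obs = avrami(t, 316.0, 1.43) + np.random.default_rng(3).normal(0, 0.01, t.size)

(k_fit, n_fit), cov = curve_fit(avrami, t, x_obs, p0=(100.0, 1.0))
k_err, n_err = np.sqrt(np.diag(cov))
print(f"k = {k_fit:.0f} ({k_err:.0f}), n = {n_fit:.2f} ({n_err:.2f})")
```

An exponent n between 1 and 2 is consistent with the observed crossover from the initial linear regime to the slower diffusion-limited growth.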
Conclusion We present a new detector based on CCD technology for fast time-resolved EXAFS spectroscopy at the ESRF, with an acquisition rate in excess of 4 kHz. The performance of the new detector appears very attractive for a number of X-ray absorption spectroscopy applications, including in situ kinetics, shock-wave experiments, pulsed-field experiments and other time-resolved studies. As an example, a full time-resolved in situ XAS study of the oxidation of metallic iron in air at high temperature is presented. The existing implementation of the FReLoN system has an exposure time limited to 200 µs. Ongoing developments will allow exposure times to be reduced down to 30 µs, taking full benefit of the electronic shutter facility. Pushing towards even higher time resolution (shorter exposure times) would imply switching from CCD technology to fast photodiodes (Headspith et al., 2007). In principle, for even shorter exposures it is possible to use the time structure of a synchrotron storage ring. For example, if a single electron bunch is selected for measurements, a time resolution of the order of 100 ps can be achieved even with a slow detector.
5,877.2
2014-10-03T00:00:00.000
[ "Engineering", "Physics" ]
Entanglement islands in higher dimensions It has been suggested in recent work that the Page curve of Hawking radiation can be recovered using computations in semi-classical gravity provided one allows for "islands" in the gravity region of quantum systems coupled to gravity. The explicit computations so far have been restricted to black holes in two-dimensional Jackiw-Teitelboim gravity. In this note, we numerically construct a five-dimensional asymptotically AdS geometry whose boundary realizes a four-dimensional Hartle-Hawking state on an eternal AdS black hole in equilibrium with a bath. We also numerically find two types of extremal surfaces: ones that correspond to having or not having an island. The version of the information paradox involving the eternal black hole exists in this setup, and it is avoided by the presence of islands. Thus, recent computations exhibiting islands in two-dimensional gravity generalize to higher dimensions as well. Introduction The RT/HRT/EW formula [1][2][3] for computing entanglement entropies is a remarkable entry in the holographic dictionary. We are instructed to find a codimension-two surface in the bulk that minimizes the generalized entropy functional. This codimension-two surface is called the quantum extremal surface (QES) and the value of the generalized entropy functional on the QES gives the entanglement entropy. Furthermore, the bulk region between the QES and the boundary, the entanglement wedge, can be reconstructed using only the knowledge of the corresponding boundary subregion [6][7][8][9][10][11]. The papers [12, 13] considered the coupling of a large AdS black hole to a flat-space bath region, allowing the black hole to evaporate. The entanglement entropy of the black hole was seen to undergo a first-order phase transition following the appearance of a new nontrivial quantum extremal surface at late times. Following this idea, [14] considered a two-dimensional gravity+matter theory, where the matter sector has a three-dimensional holographic dual. The main result of [14] is that the entanglement wedge of the Hawking radiation at late times contains an "island" that lies in the interior of the black hole. This was also suggested in [12]. From a 2d viewpoint, this island is completely disconnected and spacelike separated from the naive domain of dependence of the region where the Hawking radiation lives. The 3d geometry connects these two pieces of the entanglement wedge. The general lesson is that one should include contributions from islands in order to compute entanglement wedges and entropies of quantum systems coupled to gravity. The role of islands becomes crucial if there is a lot of entanglement between the bulk fields in the naive region and the island, for then it can be beneficial to pay a cost proportional to the area of the island while incurring large savings in the bulk entropy. A prototypical case is to compute the entanglement entropy of the Hawking radiation that lies in the asymptotically flat, weak-gravity region. The state considered in [14] was time-dependent, since the black hole is evaporating. In [15], the situation was simplified and it was demonstrated that islands exist even for large AdS black holes that are in equilibrium with a flat-space bath region (and hence the geometry is static). All explicit computations in [15] were also for a two-dimensional gravity+matter theory, since this allows for some simple analytic expressions.
The goal of this note is to demonstrate that islands also exist in higher dimensions. For that purpose, we consider the equilibrium setup of [15], but in four-dimensional gravity+matter theories. To facilitate the computation of quantum extremal surfaces, we use the trick from [14] of taking the matter CFT_4 to have a five-dimensional holographic dual. In other words, we consider a Randall-Sundrum type setup with a 4d brane in a 5d ambient spacetime [16][17][18]. In this setup, quantum extremal surfaces in 4d become ordinary RT surfaces in 5d, and thus it becomes a tractable problem to compute them. In particular, we will focus on the version of the information paradox described in section 4 of [15], see also [19]. This involves the thermofield double of a black hole coupled to, and in equilibrium with, a bath at some temperature. That is, there are two black holes, both coupled to their own baths. One starts with a Cauchy slice through the middle of the Penrose diagram, and moves it forward in time on both sides. See figure 4. The question is: what is the entanglement entropy of the union of the two baths as a function of this time? Naively, this entropy increases linearly in time, forever. This happens because the bath is exchanging particles with the black hole: Hawking particles enter the bath and their entangled partners fall into the black hole. The mass of the black hole is not changing because we are in the Hartle-Hawking state, but the underlying exchange of quanta goes on. This is the analog of Hawking's calculation. At late times, however, the entanglement wedge of the union of the bath regions contains an island that extends outside the horizon [15]. The generalized entropy of this QES saturates at late times, and is approximately equal to twice the Bekenstein-Hawking entropy of a single black hole. This happens because the island contains the Hawking partners, and thus by including the island we save on the S_bulk term in the generalized entropy. Thus, overall, the entropy grows linearly in time for a while before saturating. In this note, we demonstrate that the same resolution works even in higher dimensions. The problem of setting up the above paradox in the 4d eternal black hole with a matter sector that has a 5d holographic dual reduces to finding a static 5d geometry with the correct boundary conditions. We construct this geometry numerically using the DeTurck trick [22][23][24]. This involves solving coupled PDEs for five functions of two variables each; see (17) for the ansatz for the line element. For a picture of the integration domain and the behavior of the space near the conformal boundaries and the Planck brane, see figure 2.
As already noted, quantum extremal surfaces in 4d become ordinary extremal surfaces in 5d. In the numerically constructed 5d geometry, we numerically find the extremal surfaces that are relevant for computing the entropy of the union of the two baths. There are two qualitatively different types of extremal surfaces, see figure 5. The extremal surface that dominates at early times goes through the horizon, and the entropy computed using this surface increases linearly in time. This is because of the stretching of space inside the horizon, as described in [25]. However, there is another extremal surface that dominates at late times. This extremal surface always stays outside the horizon and ends on the Planck brane. The entropy computed using this surface saturates at late times, essentially because, being completely outside the horizon, it is not affected by the stretching of space inside the horizon. Thus, our results provide a highly nontrivial check that the results of [12][13][14][15] are unchanged upon increasing the spacetime dimension: the information paradox is averted by the emergence of an island in the relevant entanglement wedge at late times. [Figure 1 caption: A simple geometry with an RS or Planck brane, discussed in [33]. The RS or Planck brane lies along the locus z = −w tan θ. The induced geometry on the brane is AdS_4 with length scale (9). The angle θ is fixed by the tension parameter α in the action (1) via the relationship (6).] Increasing the dimensionality of the setup of section 4 of [15] is a significant step forward because a possible criticism [26] of refs. [13][14][15] is that the explicit computations were only done in 2d AdS-JT gravity [27][28][29][30], which is known to be dual to an ensemble of Hamiltonians, rather than a single fixed Hamiltonian [31, 32]. So one might wonder if 2d AdS-JT gravity is somehow not representative of a typical gravity theory. The result of this paper gives strong evidence that the gravity computations of [13][14][15] generalize to higher dimensions. The organization of this paper is as follows. In section 2, we discuss the action for the 5d gravity theory and the boundary conditions. In section 3, we describe the technique for numerically finding the static geometry. In section 4, we describe the relevant extremal surfaces, including the one that corresponds to having an island, and discuss how it avoids the information paradox in this setting. We conclude in section 5 with some discussion and future directions. Appendix A contains some details about the convergence of the numerical methods used. Setup of the problem As mentioned in the introduction, following [14], we want to consider a "doubly-holographic" setup, but in higher dimensions. We take a 4d AdS gravity theory coupled to a matter CFT_4 that has a 5d holographic dual. We wish to consider a large black hole in this theory that is in equilibrium with a flat-space bath region containing the same matter CFT_4. Thus, we are led to consider the action (1). Here B denotes the Planck or RS brane [16], and it should be seen as one of the boundary components of the bulk spacetime. The quantity L is the AdS_5 length scale, and α is proportional to the tension of the brane, see (8) below. The Gibbons-Hawking term at the UV boundary has been omitted to avoid clutter.
Varying the action (1) with respect to the metric gives us the Einstein equations (2). Henceforth, upper-case Latin indices will refer to five-dimensional indices and lower-case Latin indices will refer to coordinates along the brane. The boundary term in (1) describes the Planck brane of [16, 17], which ends on the conformal boundary of AdS. This configuration was also considered in [33, 34] in the context of AdS/BCFT. We start with pure AdS written in Poincaré coordinates,

ds² = (L²/z²)(−dt² + dz² + dw² + dw_1² + dw_2²). (4)

Now consider the surface z = −w tan θ, where θ is some angle between 0 and π/2. We keep only the region z > −w tan θ for w < 0; of course, we always restrict to z > 0 even for w > 0. See figure 1. We now want to implement the boundary condition (3), which is a form of the Israel junction conditions [35]. Computing the extrinsic curvature on the surface w + z tan θ = 0 and plugging it into (3), we find that the parameter α in the action (1) determines the angle θ via the relationship (6). From the Israel junction condition we know that the quantity (7) can be interpreted as the stress tensor of a codimension-one object. This stress tensor can be interpreted as arising from a 3-brane with tension (8). This value of the brane tension is consistent with the fact that the last term in the action (1) is equal to α/(8πG_5) times the worldvolume of the brane. Finally, note that by substituting w = −z cot θ in the AdS_5 line element (4), and rescaling z, we see that the induced metric on the brane is nothing but AdS_4 with length scale L_4 = L/sin θ (9). Let us now turn to the description of the actual numerical solution that we seek. 3 The numerical solution The DeTurck trick To find solutions, we will use the so-called DeTurck trick, which was first proposed in [22] and reviewed extensively in [23, 24]. Let us first write the Einstein equation (2) in trace-reversed form,

R_AB + (4/L²) g_AB = 0. (10)

The idea is that, instead of directly solving (10), one considers the modified equation

R_AB − ∇_(A ξ_B) + (4/L²) g_AB = 0, (11)

where ξ^A := [Γ^A_BC(g) − Γ^A_BC(ḡ)] g^BC is the so-called DeTurck vector and ḡ is a reference metric. The reference metric ḡ is required only to be regular and to satisfy the same boundary conditions as g on Dirichlet boundaries, but is otherwise arbitrary. In particular, if there are Neumann boundaries, the reference metric ḡ is not required to satisfy the Neumann boundary condition there. Equation (11) is nice because the choice of gauge needed to solve Einstein's equations now appears as a choice of ḡ. Further, if we are looking for static solutions, then (11) together with either Dirichlet or Neumann boundary conditions is an elliptic problem, and is thus locally well posed. (For Neumann boundaries, the DeTurck vector is also required to satisfy ξ·n = 0, where n is the normal vector to the boundary.) This is a major advantage over the original Einstein equation (10), whose character depends on the gauge choice even when seeking static solutions.
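As a sanity check on the trace-reversed form (10) as written above, the following snippet verifies symbolically, from the Christoffel symbols, that the Poincaré AdS_5 metric (4) satisfies R_AB = −(4/L²) g_AB. This is a standard computation, included here only to make the sign and normalization conventions explicit.

```python
import sympy as sp

t, z, w, w1, w2, L = sp.symbols('t z w w1 w2 L', positive=True)
x = [t, z, w, w1, w2]
# Poincare AdS5 metric, eq. (4): ds^2 = (L/z)^2 (-dt^2 + dz^2 + dw^2 + ...)
g = sp.diag(-L**2/z**2, L**2/z**2, L**2/z**2, L**2/z**2, L**2/z**2)
ginv = g.inv()
n = 5

# Christoffel symbols Gamma^a_{bc}
Gamma = [[[sum(ginv[a, d] * (sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
           - sp.diff(g[b, c], x[d])) for d in range(n)) / 2
           for c in range(n)] for b in range(n)] for a in range(n)]

def ricci(b, c):
    """R_{bc} = d_a Gamma^a_{bc} - d_b Gamma^a_{ac}
               + Gamma^a_{ad} Gamma^d_{bc} - Gamma^a_{bd} Gamma^d_{ac}."""
    r = 0
    for a in range(n):
        r += sp.diff(Gamma[a][b][c], x[a]) - sp.diff(Gamma[a][a][c], x[b])
        for d in range(n):
            r += Gamma[a][a][d] * Gamma[d][b][c] - Gamma[a][b][d] * Gamma[d][a][c]
    return sp.simplify(r)

# Verify the trace-reversed Einstein equation (10) on pure AdS5
for b in range(n):
    for c in range(n):
        assert sp.simplify(ricci(b, c) + 4 * g[b, c] / L**2) == 0
print("Poincare AdS5 satisfies R_AB = -(4/L^2) g_AB")
```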
Solutions of (11) are not necessarily solutions of (10), because of the newly added term ∇_(A ξ_B). Possible solutions with ξ ≠ 0 are called DeTurck solitons. It can be shown that DeTurck solitons do not exist for static and certain stationary solutions of (11) with purely Dirichlet boundaries [36, 37]. In this case there is a complete equivalence between solutions of (11) and (10). On solutions with ξ = 0, the gauge choice is a generalization of harmonic coordinates, given by □x^A = Γ^A_BC(ḡ) g^BC, where □ stands for the scalar Laplacian in the metric g. However, for Neumann boundary conditions on the metric, of the form (3), this has never been proved. Although in this case one cannot prove ξ = 0 on solutions of (11), one can still make progress because solutions of elliptic equations are locally unique. Hence, an Einstein solution cannot be arbitrarily close to a DeTurck soliton, and one should be able to distinguish the Einstein solutions of interest from DeTurck solitons by monitoring the quantity ξ_A ξ^A appropriately. The metric ansatz Let us define w̃ := w + z cot θ. For numerical purposes, we take the domain of integration to be w̃ > 0 and impose (3) together with ξ^A n_A = 0 on the edge of the computational domain. Since we are interested in the Hartle-Hawking state, we want to have a bulk horizon that intersects the brane. Furthermore, the geometry should be such that at large w̃ it approaches a five-dimensional planar black hole, whose line element reads

ds² = (L²/z²) [−f(z) dt² + dz²/f(z) + dw² + dw_1² + dw_2²], with f(z) = 1 − z⁴/z_+⁴. (12)

[Figure 2 caption: On the left, we show the Penrose diagram of the 4d geometry. We have a two-sided AdS black hole, with each side coupled to a bath. On the right, we show the integration domain used in the numerics, x ∈ (0, 1) and y ∈ (−1, 1). The objective is to solve for five metric functions Q_1, ..., Q_5 of two variables each (17) in this domain. We numerically solve only in the region y > 0; the rest is obtained simply by symmetry. On the left edge of this diagram, at x = 0, we have the RS or Planck brane, where the 4d gravity region lives and the boundary condition (3) is imposed. On the top and bottom edges we have the two baths. As x → 1, the metric approaches that of a 5d planar AdS-Schwarzschild black hole. The reader might find it useful to note the points A, B, H, C, D on both diagrams. The precise induced geometry on the segment BC is determined by the numerical solution, and the left picture is just a cartoon.] For numerical convenience we want to work with compact coordinates only, so we define a new coordinate x ∈ (0, 1) via x/(1 − x) := w̃ = w + z cot θ. Note that x = 1 is the asymptotic region w̃ → +∞. Finally, we change from z to a coordinate y in which constant-t slices are manifestly regular at the event horizon z = z_+. One such choice is given by y := (1 − z/z_+)^(1/2). In terms of (t, x, y, w_1, w_2) coordinates, the planar black hole reduces to (16), where y_+ := z_+^(−1) and G(y) := (2 − y²)(2 − 2y² + y⁴).
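As a quick consistency check of this coordinate change, the snippet below (assuming the standard planar AdS_5 blackening factor f(z) = 1 − z⁴/z_+⁴, as in the form of (12) given above) verifies symbolically that f factorizes as y² G(y) with G(y) = (2 − y²)(2 − 2y² + y⁴) under z = z_+(1 − y²).

```python
import sympy as sp

y, zp = sp.symbols('y z_p', positive=True)
z = zp * (1 - y**2)            # inverse of the map y = sqrt(1 - z/z_+)
f = 1 - z**4 / zp**4           # planar AdS5 blackening factor (assumed form)
G = (2 - y**2) * (2 - 2*y**2 + y**4)

# The horizon z = z_+ maps to y = 0, and f factorizes as y^2 * G(y),
# so constant-t slices are manifestly regular there:
assert sp.simplify(f - y**2 * G) == 0
print("f(z(y)) = y^2 * G(y) verified; f vanishes quadratically in y at the horizon")
```

The quadratic vanishing of f in y is precisely what makes the Neumann horizon boundary conditions below natural.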
We are finally ready to present our metric ansatz (17). Here Q_I, with I ∈ {1, 2, 3, 4, 5}, are functions of (x, y) ∈ (0, 1)² to be determined by solving (11). For the reference metric we take the line element (17) with the Q_I chosen so that it reduces to the planar black hole form (16). Let us now discuss the boundary conditions. At the horizon, located at y = 0, we impose Neumann boundary conditions for all variables, i.e. ∂_y Q_I |_(y=0) = 0, together with Q_1(x, 0) = Q_2(x, 0), which in turn enforces the Hawking temperature to be T = y_+/π (18). At the conformal boundary, located at y = 1, and also at x = 1, we demand that g approach the reference metric ḡ. Finally, at the brane location, that is x = 0, we demand the boundary condition (3) together with Q_3(0, y) = cot θ and ξ^a n_a = −ξ^x = 0. See figure 2 for a cartoon depiction of the integration domain. These boundary conditions yield Robin-type boundary conditions on Q_1, Q_2, Q_4 and Q_5 at x = 0. It is then a simple exercise to show that (11) with such boundary conditions gives rise to an elliptic problem [36]. To solve the resulting system of partial differential equations, we used a standard pseudospectral collocation approximation on Chebyshev-Gauss-Lobatto points and solved the resulting nonlinear algebraic equations using a damped Newton-Raphson method. The resulting method does not exhibit exponential convergence in the continuum limit, due to the existence of non-analytic behaviour close to the conformal boundary [38, 39]. Instead, we find power-law convergence as we approach the continuum limit. Induced geometry on the brane Recall that the boundary condition for the metric on the brane is Neumann rather than Dirichlet. Hence, the actual induced metric on the brane is determined numerically, and does not have a simple analytic expression. All we know is that there is a horizon at y = 0. In this subsection, we characterize the behavior of the induced geometry as θ becomes small. The upshot is that, in the limit θ ≪ 1, the induced geometry on the brane is close to that of a 4d planar AdS black hole. In order to see this, consider the auxiliary line element of a four-dimensional planar black hole with horizon located at Z_+ and AdS_4 length scale L_4:

ds² = (L_4²/Z²) [−g(Z) dt² + dZ²/g(Z) + dw_1² + dw_2²], with g(Z) = 1 − Z³/Z_+³. (19)

Its associated Hawking temperature is given by T_4D = 3/(4πZ_+). If we want to match the temperature of our numerical solution reported in (18), we should impose Z_+ = 3/(4 y_+).
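Returning briefly to the numerical scheme just described (pseudospectral collocation plus damped Newton-Raphson): the toy example below illustrates the discretization and solver strategy on a one-dimensional nonlinear boundary-value problem, u'' = e^u with Dirichlet data. It is only a sketch of the method, not the actual five-function Einstein-DeTurck system.

```python
import numpy as np

def cheb(N):
    """Chebyshev-Gauss-Lobatto points on [-1, 1] and the differentiation
    matrix (Trefethen, Spectral Methods in MATLAB, chapter 6)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

N = 32
D, x = cheb(N)
D2 = D @ D

def residual(u):
    r = D2 @ u - np.exp(u)       # toy nonlinear equation u'' = e^u
    r[0] = u[0]                  # Dirichlet boundary conditions u(+-1) = 0
    r[-1] = u[-1]
    return r

u = np.zeros(N + 1)
for it in range(50):             # damped Newton-Raphson iteration
    J = D2 - np.diag(np.exp(u))  # Jacobian of the residual
    J[0, :] = 0; J[0, 0] = 1.0   # boundary rows
    J[-1, :] = 0; J[-1, -1] = 1.0
    du = np.linalg.solve(J, -residual(u))
    u += 0.5 * du                # damping factor 1/2
    if np.linalg.norm(du) < 1e-12:
        break
print(f"converged in {it + 1} iterations, max |u| = {np.abs(u).max():.6f}")
```

For the actual problem the unknowns are the five Q_I on a two-dimensional Chebyshev grid, and, as noted above, the non-analytic boundary behavior degrades the convergence from exponential to power law.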
To compare the line element (19) with the induced metric on the brane, we change to a new set of coordinates (t, P_4D, w_1, w_2), where P_4D is the proper distance from the horizon; from (19), P_4D(Z) = ∫ from Z to Z_+ of (L_4/Z̃) dZ̃/√(g(Z̃)). We then look at −g_tt/L_4² and g_(w_1 w_1)/L_4² as functions of P_4D, and compare with the results obtained from computing the same quantities using the induced metric on the brane. More explicitly, the induced metric on the brane can be read off from (17) and is given by (21). Again, we can change to proper-distance coordinates (t, P_4D, w_1, w_2) by defining (22), where L_4 is given by (9). Numerically, computing P_4D(y) can be tricky, because of the divergence of the integrand in the limit ỹ → 1⁻. To bypass this difficulty, we consider instead the split (24): one can show that the integrand in the first line is finite as ỹ → 1⁻, while the integral in the second line can be readily done analytically and carries all the divergences. We plot our comparisons in figure 3. The top row corresponds to θ ≈ 1.47113 and the bottom row corresponds to θ ≈ 0.343024. In all plots in figure 3 we have taken y_+ = 1. The blue disks correspond to the numerical data, and the solid blue lines are obtained from the 4d planar black hole geometry, as detailed above. The trend is clear: as θ becomes smaller, the induced geometry gets closer to that of a 4d planar AdS black hole. Extremal surfaces and the island Recall that we want to consider a version of the information paradox in a 4d gravity theory coupled to a 4d matter sector. In this theory, we are considering a black hole coupled to, and in equilibrium with, a bath at nonzero temperature. We are also working in the two-sided purification, or thermofield double, of the coupled system. So there are two black holes and two baths. We would like to compute the von Neumann entropy of the union of the left and right bath regions as a function of time, where the time dependence is introduced by moving time forwards on both sides. See figure 4. The two-dimensional version of this problem was considered in section 4 of [15]; see also [19] and the recent paper [40]. We would like to compute 4d quantum extremal surfaces [3] for the union of the blue regions in figure 4. Since this is a very hard problem, we have made the simplification that the matter CFT_4 has a 5d holographic dual, as in [14], so that the 4d quantum extremal surfaces become ordinary 5d RT surfaces. Note that we are imagining toroidally compactifying the transverse directions to get IR-finite entropies. Extremal surfaces at t = 0 We would like to compute these 5d RT surfaces [1]. More precisely, we would like to extract the extremal surfaces that anchor at the boundary at a given location x = x_B > 0. We will numerically compute extremal surfaces on the t = 0 slice of the line element (17). As emphasized in [14], there are two extremal surfaces of interest emanating from x_B: ones that penetrate the horizon, and ones that end up anchoring on the brane (recall that the brane is located at x = 0), see figure 5.
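Before turning to the area computation, note that the divergence-subtraction strategy used for P_4D in (24) can be illustrated with a toy integral. The integrand below is invented for illustration, assuming a simple 1/(1 − ỹ) divergence at the conformal boundary; the true integrand comes from the numerically determined g_yy on the brane and its divergence structure may differ.

```python
import numpy as np
from scipy.integrate import quad

# Toy integrand h(y) with divergent behavior h(y) ~ c/(1-y) near y = 1,
# standing in for the proper-distance integrand on the brane (the true
# integrand comes from the numerical solution; this h is invented).
c = 2.0
h = lambda y: c / (1.0 - y) + np.cos(3 * y)

def P(y):
    # Split off the divergent piece: (h - c/(1-s)) is finite and is
    # integrated numerically, while the divergent part integrates
    # analytically to -c*log(1-y) and carries all the divergence.
    finite, _ = quad(lambda s: h(s) - c / (1.0 - s), 0.0, y)
    return finite - c * np.log(1.0 - y)

for y in (0.5, 0.9, 0.99, 0.999):
    print(f"P({y}) = {P(y):.4f}")
```

The same subtract-and-integrate-analytically idea is what makes the cancellation of boundary divergences in the area differences below numerically stable.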
We will denote the area of the surface that penetrates the horizon by A_H(x_B), and the area of the surface that ends on the Planck brane by A_BM(x_B, y_B), where y_B is the value of y at which this surface intersects the brane. Formally, both these areas are infinite, because of the divergence at the conformal boundary. However, the difference between the two is well defined. Let us define the area difference

ΔA(x_B, y_B) := A_BM(x_B, y_B) − A_H(x_B), (25)

which is finite for any pair (x_B, y_B). We should also minimize this with respect to y_B and define

ΔA(x_B) := min over y_B of ΔA(x_B, y_B). (26)

We will simply compute ΔA(x_B, y_B) for several values of y_B and look for a minimum. We will see that there is a unique value of y_B that minimizes ΔA(x_B, y_B). Our extremal surfaces are parametrized by coordinates σ_μ, with μ = 1, 2, 3. For the surfaces that penetrate the horizon, we choose σ_1 = y, σ_2 = w_1 and σ_3 = w_2, so that the extremal surfaces can be parametrized by x = F(y) in the (x, y) plane. To compute such curves we look at the Euler-Lagrange equations derived from the area functional S (27), where dotted indices run over the spatial coordinates x, y, w_1, w_2, but not over time. We will not present the explicit equations of motion following from (27) because they are not illuminating. Suffice it to say that they are second-order ODEs for F(y), and thus can only be solved once two boundary conditions are supplied. One of these boundary conditions is imposed at the conformal boundary, where we demand F(1) = x_B, while at the horizon we demand F'(0) = 0. For the surfaces that end on the Planck brane, one has to proceed with more care, because if we try to think of these surfaces as a function y(x) or x(y), these functions will be multi-valued, see the orange curve in figure 5. To bypass this, we introduce two parametrizations in two different parts of the surface. For the range y ∈ (y_c, 1) we take x = F(y), i.e. we choose σ_1 = y, σ_2 = w_1 and σ_3 = w_2. As boundary conditions we demand F(1) = x_B and F(y_c) = x_c > x_B, which yields a unique solution in this interval for given values of x_B, x_c and y_c. For x ∈ (0, x_c) we choose σ_1 = x, σ_2 = w_1 and σ_3 = w_2, with y = P(x). We view the resulting second-order ordinary differential equation as an initial value problem, where we demand P(x_c) = y_c and P'(x_c) = F'(y_c)^(−1). Finally, we read off y_B = P(0) from the integration procedure. For numerical stability, we found it crucial to use the same parametrization for both surfaces near the boundary, as the leading divergences in (25) were then easier to cancel. The results are shown in figure 5, where we plot an example of the two types of curves in the (x, y) plane. In this figure we used θ = π/4 and x_B = 1/2.
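The horizon-penetrating family can be found with a standard shooting method, sketched below. The toy second-order ODE for x = F(y) is a placeholder with the same structure as the actual Euler-Lagrange equation of (27) (regularity F'(0) = 0 at the horizon, shooting on the horizon intercept F(0) to hit F(1) = x_B at the boundary), not the equation itself.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

x_B = 0.5   # boundary anchoring point (the value used in figure 5)

def rhs(y, u):
    """Toy second-order ODE for the profile x = F(y), written as a
    first-order system u = (F, F'). The right-hand side is a placeholder
    standing in for the true Euler-Lagrange equation."""
    F, Fp = u
    return [Fp, y * Fp + 0.5 * y**2 * F]

def F_at_boundary(F0):
    # Regularity at the horizon y = 0 requires F'(0) = 0; shoot on F(0).
    sol = solve_ivp(rhs, (0.0, 1.0), [F0, 0.0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1] - x_B

# Find the horizon intercept F(0) for which the surface hits F(1) = x_B
F0 = brentq(F_at_boundary, 0.0, 1.0)
print(f"horizon intercept F(0) = {F0:.6f}")
```

The brane-anchored family is treated the same way, except that, as described above, two patches with different parametrizations are glued at (x_c, y_c) and the second patch is integrated as an initial value problem down to the brane at x = 0.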
The orange surface shown has y_B ≈ 0.31602(1) for this specific plot. For the surface that ends on the brane, we vary x_c and y_c, which in turn varies y_B. As we do so, we compute ΔA(x_B, y_B), as shown in the left panel of figure 6. In the right panel of figure 6, we zoom in close to the point where the horizon intersects the brane and find that ΔA(x_B, y_B) is minimized for some value y_B = y*_B > 0. For the particular run shown in figure 6, this occurs for y*_B ≈ 0.067224(5). Since the minimum is very shallow, one might wonder whether it is a numerical artefact. To show that this is not the case, we also plot error bars in figure 6, which are estimated via the numerical convergence studies performed in appendix A. Time dependence of the entropy and the island As shown in figure 6, the surface that penetrates the horizon (blue in figure 5) has the smaller area at t = 0 and is thus the correct RT surface to use there. The time dependence we are considering involves moving the two sides forwards in time, see figure 4. As in [15, 25], the area of this surface increases (linearly, after a few thermal times) as we perform the time evolution. As explained nicely in [25], the intuitive reason behind this is the stretching of space behind the black hole horizon [41, 42]. We would have an information paradox if this entropy increase continued forever, because the von Neumann entropy of the union of the two baths should saturate close to 2S_BH. (Note that we are imagining the transverse directions to be toroidally compactified.) The resolution is that the area of the surface that ends on the Planck brane (orange in figure 5) approaches approximately 2S_BH at late times, as in [15]. Again, the intuitive reason is that, since this surface does not penetrate the horizon, it is not affected by the stretching of space inside the horizon. Thus, the surface that ends on the Planck brane wins at late times. The exchange of dominance between these two surfaces leads to an entropy that increases linearly for a while before saturating. This is the resolution of the information paradox in this setting. Note that at late times the entanglement wedge of the union of the left and right baths contains an island. The island is the region on the left vertical line in figure 5 between the two points where the orange curves intersect it.
This geometry has a bifurcate horizon that intersects the Planck brane. In this setup, quantum extremal surfaces in 4d become ordinary RT surfaces in 5d. We have computed the extremal surfaces that correspond to computing the entropy of the union of the left and right baths. There are two types of surfaces, as shown in figure 5. One type of extremal surface (blue in figure 5) penetrates the horizon and is the dominant one at early times. However, its area increases as a function of time because of the stretching of space inside the horizon [25]. If there were no competing extremal surface, this would lead to an indefinite growth of entropy. However, we know that the entropy of the union of the two baths, being equal to the entropy of the two black holes, should saturate close to 2S_BH. The resolution is that there is in fact a second type of surface (orange in figure 5) that ends on the Planck brane. Its area saturates at approximately 2S_BH, and thus it wins at late times. Overall, we get an entropy that grows linearly and then saturates. This also means that the entanglement wedge of the union of the two baths contains an island at late times [15]. In figure 5, this island is the region on the left vertical edge between the two points where the orange curves intersect the vertical line. The results of this paper unambiguously show that at least some of the gravity computations of [13][14][15] done in AdS-JT gravity generalize to higher dimensions. In particular, the microscopic fact that 2d AdS-JT gravity is dual to an ensemble of Hamiltonians, rather than a single one, plays no crucial role as far as the gravity computations are concerned. One can speculate about the possibility that quantities computed using semiclassical gravity path integrals should always be interpreted as suitably ensemble-averaged quantities, and that to reproduce all the features present in observables of ordinary, unitarily evolving quantum systems one perhaps needs stringy physics in the bulk and all sorts of additional effects. In conclusion, this paper provides the first setup in which entanglement islands [12][13][14][15] have been computed in higher dimensions. The conclusion is the same: islands appear in the entanglement wedge of the Hawking radiation at late times, and this stops the indefinite growth of the von Neumann entropy, giving an answer consistent with unitarity and a finite density of states. There are quite a few natural extensions of our work. We found the static geometry and the two types of extremal surfaces numerically at t = 0, and then used general reasoning to deduce the time dependence of the areas. It would be interesting to explicitly compute the time dependence of the extremal-area surfaces. It would also be interesting to see if one can make any analytic statements in the limit θ → 0, which is the limit where the length scale of AdS_4 (9) goes to infinity. Finally, it would be interesting to see whether the scenario of "uberholography" [43], found to hold in the 2d/3d setup of [14] in the recent paper [44], persists in higher dimensions.
[Figure 3 caption: Plots of −g_tt/L_4² and g_(w_1 w_1)/L_4² on the RS brane (located at x = 0) as a function of the proper distance from the horizon P_4D. In the top row, θ ≈ 1.47113, and in the bottom row, θ ≈ 0.343024. The blue disks correspond to the numerical data, and the solid blue lines are obtained from the 4d planar AdS black hole geometry. It is clear that as θ becomes smaller, the induced geometry on the brane gets closer to that of a 4d planar AdS black hole.]
[Figure 4 caption: Shown here is a two-sided 4d black hole (with two of the spatial dimensions suppressed) coupled to two baths. See also figure 2. We want to compute the entanglement entropy of the union of the two blue regions shown. This diagram lives on the boundary of a static 5d spacetime whose exterior region was computed numerically in section 3.]
[Figure 5 caption: The two types of extremal surfaces, computed numerically at t = 0 in the background geometry found numerically in section 3. In this figure, we have taken θ = π/4 and x_B = 1/2. The horizontal black dotted lines at the top and bottom are the left and right baths. The dashed black line along the left edge is the location of the brane, which contains the 4d black hole. The horizontal red dashed-dotted line in the middle is the 5d bifurcate horizon, which meets the brane at the 4d horizon. Compare with figure 2. The orange curve corresponds to an extremal surface ending on the brane with y_B ≈ 0.31602(1), while the blue curve corresponds to an extremal surface that penetrates the bifurcating Killing surface smoothly. There is, in fact, a continuous family of orange extremal surfaces, and there is a unique one amongst them with the smallest area, see figure 6.]
[Figure 6 caption: This figure depicts ΔA(x_B, y_B) as a function of y_B, computed for x_B = 1/2 and θ = π/4. In the left panel we have y_B ∈ [0.0028(7), 0.35379(9)], whereas on the right we zoom in close to the point where the horizon intersects the brane. The surface corresponding to the minimum in this figure is the correct RT surface at late times.]
[Figure 7 caption: (a) Plot of δ_N(x_B, y_B) computed for several values of N labeled in the plot. For y_B ∈ [0.0028(7), 0.35379(9)] the relative error is smaller than 10⁻⁶. (b) Plot of δ_N(0.5, 0.3) on a log-log scale computed for several values of N. The numerical data are represented by the blue disks, and the solid blue line is a best-fit curve, which yields δ_N(0.5, 0.3) ∝ N^(−3.13).]
[Footnote fragment: ... allows for the usual Dirichlet boundary conditions, but we will in fact impose the other possible alternative.]
Caveat (IoT) Emptor: Towards Transparency of IoT Device Presence

As many types of IoT devices worm their way into numerous settings and many aspects of our daily lives, awareness of their presence and functionality becomes a source of major concern. Hidden IoT devices can snoop (via sensing) on nearby unsuspecting users, and impact the environment where unaware users are present, via actuation. This prompts, respectively, privacy and security/safety issues. The dangers of hidden IoT devices have been recognized, and prior research suggested some means of mitigation, mostly based on traffic analysis or on using specialized hardware to uncover devices. While such approaches are partially effective, there is currently no comprehensive approach to IoT device transparency. Prompted in part by recent privacy regulations (GDPR and CCPA), this paper motivates and constructs a privacy-agile Root-of-Trust architecture for IoT devices, called PAISA: Privacy-Agile IoT Sensing and Actuation. It guarantees timely and secure announcements of nearby IoT devices' presence and their capabilities. PAISA has two components: one on the IoT device that guarantees periodic announcements of its presence even if all device software is compromised, and the other on the user device, which captures and processes announcements. PAISA requires no hardware modifications; it uses a popular off-the-shelf Trusted Execution Environment (TEE) -- ARM TrustZone. To demonstrate its viability, PAISA is instantiated as an open-source prototype which includes: an IoT device that makes announcements via IEEE 802.11 WiFi beacons, and an Android smartphone-based app that captures and processes announcements. Security and performance of the PAISA design and its prototype are also discussed.

INTRODUCTION

Internet of Things (IoT) and embedded (aka "smart") devices have become an integral part of modern society and are often (and increasingly) encountered in many spheres of everyday life, including homes, offices, vehicles, public spaces, ports, and warehouses. It is estimated that, by 2030, there will be over 29 billion Internet-connected IoT devices [115].

Unlike general-purpose computers, IoT devices are specialized, and their main functions involve some form of sensing and/or actuation. Some of them perform safety-critical tasks and collect sensitive personal information. IoT device manufacturers understandably prioritize (novel) functionality, external aesthetics, ease-of-use, and other factors, while security is usually treated as a secondary issue or an afterthought. This is partly due to various constraints, including physical space, energy, and monetary cost.

All of the above are merely research proposals. Although device manufacturers sometimes integrate research-originated techniques into their products, they rarely acknowledge the adoption of external research results. Furthermore, there are no strong compelling factors nudging the manufacturers towards adoption of security features.

Although there are several guidelines for IoT security, they do not consider user privacy in the general sense. Such well-intentioned guidelines are aimed at device owners or operators, who are generally well aware of device placement and capabilities. However, IoT devices impact all human users in their vicinity by sensing them and/or controlling their environment.
This occurs in public places, such as parks, public transport, office buildings, concert halls, stadiums, and airports. It also happens in less-public places, such as hotels and private rentals, e.g., Airbnb. In the latter, users tend to be wary of unfamiliar surroundings [74,132], partly because they are unaware of nearby devices, their capabilities, what data exactly is being collected, and how it is (or will be) used. In particular, the issue of undeclared and hidden cameras has plagued the private rental industry [130].

We believe that, ideally, there would be an agreed-upon means of informing nearby (and thus potentially impacted) users about the presence of IoT devices as well as their capabilities and current activities. This would facilitate an informed decision by the users, i.e., whether to stay or leave the IoT-instrumented space.

Motivation

Based on the preceding discussion, the main motivation for this work is the need to take a step towards a privacy-compliant IoT ecosystem where all impacted users are made aware of nearby IoT devices, which empowers them to make informed decisions. Another inspiration stems from recent data protection regulations, such as the European General Data Protection Regulation (GDPR) [103] and the California Consumer Privacy Act (CCPA) [86]. These regulations aim to protect user privacy by stipulating that service providers must be accountable and ask for user consent before collecting, processing, storing, and sharing user data. We want to apply the same principle to IoT devices.

Note that these regulations are clearly focused on privacy, meaning that, in the IoT context, they naturally apply to devices that sense the environment. Our scope is broader: it includes actuation-capable devices that can directly impact nearby users' security and even safety. For example, consider a situation where a hotel guest with epilepsy is unaware of a "smart" fire/smoke alarm in the room which turns on a strobe light when it detects smoke or fire. Unexpected light strobing can easily cause an epileptic seizure or worse. Another example is an Airbnb renter who is unaware of a smart door-lock that can be (un)locked remotely, which presents a risk of the door being closed or opened without the renter's knowledge. Whereas, if forewarned, the renter could disable it for the period of stay. To this point, a 2017 incident with an Austrian hotel, where all smart locks were hacked, illustrates the danger.

Addressing privacy concerns in the IoT context poses two challenges: (1) How to make users aware of the presence of nearby devices? (2) How to ask for consent to: collect information (in case of sensing), or control the environment (in case of actuation)? In this paper, we take the first step by focusing on (1), while viewing (2) as its natural follow-up. Current means of achieving (2) mostly focus on obtaining user consent [40,58,62,70]. For example, studies on Privacy Assistants [40,58,70] focus on automating the process of acquiring user preferences/consent efficiently. Another research direction [62,67,121] provides design (and implementation) guidelines for user privacy choices that address regulatory considerations.
Regarding (1), there are several approaches for informing users about ambient devices. One approach involves manually scanning the environment using specialized hardware [8,12,89,114]. Another is monitoring wireless traffic, i.e., WiFi and/or Bluetooth [68,112,113]. Though somewhat effective, such techniques are cumbersome and error-prone, since it is not always possible to thoroughly scan the entire ambient space. Also, these approaches can be evaded if a device is mis-configured or compromised. Nevertheless, they represent the only option for discovering hidden and non-compliant devices.

Instead of putting the burden on the users to monitor and analyze wireless traffic, we want to construct a technique that guarantees that all compliant IoT devices reliably announce their presence, including their types and capabilities. Consequently, a user entering an unfamiliar space can be quickly warned about nearby IoT activity. We believe that this is an important initial step towards making future IoT devices privacy-compliant. We imagine later integrating the proposed technique with other consent-seeking platforms.

Overview & Contributions

We construct a technique called PAISA: Privacy-Agile IoT Sensing and Actuation, that guarantees timely and secure announcements about IoT device presence and capabilities. We use the term privacy-agile to denote the PAISA service -- explicit user awareness of all nearby PAISA-compliant IoT devices. Each PAISA-compliant device reliably broadcasts secure announcements at regular intervals, ensuring continuous awareness, unless it is compromised via physical attacks or is powered off.

PAISA has two main components: (1) one on the IoT device that guarantees periodic announcements of its presence, and (2) the other on the user device (smartphone), which captures and processes announcements. To guarantee secure periodic announcements on the IoT device, PAISA relies on the presence of a Trusted Execution Environment (TEE) or some other active Root-of-Trust (RoT) component. The TEE ensures guaranteed and isolated execution of the PAISA Trusted Computing Base (TCB). On the user device, PAISA imposes no special requirements to capture and process announcements: it simply uses standard network drivers to read announcement packets and validates them in an application.

Anticipated contributions are:
• Motivation for, and comprehensive treatment of, a privacy-agile RoT architecture for IoT devices. To the best of our (current) knowledge, no prior work systematically approached privacy compliance in the IoT ecosystem, given that relevant attempts [68,101,112,113] are either ad-hoc or not applicable to a wide range of devices.
• Design and construction of PAISA, a secure and privacy-agile TEE-based architecture that reliably informs nearby users about IoT devices. Notably, PAISA does not require any custom hardware, unlike some prior work, e.g., [22,46]. It uses a popular off-the-shelf TEE, e.g., ARM TrustZone [32].
• A fully functional prototype implementation of PAISA, which includes: (a) a prototype IoT device based on ARM Cortex-M33 featuring announcements via IEEE 802.11 WiFi beacons, and (b) an Android application running on a Google Pixel 6, which extracts and displays the announcements to the user. All source code is publicly available at [28].
Scope, Limitations, & Caveats

As with most new designs, PAISA has certain limitations:
• With regard to scope, it applies to a class of devices equipped with some basic security features, e.g., ARM TrustZone. Thus, it is unsuitable for simple "bare-metal" devices or even slightly higher-end ones that lack a secure hardware element.
• In terms of the security level, it offers protection against hacked (directly re-programmed) or malware-infected devices. However, it does not defend against non-compliant devices. This includes devices that are home-made, jerry-rigged, or produced by non-compliant manufacturers.
• Furthermore, PAISA does not defend against local jamming or wormhole attacks [71,78]. The latter is nearly impossible to thwart. However, we propose a method to partially handle these attacks in Sections 4.3 and 5.2.
• Finally, we do not explore policy issues and implications, i.e., the focus is on reliably informing users about adjacent devices. What users do with that information is left to future work. While we acknowledge that a practical system must include this component, space limitations make it hard to treat this topic with the attention it deserves.

Targeted IoT Devices

This work focuses on resource-limited IoT devices that have strict cost and energy constraints. Such devices tend to be deployed on a large scale and are meant to perform simple tasks, e.g., thermostats, security cameras, and smoke detectors. Due to these constraints, they are often equipped with micro-controller units (MCUs), such as the ARM Cortex-M series [19]. Nonetheless, our work is also applicable to higher-end computing devices (e.g., smartwatches, drones, and infotainment units) that are equipped with a TEE. Recall that very simple devices that have no security features are out of scope.

Figure 1 shows a general architecture of a device with an MCU and multiple peripherals. An MCU is a low-power computing unit that integrates a core processor, main memory, and memory bus on a single System-on-a-Chip (SoC). Its main memory is usually divided between program memory (or flash), where the software resides, and data memory (or RAM), which the software uses for its stack, heap, and peripheral memory access. A typical MCU also contains several internal peripherals, such as a timer, General-Purpose Input/Output (GPIO), Universal Asynchronous Receiver/Transmitter (UART), Inter-Integrated Circuit (I2C), and Serial Peripheral Interface (SPI).
Sensors & Actuators: Multiple purpose-specific sensors and actuators are connected to the MCU via internal peripherals. While sensors collect information from the environment, actuators control it. Examples of sensors are microphones, GPS units, and cameras, as well as smoke and motion detectors. Examples of actuators are speakers, light switches, door locks, alarms, and sprinklers.

Network Interfaces: IoT devices are often connected to the Internet and other devices, either directly or via a controller hub or a router. Thus, they are typically equipped with at least one network interface (such as WiFi, Bluetooth, Cellular, Ethernet, or Zigbee) attached to the MCU via internal network peripherals, e.g., UART, I2C, or SPI. WiFi and Cellular are used for wireless Internet connectivity at relatively high speeds. Bluetooth and Zigbee are used for relatively low-speed, short-range communication with other devices, e.g., a smartphone for Bluetooth, or a controller hub for Zigbee. Since WiFi is currently the most common interface available for IoT devices [122], PAISA uses it for broadcasting device announcements. However, any other broadcast medium (wired or wireless) can be supported; see Section 8 for more details.

Table 1 shows some examples of (low-end) commodity IoT devices with sensors, actuators, and their network interfaces.

Trusted Execution Environments (TEEs)

A TEE is a hardware-enforced primitive that protects the confidentiality and integrity of sensitive software and data from untrusted software, including user programs and the OS. Similar to some prior work [20,35,73,102], we use ARM TrustZone-M as the TEE for the PAISA prototype. TrustZone-M is available on ARM Cortex-M23/M33/M55 MCUs [32]. However, any TEE that offers trusted peripheral interfaces can be used instead.

ARM TrustZone-M: ARM TrustZone partitions the hardware and software within the MCU into two separate isolated regions: Secure and Normal. The former contains trusted security-critical code and data, while the latter houses user programs (or the device software). The MCU switches between secure and non-secure modes when accessing Secure and Normal regions, respectively. TrustZone hardware controllers prevent the MCU from accessing memory assigned to the Secure region when it is running in non-secure mode, resulting in a secure execution environment. Moreover, at boot time, TrustZone verifies the integrity of trusted code via secure boot and always begins executing from the Secure region before jumping into the Normal region. TrustZone for ARMv8-M MCUs is called TrustZone-M (TZ-M).

TZ-M features non-secure callable (NSC) functions for Normal-region software to invoke trusted code. Also, TZ-M can lock internal peripherals into the Secure region, making them inaccessible to the Normal region, via the TrustZone Security Controller (TZSC), which, when configured at boot, maps the desired peripherals into the Secure region. This mapping configuration is controlled by the TZSC and is checked by the secure-boot process at boot time. Furthermore, interrupts attached to secure peripherals are always directed to the corresponding Interrupt Service Routines (ISRs) in the Secure region. Also, the TrustZone Illegal Access Controller (TZAC) raises a SecureFault exception, when a security violation is observed, to the Nested Vectored Interrupt Controller (NVIC), which is then securely processed by exception handlers.
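To make the NSC mechanism concrete, below is a minimal sketch of a TZ-M veneer function, written with the standard ARM CMSE extensions. The function and buffer names are illustrative assumptions, not part of the PAISA codebase; the sketch only shows the general shape of a Secure-region entry point that Normal-region code can call.

```c
/* Secure-region code, compiled with -mcmse. A minimal sketch of an NSC
 * veneer: Normal-region software calls this to hand a packet to a
 * TCB-owned peripheral. All names here are illustrative. */
#include <arm_cmse.h>
#include <stdint.h>
#include <string.h>

#define MAX_PKT 256
static uint8_t tx_buf[MAX_PKT];

/* The cmse_nonsecure_entry attribute makes the compiler place the
 * SG (secure gateway) veneer in the NSC region. */
__attribute__((cmse_nonsecure_entry))
int32_t nsc_net_send(const uint8_t *buf, uint32_t len)
{
    if (len > MAX_PKT)
        return -1;
    /* Reject pointers into Secure memory: the Normal world must only
     * pass buffers that it is itself allowed to read. */
    if (cmse_check_address_range((void *)(uintptr_t)buf, len,
                                 CMSE_NONSECURE | CMSE_MPU_READ) == NULL)
        return -1;
    memcpy(tx_buf, buf, len);
    /* ... hand tx_buf to the secure network peripheral ... */
    return 0;
}
```

The address-range check is the important detail: without it, malicious Normal-region code could trick the TCB into reading (and leaking) Secure memory through the shared interface.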
Table 1: Examples of commodity IoT devices with their sensors, actuators, and network interfaces.

Device | Sensors | Actuators | Network I/F
X-Sense smart smoke detector [17] | smoke, carbon monoxide detector | alarm | WiFi
Amazon smart plug [1] | (none) | switch | WiFi
Blink Mini security camera [3] | microphone, motion, camera | speaker | WiFi
Google Nest thermostat [6] | light, motion, temperature, humidity | heating, cooling | WiFi
iRobot Roomba 694 [9] | cliff, dirt, optical | brush/vacuum motor, drive motor | WiFi
Fitbit fitness tracker [5] | accelerometer, heart rate monitor, GPS, altimeter | vibrating motor, speaker | Bluetooth
Wyze Lock Bolt smart lock [16] | fingerprint | lock, speaker | Bluetooth

PAISA relies on TZ-M for enabling a secure execution environment for its TCB and for implementing secure peripherals. For a comprehensive overview of TrustZone, see [129].

Other Active Roots-of-Trust (RoTs): Active RoTs prevent security violations, unlike their passive counterparts that detect them [44,63,111,116]. TEEs are considered active RoTs, since they prevent violations by raising hardware faults/exceptions, which are handled in the secure mode. Besides TEEs, some active RoTs have been proposed in the research literature, e.g., [22,47,99,128]. Notably, GAROTA [22] and AWDT [128] offer guaranteed execution of secure ISRs when a configured peripheral is triggered. Although the current focus is on off-the-shelf devices, we believe that PAISA can be applied to either GAROTA or AWDT devices. Section 8 discusses the applicability of PAISA to other architectures.

Remote Attestation (RA)

RA is a security service that enables the detection of malware presence on a remote device (Prv) by allowing a trusted verifier (Vrf) to remotely measure software running on Prv. RA is a challenge-response protocol, usually realized as follows: (1) Vrf sends an RA request with a challenge (Chal) to Prv. (2) Prv receives the attestation request, computes an authenticated integrity check over its software memory region (in program memory) and Chal, and returns the result to Vrf. (3) Vrf verifies the result and decides whether Prv is in a valid state. The integrity check is performed by computing either a Message Authentication Code (e.g., HMAC) or a digital signature (e.g., ECDSA) over Prv's program memory. Computing a MAC requires Prv to share a symmetric key with Vrf, while computing a signature requires Prv to have a private key with the corresponding public key known to Vrf. Both approaches require secure key storage on Prv. RA architectures for low-end MCUs [44,99] use MACs, whereas higher-end TEEs (e.g., Intel SGX [77] and AMD SEV [24]) use signatures.

PAISA uses RA to ensure the integrity of normal device operation, i.e., the device software controlling sensors and actuators. However, PAISA relies on TZ-M on the MCU to perform attestation locally, instead of via an interactive protocol. Also, it uses signatures to report the attestation result, similar to [24,77].

DESIGN OVERVIEW

PAISA primarily involves two parties: an IoT device (I_dev) and a user device (U_dev), e.g., a smartphone or a smartwatch. PAISA is composed of two modules: announcement on I_dev and reception on U_dev.

Announcement: On I_dev, the announcement module is trusted and housed inside a TEE. It ensures that, at periodic intervals, I_dev broadcasts an announcement to other devices within its immediate network reach. Such "reach", i.e., distance, is determined by the network interface, e.g., 802.11 WiFi beacons go up to 100 meters [15]. Importantly, PAISA guarantees that announcement packets are broadcast in a timely manner, even if all device software is compromised. This is achieved via a secure timer and a secure network interface, available on TZ-M.
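As a sketch of how the secure timer might be claimed, the snippet below targets the prototype's timer choice (CTIMER2, per Section 6) using CMSIS and NXP SDK calls. This is illustrative Secure-world code under stated assumptions, not the actual PAISA implementation; the match-register setup for the announcement period is elided.

```c
/* Sketch: claim the announcement timer for the Secure world and make its
 * interrupt non-preemptible. CTIMER2 is the timer the prototype uses. */
#include "fsl_ctimer.h"   /* NXP SDK CTIMER driver (pulls in CMSIS) */

void paisa_secure_timer_init(void)
{
    /* Target the CTIMER2 interrupt to the Secure state (its NVIC_ITNS
     * bit is cleared), so its ISR always runs inside the TCB. */
    NVIC_ClearTargetState(CTIMER2_IRQn);
    NVIC_SetPriority(CTIMER2_IRQn, 0);   /* 0 = highest priority */
    NVIC_EnableIRQ(CTIMER2_IRQn);

    ctimer_config_t cfg;
    CTIMER_GetDefaultConfig(&cfg);
    CTIMER_Init(CTIMER2, &cfg);
    /* Match-register setup so the interrupt fires every T_Announce
     * is elided here. */
    CTIMER_StartTimer(CTIMER2);
}
```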
An announcement packet consists of a fresh timestamp, a device description (sensors, actuators, and their purpose), and a signature that authenticates the origin of the packet as a legitimate I_dev.

Reception: On U_dev, the reception module captures the announcement packet via its network interface (of the same type as on I_dev). The module then parses the packet, validates its timestamp and signature, and conveys the presence and functionality of I_dev to the user.

The proposed design presents some challenges:

Device State & Attestation: Merely broadcasting static information, such as a device description, is not enough. If I_dev software is compromised, the information disseminated via announcement packets is invalid, since the software no longer matches the device description. For example, consider a user who enters an Airbnb rental and learns about a motion detector/tracker from PAISA announcements. Suppose that this motion detector is compromised and the malware notifies the adversary about the user's presence and movements. To handle such cases, the user needs authentic real-time information about the software running on I_dev at announcement time. Therefore, PAISA attests I_dev software and includes the timestamped attestation report in the announcement. The reception module on U_dev must check the attestation report as part of validating the announcement. If the attestation check fails, I_dev must be compromised and cannot be trusted, regardless of the description in the announcement.

Replay Attacks & Freshness: To protect against replay attacks and establish freshness of announcements (via timestamps), I_dev needs a reliable source of time. However, a real-time clock is generally not viable for resource-constrained devices [27,29,97]. To this end, PAISA includes a time synchronization technique: at boot time, I_dev synchronizes with a trusted server managed by the device manufacturer. See Sections 4.2 and 5.2 for details.

To summarize, PAISA comprises all the aforementioned components. Figure 2 presents a high-level overview of the PAISA workflow. As soon as I_dev boots, it synchronizes its time with the manufacturer server. Next, it attests its software and composes an announcement packet including the current timestamp, the attestation result, the device description, and a signature. Then, I_dev broadcasts the packet via WiFi. This is repeated for every timer interrupt, which is scheduled (likely configured by the manufacturer) according to the desired use-case. Each announcement is received by the PAISA app on every user device within range. After validating the announcement, the app alerts the user to I_dev's presence.
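To make the reception-side checks concrete, here is a minimal sketch of the validation flow on U_dev. The prototype implements this in an Android app (Section 6); the C rendering below, the freshness window, and all helper names are illustrative assumptions.

```c
/* Sketch of announcement validation on U_dev (names are illustrative). */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <time.h>

#define FRESH_WINDOW_S 10   /* replay-acceptance window (assumed value) */

/* Stand-in for ECDSA/P-256 verification with I_dev's public key. */
extern bool ecdsa_p256_verify(const uint8_t pk[64], const uint8_t *msg,
                              size_t len, const uint8_t sig[64]);

/* 'payload' is the announcement minus its trailing 64-byte signature. */
bool validate_announcement(const uint8_t *payload, size_t len,
                           const uint8_t sig[64], uint32_t time_dev,
                           bool att_ok, const uint8_t pk_dev[64])
{
    if ((uint32_t)time(NULL) > time_dev + FRESH_WINDOW_S)
        return false;                    /* stale: likely a replay */
    if (!ecdsa_p256_verify(pk_dev, payload, len, sig))
        return false;                    /* not from a legitimate I_dev */
    if (!att_ok)
        return false;                    /* I_dev software is compromised */
    /* Next: fetch Manifest_Idev via URL_Man, verify its signature, and
     * display the device description to the user. */
    return true;
}
```

Note the check order: freshness and signature verification come first, so a forged or replayed packet is rejected before any manifest is fetched.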
Entities Involved

PAISA considers three entities: I_dev, U_dev, and the manufacturer server (M_srv), which is responsible for provisioning I_dev at production time. I_dev is a resource-constrained IoT device installed either (1) in a public space, e.g., airports, restaurants, concert/sports venues, or stores, or (2) in a semi-private space, e.g., hotel rooms or Airbnb rentals. I_dev is assumed to be equipped with a TEE to protect the PAISA TCB from untrusted software (including the OS). U_dev is the personal and trusted device of the user. It is assumed to be within network transmission range of I_dev. U_dev has an app that receives and verifies PAISA announcements. M_srv is a back-end (and sufficiently powerful) trusted server hosted by the device manufacturer.

PAISA assumes multiple I_dev-s and multiple U_dev-s in the same IoT-instrumented space, i.e., within network transmission range. A U_dev receives announcements from multiple I_dev-s. I_dev-s are unaware of the U_dev-s in their vicinity. PAISA uses public key signatures to authenticate I_dev and verify announcements. We assume a public/private key-pair (pk_dev, sk_dev) for each I_dev and another key-pair (pk_M, sk_M) for each M_srv. pk_dev is used to authenticate I_dev as part of announcement verification.

PAISA Protocol Overview

The PAISA protocol has three phases: Registration, BootTime, and Runtime. Figure 3 shows its overview. The Registration phase takes place when I_dev is manufactured and provisioned. At the time of registration, besides installing the device software, M_srv installs the PAISA TCB on I_dev and provisions it with a device ID, a description, and a key-pair (pk_dev, sk_dev) using a Provision request. Further details about the device description are in Section 5.2. A provisioned I_dev is eventually sold and deployed by its owner/operator. The BootTime phase is executed at boot, after a reset or a power-on. Before going into normal operation, I_dev synchronizes its time with M_srv using the 3-way TimeSync protocol. At the end of this phase, the initial announcement is generated. The Runtime phase corresponds to I_dev's normal operation. In this phase, I_dev announces its presence based on a preset timer interval. Announcement periodicity is set by M_srv. (We are not advocating allowing owners to set this.) Whenever triggered by the timer, the Announcement procedure is invoked. It attests I_dev software and broadcasts an announcement (Msg_anno). A nearby U_dev receives Msg_anno using its Reception app, which parses and verifies Msg_anno. If the verification succeeds, Msg_anno is displayed to the user.

For the complete protocol description, see Section 5.2.

Adversary Model

We consider an adversary Adv that has full control over I_dev memory, including flash and RAM, except for the TCB and its data inside the TEE. Adv can attempt to tamper with any components and peripherals, including sensors, actuators, network interfaces, and debug ports, unless they are configured as secure by the TEE. All messages exchanged among I_dev, U_dev, and M_srv are subject to eavesdropping and manipulation by Adv, following the well-known Dolev-Yao model [56]. Furthermore, the Registration phase is considered secure: M_srv is trusted to correctly provision I_dev and keep the latter's secrets. The Reception app on U_dev is also considered trusted.

DoS Attacks: Adv can essentially incapacitate ("brick") I_dev by consuming all of its resources via malware. It can also keep all peripherals busy in an attempt to prevent the PAISA TCB from broadcasting Msg_anno packets. It can ignore or drop outgoing packets, or flood I_dev with incoming malicious packets. We also consider DoS attacks whereby a malware-controlled I_dev reboots continuously and floods M_srv with frivolous TimeSync requests. However, we do not consider an Adv that uses signal jammers to block U_dev from receiving Msg_anno. Such attacks are out of scope, and there are techniques [95,96,105] to prevent them.

Replay Attacks: We consider replay attacks whereby Adv replays old/stale Msg_anno-s from any PAISA-compliant I_dev-s. We also consider DoS attacks on U_dev, e.g., Adv replays old Msg_anno-s to swamp U_dev's network interface.
However, PAISA provides only coarse-grained location information, i.e., where I_dev was manufactured and where it was deployed, as recorded in the Registration phase.

Physical Attacks: PAISA does not protect against physically invasive attacks on I_dev, e.g., via hardware faults, modifying code in ROM, or extracting secrets via side-channels. We refer to [106] for protection against such attacks. However, PAISA protects against non-invasive physical attacks, i.e., if Adv tries to physically reprogram the device using wired debug interfaces such as JTAG. Such attacks are prevented using the secure boot feature of the TEE on I_dev.

Non-Compliant Devices: We do not consider attacks where Adv physically infiltrates an IoT-instrumented space and deploys malicious (non-compliant) hidden devices. As mentioned earlier, there are "spyware-type" techniques, such as [12,89,114], and other prior work, such as [112,113], that scan the area for hidden devices. However, even these techniques are error-prone, potentially computationally expensive and time-consuming for users, and/or require additional equipment.

Runtime Attacks: Another limitation of PAISA is that it does not handle runtime control-flow attacks, such as buffer overflows, nor non-control-flow and data-only attacks. PAISA can only detect software modifications via attestation. For mitigating these runtime attacks, there are techniques such as Control Flow Attestation (CFA) and Control Flow Integrity (CFI) [20,43,49,52,93,116]. Dealing with these attacks and deploying countermeasures is worthwhile, though out of scope for this paper. Furthermore, many CFA/CFI techniques are resource-intensive, making their use challenging in IoT settings.

Security & Performance Requirements

Recall that the main objective of PAISA is to make I_dev privacy-agile, i.e., via guaranteed periodic announcements from I_dev about its activity to adjacent U_dev-s, in the presence of the Adv defined in Section 4.3. To that end, PAISA must adhere to the following properties:
• Unforgeability: Announcements must be authenticated. U_dev should be able to verify whether Msg_anno is from a legitimate I_dev, i.e., Adv should not be able to forge Msg_anno.
• Timeliness: Announcements must be released at fixed time intervals. Adv should not be able to prevent Msg_anno-s from being sent out.
• Freshness: Announcements must be fresh and must reflect the current (software) health of I_dev. Adv should not be able to launch replay attacks.

With respect to performance, PAISA must achieve the following:
• Low latency of Announcement: Announcements must be quick, with minimal impact on normal utility.
• Low bandwidth of Announcement: Announcements must be short, to consume minimal network bandwidth on I_dev and U_dev.

PAISA DESIGN

This section elaborates on the design and protocol overview presented in Sections 3 and 4.
Design Challenges

A few design challenges (besides those mentioned in Section 3) must be addressed in order to achieve the security and performance requirements of PAISA.

DoS Attack Prevention on I_dev: Adv can launch DoS attacks by keeping either the MCU or the network peripherals busy, as mentioned in Section 4.3. To prevent such attacks, PAISA configures both the timer and the network peripheral as secure peripherals controlled by the TEE. By doing so, PAISA ensures that the MCU jumps into the TCB whenever the secure timer raises an interrupt according to the scheduled periodicity. Moreover, the timer interrupt is marked with the highest priority, so that no other interrupt can preempt it. This configuration (which determines which timer and network peripheral are trusted, and their interrupt priorities) is securely stored within the TEE. Hence, Adv cannot tamper with it. This also prevents DoS attacks that attempt to keep I_dev from executing the PAISA TCB, which provides guaranteed periodic broadcast of Msg_anno-s. A typical target MCU has 2-6 timers and multiple network peripherals, such as UART, SPI, and I2C. PAISA reserves one timer and one network peripheral for TCB usage. This means that the network interface (e.g., WiFi or Bluetooth) connected to that reserved network peripheral is marked as exclusive. We admit that reserving a network interface exclusively for TCB use might be expensive for I_dev, since at least one other interface (for regular use) would be needed. To address this issue, we implement a secure stub, akin to the ideas from [65,87,125], to share the reserved network interface between secure and non-secure applications, detailed in Section 6.3. For further discussion of this issue, see Section 8.

Bandwidth of Msg_anno: Broadcast messages are subject to size constraints that impact network efficiency and transmission capacity, regardless of the network type. Since the device description can be of arbitrary size, to minimize the size of Msg_anno, PAISA uses a fixed-size broadcast message by placing all pertinent information in a manifest file (Manifest_Idev). I_dev-generated Msg_anno-s carry only: (1) a URL that points to Manifest_Idev, and (2) some fixed-size metadata: a timestamp, an attestation report, a nonce, and a signature (see Section 6).
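Putting the fields together, a sketch of the resulting fixed-size packet is shown below. The field sizes come from the prototype figures in Section 6 (an 11-byte shortened URL, 4-byte UNIX-epoch timestamp, 5-byte attestation report, 32-byte nonce, and a 64-byte ECDSA-P256 signature, totaling 116 bytes); the field order and names are assumptions.

```c
/* A sketch of the 116-byte Msg_anno layout implied by Section 6. */
#include <assert.h>
#include <stdint.h>

typedef struct __attribute__((packed)) {
    uint8_t  url_man[11];  /* shortened URL to Manifest_Idev   (11 B) */
    uint32_t time_dev;     /* UNIX-epoch timestamp              (4 B) */
    uint8_t  att_result;   /* attestation verdict (boolean)     (1 B) */
    uint32_t att_time;     /* attestation timestamp             (4 B) */
    uint8_t  nonce[32];    /* per-announcement nonce           (32 B) */
    uint8_t  sig_anno[64]; /* ECDSA-P256 signature, r||s       (64 B) */
} msg_anno_t;

static_assert(sizeof(msg_anno_t) == 116,
              "Msg_anno must fit one vendor-specific beacon element");
```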
PAISA Protocol

Recall that PAISA includes three phases: Registration, BootTime, and Runtime. Below we describe each phase in detail.

Registration. In this phase, M_srv interacts with I_dev to provision it with the secrets and information needed to enable PAISA. Figure 5 depicts this phase.

Device Manifest: M_srv creates Manifest_Idev for I_dev, including a device ID and a description which covers: device type/model, manufacturer, date/location of manufacture, types of sensors/actuators, deployment purpose, network interfaces, owner ID, and location of deployment. Figure 4 shows Manifest_Idev examples. Manifest_Idev can also contain a link to developer documentation, as mentioned in [42]. Note that, whenever the owner changes I_dev's location, the corresponding manifest must be updated accordingly. The granularity of this location information influences the ability to mitigate wormhole attacks. We believe that the contents of Manifest_Idev suffice to make a user aware of I_dev's capabilities. However, the exact contents of Manifest_Idev are left up to the manufacturer. M_srv stores each Manifest_Idev in its database and generates a publicly accessible link URL_Man. Since URL_Man can be long, we recommend using a URL shortening service (such as Bitly [2] or TinyURL [14]) to keep URL_Man short and of fixed size. Hereafter, we use URL_Man to denote the short URL and URL_Man_Full the original URL. (Note that, if the shortening service is not used, then URL_Man is identical to URL_Man_Full.) For simplicity's sake, besides manufacturing I_dev, we assume that M_srv is responsible for deploying and maintaining the software on I_dev. However, in practical scenarios, other entities, such as software vendors, can be involved in managing individual applications on I_dev. In such cases, vendors must be integrated into the trust-chain by including their information and certificates in Manifest_Idev. Whenever a vendor-imposed software update occurs, Manifest_Idev must be updated and re-signed by M_srv. We further discuss this update process in Section 8.

Provision: M_srv installs the device software and the PAISA TCB into the Normal and Secure regions of I_dev, respectively. M_srv ensures that the timer and the network peripheral are configured as secure and exclusively accessible to the PAISA TCB. Also, M_srv sends the device ID and a hash of the device software to I_dev, to be stored in the PAISA TCB. Next, the PAISA TCB picks a new public/private key-pair (pk_dev, sk_dev) and sends pk_dev to M_srv for certification. M_srv also gives the current timestamp to the PAISA TCB, to be used for implementing a clock on I_dev (see BootTime below).

BootTime. A typical real-time clock (RTC) [13] is powered by a separate power source, thus ensuring that time is always accurate. However, most resource-constrained IoT devices lack such an RTC. To this end, PAISA includes a secure time synchronization (TimeSync) protocol between I_dev and M_srv. It assumes that M_srv is both reachable and available at all times.

The main idea of TimeSync is to receive the latest timestamp from M_srv whenever I_dev (re)boots, or (optionally) at regular intervals. Figure 6 shows the BootTime protocol.

TimeSync: After completing the boot-up sequence, I_dev sends a time synchronization request SyncReq to M_srv, which includes its device ID and the previous timestamp time_prev, given by M_srv at Provision or at the TimeSync of the last boot. SyncReq also contains a signature, to authenticate its origin as a legitimate I_dev and to prevent DoS attacks on M_srv via flooding of fake requests. Upon receiving SyncReq, M_srv verifies the signature using pk_dev and responds with SyncResp, which includes the current timestamp time_cur. Upon receipt of SyncResp, I_dev verifies the signature using pk_M, obtained at Provision. If verification succeeds, I_dev updates its local timestamp and sends an authenticated acknowledgment SyncAck to M_srv. Finally, M_srv verifies SyncAck and updates its local registered-time database for I_dev. The next time I_dev requests a TimeSync, M_srv will know whether the signature is based on the same time_prev it previously sent. At the end of the protocol, I_dev and M_srv hold the same time_cur. Given the unavoidable network transmission latency, we suggest keeping a window of acceptance when verifying timestamps.

Subsequently, I_dev keeps itself synchronized with M_srv by re-starting the secure timer after receiving and updating time_prev. Thereafter, I_dev computes the latest time by adding time_prev and the secure timer value; we denote this time as time_dev. However, since the secure timer value might still drift due to hardware inconsistencies, repeating TimeSync at regular intervals is recommended.
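A minimal sketch of the device-side SyncResp handling, including the acceptance window just mentioned, might look as follows. The window value, helper names, and the exact checks are illustrative assumptions; verify_sig() stands in for ECDSA verification with pk_M.

```c
/* Sketch of the I_dev side of TimeSync (names are illustrative). */
#include <stdbool.h>
#include <stdint.h>

#define ACCEPT_WINDOW_S 5   /* tolerance for network latency (assumed) */

extern bool verify_sig(const uint8_t *msg, uint32_t len,
                       const uint8_t sig[64]);   /* uses pk_M */

static uint32_t time_prev;  /* set at Provision or the last TimeSync */

bool handle_sync_resp(const uint8_t *resp, uint32_t len,
                      const uint8_t sig[64], uint32_t time_cur)
{
    if (!verify_sig(resp, len, sig))
        return false;            /* not from the legitimate M_srv */
    if (time_cur + ACCEPT_WINDOW_S < time_prev)
        return false;            /* server time ran backwards: reject */
    time_prev = time_cur;        /* adopt the server's timestamp ... */
    /* ... then restart the secure timer, so that afterwards
     * time_dev = time_prev + (secure timer value). */
    return true;
}
```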
Runtime. The current PAISA design uses a push model, whereby I_dev periodically transmits Msg_anno-s at fixed intervals. An intuitive alternative is a pull model, in which U_dev announces its presence first and, in response, solicits information from all nearby I_dev-s. This is similar to the Access Point (AP) discovery process in WiFi: U_dev emits a "Probe Request", to which an AP responds with a "Probe Response" containing the various network parameters needed to establish the connection. In the same fashion, an I_dev that receives a "Probe Request" could include Msg_anno in the "Probe Response" and send it to U_dev. One advantage of the pull model is that Msg_anno-s are only sent when they are needed, thus reducing the burden on individual I_dev-s and easing network traffic congestion. On the other hand, it becomes more challenging to deal with "sleeping" or intermittently powered-off I_dev-s, thereby raising energy consumption issues. In any case, we intend to explore the pull model further as part of near-future work.

The PAISA Runtime phase is shown in Figure 7.

Attest and Announce periodicity: If T_Attest is the same as T_Announce, then attestation and announcement are performed sequentially. This is recommended, so that U_dev always receives the latest information about I_dev. However, periodicity can be adjusted based on device capabilities and desired use-cases. If I_dev is a weak low-end device and/or must prioritize its normal applications, T_Attest can be longer than T_Announce.

We note that user linkage might occur if U_dev fetches multiple Manifest_Idev-s from the same M_srv, assuming the latter is honest-but-curious. To mitigate this, there are well-known techniques for anonymous retrieval, such as Tor. Although this issue is somewhat outside the scope of this paper, we discuss it further in Section 8.

IMPLEMENTATION

This section describes PAISA implementation details. All source code is publicly available at [28].

Implementation Setup

As I_dev, we use the NXP LPC55S69-EVK [11] development board, based on an ARM Cortex-M33 MCU (in turn based on the ARMv8-M architecture) equipped with ARM TrustZone-M (TZ-M). The board runs at 150 MHz with 640KB flash and 320KB SRAM. For the network interface, we connect an ESP32-C3-DevKitC-02 [4] board via UART to the NXP board. This network interface runs 2.4 GHz WiFi (802.11b/g/n) and is connected to the Internet via a local router. M_srv is emulated using a Python application running on an Ubuntu 20.04 LTS desktop with an Intel i5-11400 processor at 2.6GHz with 16GB RAM. M_srv is connected to I_dev using UDP for TimeSync.

As U_dev, we use a Google Pixel 6 [7], with 8 cores running at up to 2.8GHz. Both I_dev and U_dev use WiFi as their network interface to transmit/receive announcements. Figure 8 depicts the implementation architecture and Figure 10 illustrates the complete prototype.

TCB configuration on TZ-M: The CTIMER2 and UART4 peripherals are configured as secure, ensuring that only the TCB can access them. This assurance is provided by TZ-M, which raises a SecureFault (i.e., a hardware fault) whenever a non-secure application attempts to modify the configuration or access the secure peripherals directly. When a SecureFault is issued, the MCU enters the SecureFault handler within the TCB, where PAISA resets the MCU. Therefore, even if Adv attempts to cause a DoS attack by raising SecureFaults, PAISA issues announcements by transmitting a new Msg_anno as soon as the device awakes, before any normal activity. Also, the secure timer is configured, with the highest priority, to interrupt the MCU via the NVIC every T_Announce. Hence, no other user-level interrupt can preempt the announcement schedule.
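Putting the Runtime pieces together on this hardware, the secure timer ISR might look like the sketch below. All helper functions are illustrative stand-ins for PAISA TCB routines, and the attest-to-announce ratio is an assumed default.

```c
/* Sketch of the Runtime phase: the secure timer ISR attests (when
 * T_Attest is due), then signs and broadcasts Msg_anno. */
#include <stdbool.h>
#include <stdint.h>

extern bool attest_app_region(void);              /* SHA-256 over app flash */
extern void build_and_sign_msg_anno(bool att_ok); /* timestamp + signature  */
extern void uart_send_msg_anno(void);             /* to the ESP32 via UART4 */

static uint32_t n_announce;
static const uint32_t ATTEST_EVERY = 1;  /* T_Attest = T_Announce (default) */
static bool att_ok;

void CTIMER2_IRQHandler(void)            /* runs in the Secure world */
{
    if (n_announce++ % ATTEST_EVERY == 0)
        att_ok = attest_app_region();    /* refresh the Att_report verdict */
    build_and_sign_msg_anno(att_ok);
    uart_send_msg_anno();
    /* Clearing the CTIMER2 match/interrupt flag is elided. */
}
```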
Implementation Challenges

How to announce? An interesting challenge is how to broadcast Msg_anno when I_dev does not have a connection with U_dev. A naive option is to broadcast Msg_anno via UDP packets. However, this is not a robust model, since the local WiFi router in the subnet must be trusted to relay packets to U_dev-s. Moreover, it requires U_dev-s to be connected to the router to receive Msg_anno-s, which is not a fair assumption. To mitigate this issue, we use IEEE 802.11 standard WiFi beacon frames [15]. Beacon frames are typically used by routers or APs to advertise their presence. PAISA can use such beacon frames to broadcast Msg_anno, letting other devices know of I_dev's presence, akin to a router. More specifically, PAISA uses vendor-specific elements in the beacon frame to carry Msg_anno.

Msg_anno size limitation: Msg_anno size is limited to 255 bytes, per the length of a vendor-specific element in a beacon frame. Hence, to fit into that size, we minimized all fields in Msg_anno. By using Bitly, URL_Man can be reduced to 11 bytes. By using ECDSA with the Prime256v1 curve, Sig_anno can be reduced to 64 bytes. By using the UNIX Epoch format, time_dev requires only 4 bytes. Only 5 bytes are needed for the attestation report, including one byte for the attestation result (a boolean) and 4 bytes for the attestation timestamp. In total, Msg_anno size is about 116 bytes, including a 32-byte nonce.

A typical WiFi router beacon frame observed in our experiments is between 200 and 450 bytes. The beacon frame generated for a PAISA Msg_anno is 240 bytes. It is relatively small, since it contains only one vendor-specific element and no other optional tags (besides required fields), in contrast with a typical beacon frame that carries multiple proprietary optional tags.

Signing overhead: Computing a signature is performance-intensive. Some very low-end devices cannot afford it at all, due to the heavy cryptographic computations, and others take several seconds per signature. Fortunately, TEEs such as TrustZone are usually (though optionally) equipped with cryptographic hardware support. In our implementation, we use the cryptographic accelerator, CASPER, on the NXP board to perform Elliptic Curve Cryptography (ECC), reducing the signing overhead.
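To illustrate the beacon approach described above, the ESP32 side can wrap Msg_anno in a vendor-specific information element and transmit it as a raw beacon via ESP-IDF's raw-TX API. In this sketch, the 802.11 management header (with SSID "PAISA") is assumed to be prebuilt, and the OUI bytes are illustrative.

```c
/* Sketch: embed Msg_anno in a vendor-specific IE and transmit it as a
 * raw beacon with ESP-IDF. 'hdr' is an assumed prebuilt mgmt header. */
#include <string.h>
#include "esp_wifi.h"

#define MSG_ANNO_LEN 116

esp_err_t paisa_tx_beacon(const uint8_t *hdr, int hdr_len,
                          const uint8_t msg_anno[MSG_ANNO_LEN])
{
    uint8_t frame[320];
    int off = hdr_len;

    memcpy(frame, hdr, hdr_len);
    frame[off++] = 0xDD;              /* element ID: vendor-specific */
    frame[off++] = 3 + MSG_ANNO_LEN;  /* tag length: OUI + payload   */
    frame[off++] = 0x00;              /* OUI byte 1 (illustrative)   */
    frame[off++] = 0x14;              /* OUI byte 2                  */
    frame[off++] = 0x6C;              /* OUI byte 3                  */
    memcpy(&frame[off], msg_anno, MSG_ANNO_LEN);
    off += MSG_ANNO_LEN;

    /* SoftAP interface; en_sys_seq=true lets the MAC layer assign
     * sequence numbers. */
    return esp_wifi_80211_tx(WIFI_IF_AP, frame, off, true);
}
```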
Trusted Software in I_dev

Figure 8 shows that I_dev contains three applications: a non-secure application in the Normal region, the PAISA TCB in the Secure region, and the network stack connected to the secure UART4 interface.

Non-secure application: We implemented sample thermal-sensor software as a non-secure application in the Normal region. The software reads temperature data from the sensor (on the NXP board) every second and sends it to an external server via the network interface. Since the network interface is exclusive to the secure world, we implemented a secure stub that can be invoked by an NSC function, allowing non-secure applications to access the network interface. This stub always prioritizes PAISA announcements over other requests.

For cryptographic operations, we use the Mbed TLS library [10] on both I_dev and M_srv. At Provision, I_dev and M_srv both sample new pairs of ECC keys based on the Prime256v1 curve.

PAISA TCB: The TCB mainly contains three modules: Secure Timer ISR, Attestation, and Announcement. The Secure Timer ISR, connected to CTIMER2, is executed when the announcement interval T_Announce is triggered via the NVIC. This ISR first calls the Attestation module, if T_Attest is met, and then invokes the Announcement module. The Attestation module computes SHA256 over the application program memory, in 4KB chunks, and generates Att_report, as shown in Figure 7. Next, the Announcement module creates Msg_anno and sends it to the WiFi interface using USART_WriteBlocking().

Network Stack: The ESP32-C3-DevKitC-02 board houses WiFi and Bluetooth on a single board, running on a 32-bit RISC-V single-core processor at 160 MHz. The board complies with the IEEE 802.11b/g/n protocol and supports Station mode, SoftAP mode, and SoftAP + Station mode. The PAISA TCB uses Station mode for TimeSync with M_srv and SoftAP mode for Announcement to U_dev. After receiving Msg_anno via uart_read_bytes(), the WiFi module generates a beacon frame using the esp_wifi_80211_tx() API and sets SSID="PAISA". Figure 9 shows an example of the produced beacon frame. It includes Msg_anno in the vendor-specific element: the first byte indicates the Element ID (vendor-specific), the second byte denotes the length of the tag, and the next three bytes carry the Organizationally Unique Identifier (OUI), here Netgear's, while the remaining bytes carry the Msg_anno contents. The beacon frame is transmitted according to the same WiFi beacon standard.

Reception App in U_dev

We implemented Reception as an Android app on U_dev, a Google Pixel 6. It was developed using Android Studio. To scan for beacon frames, Reception requires location and WiFi access permissions, enabled by setting ACCESS_FINE_LOCATION and CHANGE_WIFI_STATE in the app configuration. Reception uses the getScanResult() API in the wifi.ScanResult library to scan for and identify WiFi beacon frames containing SSID="PAISA". Then, it uses the marshall() API from the os.Parcel library to extract the list of vendor-specific elements from the frame. Next, the app parses Msg_anno and fetches Manifest_Idev from URL_Man using the getInputStream API in the net.HttpURLConnection library. After receiving Manifest_Idev, it verifies the signatures in Manifest_Idev and Msg_anno using the corresponding public keys via the java.security library. Finally, it displays the device description and the attestation report on screen, as shown in Figure 10. The Reception app also has a "SCAN PAISA DEVICE" button (as shown in the figure) to explicitly scan for I_dev-s.

EVALUATION

This section presents the security and performance analysis of PAISA.
Security Analysis

We argue the security of PAISA by considering an Adv (defined in Section 4.3) that attempts to attack the TimeSync and Announcement modules, and by showing how PAISA defends against such an Adv. An Adv who controls the Normal region of I_dev can attack PAISA in the following ways: (a) attempt to modify or read the code, data, and configuration of the secure modules, (b) attempt to keep the normal application busy (e.g., by running an infinite loop), (c) attempt to continuously raise interrupts to escalate into the privileged mode of execution in the Normal region, (d) attempt to broadcast fake, or replay old, Msg_anno-s, (e) tamper with or drop TimeSync messages, and (f) attempt to violate the privacy of U_dev.

First, the TZSC in TZ-M hardware ensures the protection of all memory within the Secure region, including the secure peripheral configuration. Thus, it raises a SecureFault when (a) occurs and gives control back to the Secure-region handler.

Second, the NVIC configuration of the MCU ensures that the secure timer has the highest priority (i.e., it is not preemptible), and, when that timer interrupt occurs, it is guaranteed to invoke the secure timer ISR within the Secure region. Hence, despite Adv's attempts to block announcements via (b) or (c), Announcement is executed in a timely manner. Moreover, the network module is under the control of the secure UART; thus, even it cannot be blocked by malicious applications. Additionally, since announcements reach U_dev within one hop, an Adv on the Internet cannot affect them.

Third, the unforgeability guarantee of the signature scheme ensures that Adv cannot generate a correct Msg_anno without knowing sk_dev. This entails that Adv cannot modify Att_report to hide compromised applications, modify the timestamp of an old Msg_anno to create a fake new one, or make a Msg_anno point to a wrong Manifest_Idev, since U_dev catches these during Verify. Similarly, Adv cannot get away with replaying an old Msg_anno with a valid Att_report, because U_dev detects obsolete messages based on their timestamps. Hence, (d) is not possible.

Fourth, messages exchanged in TimeSync are all authenticated with signatures, so tampering is not viable. Next, since the network module on I_dev is secure, Adv cannot drop packets going out of I_dev. However, an Adv on the Internet can intercept and drop messages that are in transit between I_dev and M_srv. For that, PAISA retransmits when necessary, as mentioned in Section 5.2. Additionally, Adv can launch network DoS attacks by flooding I_dev or M_srv during TimeSync. Nonetheless, this does not defeat the purpose of PAISA because, in that case, I_dev has not finished booting and resumed its activity, so no Msg_anno needs to be announced anyway.

Lastly, an Adv compromising one or more I_dev-s can attempt to trace U_dev's location. However, by virtue of public key cryptography, U_dev need not connect to any I_dev to learn about the IoT activity in its vicinity. Therefore, there is no user privacy leakage at all.

The above five points conclude the security argument of PAISA, ensuring it meets all security requirements stated in Section 4.4.

Performance Analysis

Note that we measure the mean and standard deviation of each performance value over 50 iterations.

Performance of I_dev: PAISA overhead on I_dev is measured in two phases: BootTime and Runtime.
BootTime comprises the time taken for device initiation (InitDevice), TimeSync, and Announcement. During InitDevice, I_dev initializes the MCU itself and the peripherals, including timers, sensors, actuators, and network interfaces. Next, during TimeSync, I_dev initializes its WiFi module in Station mode to connect to M_srv using UDP. After a successful connection, I_dev and M_srv communicate to synchronize the former's clock. Then, I_dev executes Announcement to issue its first Msg_anno. As shown in Table 2, the time for InitDevice is 9.66ms, with negligible standard deviation, whereas the average latency of TimeSync is 1,076ms, with a significant deviation of 187ms. This is because TimeSync includes network delay and all messages exchanged between the parties. Another reason for the high mean latency of TimeSync is: (a) two signing operations, during SyncReq and SyncAck, and (b) one verification operation, during SyncResp. Each ECDSA signing/verification operation takes ≈ 230ms at 150MHz. Finally, Announcement takes 236ms, which includes one signing operation and a beacon frame transmission. Adding all of these, the total boot time is about 1.3s, which is mostly due to TimeSync and Announcement. However, since this happens infrequently, we believe it is reasonable.

Runtime overhead stems from the PAISA Announcement module. Figure 11 shows the performance of Announcement with a variable size of the attested region. The latency for generating and signing an Msg_anno is constant, since the signature is over a fixed-size value. Attestation latency grows linearly with the attested memory size, since it requires hashing. However, signing takes significantly longer, about 230ms, than attestation, which only requires 1ms for 64KB. This is because public key operations naturally take more time than hashing. Therefore, Announcement latency almost equals that of one signature operation. Also, the software size of mid-to-low-tier devices is typically under 100KB. Even if it reaches 1MB, attestation would take only ≈ 16ms, which is 14 times less than one signature. Furthermore, during Announcement, the runtime overhead of the network interface is negligible, amounting to ≈ 135µs, which has minimal impact on overall latency.

DISCUSSION & LIMITATIONS

We now discuss some limitations of PAISA and potential mitigations.

Run-time Overhead: To measure run-time overhead on I_dev, we define CPU utilization (U) as the percentage of CPU cycles that remain available to the normal application amidst the announcements: U = t_norm / (t_norm + t_anno). Here, t_norm is the CPU time available to the normal application between two announcements, which equals T_Announce, and t_anno is the time taken for one announcement, which is nearly 250 ms (from Section 7.2). So, if T_Announce = 1s, then U = 80% of normal utility, which is not good for general applications. If T_Announce = 100s, then U = 99.7%, but this is not good for users, since they could remain unaware of I_dev for up to 100s. Therefore, depending on the application, there is a desired balance between normal utility and the announcement interval.
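The two quoted numbers follow directly from the formula; a worked instance (the symbol names t_norm and t_anno are ours, reconstructed from the surrounding text):

```latex
% CPU utilization U as a function of the announcement period,
% with t_anno ~ 0.25 s (the announcement latency from Section 7.2):
U \;=\; \frac{t_{\mathrm{norm}}}{t_{\mathrm{norm}} + t_{\mathrm{anno}}},
\qquad t_{\mathrm{norm}} = T_{\mathrm{Announce}}
% T_Announce = 1 s:    U = 1/(1 + 0.25)      = 80\%
% T_Announce = 100 s:  U = 100/(100 + 0.25) \approx 99.75\% \;(\sim 99.7\%)
```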
There are other ways to reduce the overhead of PAISA. If the normal application binary is large, T_Attest can be increased to lower the overhead at every T_Announce. However, this might not yield much of a reduction since, as can be seen in Figure 11, signing incurs a higher overhead than attestation. Therefore, we consider the following option.

If the activity schedule of I_dev is known, it can pre-compute multiple Msg_anno-s during idle time and later release one at a time. In this case, the amortized (real-time) overhead would be significantly lower, since it would be due only to broadcasting Msg_anno. For example, a smart speaker can precompute a day's worth of announcements at midnight and gradually release them. However, this approach is only applicable to devices that are not real-time and/or safety-critical. Also, in settings where a group of very low-end devices (e.g., smart bulbs) is connected to a local hub or controller, the latter can act as a PAISA proxy, i.e., it can broadcast a collective announcement on behalf of the entire group of its constituent devices.

Compatibility with Other RoTs: PAISA can be applied to any architecture that offers a secure timer and a secure network interface. ARM TrustZone-A (TZ-A) is widely available in higher-end IoT devices that rely on ARM Cortex-A-based microprocessors (e.g., Raspberry Pi and Rock Pi). Since TZ-A offers guarantees similar to TZ-M's, PAISA can be directly realized on the former. For lowest-end MCUs, such as TI MSP430 [76] and AVR ATMega8 [18], an active RoT called GAROTA [22] offers secure timer, GPIO, and UART peripheral support, based on some additional custom hardware. PAISA can be applied to GAROTA by extending the secure timer TCB of GAROTA to include periodic announcements. Furthermore, there is a software-based MultiZone TEE [69] for RISC-V-based MCUs. Relying on the Physical Memory Protection (PMP) unit, MultiZone divides memory and peripherals into well-isolated regions, called Zones, which are configured at compile-time. PAISA can be implemented as one of the Zones, with a timer peripheral and a network peripheral assigned to it.

Compatibility with Other Network Interfaces: We believe that PAISA is compatible with network interfaces besides WiFi, such as Bluetooth Low Energy and Cellular. For example, with Bluetooth version 5.0 and above, devices scan for other nearby devices by broadcasting packets that contain the sender address and an advertising payload of up to 255 bytes. A PAISA announcement (116 bytes) easily fits into this payload.

Secure Update on I_dev: To support secure software updates on I_dev, M_srv or software vendors can initiate an update request by sending the new software along with its authorization token. This token is generated using a private key for which the corresponding public key is known to I_dev. Implementing this process requires extending the PAISA TCB to include token verification and update installation. We expect that this update procedure can be implemented in a manner similar to existing frameworks, such as [47,82,109].

User Linkage: There are both practical and conceptual techniques for anonymous retrieval that can be used to fetch Manifest_Idev-s. The former include Tor, Mix Networks (e.g., Jondo and Nym), and peer-to-peer networks (e.g., I2P, Freenet). They all facilitate anonymous communication; however, their use might be illegal in some jurisdictions, while in others it might be impractical due to additional requirements, such as a Virtual Private Network (VPN). Conceptual techniques include privacy-preserving cryptographic constructs, such as Private Information Retrieval (PIR) [26,94] and Oblivious RAM (ORAM) [88,124]. Using these types of techniques would require building customized "wrappers" for PAISA.

PAISA TCB: As discussed in Section 7.2, though the TCB size of the main device is small, the total size (including the network driver) increases
PAISA TCB: As discussed in Section 7.2, though the TCB size on the main device is small, the total size (including the network driver) increases the attack surface. Unfortunately, this is unavoidable, because PAISA's main objective is guaranteed announcements, which necessitates its reliance on a trusted network interface. However, to alleviate this problem, we suggest pruning the network module to contain only what is absolutely necessary. For example, PAISA only requires the driver to establish a UDP connection with M_svr and to broadcast WiFi beacon frames. The rest of the driver module (including TCP, HTTP, etc.) can be removed, thus significantly reducing the binary size. However, if normal applications want to use these protocols (via the secure stub mentioned earlier), the driver has to retain them.

Exclusive Network Module: To ensure protection from DoS attacks, PAISA requires exclusive access to a network peripheral on I_dev. This is because a shared network interface can easily be exploited by Adv, by keeping the interface busy and not allowing Msg_anno packets to be sent out. However, reserving a network interface exclusively for TCB use is expensive, since the device budget (in terms of cost and/or energy) might not permit an additional interface for normal use. To address this concern, we suggest using techniques such as [65,87,125] that involve a secure stub sharing peripherals between secure and non-secure programs. The main idea is to lock the network interface as a trusted peripheral controllable only by TZ-M. A stub is then implemented in the secure region that carefully parses inputs and relays them to the trusted interface. This stub is made available to normal applications by exposing an NSC function callable from the normal region. Furthermore, the stub must also implement a scheduling queue for handling requests from both secure and non-secure applications (a sketch of such a queue appears below). This way, there is no need to equip I_dev with an additional interface. We implement basic functionality of this approach as a proof-of-concept; it is available as part of [28]. Nonetheless, we emphasize that, for the "timeliness" property of PAISA, the Announcement module is always given higher priority for accessing the network interface.

Role of M_svr: PAISA relies on M_svr for TimeSync and for hosting a database of Manifest_Idev-s. If the number of I_dev-s provisioned by M_svr is high and M_svr is consistently overloaded with requests, we suggest using helper third-party servers in the local area of deployment. Of course, such servers must be certified by M_svr to prove their authenticity when responding to TimeSync and Manifest_Idev retrieval requests.
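As a minimal illustration of the scheduling queue mentioned under "Exclusive Network Module" (our sketch; all names are hypothetical), announcements from the TCB can be given strict priority over normal-world traffic:

```python
# Sketch (hypothetical): the secure stub's scheduling queue.  TCB requests
# (announcements) always preempt normal-world requests, preserving PAISA's
# timeliness property.  `nic_send` stands in for the trusted network driver.
import heapq
import itertools

TCB, NORMAL = 0, 1            # lower value = higher priority
_counter = itertools.count()  # FIFO tiebreaker among equal priorities
_queue: list = []

def submit(priority: int, frame: bytes) -> None:
    heapq.heappush(_queue, (priority, next(_counter), frame))

def drain(nic_send) -> None:
    while _queue:
        _, _, frame = heapq.heappop(_queue)
        nic_send(frame)       # announcements (TCB) are always sent first
```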
RELATED WORK

Related work can be classified into six categories.

Active RoTs: Active RoTs proactively monitor activity on MCUs to prevent (or minimize the extent of) compromises. For example, [22,46,47] are hardware/software co-design architectures that guarantee the execution of critical software even when all device software is compromised. [46] guarantees sensor data privacy by letting only authorized software access sensor data via secure GPIO peripherals. [47] prevents code injection attacks by allowing only authorized software to run on the MCU, while preventing any other software from modifying it except via secure authorized updates. Meanwhile, [72,128] rely on ARM TrustZone or a similar class of MCUs to protect devices from being "bricked", by resetting and updating the device whenever it does not respond to a watchdog timer.

Remote Attestation: There is a large body of research proposing remote attestation architectures for a wide range of devices. [25,36,44,45,57,84,91,99,100,108,111,120] propose attestation architectures for MCUs. There are also architectures such as [20,48,49,52,53,63,116,131] that discuss runtime attestation techniques, including control-flow and data-flow attestation, for low-end MCUs. All the aforementioned attestation architectures can be integrated with the active RoTs mentioned earlier to enable PAISA. For servers and high-end IoT, there are TEE architectures such as Intel SGX [77], AMD SEV [24], Sanctum [41], and Keystone [85] that provide attestation APIs for attesting in-enclave applications. However, these are not applicable to PAISA, because PAISA attests and reports the normal region instead of the secure region.

ARM TrustZone: Much prior work leveraged TrustZone to improve the security of systems from various perspectives. [35,73,92] use TZ-A as an authorization tool for non-secure applications. [35] proposes an authorization architecture, enabled by TZ-A, to regulate smaller user devices connected to IoT hubs. [73] implements a user authentication scheme based on TZ-A on smartphones. Besides these, TZ-M is also used to enhance security in several constrained settings, e.g., to optimize secure interrupt latencies [102], improve real-time systems [126], mitigate control-flow attacks [20,90], and add support for virtualization [104]. Similarly, in PAISA, we use TZ-M to trigger announcements at regular intervals.

Hidden IoT Device Detection: To detect hidden IoT devices in unfamiliar environments, a few approaches have been proposed in recent years. "Spyware" solutions such as [12,114] are popular detectors; however, the detector must be in close proximity to the IoT device. [89] designs specialized hardware, a portable millimeter-wave probe, to detect electronic devices. [107] leverages the time-of-flight sensor on commodity smartphones to find hidden cameras. However, these approaches either take significant time or require specialized hardware to detect the devices. Moreover, they can only detect IoT devices, not identify them.
On the other hand, [68,101,112,113] observe WiFi traffic to identify hidden devices. In particular, [112] monitors coarse attributes in the WiFi 802.11 layer to classify IoT devices. [113] establishes causality between WiFi traffic patterns to identify and localize an IoT device. [101] uses autoencoders to automatically learn features from IoT network traffic and classify devices. However, all the aforementioned techniques rely upon probabilistic models; hence, they can be error-prone, especially when newer devices appear or when the adversary is strong enough to bypass the detection logic; moreover, they are computationally intensive. Conversely, PAISA takes a systematic approach to making users aware of the devices, with minimal computation on their end. Furthermore, PAISA announcements convey more information about the device, such as its revocation status, software validity, and complete device description, which is not possible with other approaches.

Broadcasting Beacon Frames: [38] proposes a technique, Beacon-stuffing, that allows WiFi stations to communicate with APs without associating with any network. Subsequently, many applications of Beacon-stuffing have been introduced over the past decade. [23] uses beacon frames to determine whether a given device is physically located near a user device while the user is using the former for Two-Factor Authentication. [118] achieves two-way encrypted data transmission by injecting custom data into the probe request frame. [54] proposes a smartphone-based Car2X communication system that alerts users about imminent collisions by replacing the SSID field in the beacon frame with the alert message. Following the 802.11 standard, [66] shows that custom information can be embedded in a beacon frame by modifying vendor-specific fields.
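To illustrate why a 116-byte announcement fits comfortably in such a field, here is a sketch of packing bytes into an 802.11 vendor-specific information element (element ID 221); the OUI is a placeholder, and this is our illustration rather than the exact encoding used by [66] or PAISA:

```python
# Sketch: packing a 116-byte Msg_anno into an 802.11 vendor-specific
# information element (element ID 221).  The OUI below is a placeholder,
# not an assigned identifier.
def vendor_specific_ie(msg_anno: bytes, oui: bytes = b"\x00\x11\x22") -> bytes:
    assert len(oui) == 3
    body = oui + msg_anno                  # OUI (3 B) + payload
    assert len(body) <= 255                # IE length field is a single byte
    return bytes([221, len(body)]) + body  # element ID, length, body

frame_element = vendor_specific_ie(b"\x00" * 116)
print(len(frame_element))  # 121 bytes: 2-byte header + 3-byte OUI + payload
```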
IoT Privacy: Some prior work focused on enhancing user privacy in the context of IoT via Privacy Assistants (PA-s), user notices, and consent. PA-s [58,70,79] provide users with an automated platform to configure their privacy preferences on nearby IoT resources. For example, a recent study [40] interviews 17 participants to learn user perceptions of several existing PA-s and identifies issues with them. It then suggests ideas to improve PA-s in terms of automated consent and helping users opt out of public data collections. [62] explores a comprehensive design space for privacy choices based on a user-centered analysis, organizing it around five dimensions (e.g., type, functionality, and timing). It also devises a concrete use case and demonstrates an IoT privacy choice platform in real-world systems.

Furthermore, some research efforts have explored privacy and security labels (akin to food nutrition labels) for IoT devices. For example, [59] suggests a set of IoT privacy and security labels based on interviews and surveys. It identifies 47 crucial factors and proposes a layered label approach to convey them. [60] conducts a survey with 1,371 online participants to evaluate the privacy factors proposed in prior research along two key dimensions: the ability to convey risk to consumers and the impact on their willingness to purchase an IoT device. The study also yields actionable insights on optimizing the existing privacy and security attributes of IoT labels. Similarly, [61] conducts a survey with 180 online participants to evaluate the impact of five security and privacy factors (e.g., access control) on participants' purchase behaviors when individually or collectively presented on an IoT label. The study underscores participants' willingness to pay a substantial premium for devices with better security and privacy practices.

These prior results are valuable and relevant to this paper, since they provide guidelines for which privacy-related factors should be reflected in Manifest_Idev and how to utilize them in order to attain an acceptable user experience with effective privacy configurations.

CONCLUSIONS

This paper suggests taking a systematic approach to making IoT devices privacy-agile, by advocating that devices periodically inform nearby users about their presence and activity. As a concrete example of this approach, we presented the design and construction of PAISA: a secure and privacy-agile TEE-based architecture that guarantees secure periodic announcements of device presence via secure timer and network peripherals. We implemented PAISA as an end-to-end open-source prototype [28] on: (1) an ARM Cortex-M33 device equipped with TrustZone-M that broadcasts announcements using IEEE 802.11 WiFi beacons, and (2) an Android-based app that captures and processes them. The evaluation shows that I_dev takes 236ms to transmit an announcement and that the app takes about 1s to process it.

Figure 1: Architecture of an IoT Device. This example shows the peripherals of a security camera.

Figure 4: Examples of Manifest_Idev. The left one is for a Google Thermostat [6] and the right one is for a Blink Security Camera [3].
Figure 7: Runtime Phase of PAISA.

Also, whenever Manifest_Idev or URL_Man is updated (e.g., due to a software update, a maintenance shutdown, or a change of the shortened URL), M_svr sends the updated URL_Man to I_dev at the time of TimeSync.

Attest and Announce periodicity: If T_Attest is the same as T_Announce, then attestation and announcement are performed sequentially. This is recommended so that U_dev always receives the latest information about I_dev. However, the periodicity can be adjusted based on device capabilities and desired use-cases. If I_dev is a weak low-end device and/or must prioritize its normal applications, T_Attest can be longer than T_Announce. In our experiments, Attest time is much smaller than Announce time, because signing takes more time than just hashing a small amount of memory.

Reception: After receiving Msg_anno from I_dev, U_dev first parses it and checks whether the received time_dev is within [time_Udev − δ, time_Udev], where time_Udev is the clock value of U_dev and δ is the toleration delay window of the assumed network. If Msg_anno is fresh, then U_dev fetches Manifest_Idev from the link URL_Man and verifies Manifest_Idev based on the public key and the signature Sig_Man embedded in Manifest_Idev. Next, it verifies the signature of Msg_anno with the public key of I_dev, also embedded in Manifest_Idev. Upon successful verification of the signatures, U_dev acknowledges the legitimacy of the announcement source, thereby confirming that the corresponding I_dev is within its network reach. Furthermore, by reading Attest, U_dev learns whether I_dev has been in a trustworthy state since the last attestation. If Attest fails, U_dev disregards Msg_anno and alerts the user of a potentially compromised I_dev.

Figure 10: PAISA Proof-of-Concept. The phone screenshot on the right shows the Reception app with device details of I_dev (emulated on the NXP board beside it).

Figure 11: PAISA Announcement Overhead on I_dev at Runtime.

Table 1: Various Types of IoT Devices with different Sensors, Actuators, and Network Interfaces.

Protocol 1 (Registration Phase of PAISA, fragment): ..., where PAISA is the PAISA TCB software, time_cur is the current timestamp, URL_ManFull is the full URL of URL_Man if the URL is shortened, and URL_Man is the shortened URL. (d) PAISA in I_dev picks a new keypair (pk_dev, sk_dev), stores sk_dev, and outputs pk_dev to M_svr. (e) M_svr computes Sig_Man := SIG(sk_svr, Manifest_Idev), where SIG is a signature function, and appends Sig_Man and pk_svr to the Manifest_Idev hosted at URL_Man.

Recall that Msg_anno carries, among other fields, a timestamp and a signature of Msg_anno. For the sake of simplicity, we assume that Manifest_Idev is hosted on M_svr. U_dev receives Msg_anno, verifies it, extracts the URL, and fetches Manifest_Idev from M_svr. Note that Manifest_Idev can also be hosted by other third parties or on a blockchain; its authenticity is based on M_svr's signature at the time of provisioning (Section 5.2.2). M_svr appends pk_dev and the hash of the installed software to Manifest_Idev. Finally, to authenticate Manifest_Idev, M_svr signs Manifest_Idev using sk_svr and appends the signature and its own certificate to Manifest_Idev. Alternatively, M_svr could directly register I_dev with a Certificate Authority (CA) if there is a suitable deployed public key infrastructure (PKI), and include I_dev's certificate in Manifest_Idev. Also, URL_ManFull is included in Manifest_Idev so that U_dev, when it later uses URL_Man, can detect a wrong redirection. Also, for sanity purposes, M_svr can include a "status" flag in Manifest_Idev to indicate whether I_dev is revoked, e.g., reported stolen.
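A sketch of the registration step (e), signing Manifest_Idev with the manufacturer key; this uses the Python cryptography library on a host for illustration (serialization details are our assumptions; PAISA's device side uses Mbed TLS):

```python
# Sketch: M_svr signs Manifest_Idev (registration step (e)).
# Key choice and serialization are illustrative assumptions.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

sk_svr = ec.generate_private_key(ec.SECP256R1())  # manufacturer signing key

def sign_manifest(manifest_bytes: bytes) -> bytes:
    # Sig_Man := SIG(sk_svr, Manifest_Idev); appended to the hosted manifest
    return sk_svr.sign(manifest_bytes, ec.ECDSA(hashes.SHA256()))

sig_man = sign_manifest(b"...device description, pk_dev, software hash...")
```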
5.2.2 BootTime. As mentioned earlier, Msg_anno must contain the timestamp of I_dev to prevent replay attacks. Since some IoT devices lack a persistent real-time clock, I_dev's clock must be synchronized at boot, as specified in Protocol 2.

Protocol 2. PAISA BootTime consists of one procedure, TimeSync, realized as follows.

TimeSync [I_dev ←→ M_svr]: Assume a map D := <ID_dev, time_Idev> maintained by M_svr, where ID_dev is the ID of I_dev provisioned using Provision and time_Idev is the latest registered timestamp of I_dev. TimeSync is defined by three interactions [SyncReq, SyncResp, SyncAck]:

(a) SyncReq [I_dev −→ M_svr]: When I_dev boots: (i) Computes SyncReq := (ID_dev, N1_dev, time_prev, Sig_Req), where N1_dev is a nonce, time_prev is the previous timestamp, and

Sig_Req := SIG(sk_dev, H(ID_dev || N1_dev || time_prev + 1)) (2)

(ii) Sends SyncReq to M_svr.

(b) SyncResp [I_dev ←− M_svr]: Upon receiving SyncReq, M_svr: (i) Checks whether time_prev + 1 is consistent with the latest registered timestamp in D; if not, outputs ⊥ and ignores SyncReq. (ii) Verifies Sig_Req using pk_dev; if this fails, outputs ⊥ and ignores SyncReq; otherwise, continues. (iii) Computes SyncResp := (ID_dev, N1_dev, N1_svr, time_cur, Sig_Resp), where N1_svr is a nonce, time_cur is the current timestamp of M_svr, and

Sig_Resp := SIG(sk_svr, H(ID_dev || N1_dev || N1_svr || time_cur)) (3)

(iv) Sends SyncResp to I_dev.

(c) SyncAck [I_dev −→ M_svr]: Upon receiving SyncResp, I_dev: (i) Verifies SyncResp using pk_svr; if this fails, outputs ⊥, ignores SyncResp, and repeats TimeSync; otherwise, continues. (ii) Sets time_prev := time_cur from SyncResp. (iii) Computes SyncAck := (ID_dev, N2_dev, Sig_Ack), where N2_dev is a nonce and

Sig_Ack := SIG(sk_dev, H(ID_dev || N2_dev || N1_svr || time_prev)) (4)

(iv) Sends SyncAck to M_svr. Finally, M_svr verifies Sig_Ack with pk_dev; if successful, it stores time_prev as the latest registered timestamp of I_dev.

The runtime phase of PAISA involves two procedures: (1) Announcement on I_dev, part of the PAISA TCB installed at Provision time, and (2) Reception, an app on U_dev installed by the user.

Announcement: PAISA implements two time intervals using the secure timer on I_dev, T_Attest and T_Announce, which govern when Attest and Announce must be executed, respectively, triggered by the timer interrupt. During Attest, i.e., when time_dev matches T_Attest, PAISA measures the memory containing the installed software and compares it with the hash of that software stored at Provision time. If the measurements match, it sets Att_result := 1 and Att_report := (Att_result, time_dev), and stores the latter in secure RAM. During Announce, i.e., when time_dev matches T_Announce, I_dev generates a new Msg_anno composed of: a nonce, the current timestamp time_dev, URL_Man given at Provision time, Att_report from the latest attestation as per T_Attest, and a signature over its content. The size of Msg_anno depends on the signature algorithm used.

Protocol 3. PAISA runtime consists of two procedures, Announcement and Reception.

Announcement [I_dev −→ U_dev]: Let time_dev be the clock realized using the secure timer and the latest timestamp received via TimeSync. Announcement is defined by two sub-procedures [Attest, Announce]. Also, let T_Attest and T_Announce be the periodicities of Attest and Announce, respectively.

(a) Attest [I_dev]: If time_dev % T_Attest == 0, I_dev generates an attestation report: (i) Measures program memory: M := H(S). (ii) Sets Att_result := 1 if M == H_S, where H_S is the expected hash of the software S installed during Provision; otherwise, Att_result := 0.
(iii) Outputs Att_report := (Att_result, time_dev), where time_dev is the timestamp at which the attestation report is generated.

(b) Announce [I_dev −→ U_dev]: If time_dev % T_Announce == 0, I_dev broadcasts an announcement packet: (i) Generates Msg_anno := (N_dev, time_dev, URL_Man, Attest, Sig_anno), where N_dev is a nonce, time_dev is the current timestamp, URL_Man is the stored link pointing to Manifest_Idev given at Provision, and

Sig_anno := SIG(sk_dev, H(ID_dev || N_dev || time_dev || URL_Man || Attest)) (5)

(ii) Broadcasts Msg_anno.

Reception [U_dev ←→ M_svr]: U_dev maintains a timer time_Udev synchronized with the world clock. Upon receiving Msg_anno from an I_dev, U_dev executes Reception, which is defined by a sub-procedure [Verify]:

(a) Parses Msg_anno and extracts (time_dev, URL_Man, Attest, Sig_anno). Next, fetches Manifest_Idev from URL_Man.

(b) Verify [U_dev]: Upon receipt of Manifest_Idev, U_dev verifies Msg_anno: (i) Checks whether (time_Udev − δ) < time_dev, where δ is the tolerance delay window; if not, discards Msg_anno and outputs ⊥. (ii) Retrieves Sig_Man and pk_svr from Manifest_Idev, and verifies Sig_Man using pk_svr; if this fails, aborts and outputs ⊥. (iii) Retrieves pk_dev and verifies Sig_anno; if this fails, aborts and outputs ⊥.

(c) Outputs (Manifest_Idev, Attest).
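For clarity, the Verify sub-procedure above can be summarized in a few lines; this is our sketch with hypothetical helpers (`fetch_manifest`, `verify`), not the Reception app's actual code:

```python
# Sketch: Reception's Verify steps (i)-(iii) from Protocol 3.
# All helper names are hypothetical stand-ins.
def reception(msg_anno, time_udev: float, delta: float, fetch_manifest, verify):
    # (i) Freshness: discard stale announcements
    if not (time_udev - delta < msg_anno.time_dev):
        return None                                    # outputs ⊥
    manifest = fetch_manifest(msg_anno.url_man)
    # (ii) Manifest authenticity: Sig_Man under the manufacturer key pk_svr
    if not verify(manifest.pk_svr, manifest.sig_man, manifest.body):
        return None
    # (iii) Announcement authenticity: Sig_anno under the device key pk_dev
    if not verify(manifest.pk_dev, msg_anno.sig_anno, msg_anno.body):
        return None
    return (manifest, msg_anno.attest)                 # (Manifest_Idev, Attest)
```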
Table 2: PAISA Overhead on I_dev at BootTime.

Performance of U_dev: The latency of the Reception application is shown in Table 3. It takes 1,070ms, with a deviation of 247ms, to process one Msg_anno. This large deviation is due to two factors: the time to fetch Manifest_Idev, which depends on network delay, and CPU frequency plus context-switching time on the smartphone. Note that the Google Pixel 6 has heterogeneous cores (2 cores @ 2.8GHz, 2 cores @ 2.25GHz, and 4 cores @ 1.8GHz); thus, the overall frequency is represented as [1.8-2.8]GHz in Table 3. Despite taking 1s per message, there is not much impact in the case of multiple I_dev-s, because Msg_anno processing can be done concurrently via threading (AsyncTask). Therefore, upon launching the Reception app, the delay in receiving most announcements is expected to be within a few seconds.

Table 3: PAISA Overhead on U_dev and M_svr.

Performance of M_svr: TimeSync involves one signing and two verification operations on M_svr, each taking about 1ms at 2.6GHz. Hence, the average latency of TimeSync is 5.6ms, with a deviation of 2.77ms mostly due to network delay. This latency is reasonable, even when handling multiple devices, because they can be served in parallel. Moreover, TimeSync occurs only at reboot, which is quite infrequent for each I_dev.

Manifest_Idev size: Many factors, such as the device description, cryptographic algorithm, key size, type of certificates, and encoding method used in certificates, influence the size of Manifest_Idev. Thus, Manifest_Idev can vary from a few to a few hundred KB. The Manifest_Idev used in our evaluation is 2,857 bytes.

TCB size: As mentioned in Section 6.3, the PAISA TCB consists of the software in TZ-M on the main NXP board and the driver on the ESP32 network board. On the main board, the TCB is 184KB (including Mbed TLS); on the network board, it is 682KB (including the network stack).
16,108.4
2023-09-07T00:00:00.000
[ "Computer Science", "Engineering" ]
RINT1 Loss Impairs Retinogenesis Through TRP53-Mediated Apoptosis

Genomic instability in the central nervous system (CNS) is associated with defective neurodevelopment and neurodegeneration. Congenital human syndromes that affect CNS development originate from mutations in genes of the DNA damage response (DDR) pathways. RINT1 (Rad50-interacting protein 1) is a partner of RAD50 that participates in the cellular responses to DNA double-strand breaks (DSB). Recently, we showed that Rint1 regulates cell survival in the developing brain and that its loss leads to premature lethality associated with genomic instability. To bypass the lethality of Rint1 inactivation in the embryonic brain and better understand the roles of RINT1 in CNS development, we conditionally inactivated Rint1 in retinal progenitor cells (RPCs) during embryogenesis. Rint1 loss led to accumulation of endogenous DNA damage, but RINT1 was not necessary for cell cycle checkpoint activation in these neural progenitor cells. As a consequence, proliferating progenitors and postmitotic neurons underwent apoptosis, causing defective neurogenesis of retinal ganglion cells, malformation of the optic nerve, and blindness. Notably, inactivation of Trp53 prevented apoptosis of the RPCs, rescued the generation of retinal neurons, and prevented vision loss. Together, these results reveal an essential role for TRP53-mediated apoptosis in the malformations of the visual system caused by RINT1 loss and suggest that defective responses to DNA damage drive retinal malformations.

INTRODUCTION

Several human diseases that affect the central nervous system (CNS) originate from mutations in genes of the DNA damage response (DDR) pathways (Jackson and Bartek, 2009; McKinnon, 2017). RINT1 (Rad50-interacting protein 1) was initially described as a regulator of the G2/M cell cycle checkpoint, centrosome integrity, and chromosomal segregation (Xiao et al., 2001; Lin et al., 2007). Additional roles for RINT1 have been described, including regulation of autophagy and Golgi-ER trafficking mechanisms (Hirose et al., 2004; Arasaki et al., 2006; He et al., 2014). Rint1 inactivation in the developing brain is lethal, causes massive apoptosis of neural progenitor cells, and is associated with DNA damage accumulation, impaired ER-Golgi homeostasis, and autophagy inhibition (Grigaravicius et al., 2016). While these findings reinforced the importance of RINT1 for progenitor cell survival, it remains unclear how and which of the multiple functions of RINT1 contribute to its pleiotropic effects in physiological and pathological contexts.

The neural retina is the CNS tissue that detects and transmits visual stimuli to the brain through the axonal projections of the retinal ganglion cells that compose the optic nerve (Horsburgh and Sefton, 1986; Dowling, 1987). Malformation and/or degeneration of retinal ganglion cells can cause irreversible blindness (Taylor, 2007; Almasieh et al., 2012). The architecture of the retinal tissue and the mechanisms that govern the generation of retinal neurons during development are highly conserved in vertebrates, making the retina an excellent system to study neurogenesis in the CNS (Centanin and Wittbrodt, 2014). Retinal ganglion cells are the first neurons generated and, like other retinal cell types, originate from multipotent retinal progenitor cells (RPCs).
Precise coordination of RPC proliferation, survival, and neurogenesis is essential for the formation of a functional retina (Dyer and Cepko, 2001; Ohnuma and Harris, 2003), and it is well established that RPCs rely on classical cell cycle checkpoints in response to exogenous DNA-damaging agents (Herzog et al., 1998; Borges et al., 2004; Mayer et al., 2016). However, few studies have addressed how defects in the physiological DDR affect the genesis of retinal neurons (Baranes et al., 2009; Baleriola et al., 2010; Rodrigues et al., 2013; Alvarez-Lindo et al., 2019). In humans, RINT1 mutations have recently been associated with a developmental multisystem disorder (Cousin et al., 2019) and, in mice, loss of RINT1 in vivo causes progenitor cell death and is lethal (Lin et al., 2007; Grigaravicius et al., 2016). In a context where different molecular mechanisms for RINT1 have been described (Kong et al., 2006; Lin et al., 2007; Arasaki et al., 2013; Tagaya et al., 2014), characterizing how Rint1 loss of function leads to cell death will help determine its essential roles in progenitor homeostasis.

TP53 is a master regulator of the DDR and is key for DNA damage-induced cell death of progenitor cells; however, TP53-independent responses to DNA damage have been reported (Pietsch et al., 2008; Valentine et al., 2011; Reinhardt and Schumacher, 2012; Fagan-Solis et al., 2020). Importantly, activation of the DDR in the mouse CNS may trigger distinct TRP53-dependent outcomes (Frappart and McKinnon, 2007; Lee et al., 2012b; Lang et al., 2016), and it has not yet been studied whether TRP53 is required for the developmental malformations caused by RINT1 loss.

To bypass the lethality caused by Rint1 inactivation in the embryonic brain and understand the long-term consequences of its inactivation for CNS development, we conditionally inactivated Rint1 in retinal progenitor cells (RPCs). Our findings indicate that RINT1 is essential to prevent the accumulation of endogenous DNA damage, but is not required for the activation of cell cycle checkpoints. In Rint1-deficient retinas, RPCs committed to differentiate into retinal ganglion cells die by apoptosis, severely compromising retinogenesis and optic nerve formation. Remarkably, inactivation of Trp53 in the Rint1-deficient retinas rescued the RPC death and fully restored retinal structure and vision, demonstrating that RINT1 is essential for retinal development and indicating that the cell death of progenitors is key for the developmental malformations caused by RINT1 deficiency.

Ethics Statement, Mice, and Genotyping

All experiments with rodents were planned according to international rules and were approved by the Ethics Committee on Animal Experimentation of the Health Sciences Center (CEUA, CCS) of the Federal University of Rio de Janeiro in Brazil and by the governmental review board of the state of Baden-Württemberg (Regierungspräsidium Karlsruhe-Abteilung 3-Landwirtschaft, Ländlicher Raum, Veterinär-und Lebensmittelwesen) in Germany.

RNA Extraction, cDNA Synthesis, and Real-Time RT-PCR

Retinas were dissected in cold PBS and lysed in 1 mL of Trizol (Thermo Fisher Scientific, cat# 15596026). Following mechanical lysis of the tissue using a 100U syringe, standard Trizol extraction was performed and the pellet was resuspended in 20 µL of ultrapure water (Thermo Fisher Scientific, 10977).
Analysis of rRNA integrity was performed by electrophoresis in a 1% agarose gel, and RNA concentration and purity were determined using a Nanodrop 2000 spectrophotometer; 1 µg of total RNA was treated with DNase (rDNase kit, Ambion, AM1906), and contamination with genomic DNA was verified by PCR using primers for genomic DNA followed by electrophoresis. cDNA was synthesized using a first-strand cDNA synthesis kit (GE, 27-9261-01) following the manufacturer's instructions.

To label S-phase cells in vivo, intraperitoneal injections of 50 µg/g of body weight of BrdU (Sigma Aldrich, cat# B5002) were performed. Eyes were collected 1 h after injection. TUNEL analysis [Click-iT TUNEL Alexa Fluor 488 Imaging Assay (Invitrogen, C10245)] was performed following the manufacturer's instructions. Fluorescent images were captured using a Leica TCS-SPE confocal microscope system with AOBS. In addition to the TUNEL assay and cleaved caspase-3 staining, apoptotic cell death was also analyzed through the detection of pyknotic nuclei, a classical morphological hallmark of apoptosis. Pyknotic nuclei were identified in retinal tissue sections previously stained with nuclear dyes (DAPI or SYTOX green) based on their compacted, spherical, and intensely bright nuclear staining, which reflects a high degree of chromatin condensation (Soriano et al., 1993; Ziegler and Groscurth, 2004; Kroemer et al., 2009) (Figure 3A).

Optomotor Response Test

Measurements of visual acuity by optomotor response were performed using OptoMotry, as previously described (Cavalheiro et al., 2017; Rocha-Martins et al., 2019). The visual acuity threshold was determined by systematic increments of the spatial frequency until the animal no longer responded. The experimenter was blinded to the mouse genotypes.

Experimental Design, Quantifications, and Statistical Analysis

At least three mice were used in each analysis, and the number of mice used in each experiment is plotted as dots in each graph (black dots for control = Rint1 Ctrl, brown dots for cKO = Rint1 α-Cre, and red dots for DKO = Rint1;Trp53 α-Cre mice). For every statistical analysis, the measurement obtained for each mouse in a given experiment was used as an independent value (n). Due to the pattern of Cre-mediated recombination in α-Cre retinas (Marquardt et al., 2001), in which Cre recombination occurs only in the retinal periphery, in experiments involving histological sections we analyzed and quantified only the retinal periphery (the ∼250 µm most-peripheral region on each side of the retinal section). To standardize regions between different samples, only sections in which the optic nerve was visible were used for quantifications. At least three sections from each mouse were quantified, and their mean was the measurement used for each mouse. Quantifications in the neuroblastic layer (NBL) were normalized by area (mm²) and quantifications in the ganglion cell layer (GCL) were normalized by length (µm) of retinal tissue. GraphPad Prism software was used for statistical analysis. Student's t-test or one-way ANOVA was performed, as indicated in each figure legend. Computations assumed the same scatter (s.d.) and a Gaussian distribution between groups. p-values are based on two-sided tests.
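As a minimal illustration of this analysis pipeline (our sketch; the counts below are placeholders, not data from the paper), each mouse contributes the mean of its sections as a single independent value:

```python
# Sketch: per-mouse averaging followed by a two-sided Student's t-test,
# mirroring the analysis described above.  Data values are placeholders.
import numpy as np
from scipy import stats

# counts per retinal section, grouped by mouse (>= 3 sections each)
ctrl_sections = {"m1": [12, 14, 13], "m2": [11, 10, 12], "m3": [13, 13, 12]}
cko_sections  = {"m4": [25, 27, 24], "m5": [22, 26, 23], "m6": [28, 24, 27]}

# each mouse contributes one value (its sectional mean) as an independent n
ctrl = np.array([np.mean(v) for v in ctrl_sections.values()])
cko  = np.array([np.mean(v) for v in cko_sections.values()])

t, p = stats.ttest_ind(ctrl, cko)  # two-sided, assumes equal variance
print(f"t = {t:.2f}, p = {p:.4f}")
```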
RINT1 Is Essential for Retinal Development and Its Loss Causes Blindness

To investigate RINT1 function during retinogenesis, we used a previously generated Rint1 floxed mouse line (Grigaravicius et al., 2016) and crossed it with an α-Cre mouse line (Marquardt et al., 2001), which leads to Rint1 genetic inactivation in retinal progenitor cells (RPCs). Real-time RT-PCR studies revealed that Rint1 is expressed throughout mouse retinal development (Supplementary Figure S1A), and PCR analysis confirmed the recombination of the floxed allele in the Rint1 α-Cre (Rint1 F/F; α-Cre +/−) retina (Supplementary Figure S1B). Inactivation of Rint1 specifically in the RPCs induced optic nerve hypoplasia and mildly affected eye growth (Figures 1A,B). Consistent with the spatial pattern of α-Cre-mediated recombination (Marquardt et al., 2001), the periphery of adult Rint1-deficient retinas was severely affected, confirming that RINT1 is required for retinal morphogenesis (Figure 1C). To test whether the malformation of Rint1-deficient retinas would impact visual function, we performed an optomotor response analysis, which revealed a severe visual acuity impairment in Rint1 α-Cre mice (Figure 1D). These findings indicate that RINT1 is crucial for retinal development and for visual function.

DNA Damage Accumulation and Checkpoint Activation Following RINT1 Loss

To better understand the defective morphogenesis of the Rint1 α-Cre retina, we evaluated the consequences of RINT1 loss for key cellular events of early retinogenesis. In progenitor cells of the brain, Rint1 inactivation caused genomic instability (Grigaravicius et al., 2016); therefore, we asked whether RINT1 loss would affect the DDR in RPCs. An increased proportion of γH2AX-positive (+) cells suggested an accumulation of endogenous DNA damage in the Rint1-deficient RPCs (Figures 2A,B). Since DNA damage can activate distinct cell cycle checkpoints and pause the cell cycle, we asked whether the proliferation of RPCs would be affected following RINT1 loss. First, we analyzed the distribution and scored the proportion of cells positive for PCNA, a progenitor cell marker expressed in all phases of the cell cycle. No difference in PCNA+ cells was found in the Rint1 α-Cre embryonic retinas (E15.5) (Supplementary Figure S2). Next, we pulse-labeled progenitor cells entering S-phase with bromodeoxyuridine (BrdU) and quantified the proportion of BrdU+ RPCs. No alteration in the proportion of BrdU+ cells was observed (Figures 2C,D), indicating that the total number of RPCs is unaltered and that these progenitors normally enter S-phase in Rint1-deficient retinas. RINT1 was previously associated with the regulation of the G2/M cell cycle checkpoint following irradiation (Xiao et al., 2001).

FIGURE 2 | DNA damage accumulation and normal cell cycle checkpoint following Rint1 inactivation in RPCs. (A,C,E,G,I) Representative images of γH2AX, BrdU, phospho-H3 (pH3), anaphase mitotic nuclei, and phospho-Chk1 (pChk1) immunostaining in Rint1 Ctrl and Rint1 α-Cre retinas at E14.5 or E15.5 (as indicated). (B,D,F,H,J) Quantification of γH2AX+, BrdU+, pH3+, anaphase nuclei, and pChk1+ cells in Rint1 Ctrl and Rint1 α-Cre retinas at E14.5 or E15.5. Statistical analysis: Student's t-test. *p < 0.05; **p < 0.01; ***p < 0.001. Error bars indicate SD. Scale bars: 50 µm. NBL, neuroblastic layer.
To test whether inactivation of Rint1 could impact the transition of progenitors between cell cycle phases, we scored phospho-histone H3 (pH3)+ RPCs and, based on nuclear morphology, the number of RPCs reaching anaphase. A decrease in pH3+ cells (Figures 2E,F) and a reduction of RPCs in anaphase (Figures 2G,H) were detected in the Rint1 α-Cre retinas, suggesting that the accumulation of DNA damage caused by RINT1 loss activates a cell cycle checkpoint that prevents RPCs from reaching the final phases of mitosis. ATR-mediated phosphorylation of Chk1 is a hallmark of replicative stress and mediates both the intra-S and G2/M checkpoints (Liu et al., 2000; Saldivar et al., 2018). To test whether RINT1 loss would lead to Chk1 activation in RPCs, we scored the proportion of phospho-Chk1 (pChk1)+ cells. An increase in pChk1+ cells was observed in Rint1 α-Cre embryonic retinas (Figures 2I,J). Altogether, these findings indicate that, in the absence of RINT1, RPCs accumulate endogenous DNA damage, likely during replication, and activate cell cycle checkpoints.

Rint1 Inactivation Induces Cell Death in the Embryonic Retina

Replication-associated accumulation of DNA damage and activation of cell cycle checkpoints may induce cell death (Nowsheen and Yang, 2012; Saldivar et al., 2017); therefore, we asked whether RINT1 loss would cause cell death in the developing retina. An increase in apoptosis was observed in Rint1-deficient embryonic retinas, as revealed by the quantification of pyknotic nuclei (Figures 3A,B), TUNEL+ cells (Figures 3C,D), and cleaved caspase-3 (cCasp3)+ cells (Figures 3E,F). During mid-gestational stages of mouse retinogenesis, in addition to the expansion of progenitor pools, a proportion of the RPCs exit the cell cycle and undergo cell differentiation (Agathocleous and Harris, 2009). To determine whether RINT1 loss would induce apoptotic cell death of RPCs, we performed double staining for TUNEL and PCNA at E15.5. Approximately half of the TUNEL+ cells were PCNA+ in Rint1 α-Cre retinas (Figures 3G,H), confirming that proliferating RPCs undergo apoptosis and suggesting that postmitotic cells may also die following Rint1 inactivation.

Apoptosis of Rint1-Deficient RPCs Compromises Ganglion Cell Layer Generation

Retinal ganglion cells are the first cell type to be generated during retinogenesis (Sidman, 1961; Rapaport et al., 2004). In the mouse, their birth begins around E11 and peaks during mid-gestation, while newborn retinal ganglion cells migrate to the ganglion cell layer (GCL) (Drager, 1985; Young, 1985; Nguyen-Ba-Charvet and Rebsam, 2020). The detection of PCNA-negative apoptotic cells in Rint1-deficient retinas may be explained by the loss of PCNA in dying progenitors or by the apoptosis of postmitotic cells after RINT1 loss. Therefore, we tested the hypothesis that Rint1 deficiency affects RPCs committed to become ganglion cells and/or postmitotic cells that migrate toward the GCL. Quantification of TUNEL+ cells in the GCL confirmed that postmitotic neurons die in Rint1-deficient embryonic retinas (Figure 4A). To examine whether RINT1 loss affects RPCs committed to differentiate into retinal ganglion cells, we performed double staining for TUNEL and Atonal 7 (Atoh7), a master regulator of retinal ganglion cell identity and differentiation (Brown et al., 2001; Wang et al., 2001; Yang et al., 2003; Brzezinski et al., 2012).
The proportion of TUNEL/Atoh7 double-positive RPCs sharply increased in Rint1 α-Cre retinas (Figures 4B-D). Next, we asked whether the apoptosis of postmitotic neurons and of RPCs committed to become ganglion cells in the Rint1 α-Cre retina affects the formation of the GCL, where ganglion cells and displaced amacrine cells reside after migration. No alteration in the number of neurons in the GCL was detected at E15.5; however, during postnatal stages, fewer neurons occupied the GCL of Rint1 α-Cre retinas (Figure 4E). These findings suggest that the defective neurogenesis and optic nerve hypoplasia of Rint1 α-Cre mice are caused by the apoptosis of both postmitotic neurons and committed RPCs.

Trp53 Inactivation Rescues Phenotypes Caused by RINT1 Loss

Whenever Rint1 was inactivated in vivo, progenitor cells died, causing severe phenotypes (Lin et al., 2007; Grigaravicius et al., 2016). Inactivation of DDR and DNA repair factors in neural progenitors leads to DNA damage-induced, TRP53-dependent apoptosis (Frappart and McKinnon, 2007; Lee et al., 2012b). Previously, Grigaravicius et al. found evidence of TRP53 stabilization in Rint1-deficient neural progenitor cells, but the role of TRP53 was not studied. Therefore, to test whether TRP53-mediated apoptosis drives the malformations of Rint1-deficient retinas, we generated Rint1;Trp53 α-Cre double-knockout (DKO) mice. Adult DKO retinas displayed all nuclear and plexiform layers and phenotypically resembled control retinas, indicating that Trp53 inactivation fully rescued the retinogenesis of Rint1-deficient retinas (Figures 5A-C). Quantification of pyknotic nuclei revealed that Trp53 inactivation prevented the apoptosis caused by RINT1 loss in developing retinas (Figure 5D). Finally, the DKO mice displayed a normal optomotor response, confirming that blockade of RPC apoptosis fully rescued retinal morphology and vision (Figure 5E). These findings indicate that TRP53-mediated cell death of Rint1-deficient neural progenitor cells drives the defective morphogenesis caused by RINT1 loss in the CNS (Figure 5F).

DISCUSSION

Visual function relies on the coordination of progenitor cell expansion and neurogenesis during retinal development. The comprehension of the molecular basis of how physiological DNA damage affects retinogenesis is still limited and may have relevant implications for regenerative medicine. Here, we showed that RINT1 protects retinal progenitor cells against DNA damage and apoptosis in vivo. In the absence of RINT1, retinogenesis was severely affected, leading to optic nerve malformation and vision impairment, as revealed by optomotor response tests. Our model of retina-specific inactivation of Rint1 suggests that retinal structure and electrical function are compromised. However, further functional analyses, such as electroretinography or flash/pattern visual evoked potentials (VEP), are required to determine the exact functional deficits contributing to the decreased visual acuity. Our findings are summarized in Figure 5F.

Multiple cellular and molecular mechanisms have previously been described for RINT1 (Xiao et al., 2001; Kong et al., 2006; Lin et al., 2007; Arasaki et al., 2013). In the brain, RINT1 prevents genomic instability, regulates ER/Golgi homeostasis, and is required for the clearance of autophagosomes (Grigaravicius et al., 2016). Here, we show that shortly after RINT1 loss, progenitor cells committed to differentiate into ganglion cells accumulate DNA damage and undergo TRP53-mediated apoptosis.
It was proposed that RINT1 and RAD50 interact and regulate the G2/M cell cycle checkpoint in response to irradiation (Xiao et al., 2001), but little is known about how RINT1 prevents the accumulation of endogenous DNA damage in progenitor cells. In contrast to previous studies, our finding that fewer RPCs reached anaphase in Rint1-deficient retinas indicates that RINT1 is not essential for the activation of functional cell cycle checkpoints in neural progenitor cells. The activation of the ATR kinase in Rint1-deficient RPCs, as demonstrated by the phosphorylation of CHK1, suggests that DNA damage may arise during DNA replication. Indeed, RINT1 function is directly related to the MRN complex, which is essential for the repair of DNA double-strand breaks (Lamarche et al., 2010; Scully et al., 2019). More specifically, during DNA replication, the MRN complex participates in the activation of ATR, the resolution of transcription-replication conflicts, and replication fork restart (Duursma et al., 2013; Syed and Tainer, 2018). We hypothesize that RINT1 loss leads to replicative stress by disturbing the function of RAD50 and, thereafter, of the MRN complex. In this context, we have shown that NBS1/Nbn also protects retinal progenitor cells from DNA damage and apoptosis, highlighting the importance of these pathways for neural progenitor cell homeostasis (Rodrigues et al., 2013). Studies of the mechanisms of RINT1 during replication may provide important insights into how neural progenitors control genome stability.

(Figure 5 legend, fragment: ... regulates DNA damage accumulation in RPCs; its loss leads to TRP53-mediated apoptosis that impairs the generation of retinal ganglion cells and drives retinal malformations. Statistical analysis: one-way ANOVA followed by Tukey's post-test. ***p < 0.001; ****p < 0.0001. Scale bar: 100 µm. c/d: cycles/degree.)

The consequences of defective DDR and its impact on developmental neurogenesis have been well studied in the brain. Inactivation of components of the DNA replication machinery and DNA damage signaling pathways (Frappart et al., 2005; Lee et al., 2012a,b), as well as of DNA repair factors (Lee et al., 2000; McKinnon, 2007, 2008; Baranes et al., 2009), revealed different degrees of CNS malformation. In contrast, even though congenital disorders caused by mutations in DDR genes exhibit retinal malformations (Lim and Wong, 1973; Erdöl et al., 2003; Bhisitkul and Rizen, 2004; Chai et al., 2009; Krzyżanowska-Berkowska et al., 2014; Sasoh et al., 2014), the impact of defective DDR on retinogenesis and visual impairment still awaits investigation. Studies of DNA damage signaling and repair factors revealed optic nerve morphological alterations in Nbn-deficient retinas, but loss of NBN and ATM did not impact retinal neurogenesis (Baranes et al., 2009; Rodrigues et al., 2013). Consistent with the reduced cellularity of the ganglion cell layer, Rint1-deficient retinas also displayed malformation of the optic nerve. However, it cannot be ruled out that defective axon growth or guidance may contribute to the described phenotype. In addition, because RINT1 loss impaired the generation of cells of the ganglion cell layer and possibly other cell types, RINT1 may perhaps have DDR-independent roles in the developing retina. Even though RINT1 was shown to regulate ribosomal gene transcription (Yang et al., 2016), we do not anticipate a role for RINT1 in the transcriptional networks of retinal cell type specification and propose that RPC apoptosis is a major driver of the retinal malformations.
An interesting question in the field is why distinct DDR pathways differentially affect neurogenesis. Considering that the retina is an ideal model to investigate neurogenesis, further studies may lead to a better comprehension of the relationship between DDR and neurogenesis, with broad implications for the whole nervous system. Rint1 inactivation in non-dividing postmitotic neurons of the adult cerebellum causes neurodegeneration of Purkinje cells (Grigaravicius et al., 2016). During embryogenesis, RINT1 is essential for the survival of committed RPCs (Atoh7+) and postmitotic neurons of the retinal ganglion cell layer (GCL). The apoptosis of the retinal cell types that compose the GCL may be due to the previous accumulation of DNA damage in RPCs before they exit the cell cycle. However, functions of RINT1 in early-born retinal ganglion cells that are independent of genomic instability cannot be ruled out. In the Rint1-deficient cerebellum, 35% of Purkinje cells exhibited Golgi fragmentation while less than 1% accumulated DNA damage (Grigaravicius et al., 2016), suggesting that defective DDR may have a limited contribution to the degeneration of adult cerebellar neurons. ER-Golgi homeostasis, vesicle trafficking, and autophagy were also shown to be important for the survival of retinal ganglion cells during retinogenesis and optic nerve degeneration (Boya et al., 2016; Adornetto et al., 2020). Further studies will be necessary to determine whether the apoptosis of postmitotic retinal neurons is due to the previous accumulation of DNA damage in RPCs or to pleiotropic RINT1 functions in these non-dividing neurons.

The relevance of RINT1 for human diseases has been highlighted by several studies. The tumor predisposition of Rint1 heterozygous mice indicated a role as a tumor suppressor (Lin et al., 2007). Interestingly, genomic studies of human cancers suggested an oncogene or cancer-predisposition-gene function in glioblastomas, breast cancer, and acute myeloid leukemia (Quayle et al., 2012; Park et al., 2014; Shahi et al., 2019; Simonetti et al., 2019). RINT1 mutations were identified in patients with the ALF multisystem developmental disorder (Cousin et al., 2019) and in patients with Lynch syndrome (Park et al., 2014), which often presents with congenital hypertrophy of the retinal pigment epithelium (CHRPE) (Lynch et al., 1987). While RINT1 variants may have the potential to impact protein-protein interactions (Otterpohl and Gould, 2017), the mechanisms underlying the contributions of RINT1 to these pathologies are not yet understood.

TRP53-mediated cell cycle arrest and apoptosis are common responses to DNA damage in progenitor cells (Hafner et al., 2019). Because blockade of TRP53-mediated apoptosis fully rescued retinal morphogenesis and function, we propose that the cell death of progenitors is key for the developmental malformations caused by RINT1 deficiency. Understanding the biology that dictates the accumulation of physiological DNA damage and the elimination of progenitor cells is of great importance for a wide range of human pathological conditions, including developmental diseases and cancer.

DATA AVAILABILITY STATEMENT

All datasets presented in this study are included in the article/Supplementary Material.

ETHICS STATEMENT

The animal study was reviewed and approved by the Ethics Committee on Animal Experimentation of the Health Sciences Center (CEUA, CCS) of the Federal University of Rio de Janeiro, Brazil, and by the governmental review board of the state of Baden-Württemberg (Regierungspräsidium Karlsruhe-Abteilung 3-Landwirtschaft, Ländlicher Raum, Veterinär-und Lebensmittelwesen) in Germany.
AUTHOR CONTRIBUTIONS AG, P-OF, and RM conceived and designed the experiments. AG, GM-R, and RM analyzed the data and performed the experiments. AG, GM-R, P-OF, and RM wrote the manuscript. All authors contributed to the article and approved the submitted version. ACKNOWLEDGMENTS We thank Isabele Menezes, Raphaela Magano, and Severino Gomes for technical assistance and Dra. Graziela Ventura for assistance in the confocal microscopy facility of the Instituto de Ciências Biomédicas (ICB, UFRJ). We thank Vinícius T. Ribas (UFMG) for critical reading of this manuscript.
5,712
2020-07-30T00:00:00.000
[ "Biology", "Medicine" ]
Gamma convergence and renormalization group: Two sides of a coin?

We discuss, both from the point of view of Gamma convergence and from the point of view of the renormalization group, the zero-range strong contact interaction of three non-relativistic massive particles. Formally, the potential term is $g(\delta(x_3 - x_1) + \delta(x_3 - x_2))$, $g < 0$, and it is the limit $\epsilon \rightarrow 0$ of approximating potentials $V_\epsilon(|x_i - x_3|) = \epsilon^{-3} V\left(\frac{|x_i - x_3|}{\epsilon}\right)$, $V(x) \in L^1(R^3) \cap L^2(R^3)$. The presence of a delta function in the limit does not allow the use of standard tools of functional analysis. In the first approach (European Phys. J. Plus 136:363, 2021; European Phys. J. Plus 136:1161, 2021), we introduced a map $\mathcal{K}$, called the Krein map, from $L^2(R^9)$ to a space (Minlos space $\mathcal{M}$) of more singular functions. In $\mathcal{M}$, the system is represented by a one-parameter family of self-adjoint operators. In the topology of $L^2(R^9)$, the system is an ordered family of weakly closed quadratic forms. By Gamma convergence, the infimum is a self-adjoint operator, the Hamiltonian H of the system.
Gamma convergence implies resolvent convergence (An Introduction to Gamma Convergence, Springer, 1993) but not operator convergence! This approach is variational and non-perturbative. In the second approach, perturbation theory is used. At each order of perturbation theory, divergences occur when $\epsilon \rightarrow 0$. A finite renormalized Hamiltonian $H_R$ is obtained by redefining the mass and the coupling constant at each order of perturbation theory. In this approach, no distinction is made between self-adjoint operators and quadratic forms. One expects that $H = H_R$, i.e., that "renormalization" amounts to the difference between the Hamiltonian obtained by quadratic form convergence and the one obtained by Gamma convergence. We give some hints, but a formal proof is missing. For completeness, we discuss briefly other types of zero-range interactions.

Introduction

We consider the strong contact (zero-range) interaction of two non-relativistic particles of equal mass with a third massive particle. In quantum mechanics, the Hamiltonian of separate strong contact of a particle with two identical ones [1,2] is described by the limit, when $\epsilon \rightarrow 0$, of the Hamiltonians

$$H_\epsilon = H_0 + \sum_{i=1,2} V_\epsilon(|x_i - x_0|),$$

where $H_0$ is the three-body non-relativistic free Hamiltonian and

$$V_\epsilon(|x_i - x_0|) = \epsilon^{-3} V\left(\frac{|x_i - x_0|}{\epsilon}\right), \qquad V(x) \in L^1(R^3) \cap L^2(R^3).$$

In the limit $\epsilon \rightarrow 0$, the interaction is represented formally by two delta functions $\delta(x_i - x_0)$, $i = 1, 2$. Formal perturbation theory leads to divergences.
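For the reader's convenience, we recall the standard distributional computation behind this formal statement (a textbook change of variables, added here for completeness):

```latex
% Why \epsilon^{-3} V(x/\epsilon) acts as (\int V)\,\delta(x) when \epsilon \to 0:
% for any bounded continuous test function f,
\int_{R^3} \epsilon^{-3} V\!\left(\frac{x}{\epsilon}\right) f(x)\, dx
  \;=\; \int_{R^3} V(y)\, f(\epsilon y)\, dy
  \;\xrightarrow[\epsilon \to 0]{}\; f(0) \int_{R^3} V(y)\, dy ,
% by dominated convergence, since V \in L^1(R^3).
```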
In order to describe the system by a self-adjoint Hamiltonian, one may follow two approaches: Gamma convergence or renormalization. The first approach, through Gamma convergence [1,2,6], is variational and non-perturbative; the limit is obtained in the sense of strong resolvent convergence. We recall briefly elements of Gamma convergence [6], a tool of common use in the theory of composite materials but seldom used in quantum mechanics. We first introduce a map $\mathcal{K}$ (Krein map) of the formal Hamiltonian to a Hilbert space $\mathcal{M}$ [3] of more singular functions; the map is fractioning and mixing, and it acts differently on $H_0$ (an operator) and on $\delta(x - x_j)$ (a quadratic form). For historical reasons, we call the space $\mathcal{M}$ Minlos space. In $\mathcal{M}$, the kinetic energy and the interaction potential have opposite signs and the same degree of singularity. The system is described by a well-ordered family of self-adjoint operators [4,5]. Returning to the topology of $L^2(R^9)$ produces a sequence of well-ordered weakly closed quadratic forms. Notice that we do not invert the Krein map; this map is fractioning and mixing, and therefore it is not invertible. By Sobolev embeddings, compactness holds, and the infimum of these quadratic forms can be closed strongly; its closure is a self-adjoint operator H, the Hamiltonian of our system. Gamma convergence [6] implies strong resolvent convergence of the Hamiltonians $H_\epsilon$ to H. Notice that the sequence of operators $H_\epsilon$ diverges in the strong operator topology. We give some details in the next sections; for full proofs, we refer to [1,2,6].

Remark 1. Since the Krein map is mixing and fractioning, this approach is along the lines of rearrangement inequalities (notice that we take the infimum of a sequence of quadratic forms). Their role has been stressed in particular by E. Lieb.

Remark 2. This strategy of using as an intermediate step a map to a space of more singular functions goes back to Friedrichs [12] for the Laplacian in $R^+$; therefore, the map we use could also be called a Friedrichs map.

In the second approach (renormalization) [8-10], the potential term is represented as two delta "functions." The Hamiltonian is also here a formal limit, for $\epsilon \rightarrow 0$, of the Hamiltonians $H_\epsilon$; the limit is now considered as a limit of quadratic forms. Formal perturbation theory leads at each order to a quadratic form that diverges when $\epsilon \rightarrow 0$. At each order, these divergences are renormalized by redefining the parameters, mass and coupling constant. This sequence of renormalizations defines the renormalization (semi-)group. In quantum mechanics and in non-relativistic field theory, this sequence of operations converges to a limit (a fixed point) [8-10].

Contact interaction in quantum mechanics: Krein map, Minlos space and Gamma convergence

We recall briefly the steps taken in [1,2]; for further details, we refer to [1,2]. We consider in $R^3$ the strong contact interaction of two non-relativistic identical massive particles of coordinates $x_1, x_2$ with a third massive particle of coordinate $x_0$. Contact interactions are self-adjoint extensions of the symmetric operator $H_0^0$, the free Hamiltonian of a three-body system restricted to functions that vanish in a neighborhood of the contact manifold. The operators that describe strong contact are limits as $\epsilon \rightarrow 0$, in the strong resolvent sense, of Hamiltonians $H_\epsilon$ with potentials that scale as $V_\epsilon(|x_i - x_0|) = \epsilon^{-3} V(\frac{|x_i - x_0|}{\epsilon})$. We stress that convergence holds in the strong resolvent sense, i.e., the limit of the resolvents is the resolvent of a self-adjoint operator. When $\epsilon \rightarrow 0$, the resolvent family $R_\epsilon = (z - H_\epsilon)^{-1}$ remains uniformly bounded and analytic outside any cone along the real axis with vertex at a convenient $C < 0$. Resolvent identities are satisfied for $\epsilon \geq 0$. The limit is therefore the resolvent of a self-adjoint operator bounded below. This operator is not the strong limit of $H_\epsilon$ as $\epsilon \rightarrow 0$; the sequence $H_\epsilon$ diverges to $+\infty$ as $\epsilon \rightarrow 0$. The self-adjoint extension is constructed in [1,2] through a non-perturbative procedure based on Gamma convergence [6], a variational tool introduced by E. de Giorgi and of common use in the theory of composite materials. As an intermediate step, we introduced in [1,2] a map $\mathcal{K}$ from $L^2(R^9)$ to a space of more singular functions. We call this map the Krein map $\mathcal{K}$, and we call the target space Minlos space $\mathcal{M}$ [3]. The map is fractioning (the functions in the new space are more singular) and mixing (the map does not preserve the channel structure). The target space is $\mathcal{K} L^2(R^9) \equiv \mathcal{M}$. Its action is different on the kinetic energy and on the "potential"; notice that the former is a self-adjoint operator and the latter is a quadratic form.
The Krein map is mixing and fractioning and can be regarded as a microscope that makes it possible to see fine details of the interaction. Recall that W and H_0^{−1} commute as quadratic forms (as can be seen in Fourier space); therefore, the system is abelian. If the potential is negative (attraction), in M the kinetic and potential parts have the same singularity (a pole) but with opposite signs at the origin in the difference of the coordinates of the particles that are in strong contact. Therefore [4,5], in M the system is represented by a one-parameter ordered sequence of self-adjoint operators. The parameter is the angular momentum of the motion in the system in which the barycenter is at rest. Each operator has an infinite sequence of bound states with eigenvalues that scale geometrically. We have studied this system in [1,2].

Remark Notice that we do not invert the Krein map; this map is fractioning and mixing, and therefore, it is not invertible. The Krein map is only an instrument (a microscope) to put in evidence the "optimal" macroscopic picture. Finding the optimal structure was also the original purpose of renormalization.

To extract a self-adjoint operator (the Hamiltonian of our system) we make use of a variational procedure, Gamma convergence [6], introduced by E. De Giorgi and mostly used in the analysis of finely fragmented materials. The Gamma limit F(y) of a sequence of quadratic forms F_n in a Banach space Y is the quadratic form defined by the two relations: (i) for every sequence y_n → y, F(y) ≤ liminf_n F_n(y_n); (ii) for some sequence y_n → y, F(y) ≥ limsup_n F_n(y_n). The first condition implies that F(y) is a common lower bound for the forms F_n; the second implies that this lower bound is optimal. The condition for the existence of the Gamma limit is that the sequence be contained in a compact set for the topology of Y (so that a Palais-Smale convergent subsequence exists). In our case, the topology of Y is the Fréchet topology defined by Sobolev semi-norms, and compactness follows from the absence of zero-energy resonances. Therefore, in our case, the Gamma limit exists and is a quadratic form bounded below. Notice that the Krein map is order preserving, and therefore, in our case, the sequence is monotone. The limit form is strictly convex, and by a theorem of Kato [7], it can be closed strongly and provides the "physical" Hamiltonian H. Gamma convergence implies strong resolvent convergence [6], and since the limit is a self-adjoint operator, this implies also convergence of spectra and of wave operators. Recall that the resolvent family of a self-adjoint operator H is defined as R(z) = (z − H)^{−1} for z outside the spectrum of H. Consider now the approximating Hamiltonians H_ε. The Krein map is positivity preserving. Since the potential is negative and increasing in absolute value when ε decreases to zero, the H_ε form a decreasing sequence. It follows that the resolvents of the operators H_ε have a limit for ε → 0, and the resolvent of the operator H is the strong limit of the resolvents of the Hamiltonians H_ε. No rate of convergence can be given in the parameter ε. Remark that resolvent convergence means that the limit for ε → 0 of the resolvents is the resolvent of a self-adjoint operator H_lim. We have noticed that H_ε diverges in norm for ε → 0. It is likely that the difference H_ε − H_lim is the diverging term that is present in a perturbative analysis. Since the potential is of finite range (in fact, zero range), the system described by H_lim is asymptotically free and the wave operator can be defined.
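A minimal sketch of the monotonicity argument used above (standard facts about forms and resolvents; the paper asserts the conclusion without displaying the intermediate step):

$$ \epsilon' < \epsilon \;\Longrightarrow\; H_{\epsilon'} \le H_{\epsilon} \ \text{(as forms)} \;\Longrightarrow\; 0 \le (H_{\epsilon} - z)^{-1} \le (H_{\epsilon'} - z)^{-1} \quad \text{for } z < C \le \inf \operatorname{spec} H_{\epsilon'}, $$

so as $\epsilon \downarrow 0$ the resolvents form a monotone, uniformly bounded family and therefore converge strongly; the resolvent identities then identify the limit as $(H_{\rm lim} - z)^{-1}$ for a self-adjoint $H_{\rm lim}$ bounded below.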
Remark In the two-dimensional case, the role of resolvent convergence has been stressed in [11].

Renormalization

In our analysis of the interactions through Gamma convergence, we noticed that divergences occur because we consider the limit of the quadratic forms. In renormalization theory, these divergences are "cured" at each order in ε by introducing renormalization [8-10], i.e., a modification of the diverging parameters of the theory (mass and charges). The guiding principle is to subtract divergences. The redefinition is through the subtraction of countably many terms which are either quadratic in the fields (kinetic energy) or cubic (interaction), together with a redefinition of the Hilbert space. At each order in perturbation theory, this procedure of renormalization provides a well-defined symmetric quadratic form bounded below (recall that in renormalization, no distinction is made between self-adjoint operators and symmetric quadratic forms). This sequence of renormalizations forms an abelian semigroup with a limit (a fixed point, an attractor). The fixed point provides the "physical" values of the parameters (charges and masses) for the "physical" Hamiltonian. The final result is the renormalized Hamiltonian, a well-defined quadratic form. The proof requires many estimates. By construction, the limit is a symmetric quadratic form. While in general the symmetry of the limit quadratic form is evident, it is hard to verify that it is strongly closed and represents a self-adjoint operator. In the formal process of renormalization, no distinction is made between self-adjoint operators and symmetric quadratic forms, and even if one were able to prove self-adjointness of the limit, it would be hard to find its spectrum. (In our approach, through Gamma convergence, one finds the resolvent of the limit operator and therefore its spectrum.) Notice that in perturbation theory, one renormalizes first and then takes the limit ε → 0, while in Gamma convergence, the Krein map is defined after taking the limit ε → 0 (at the cost of introducing temporarily a new space, Minlos space, of more singular functions). In both approaches, there is no control on the rate of convergence in the parameter ε. The advantage of the approach through Gamma convergence, compared with the use of the renormalization group, is that it stays within the theory of self-adjoint extensions and that the regularity of the wave functions plays a role but not the statistics; therefore, the method applies to bosons and to fermions. In both cases, one has no rate of convergence in the parameter ε; it may be possible to prove convergence with rate 1/|log ε|.

Weak contact

A weaker form of local contact is the weak contact.
These interactions occur mostly in quantum mechanics, whereas strong contact is typical of quantum field theory. Weak contact of the Hamiltonian is the limit in strong resolvent sense of Hamiltonians with potentials that scale as ε^{−2} V(|x_i − x_0|/ε), a weaker scaling than in the strong case. In three dimensions, this implies the presence of a zero-energy resonance. Also this interaction can be analyzed using perturbation theory and renormalization; the renormalization is weaker here. In [1,2], we studied weak contact making use of a Krein map and a Minlos space. Weak contact is the limit ε → 0 of approximating Hamiltonians H_ε with such potentials; the limit potentials are infinite step functions. The Krein (rearrangement) map is also here mixing, but it is fractioning in a weaker form; it acts in the same way on the kinetic energy and on the potential. Also here, the Krein map is mixing and fractioning and can be regarded as a microscope that makes it possible to see fine details of the interaction. Recall that W and H_0^{−1} commute as quadratic forms (as can be seen in Fourier space); therefore, also here, the system is abelian. Also here, if the potential is negative (attraction), in M the kinetic and potential parts have the same singularity (a pole) but with opposite signs. Also here, the system is represented in M by an ordered family of self-adjoint operators. This corresponds in the physical Hilbert space to a family of quadratic forms; their infimum is a self-adjoint operator, the Hamiltonian of the system. In [1,2], we have studied the case of three particles in mutual weak contact; this system has as semiclassical limit the three-body problem in Newtonian mechanics. We have also studied the case of two pairs of particles in which each particle of one pair is in weak contact with both particles of the other pair. We have proved that if there is a very weak repulsion between the pairs (a resonance), the structure is described by the Ginzburg-Landau functional. If the repulsion is absent, the energy functional is the version of the Gross-Pitaevskii functional that has an essential singularity at contact. (In the literature, the resonances are often called "ghosts.") This functional describes the Bose-Einstein condensate in the low-density regime. Weak contact of the barycenters of two pairs in strong contact gives the building block for a Bose-Einstein condensate in the high-density regime.

Remark 1 Notice that semi-classically weak contact corresponds to Coulomb interaction. Classically, weak and strong contact correspond, respectively, to holonomic and anholonomic constraints. We do not give here the details.

Remark 2 In two dimensions, there is only one type of contact interaction, the weak one. This interaction is studied in [11] using perturbation theory and renormalization. In [11], it is remarked that it is natural to study resolvent convergence.

Other non-relativistic quantum systems

Along the lines described above, one can compare renormalization and Gamma convergence for other systems. Consider, e.g., a system composed of two pairs of identical particles. The particles can be either bosons or fermions. This system was studied in [2] in quantum mechanics using resolvent convergence. There is a strong contact attractive interaction between the particles in each pair, and there is a further weak contact interaction between their barycenters. Notice that identical fermions with opposite spin orientations can have a strong contact interaction.
The system can represent an element of a Bose-Einstein condensate (one can add a regular confining potential that does not interfere with zero-range interactions [2]). Self-adjointness and the spectral properties of this system can be analyzed in quantum mechanics using Gamma convergence [2]. The eigenstates are critical points of a Gross-Pitaevskii energy functional. It is not essential that the "charges" of the particles be equal. Notice that these systems and the system of two particles in strong contact with a third particle studied in [1,2] are, in three dimensions, the only irreducible local systems compatible with strong contact. The analysis of this system using the renormalization group can be done: after suitable subtractions (renormalizations), a quadratic form is obtained, but no proof is available that it represents a self-adjoint operator.

Renormalization group in non-relativistic field theory

We compare now our analysis using Gamma convergence with the analysis through the renormalization group in non-relativistic field theory. In quantum mechanics, the particles are elements of L²(R³) and therefore can be localized. This allows for the definition of bound states and scattering states and of attractive point interactions. On the contrary, in field theory, the fields are extensive quantities. Consider the local interaction of a massive particle with a non-relativistic field. The interaction is linear in the field, and the potential is represented with a delta function. Divergences occur at every order of perturbation theory. These divergences are canceled by redefining the parameters of the theory (mass and charge) and the metric topology of the space (wave function renormalization). We consider here renormalization in its original formulation [8-10], i.e., as a mathematical version of the microscope. In this setting, Gamma convergence will be the counterpart of renormalization. We make use of Fock space. Also here, we find diverging terms at each order of perturbation theory. At each order, renormalization provides the Hamiltonian as a well-defined symmetric quadratic form bounded below. The guiding principle is to subtract divergences. No distinction is made between self-adjoint operators and symmetric quadratic forms. One renormalizes first and then takes the limit ε → 0. Formally, this sequence of renormalizations has a limit (a fixed point, an attractor). This fixed point provides the "physical" values of the parameters (charges and masses). The result of renormalization is a symmetric quadratic form; it is assumed that this form represents a self-adjoint operator, the renormalized Hamiltonian. While in general the symmetry of the limit quadratic form is evident, it is hard to verify that it represents a self-adjoint operator. This method seeks operator convergence. Notice that Gamma convergence provides resolvent convergence. It is likely that the "infinite mass and charge renormalization" is due to the fact that the limit resolvent is not the resolvent of the limit quadratic form (which is "infinite").

Gamma convergence and Fock space

In this section, we sketch a possible use of Gamma convergence instead of the renormalization group in Fock space. Notice that the Fock space constructed over a separable Hilbert space is separable, and Gamma convergence can be defined for quadratic forms in Fock space. Consider a system of two non-relativistic particles of mass M interacting via contact interactions with non-relativistic particles of mass m.
The interaction "creates" or "annihilates" the particle of mass m. We introduce creation and annihilation operators a(x), a*(x) of the particle with mass m. To control the singular nature of the interaction, we introduce the Krein map K and the Minlos space M making use of the free Hamiltonian, which is quadratic in the field. This space is now a Fock-Minlos space. Also here, the Krein map acts differently on the free Hamiltonian and on the interaction term. Also here, the map is mixing and fractioning. In the space M, the creation and annihilation operators are bounded operators, and therefore, perturbation theory can be used. In M, the number of particles is not conserved by the Hamiltonian flow, but the free flow and the Krein map commute; therefore, the Krein map can be performed at any time and the resulting theory is stationary. The Krein map changes the metric topology; returning to the original Fock space, one has an ordered family of quadratic forms. Again, since there are no zero-energy resonances, Gamma convergence applies. The lowest form can be closed in the strong topology and defines a self-adjoint operator in the original Fock space. From Gamma convergence, it follows that also in Fock space the renormalized Hamiltonian is the limit in strong resolvent sense of the approximate Hamiltonians H_ε. It is self-adjoint since the forms in M are symmetric. Again, the Hamiltonian is the limit in strong resolvent sense, as ε → 0, of a Hamiltonian in which the potential converges weakly to a delta function. Reference to the contact manifold is essential: one cannot obtain this extension unless one first constructs the contact manifold and finds the boundary conditions. Our approach is non-perturbative; in fact, it is "maximally" non-perturbative, since in the space M the kinetic and the potential terms have the same weight.

Funding Open access funding provided by Scuola Internazionale Superiore di Studi Avanzati - SISSA within the CRUI-CARE Agreement.

Appendix: Tracks in a cloud chamber

An example of joint strong contact interactions is tracks in a cloud chamber or in photographic plates. Consider a fast neutral particle (a "cosmic ray") which interacts with an atom: the interaction is a separate strong contact interaction with the nucleus and with an outer conduction electron. Both are ejected. Since the interaction is of very short range, one may use a semiclassical description for the particles which are ejected, respectively, as a negatively charged particle (an electron) and a positively charged particle (an ion). After the interaction, they follow "classical trajectories."
In a supersaturated environment, such as a cloud chamber or a properly treated photographic plate, the particles produce ionization tracks; in the presence of a magnetic field, the tracks have opposite curvatures. We describe in this appendix two possible instances of joint strong contact.

Added in proof: Thanks are due to a referee for constructive criticisms.
A Wavelet Algorithm for Fourier-Bessel Transform Arising in Optics

The aim of the paper is to propose an efficient and stable algorithm, quite accurate and fast, for the numerical evaluation of the Fourier-Bessel transform of order ν, ν > −1, using wavelets. The philosophy behind the proposed algorithm is to replace the part t f(t) of the integrand by its wavelet decomposition, obtained by using CAS wavelets, thus representing the transform as a Fourier-Bessel series with coefficients depending strongly on the input function t f(t). The wavelet method indicates that the approach is easy to implement and thus computationally very attractive.

Introduction

The Fourier-Bessel transform (also designated as the Hankel transform) is a very useful tool of mathematical physics [1]. It is a very useful instrument in a wide range of physical problems which have an axial symmetry. It is particularly important in optics and two-dimensional image processing; it naturally occurs in image reconstruction from projections or from reflected pulses, and it is a useful tool in the analysis and synthesis of three-dimensional wave fields. The present development is essentially motivated by optics applications. The transform of the Laplacian of a function in cylindrical coordinates is equal to the product of minus the squared parameter of the transformation and the transform of the function [2]:

(d²/dr² + (1/r) d/dr) f(r) ←→ −ω² F₀(ω). (1)

There are two types of the Hankel transform. The first one is defined on the semi-infinite interval; in this case, the direct and inverse transforms of the νth kind are represented as a symmetric pair. When we are dealing with problems that show circular symmetry, Hankel transforms may be very useful [3,4]. Laplace's partial differential equation in cylindrical coordinates can be transformed into an ordinary differential equation by using the Hankel transform. Because the Hankel transform is the two-dimensional Fourier transform of a circularly symmetric function, it plays an important role in optical data processing [5-7]. In optics, the Hankel transform appears in many contexts, not the least of which is the propagation of cylindrically symmetric laser beams. Most classical optical systems, like mirrors or lenses, are axially symmetric devices. The Hankel transform has also proved to be extremely useful in problems associated with seismology, geophysics [8,9], electroscattering, acoustics, hydrodynamics, image processing [10], the time-dependent Schrödinger equation, and so forth.

Mathematical Background. The Fourier-Bessel transform of order ν may be defined by the following expression:

F_ν(ω) = ∫₀^∞ t f(t) J_ν(ωt) dt. (2)

In the case of the finite Hankel transform, only the direct transform has an integral form; without loss of generality, its expression is given in [11]. Practical calculation of the direct and inverse Hankel transforms is connected with two problems. The first problem is based on the fact that not every transform in a real physical situation has an analytical expression for the result of the inverse Hankel transform. The second one is the determination of functions as a set of their values for numerical calculations. The classical trapezoidal rule, Cotes rule, and other rules connected with the replacement of the integrand by a sequence of polynomials have high accuracy if the integrand is a smooth function. But f(t) J_ν(ωt) (or, for the inverse transform, F_ν(ω) J_ν(ωt)) is a quickly oscillating function if ω (or t) is large. There are two general methods of effective calculation in this area. The first is the fast Hankel transform [12].
That method transforms the function to logarithmic space and applies the fast Fourier transform in that space; it needs a smoothing of the function in log space. The second method is based on the separation of the integrand into the product of a slowly varying component and a rapidly oscillating Bessel function [13]; but it needs smoothness of the slow component for its approximation by lower-order polynomials. To overcome these difficulties, various different techniques are available in the literature. Several papers have been written on the numerical evaluation of the HT in general and of the zeroth order in particular [14-24]. From the variety of algorithms, a potential user would probably find it difficult to select any one algorithm that might be best for a particular application. For an overview of these algorithms and their numerical complexity, the reader is referred to [27-31].

The organization of the paper is as follows: Section 2 gives a brief description of the CAS wavelets, followed by the derivation of the algorithm in Section 3. The efficiency and stability of the algorithm are shown by applying it to four test functions with known analytical transforms in Section 4. At the end, a brief conclusion and future work are given in Section 5.

Wavelets and CAS Wavelets. Wavelets constitute a family of functions constructed from dilation and translation of a single function ψ(t), called the mother wavelet. When the dilation parameter is 2 and the translation parameter is 1, we have the following family of discrete wavelets [32]:

ψ_{k,n}(t) = 2^{k/2} ψ(2^k t − n), k, n ∈ Z,

where the ψ_{k,n} form a wavelet orthonormal basis for L²(R). CAS wavelets ψ_{n,m}(t) = ψ(k, n, m, t) involve four arguments k, n, m, and t, where n = 0, 1, ..., 2^k − 1, k is assumed to be any nonnegative integer, m is any integer, and t is normalized time. CAS wavelets are defined as [33]

ψ_{n,m}(t) = 2^{k/2} CAS_m(2^k t − n) for n/2^k ≤ t < (n + 1)/2^k, and 0 otherwise,

where CAS_m(t) = cos(2mπt) + sin(2mπt). An efficient algorithm has been presented for the Fourier-Bessel transform.

Outline of Algorithm

The functions f(t) representing physical fields either are zero or have an infinitely long decaying tail outside a disk of finite radius R. Hence, in most practical applications, either the signal f(t) has compact support or, for a given ε > 0, there exists R > 0 such that |∫_R^∞ t f(t) J_ν(ωt) dt| < ε. Therefore, in either case, the truncated transform

F̂_ν(ω) = ∫₀^R t f(t) J_ν(ωt) dt, (8)

known as the finite Hankel transform (FHT), is a good approximation of the HT as given by (2). Writing g(t) = t f(t) in (8), we get

F̂_ν(ω) = ∫₀^R g(t) J_ν(ωt) dt. (9)

We may expand g(t) as follows:

g(t) = Σ_{n,m} c_{n,m} ψ_{n,m}(t), (10)

where c_{n,m} = ⟨g(t), ψ_{n,m}(t)⟩. By truncating the infinite series (10) at the levels n = 2^k − 1 and m = M, we obtain an approximate representation for g(t) as in (11), where the matrices collecting the retained coefficients c_{n,m} and basis functions ψ_{n,m}(t) are given in (12). Substituting (11) into (9), we get (13). Taking k = 1 and M = 1, (13) reduces to (14). Now, we relabel and write (14) as (15), where the ν-dependent factors are the corresponding integrals in (14).
The integrals arising in (14) are evaluated by using the formulae in (16), valid for Re ν > −1 (see [34]), and are calculated with the help of Simpson's one-third rule, Simpson's three-eighths rule, the composite Simpson's one-third rule, and the composite Simpson's three-eighths rule, respectively. In numerical analysis, Simpson's rule and the composite Simpson's rule are methods for numerical integration, i.e., for the numerical approximation of definite integrals.

Numerical Results

In this section, we test the proposed algorithm (15) by evaluating the approximate Hankel transforms of four well-known test functions with known analytical Hankel transforms. Note that in all the examples, the truncation in (15) is done at the levels k = 1 and M = 1, with the remaining parameter set to 60. We observed that the accuracy of the method is very high even at such a low level of truncation.

Example 1. Simpson's One-Third Rule: see Figures 1 and 2.

Example 2. The following example was solved numerically by [35]. Simpson's One-Third Rule: see Figures 9 and 10. Composite Simpson's One-Third Rule: see Figures 13 and 14.

Example 3 (sombrero function). A very important and often used function is the Circ function, which can be defined as [22]

Circ(t/a) = 1 for 0 ≤ t ≤ a, and 0 for t > a. (20)

This function is quite common in optical problems, where it is used, for instance, to represent a circular pupil of radius a. The Fourier-Bessel transform of (20) is the well-known "sombrero function": the zeroth-order Hankel transform of Circ(t/a) is [29]

F₀(ω) = a J₁(aω)/ω,

a well-known result. The exact and numerical transforms differ very slightly, and the differences are hardly visible. Simpson's One-Third Rule: see Figures 17 and 18.

Example 4. The pair (f(t), F₀(ω)) arises in optical diffraction theory [36]. The function f(t) is the optical transfer function of an aberration-free optical system with a circular aperture, and F₀(ω) is the corresponding spread function. Barakat and Sandler [26] evaluated F₀(ω) numerically using the Filon quadrature philosophy, but the associated error is appreciable for ω < 1, whereas our method gives almost zero error in that range.

Conclusion

Since the basis functions used to construct the wavelets are orthogonal and have compact support, they are more useful and simple in actual computations. Also, since the number of mother-wavelet components is restricted to one, they do not lead to a growth in the complexity of the calculations. Our choice of wavelets makes them more attractive in applications to applied physical problems, as they eliminate the problems connected with the Gibbs phenomenon taking place in [30]. A good agreement between the obtained solution and some well-known results has been obtained. Four test examples are provided to show the advantage of using wavelets. This method is capable of greatly reducing the size of the calculations while still maintaining the high accuracy of the numerical solution. The proposed wavelet method is very simple and attractive; its implementation, in analogy to existing methods, is more convenient, and the accuracy is high. The numerical examples and the compared results support our claim. The difference between the exact and approximate solutions for each example was plotted graphically to determine the accuracy of the numerical solutions.

Future Work. Since the computational work fully supports the compatibility of the proposed algorithm, it may be extended to other physical problems as well. The very high level of accuracy explicitly reflects the reliability of this scheme for such problems.
We would like to stress that the approximate solution includes not only time information but also frequency information, due to the localization property of the wavelet basis; with some changes, we can apply this method with the help of other wavelet bases.
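To make the scheme concrete, here is a minimal Python sketch (our illustration, not the authors' code): it expands g(t) = t f(t) in the CAS wavelet basis at the paper's truncation levels k = 1, M = 1 and evaluates the resulting finite Hankel transform, checking Example 3 against the sombrero function a J₁(aω)/ω. The names cas_wavelet and fht_cas are ours, the normalization R = a = 1 is an assumed value, and ordinary adaptive quadrature stands in for the Simpson-rule formulas of (16).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv, j1

def cas_wavelet(k, n, m, t):
    """CAS wavelet psi_{n,m}(t) = 2^{k/2} CAS_m(2^k t - n) on [n/2^k, (n+1)/2^k), 0 elsewhere,
    with CAS_m(x) = cos(2*pi*m*x) + sin(2*pi*m*x)."""
    x = 2.0**k * t - n
    if 0.0 <= x < 1.0:
        return 2.0**(k / 2.0) * (np.cos(2 * np.pi * m * x) + np.sin(2 * np.pi * m * x))
    return 0.0

def fht_cas(f, nu, omega, k=1, M=1):
    """Finite Hankel transform F_nu(omega) = int_0^1 t f(t) J_nu(omega t) dt,
    evaluated by projecting g(t) = t f(t) onto the truncated CAS basis."""
    total = 0.0
    for n in range(2**k):                      # translations n = 0, ..., 2^k - 1
        lo, hi = n / 2.0**k, (n + 1) / 2.0**k  # support of psi_{n,m}
        for m in range(-M, M + 1):             # frequencies m = -M, ..., M
            # coefficient c_{n,m} = <g, psi_{n,m}> (the basis is orthonormal on [0,1))
            c, _ = quad(lambda t: t * f(t) * cas_wavelet(k, n, m, t), lo, hi)
            # transform of the basis function against the Bessel kernel
            b, _ = quad(lambda t: cas_wavelet(k, n, m, t) * jv(nu, omega * t), lo, hi)
            total += c * b
    return total

# Example 3 with a = 1: f = Circ(t), whose exact zeroth-order transform is J_1(omega)/omega.
for omega in (1.0, 5.0, 10.0):
    approx = fht_cas(lambda t: 1.0, nu=0, omega=omega)
    exact = j1(omega) / omega
    print(f"omega = {omega:4.1f}   CAS approx = {approx: .6f}   exact = {exact: .6f}")
```

With k = 1 and M = 1 the expansion keeps only 2 × 3 = 6 basis functions, so the result is the transform of the orthogonal projection of t f(t) onto that six-dimensional subspace; increasing k or M in the sketch tightens the agreement with the closed-form result.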
The Effect of Diet on the Cardiac Circadian Clock in Mice: A Systematic Review

Circadian rhythms play important roles in regulating physiological and behavioral processes. These are adjusted by environmental cues, such as diet, which acts by synchronizing or attenuating the circadian rhythms of peripheral clocks, such as those of the liver, intestine, pancreas, white and brown adipose tissue, lungs, kidneys, and the heart. Some studies point to the influence of diet composition, feeding timing, and dietary restriction on metabolic homeostasis and circadian rhythms at various levels. Therefore, this systematic review aimed to discuss studies addressing the effect of diet on the heart clock in animal models and, additionally, the chronodisruption of the clock and its relation to the development of cardiovascular disorders over the last 15 years. A search was conducted in the PubMed, Scopus, and Embase databases. The PRISMA guide was used to structure the article. Nineteen studies met all inclusion and exclusion criteria. In summary, these studies have linked the circadian clock to cardiovascular health and suggested that maintaining a robust circadian system may reduce the risk of cardiometabolic and cardiovascular diseases. The effect of time-of-day-dependent eating on the modulation of the circadian rhythms of the cardiac clock and of energy homeostasis is notable, with deleterious effects predominantly when eating occurs in the sleep (light) phase and/or at the end of the active phase.

Introduction

The rotation of the Earth on its axis is marked by a light phase and a dark phase, each with distinct temperature and radiation conditions. This light/dark (LD) cycle is interpreted by the circadian system, represented by a central clock located in the suprachiasmatic nucleus (SCN) of the hypothalamus and by peripheral clocks distributed in other regions of the brain and in peripheral organs [1-3]. Each nucleated cell has a clock, which adjusts itself through external or internal cues, also called zeitgebers (ZT). The central clock is adjusted daily by light, the main environmental cue; in this way, the central clock sends signals to the peripheral clocks, thus synchronizing their circadian rhythms [4,5]. The cellular clock machinery is represented by transcriptional, translational, and post-translational events, in loops of positive and negative feedback performed by a set of genes. The genes encoding the clock mechanism include Clock and Bmal1 (positive loop) and Per1/2/3 and Cry1/2 (negative loop) [5]. In mammals, the circadian clock influences practically all physiological and behavioral aspects, such as the sleep-wake cycle, body temperature, energy metabolism, and the physiology of various organs [6]. Using the heart as an example, about 6 to 13% of its transcriptome can be controlled by the clock [7,8]. Considering peripheral oscillators, the role of external cues in modulating circadian rhythms is already well described. Food is considered an important synchronizer for peripheral clocks [9,10], as is metabolism, which has an important effect on both central and peripheral clocks [11]. Thus, feeding restricted (RF) to a certain period profoundly affects the physiology and behavior of animals. Among these changes, one may mention changes in locomotor activity, body temperature, food anticipatory behavior, and hormonal secretions [12]. Studies on rodents have shown the beneficial effects of RF on metabolic pathways [13,14].
Mice fed a control or high-fat diet (HFD) by RF only in the dark phase showed increased fatty acid oxidation in cardiac muscle, stimulation of fatty acid-responsive genes, improvement of myocardial contractile function, and no cardiac hypertrophy. RF during wakefulness resulted in metabolic flexibility of cardiac lipid metabolism [15]. Thus, it is important to investigate the relationship between diet and the circadian clock. Changes in the functioning of the clock by environmental signals, especially food, can lead to a better or worse functional picture of physiological and behavioral processes. In light of the above, this systematic review aims to discuss studies that address the effect of diet on the heart clock in mouse models, and the chronodisruption of the clock and its relationship to the development of cardiovascular disorders, over the last 15 years. For the purpose of this study, the most current publications, produced between 2007 and 2022, were considered; this was deemed important given the evolution of studies in the area. The database search was performed by two researchers, A.B.R.P. and L.T.R. Both researchers read and selected the articles. In cases of conflict, the articles were reevaluated by the same two researchers. Duplicates were removed before screening the retrieved articles. Before establishing the first selection, pre-defined criteria according to PICOS ("Population", "Intervention", "Comparison", "Results", and "Study Design") were adopted as inclusion criteria to select the articles (Table 1). The online software Rayyan (https://www.rayyan.ai/, accessed on 3 August 2022) was used to select the articles. A detailed search flowchart (Figure 1) illustrates the articles shown to meet the established eligibility criteria. The study protocol was registered in the international database PROSPERO with the reference CRD42022360982. After reading the articles, the information was recorded in a catalog file for each study, which contained the following data: title, publication year, population, study objective, methodology, intervention, results, and conclusion. Only the most relevant data for the construction of this article were cataloged; from this summary, the data were used to synthesize the results.

Quality Assessment of Studies

The risk of bias (RoB) of each study was assessed with SYRCLE's RoB tool [17]. This tool is based on the Cochrane Collaboration RoB Tool and aims to assess methodological quality in animal experiments. The quality assessment was performed by two researchers, and disagreements were resolved between them.

Literature Data

The database search returned 414 articles: 51 from PubMed, 61 from Scopus, and 302 from Embase. Duplicate articles removed prior to screening were 99, leaving 317. Inclusion and exclusion criteria were applied according to PICOS. A total of 59 articles were selected, of which 40 were excluded (on grounds of publication type, results, and study design).
Thus, 19 studies that met all inclusion and exclusion criteria were contemplated (Figure 1). Their characteristics (diet, intervention, and analyses) are summarized below.

- Normal diet (ND, D12450B, fat content 10%; Research Diets Inc., New Brunswick, NJ, USA) or HFD (D12492, fat content 60%). The animals were fed the normal diet or HFD for 6 weeks until mating, and during gestation and lactation (until day 16 of lactation). On day 16 of lactation, both groups were fed ND. The pups were suckled until day 21 of life and then fed ND until the day of euthanasia. Analyses: gene and protein expression by qPCR and Western blot, respectively; measurement of body weight; serological analyses.

- To investigate the effect of short-term, mild CR before induction of experimental MI, to protect the heart from ischemic injury, and to understand the underlying molecular pathways. Regular chow diet or CR (30% less than the calculated mean daily ad libitum (AL) food consumption). Mice were fed ad libitum, a stressed AL diet, or a CR diet for 7 days prior to MI via permanent coronary ligation, and then euthanized.

- To investigate the molecular mechanisms generated by shifting feeding to the rest phase, and how this environmental cue alters the expression of circadian clock components, thereby leading to obesity and metabolic syndrome-like pathology. Control diet. Mice were fed a control diet ad libitum or by RF during the 12 h light phase (ZT0-ZT12) under a 12 h:12 h LD cycle.

- Oishi et al., 2017 [29]; male C57Bl/6J mice. To investigate the involvement of feeding-cycle-dependent endogenous insulin rhythms in the circadian regulation of peripheral clocks, and the effect of exogenous insulin on the expression of clock genes. Mice were fed HFSD for 8 h during nighttime (ZT14-22) or daytime (ZT2-10) for one week; after this period, mice were euthanized and the tissues were collected. Analyses: measurement of blood hormones; gene expression by qPCR; assay of phosphorylated AKT; wheel-running activity analyses.

- Mice were fed a high-fiber diet or given acetate supplementation for 3 weeks before sham or DOCA surgery. Analyses: morphological analyses of the heart, kidney, and lung; histological analyses of the heart and kidney; bioinformatic analyses; renal and cardiac transcriptomes; analyses of the composition of the gut microbiota.

- Female mice were fed a normal chow diet for 7 days, followed by DRF or NRF for 7 or 36 days; they were then submitted to LL for 9 days, followed by DRF under LL for 9 additional days. Male mice were fed a normal chow diet for 7 days, followed by DRF or NRF for 7 additional days. After this period, female and male mice were euthanized. Analyses: transcriptome and metabolomic profiling; food intake, body weight, and locomotor activity analyses; global profiling of transcripts; untargeted metabolomics; targeted lipidomics; acyl-CoA quantification by LC/MS; gene expression by qPCR; transcriptome analyses; circadian rhythmicity analyses; phase set and cistrome enrichment analyses; heatmap of expression profiles.

- EGFP-pA flox mice. To investigate the effect of a biotin diet on protein biotinylation in several tissues in the BMAL1-BioID mouse model. Mice were fed a biotin-rich diet or chow diet ad libitum for 7 days and then euthanized. Analyses: protein expression by Western blotting; histological analyses; streptavidin blot analysis; biotin labeling assay in vitro; co-immunoprecipitation.
- Latimer et al., 2021 [36]; male C57Bl/6J, CBK, and CON mice. To investigate the effect of the time of day of dietary BCAA consumption on physiological responses (cardiac growth) and its pathological implications. Low-BCAA diet (Teklad TD.150662 custom diet; leucine, isoleucine, and valine 3-fold lower than the standard diet), high-BCAA diet (Teklad TD.170323 custom diet; leucine, isoleucine, and valine 2-fold higher than the standard diet), or a standard diet (Teklad TD.170323 custom diet). The dietary intervention occurred acutely and chronically. Acute intervention: mice were fed an early high-BCAA diet or early low-BCAA diet, and a late high-BCAA diet or late low-BCAA diet, for 4 h. Chronic intervention: mice were fed an early high-BCAA diet or a late high-BCAA diet for 4 or 6 weeks. Analyses: transverse aortic constriction; CLAMS for evaluation of food/calorie intake, energy expenditure, and physical activity; spectrometry (plasma BCAA levels); quantitative magnetic resonance imaging (lean and fat body mass); gene and protein expression by qPCR and Western blot, respectively; histological evaluation.

Summary of results:

- WT and Pparα-null mice: similar expression profiles and amplitudes for the Per1 and Per3 genes, with mean acrophases of ZT11.4 and ZT10.6, respectively (liver, BAT, and eWAT), reflecting a 10 h time difference from Bmal1 and Npas2.

- WT mice fed a regular diet: higher Bmal1 and Per2 mRNA expression levels at CT20 and CT8, respectively; lower levels at CT8 and CT20, respectively. ApoE−/− mice fed HFD: higher levels of Bmal1 and Per2 mRNA at CT0 and CT12, respectively; lower levels at CT12 and CT0, respectively. No changes in Cry1 mRNA expression levels in either genotype on the regular diet. (−)

- WT mice with RF: altered expression of clock genes (Bmal1, Per2, Cry2, and Rev-Erbα) and clock-controlled genes (Dbp); lower and less consistent effects on the phases of clock-component and output-gene oscillations; average phase shifts and repression of amplitude were 3.90 ± 0.83 h and 54 ± 5%, respectively. WT mice with RF: changes in the expression of clock genes (Bmal1, Per2, Cry2, and Rev-erbα) and clock-controlled genes (Dbp); dramatic phase shifts in the gene expression of clock components and output genes (liver), with phase differences within the range of 6 to 11 h (mean 8.38 ± 0.84 h) (liver); smaller and less consistent effects on the phase shifts of clock and output gene components (epididymal fat and gastrocnemius muscle); average phase shifts and repression of amplitude were 6.88 ± 2.06 h and 69 ± 3% (epididymal fat) and 3.46 ± 1.41 h and 24 ± 20% (gastrocnemius muscle), respectively. (−)

High-fat diet (Wang et al., 2015 [28]):

- P17 pups from HFD-fed dams: higher mRNA levels of Bmal1 at ZT1, with a circadian pattern oscillating in antiphase for the Per2 gene. P35 pups from HFD-fed dams: improved phase changes while maintaining amplitude defects. P17 pups from HFD-fed dams: Fas exhibited a rhythmic expression pattern in control animals, peaking at ZT9, the late stage of the light phase; Pgc-1α exhibited a significantly rhythmic and lower expression (liver). P35 pups from HFD-fed dams: abolished circadian expression rhythm of Fas and Pgc-1α (liver). (−)

Restricted feeding (Noyan et al., 2015 [22]):

- WT mice under short-term CR: modulation of the mRNA profiles of pre-MI genes associated with the circadian clock; global change in gene expression associated with oxidative stress, immune function, apoptosis, metabolism, angiogenesis, cytoskeleton, and extracellular matrix. (+)
- Female mice on RF for 36 days: phase-locked to LD cycles. Female mice on an LL cycle for 9 days: reversed phase of clock genes; the cardiac transcriptome showed resistance to phase entrainment by reversed feeding, while the fatty acid rhythm was entrained. Female mice on RF for 36 days: phase shift of clock genes similar to animals on 7-day RF (liver and adipose tissue). Female mice on an LL cycle for 9 days: lengthened the behavioral rhythm by 1.7 h and did not change the phase of clock genes (liver); induced oscillations in clock genes (adipose tissue); reversed the phase of clock genes (kidney). The liver and adipose tissue transcriptomes were entrained by the reversed feeding; the kidney transcriptome is more resistant to phase entrainment by reversed feeding.

- WT and Per2/Tg mice with HFSD: increased body weight, plasma insulin, total cholesterol, and insulin/glucose ratio; PAI-1 mRNA expression levels did not differ. Per2/Tg mice with HFSD or ND: dampening of PAI-1 expression rhythms in the heart.

- Jcl:ICR mice with KD: reduced body weight and glucose levels; increased plasma levels of FFA and total ketone bodies; increased PAI-1 mRNA levels (heart and liver), with acrophase advanced by 4.7 h (heart), 7.9 h (kidney), and 7.8 h (epididymal fat); and phase advancement of the endogenous circadian clock that governs rhythmic behavior. Pparα-null mice fed KD: decreased body weight, plasma FFA levels, and total ketone bodies; induced fatty liver; and increased hepatic total cholesterol levels. WT mice fed KD: increased circadian expression of PAI-1 mRNA (heart and liver). Bezafibrate induced the expression of the Pai-1, Cyp4A10, and Fgf21 genes in a PPARα-dependent manner.

- WT mice on RF: changes in whole-body energy balance, higher food intake during DP, little influence on physical activity rhythms, diurnal variation in plasma glucose and triglyceride levels, body weight gain, and tissue-specific changes in metabolic genes (Accα, Glut2, Lpk, Lgs, Mcad, Dgat2, Acsl1, Atgl, Lipe, Mcp1, Glut4, Pdk4).

- WT mice with a high-fiber diet: altered composition of the gut microbiota and increased acetate levels. WT mice supplemented with acetate: altered composition of the intestinal microbiota and increased percentage of acetate-producing bacteria. WT mice with a high-fiber diet and acetate: reduced systolic and diastolic BP levels; altered the renal transcriptome (Rasal1, Cyp414, and Cck) for genes related to renal fibrosis, fluid absorption through sodium-channel regulation, and anti-inflammatory action, and the cardiac transcriptome (Tcap and Timp4) for genes related to cardiac disease and pathways acting on the cell cycle, replication, translation, mRNA metabolism, the respiratory electron chain, mitogen-activated protein kinase signaling, and the renin-angiotensin system. (+)

Low-phosphate diet (Noguchi et al., 2018 [34]):

- Mice on a hypophosphatemic diet: increased total cartilage volume fraction; reduced TMD and BV/TV; increased osteochondral progenitor lineage and impaired chondrocytes; reduced size of proliferative matrix-forming cells; and affected systemic regulation of mineral metabolism. Transcriptome analyses revealed 1,879 genes associated with diet and having a circadian pattern of regulation, including those with mitochondrial function (oxidative metabolism) and canonical regulatory pathways associated with apoptotic signaling. (N)
High-fat diet (Mia et al., 2020 [31]):

- HFD increased body weight, adiposity, and daily energy expenditure, and reduced physical activity and RER; it increased stroke volume, left ventricular posterior wall thickness during systole, the BVW/TL ratio, cardiomyocyte area, and cardiac fibrosis, and altered the temporal regulation of the cardiac transcriptome, especially of metabolism-related genes. Day-night differences affected cardiac glucose oxidation, lactate release, and the cardiac lipidome. The diurnal rhythms of lipid metabolism genes (Cd36, Mcd, Lcad, and Lipe) and of plasma levels were altered. Time-of-day-restricted feeding restored body metabolic rhythms, normalized the adverse effects of cardiac remodeling (BVW/TL ratio, cardiomyocyte size, cardiac fibrosis, cardiac steatosis), and increased the day-night difference in cardiac lipid metabolism. (−)

Quality Assessment of Studies

All the studies included in the systematic review had inappropriate random sequence generation. Overall, the studies presented a high risk for selection bias (random sequence generation, baseline characteristics, and allocation concealment), performance bias (random housing and blinding of participants and personnel), and detection bias (random outcome assessment and blinding of outcome assessment). Attrition bias (incomplete outcome data) was intermediate. Reporting bias (selective reporting) was low. The results of the risk-of-bias assessment of the included studies are shown in Figure S1.

Discussion

Light is an important environmental cue in entraining biological rhythms. However, other cues are capable of adjusting circadian rhythms, such as diet. Some studies point to the influence of diet composition, meal timing, and dietary restriction on metabolic homeostasis and circadian rhythms at various levels. Understanding the mechanisms underlying these influences will likely provide important insights into the pathogenesis of diet-associated cardiometabolic disorders. Feeding is a strong synchronizer for the peripheral clocks, as is the time of day of feeding. Given this, we sought to include in this review studies that evaluated the effects of restricted feeding (RF) on clock genes. RF is characterized by a limited period and duration of food access without caloric reduction. Some studies have evaluated the effects of dietary restriction on the expression of clock genes [17,18,24,27,33]. Bray et al. [20] sought to understand the effect of RF in the dark phase in male C57Bl/6J mice. The authors observed markedly different diurnal variations between animals that consumed the diet in the dark phase and those fed in the light phase. The animals fed in the light phase consumed a larger amount of food immediately after accessing the diet, whereas those in the dark phase did not show the same response. The authors observed that the heart's circadian clock genes did not change phase, but the amplitude of oscillation of clock gene expression was often decreased in this organ. Such findings were observed in adipose tissue and skeletal muscle as well, whereas in the liver there was a phase shift of circadian clock genes in the mice fed in the light phase. These results suggest that feeding-induced entrainment is more robust in the liver, and that RF in the light phase led to desynchronization between metabolically active tissues. Furthermore, the authors point out that RF in the dark phase resulted in increased caloric intake, reduced energy expenditure, and dependence on fatty acid oxidation. Additionally, Goh et al.
[18], when investigating the effects of RF on clock genes, found that restricted access to food led to phase shifting in the peripheral clocks of wild-type animals. In the same assay, using PPARα-deficient mice, they observed significant modulation of the heart muscle clock, with the acrophase of circadian gene expression shifted by up to an additional 8 h. They observed that dietary restriction reduced the amplitude of expression of the Bmal1 and Rev-Erbα genes in WT and homozygous Pparα-null mice, respectively, and increased Bmal1 in Pparα-null mice, whereas, under ad libitum conditions, the amplitude and acrophase of clock genes were similar in both genotypes. These knockout animals showed altered cardiac metabolism using glucose as an energy source, leading to loss of contractile function and decreased energy reserves. Under prolonged fasting conditions, these animals developed hypothermia due to altered brown adipose tissue (BAT). These findings may be linked to the observation that BAT and cardiac tissue from Pparα-null mice showed altered circadian expression of food-responsive transcription factor genes. Reilly et al. [19] investigated the possibility that catecholamines modulate the rhythmicity of peripheral clocks in dopamine β-hydroxylase knockout mice (Dbh −/−) under RF during the light phase. This model is characterized by the non-expression of the enzyme dopamine β-hydroxylase, which takes part in catecholamine biosynthesis [37]. They observed that endogenous concentrations of the catecholamines norepinephrine and epinephrine exerted no effect on the function of peripheral circadian clocks in vivo, among them the heart. On the contrary, feeding time was shown to be an important modulator of peripheral circadian oscillators. Unlike the studies above, Noyan et al. [22] did not study RF; they worked with short-term caloric restriction (CR) (30% fewer total calories) and investigated its protective effect on ischemic mouse hearts. CR caused a change in gene expression. Bioinformatics analyses showed enriched pathways associated with antioxidant processes, circadian rhythms, and the biological clock. Moreover, short-term CR resulted in increased expression of the clock genes Per1 and Per2. Studies point to the involvement of clock genes in physiological processes such as energy balance, coordinating and modulating energy metabolism, transcription, signaling, and contractile function in the heart [38]. In view of this, heart clock genes may be associated with the beneficial effects of calorie restriction [39]. Further, studies point to the protective effect of the Per2 gene in myocardial ischemia [40]. Therefore, cardiac clock genes may be involved in mechanisms protecting the heart against ischemia associated with short-term CR by adapting cellular metabolism. Mukherji et al. [21], in turn, sought to understand the effect of switching feeding from the active to the resting phase (RF condition) on peripheral clock genes. After 8 days of RF in the resting phase, transcript analyses of cardiac clock components showed that the clock genes Per1, Per2, and Rev-Erbα were affected by this condition. RF led to an extra production of corticosterone. Notwithstanding, under ad libitum conditions, endogenous corticosterone did not affect Rev-Erbα expression in the heart, since it was at basal levels. Subsequently, in the 4 days of RF, there was a delay in Rev-Erbα expression and, consequently, a delay in the peripheral clock shift.
Administration of the glucocorticoid antagonist RU486, or adrenalectomy, led to early activation of PPARα in the heart, consequently stimulating Rev-Erbα expression. The authors concluded that the change in clock gene components under RF conditions is associated with a metabolic reprogramming that directly affects circadian clock expression. Xin et al. [23] evaluated the effects of reversed feeding on metabolism and circadian physiology in peripheral tissues of mice, among them the heart. The diurnal rhythms of the clock genes in the heart exhibited a phase shift of 0-3 h. On the other hand, when comparing male and female mice, the diurnal rhythms were dampened by RF in male mice; that is, there is a sex-related difference in clock entrainment under RF. Even though the authors observed a greater resistance of the heart transcriptome to phase entrainment by reversed feeding, cardiac diurnal metabolites were entrained within one week. In addition, removing the timing signals from the SCN by exposing the animals to constant light (LL) facilitated phase entrainment by feeding in the heart and in the other tissues, except the liver. Analyses of the cardiac metabolome showed that reversed feeding entrained the diurnal rhythms of fatty acid oxidation and acylcarnitine metabolites. These findings show that cardiac metabolism is influenced by feeding cycles and the circadian clock. Some studies evaluated the effects of HFD in WT mice [25,28,32] or in knockout models, such as ApoE−/− mice [24], Clock Δ19/Δ19 mice [30], and cardiomyocyte clock mutant (CCM) mice [27], as well as in a transgenic model that overexpresses PER2 [26]. Hou et al. [24] investigated the effect of hyperlipidemia induced by an HFD on the expression of clock genes in the ApoE-deficient mouse model of atherosclerosis (ApoE−/−). The authors observed that diet affected the peripheral circadian clocks but had no effect on the central clock. In the cardiac circadian clock, the peak mRNA levels of the clock genes Bmal1, Per2, and Cry1 showed a four-hour delay in the onset of the subjective dark period in knockout animals, independent of diet, reinforcing that apolipoprotein E (ApoE) is involved in the expression of these genes. The same phenomenon was observed for plasma lipid levels, which peaked at the onset of the dark period (CT12) in ApoE−/− mice fed the HFD, with no variation in serum levels for WT mice. The authors point out that these variations in serum lipid levels possibly affected the expression of circadian clock genes, mediated by the transcription factors PPARα, retinoid X receptor alpha (RXRα), and REV-ERBα. This circadian disruption associated with the absence of ApoE may be related to the development of atherosclerotic processes and some acute cardiovascular diseases, as pointed out by more recent studies [41-43]. Mia et al. [31] sought to evaluate the effect of HFD-induced obesity on cardiac metabolic flexibility. Of note, metabolic flexibility is characterized by the adaptation to metabolic and energetic changes in response to physiological stimuli. The authors observed that HFD-induced obesity resulted in increased body weight and adiposity and altered the 24-h rhythms of body substrate selection during the LD cycle. However, the oxidation of glucose and/or fatty acids by cardiomyocytes during the LD cycle was preserved; that is, metabolic flexibility for cardiac substrate oxidation is preserved in high-fat-fed mice.
This diet also stimulated markers related to cardiac hypertrophy, cardiac fibrosis, and steatosis. RNA-sequencing (RNAseq) analyses evidenced diurnal changes in the cardiac transcriptome, particularly in metabolism-related genes, with only 22% of transcripts unaffected by HFD. Importantly, among the transcripts that were not affected, the clock genes Bmal1, Clock, Npas2, Nr1d1, Nr1d2, Per2, Per3, and Cry2 stand out. According to the authors, it is possible that the heart clock orchestrates the persistent day-night differences in cardiac oxidative metabolism during obesity. In lipid metabolism, triglyceride synthesis was impaired by obesity, which was associated with attenuation of the day-night fluctuations of the cardiac lipidome. At another point, the authors investigated the effect of RF with HFD in the dark phase only: RF restored the changes in lipid metabolism and cardiac remodeling. Given this, a detrimental effect of HFD-induced obesity on the metabolic flexibility of lipid metabolism is evidenced, and it is, in turn, partially reversed by modulating the timing of food intake. Reitz et al. [30], meanwhile, evaluated whether HFD-fed Clock Δ19/Δ19 mice develop cardiovascular disease, since this diet, in this model, results in metabolic syndrome and obesity, which are risk factors for cardiovascular disorders. Although the animals exhibited a cardiometabolic risk profile, surprisingly, they did not develop cardiac dysfunction and showed preserved cardiac structure and function. Microarray and bioinformatics analyses revealed a pattern of antioxidant activity associated with increased serum levels of the cardiac enzymes catalase (CAT) and glutathione peroxidase (GPx) and increased Pparγ gene expression, together with reduced activation of oxidative stress-related pathways. These findings demonstrate the important role of circadian mechanisms in mediating resilience to cardiovascular disease outcomes. These studies [31,32] only reinforce the influence of the clock on metabolic homeostasis, since, under a condition of circadian chronodisruption, cardiac metabolism is impaired but cardiac function is maintained. Studies point out that even under conditions of a non-intact circadian mechanism, cardiac remodeling can be enhanced. However, it is important to note that circadian mechanisms are important, since the genes and proteins involved in the observed outcomes are under circadian transcriptional control. Tsai et al. [27] studied the role of the cardiac circadian clock in cardiac metabolic adaptation under HFD for 16 weeks. For this, the authors used a mouse model in which CLOCK protein expression is selectively impaired, termed the cardiomyocyte clock mutant (CCM) mouse. The animals showed an altered myocardial response to HFD, as well as altered diurnal rhythms of triglyceride and fatty acid metabolism. The diurnal rhythms of myocardial triglyceride levels were markedly attenuated in the heart of the CCM mice, which was associated with circadian mechanisms mediating the regulation of lipolysis over synthesis. Nonetheless, when subjected to an RF condition, heart lipid metabolism responded differently: feeding at the end of the active phase resulted in myocardial steatosis with a greater propensity for triglyceride synthesis. Wang et al. [28] evaluated the effects of HFD-induced maternal obesity on clock and metabolism genes in the liver and heart of the offspring.
In this study, they evaluated the pups at two time points: pups at 17 days of age (P17) and pups at 35 days of age (P35). The authors observed that HFD during pregnancy and lactation strongly impacted the expression of clock genes, metabolism genes, and inflammatory pathways in both the heart and the liver. In the heart, the Bmal1 and Per2 genes showed a robust oscillation compared with the metabolism genes carnitine palmitoyltransferase 1b (Cpt1b) and Pparα, which was associated with the early developmental stage of the HFD pups. That is, the phase and amplitude changes were more pronounced in the pups at 17 days postnatal from HFD-fed female mice. The expression of genes involved in inflammatory processes was higher in P17 pups from obese females. As for the metabolism genes, there was a difference in the oscillation pattern between the P17 and P35 groups. The authors attributed the P17 pattern to maternal cues during the first postnatal weeks, whereas the more "mature" pups (P35) were already influenced by feeding, with peripheral clocks being adjusted by the feeding schedule; food intake is an important environmental cue for entraining the circadian rhythms of peripheral clock genes [9]. Furthermore, diet composition and the prenatal microenvironment strongly affect the oscillation pattern of circadian rhythms in the heart [28].

Oishi and colleagues developed studies involving a combined high-fat, high-sucrose diet [19,28], examining fibrinolysis [19] and the effect of endogenous insulin [29] at two time points. In the first study [26], Oishi et al. [19] evaluated the role of PER2 in plasminogen activator inhibitor-1 (PAI-1) gene expression in a transgenic mouse model overexpressing Per2 (Per2/Tg) and in WT mice, both with and without obesity induced by a high-fat/high-sucrose diet (HFSD). HFSD-fed animals showed body weight gain, hypercholesterolemia, and hyperinsulinemia and developed insulin resistance in both genotypes, although plasma triglyceride levels did not increase. However, these findings did not differ significantly between the groups. Therefore, the authors suggest that the PER2 protein has no involvement in the metabolic regulation of obese animals under an HFSD diet. The PAI-1 gene was suppressed in the heart of Per2/Tg animals, regardless of diet type. This was associated with higher cardiac Per2 gene expression, since no PAI-1 gene suppression and no change in Per2 mRNA levels were observed in adipose tissue and liver. However, the expression levels of Bmal1 mRNA were not altered in the heart of Per2/Tg mice, although their attenuation was observed in the liver and adipose tissue. Since PAI-1 levels oscillate over the 24-h rhythm in various organs, such as the liver, adipose tissue, and heart [44], the findings of this study suggest that components of the circadian machinery are strongly involved in regulating its expression in the heart. In summary, diet-induced obesity increases PAI-1 levels, but its transcription is suppressed by the Per2 gene.

In 2017, Oishi et al. [29] conducted interventions with the same diet, the HFSD. In this study, the authors investigated the effect of feeding-cycle-dependent endogenous insulin on the regulation of peripheral clocks. The animals were fed either in the light phase or in the dark phase for one week. RF led to the synchronization of the circadian rhythms of insulin and of the hormones glucagon-like peptide-1 (GLP-1) and glucose-dependent insulinotropic polypeptide (GIP), with hyperinsulinemia in the light period.
When analyzing the expression profile of the clock genes Per1, Per2, and Dbp, no marked influence of light-phase feeding on the heart clock was observed; only the liver was affected with respect to the circadian phase of expression. For that reason, the authors concluded that insulin and RF are not dominant zeitgebers (ZT) for some peripheral clocks, except for the liver. Nevertheless, exogenous insulin was able to entrain the peripheral clocks. Given these findings, humoral signals involved in synchronizing peripheral clocks are unlikely to be dominant zeitgebers, since they are strongly influenced by environmental cues such as temperature, diet composition, meal timing, physical activity, and light [45]. For example, circulating insulin concentrations increase dramatically in response to feeding. Therefore, humoral time signals, such as glucocorticoids and insulin, may serve to stabilize rather than phase-determine peripheral clocks in mammals [46].

Some studies [20,23] have assessed the effect of the ketogenic diet (KD), characterized by high concentrations of lipids and low carbohydrate and protein content [47]. Although commonly used in weight loss and diabetes, little is known about its effect on cardiovascular health [48]. Oishi et al. [32] investigated the effect of KD on the temporal expression profile of PAI-1 and of clock genes in peripheral tissues: liver, kidney, adipose tissue, and heart. KD led to hypoglycemia, increased FFA and ketone body levels, and a phase advance in clock genes, in PAI-1 mRNA levels, and in the rhythm of behavioral activity in ad libitum-fed mice. The mRNA expression acrophases of the Per2 and Dbp genes were advanced by 5.6 and 6.0 h, respectively, in the heart. When the animals were transferred from LD to DD, the diet phase-advanced the clocks that govern rhythmic behavior. The authors point out that KD exerted an effect similar to that of caloric restriction (CR), fasting, and hypoglycemia, which was associated with cellular energy status. In another report, Oishi et al. [25] compared the effect of KD on the temporal expression profile of clock genes (Bmal1 and Rev-Erbα) and clock-controlled genes (Dbp), and on the induction of PAI-1 gene expression, in the liver and heart of PPARα-deficient mice. A phase advance in Bmal1 expression levels and a decrease in the mean Dbp expression level were observed in the heart of both genotypes. Similar to the 2009 study [32], Oishi et al. [25] again observed a phase-advancing effect on the rhythm of behavioral activity in KD-fed animals, independent of PPARα, when the animals were transferred from an LD to a constant darkness (DD) condition. They showed that PPARα does not influence the phase-advancing effect of KD on clock gene expression and peripheral behavioral activity. This KD-induced phase change of the circadian clock may be influenced by the cellular energy status, such as the ratio of reduced to oxidized nicotinamide adenine dinucleotide (phosphate) [NAD(P)H/NAD(P)+] [49]. Energy homeostasis is maintained by AMP kinase (AMPK), which is activated by factors such as CR, fasting, hypoglycemia, and KD [50]. Its activation stimulates casein kinase Iε, inducing PER2 degradation and leading to a phase advance of the circadian clock in vitro [50]. Thus, activated AMPK may be involved in mediating the KD-induced regulation of circadian clocks in tissues.

Finally, we found reports on other dietary profiles, such as a low/high branched-chain amino acid (BCAA) diet [35], a high-fiber diet associated with acetate supplementation [33], a biotin-rich diet [35], and a low-phosphate diet [34].
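The acrophase estimates cited above (e.g., the 5.6 and 6.0 h advances for Per2 and Dbp) are typically obtained with a cosinor-type fit of gene expression time series. The sketch below is illustrative only and is not the pipeline used in the cited studies; the function name, the fixed 24-h period, and the toy data are our assumptions.

```python
import numpy as np

def cosinor_acrophase(t_hours, y, period=24.0):
    """Least-squares cosinor fit y ~ mesor + A*cos(2*pi*t/period - phi).

    Returns (mesor, amplitude, acrophase_hours), where the acrophase is
    the clock time (in hours) at which the fitted rhythm peaks.
    """
    w = 2.0 * np.pi / period
    # Linearized model: y = M + b1*cos(w*t) + b2*sin(w*t)
    X = np.column_stack([np.ones_like(t_hours),
                         np.cos(w * t_hours),
                         np.sin(w * t_hours)])
    mesor, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]
    amplitude = np.hypot(b1, b2)
    acrophase = (np.arctan2(b2, b1) / w) % period
    return mesor, amplitude, acrophase

# Toy usage: sampling every 4 h over two days, true peak near hour 10
t = np.arange(0, 48, 4.0)
rng = np.random.default_rng(0)
y = 5 + 2 * np.cos(2 * np.pi * (t - 10) / 24) + rng.normal(0, 0.2, t.size)
print(cosinor_acrophase(t, y))  # acrophase close to 10 h
```

A phase advance of the kind reported above would then show up as a smaller acrophase in the treated group than in controls.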
Latimer et al. [36] observed that the time of day of BCAA intake influences cardiac parameters, which was associated with the circadian clock of the heart. WT mice that consumed a BCAA-rich meal at the end of the active phase (dark phase) revealed cardiac remodeling and effects on cardiometabolic parameters, with increased responsiveness of the BCAA-induced mechanistic target of rapamycin (mTOR), the signaling pathway involved in cardiac hypertrophy [51]. However, cardiomyocyte-specific Bmal1 knockout (CBK) mice lost these time-of-day-dependent differences in BCAA-induced activation of mTOR signaling. Components of BCAA metabolism, such as cardiac amino acid levels, ribosomal RNA, and the DEP domain-containing mTOR-interacting protein (DEPTOR), a repressor of mTOR activation, showed significant 24-h cardiac clock-dependent oscillations in WT animals. In CBK animals, on the other hand, repression of BCAA catabolism genes was observed, leading to increased cardiac BCAA levels and, ultimately, an exacerbated activation of mTOR signaling and cardiac hypertrophy.

Macro- and micronutrient content and sources have a strong impact on health and on the risk of developing chronic non-communicable diseases [52,53]. For example, fruit and vegetable consumption is related to a lower incidence of diabetes, hypertension, and metabolic syndrome. In the study by Marques et al. [33], the effects of a high-fiber diet and of acetate supplementation on the gut microbiota, and the cross-talk between the kidney, liver, and heart, were investigated in hypertensive mice. Dietary fiber and acetate positively modified the size and population diversity of the gut microbiome and also reduced diastolic and systolic blood pressure and renal fibrosis, which was associated with the downregulation of markers related to cardiovascular disease, renal fibrosis, and inflammation. The authors observed an upregulation of circadian clock genes in the kidney and heart. In other words, dietary fiber and acetate may be acting as zeitgebers for the renal and cardiac circadian clocks, and these, possibly, may be involved in molecular pathways that culminated in improved cardiovascular health and function. However, further studies are needed on the molecular mechanisms involved.

Circadian clocks are important agents in modulating metabolism; however, the mechanisms involved are not well defined. The processes of transcription, translation, and post-translational modification are also under circadian regulation. Protein biotinylation is an important post-translational modification for protein activity [54]. Murata et al. [35] described the effect of dietary biotin intake on a brain and muscle ARNT-like 1 (Bmal1)-BirA* knock-in (Bmal1-BioID) mouse model for protein biotinylation in various tissues. To build the model, the authors used a biotin ligase, BirA*; since the BMAL1 protein acts as a transcription factor on clock-controlled genes, this mouse model was used to investigate tissue-specific protein-protein interactions of the BMAL1 protein. The biotin-rich diet induced protein biotinylation in the brain and in other tissues, such as the heart. In another study [34], the effect of a low-phosphate diet on the circadian clock of bone tissue and the heart was evaluated; that is, would a condition of hypophosphatemia affect circadian function? The authors observed that the expression levels of clock genes in the evaluated tissues were higher in the mice on a low-phosphate diet.
In the heart, there was a phase advance and significantly higher expression levels of the Per1, Per2, and Per3 genes; the Bmal1 expression peak shifted from ZT24 to ZT21. Therefore, a diet-induced hypophosphatemia condition results in phase shifts of the clock genes. In addition, 1,879 genes were associated with the diet and showed a circadian expression pattern; among them were genes involved in hypoxia signaling in the cardiovascular system and in cardiac morphology. Thus, it was suggested that circulating phosphate levels modulate the heart clock and control the circadian functions of the heart.

In summary, studies have linked the circadian clock to cardiovascular health and suggest that maintaining a robust circadian system may reduce the risks of cardiovascular and cardiometabolic diseases. Noteworthy is the effect of time-of-day-dependent feeding on the modulation of the circadian rhythms of the heart clock and of energy homeostasis, with deleterious effects predominantly occurring when food is consumed in the sleep (light) phase and/or at the end of the active phase. Another important point is the composition of the diet in macro- and/or micronutrients, such as fat, carbohydrates, protein, fiber, and minerals; some of these nutrients may act on molecular and/or metabolic pathways, modulating the circadian clock of the heart and of other peripheral organs. The circadian clock plays an important role in metabolic regulation, and a disrupted clock may contribute to the development of heart disorders.

Limitations and Future Research

As a limitation of this study, only the articles retrieved from the three databases were cataloged; articles not indexed in those databases were not considered, even when cited in the selected papers. Moreover, there may be studies that evaluated the heart indirectly but, owing to the restriction of the keywords, were not found in the databases. According to the articles evaluated, the effect of food restriction on the biological clock is evident, and it can be harmful or beneficial depending on the timing of the intervention. This may be reflected in the experimental design and outcomes of studies and may introduce another bias into research involving diet, the clock, and metabolic parameters. More research is needed in the area in order to better understand the effects of various diets on the biological clock and metabolism.

Conclusions

The findings of this systematic review highlight the important role of diet in modulating peripheral clocks, especially that of the heart, and cardiac metabolism. Furthermore, diet composition and feeding schedule (restricted feeding) can affect cardiac parameters and the expression of clock genes.
9,364.6
2022-12-01T00:00:00.000
[ "Biology" ]
EMF-Aware Cell Selection in Heterogeneous Cellular Networks

The growing concern about the exposure of users to the electromagnetic field (EMF) has recently brought new challenges to the mobile research community. In this letter, we propose a novel cell association framework for heterogeneous cellular networks (HetNets), which aims to balance the load among heterogeneous cells so as to improve the resource usage and to increase the user satisfaction in terms of both data rate and EMF exposure. We model the cell selection problem as a General Assignment Problem (GAP) and we present two heuristic algorithms, which solve it with limited complexity. Our analysis shows that the proposed solutions lead to notable improvements with respect to legacy association schemes.

I. INTRODUCTION

Driven by the exponential increase of mobile traffic, the wireless community has investigated solutions for enhancing the resource usage efficiency to improve the overall network performance. However, according to the latest European statistics [1], there is an increasing concern among end-users about the potential health risks due to wireless communications. The reduction of the EMF exposure poses additional challenges to the mobile industry: new methodologies, metrics, and architectures are required. Recently, to optimize network operations with respect to the EMF exposure, a new metric named the Exposure Index (EI) has been proposed [2]. The EI goes beyond state-of-the-art methodologies by including statistical information and profiles. Moreover, one of the most specific features of the EI is that it does not only focus on the downlink (DL) exposure. In fact, although it is usually neglected, the uplink (UL) has a rather relevant impact on the overall exposure.

In the current cellular technology, a User Equipment (UE) selects the enhanced NodeB (eNB) that corresponds to the strongest Reference Signal Received Power (RSRP) [3]. Due to the power unbalance between small cell eNBs (SCeNBs) and Macro eNBs (MeNBs), this solution may prevent UEs from being served by the closest eNB. Hence, it increases the UL transmission power at the UEs, which in turn increases the user's exposure. Moreover, this approach limits the data rate, increases the UL interference, lowers the battery life, and reduces the macro cell offloading. To deal with part of these problems, a Cell Range Expansion (CRE) can be used, where a positive bias is added to the strength of the small cell control signal. This approach, jointly with interference coordination schemes that protect range-expanded UEs from MeNB interference, results in improved fairness and capacity [4]. Nevertheless, some studies have shown that, when using a large bias, too many UEs may be associated with the same SCeNB, leading to overload and interference issues [4]. In contrast, when small cells operate in a dedicated band, more aggressive CRE can be implemented due to the absence of the MeNB interference [5]. Recently, several works have investigated the cell association problem in HetNets, mainly for enhancing the network DL capacity [6]. In contrast to most existing works, we analyze the optimum cell association by jointly considering DL and UL communications, and we study the relationship between the EMF exposure and the users' Quality of Service (QoS).
Besides, we present two user-centric mechanisms that jointly reduce the EMF exposure induced by the UL and improve the user satisfaction in terms of the DL throughput goal; last, considering the system perspective, the proposed solutions distribute the load in the HetNet to enhance the (access/backhaul) network utilization efficiency.

II. SYSTEM MODEL

Following the recent investigations in 3GPP [7], our research focuses on HetNets where SCeNBs are densely deployed and operate in a dedicated carrier (see Figure 1). We denote by U the set of UEs and by B the set of eNBs (which includes both MeNBs and SCeNBs) providing wireless services in the investigated HetNet. The average SINR between a user i and an eNB j can be modelled as

SINR_{i,j} = (P_j · Γ_{i,j}) / (I_{i,j} + σ²),    (1)

where P_j is the transmission power of the eNB j and I_{i,j} is the overall interference experienced by the UE i. Moreover, σ² is the additive noise power and Γ_{i,j} is the channel gain (including path loss, shadowing, and antenna gain) between the UE i and the eNB j. Note that the average SINR in (1) is due to measurements on the eNB control channels and it is independent of cell loads, fast fading, and resource allocation. We further denote:
• the connectivity matrix A, where a_{i,j} equals 1 if a user i is in the coverage area of eNB j (i.e., SINR_{i,j} is larger than a given threshold) and 0 otherwise;
• a feasible assignment matrix X, where x_{i,j} is equal to 1 if user i is served by eNB j (0 otherwise);
• the set of all the possible service matrices X = {X_1, ..., X_N}.

A. DL data rate assessment

For a given X, the achievable data rate related to the link between i and j can be modelled as

C_{i,j} = B_{i,j} · η_{i,j},    (2)

where B_{i,j} is the fraction of the band B that the eNB j allocates to the UE i and η_{i,j} = log₂(1 + SINR_{i,j}) is the link spectral efficiency. When the eNBs allocate more bandwidth to the UEs characterized by higher η, we have B_{i,j} = k_j · B · η_{i,j} / Σ_{y∈U} x_{y,j} · η_{y,j}, where k_j = 1 if the backhaul does not limit the eNB capacity and 0 < k_j < 1 otherwise. In the latter case, since Σ_{y∈U} x_{y,j} · B_{y,j} · η_{y,j} = C_j^BH, where C_j^BH is the backhaul capacity, we can find the value k_j = (C_j^BH / B) · (Σ_{y∈U} x_{y,j} · η_{y,j}) / (Σ_{y∈U} x_{y,j} · η²_{y,j}), leading to C_{i,j} = C_j^BH · η²_{i,j} / Σ_{y∈U} x_{y,j} · η²_{y,j}.

B. EMF assessment

To evaluate the EMF radiation in HetNets, we use a simplified version of the EI. The EI is able to model the exposure of different categories of people to different mobile technologies. However, here, we only focus on the UL of cellular networks, which is more relevant than the DL exposure due to the proximity of the device to the body; moreover, to simplify our analysis, we only consider adult users. With these assumptions, the expected EI of a user can be computed as the sum of the contributions due to the different usages u (i.e., data and voice services in indoor and outdoor scenarios) in the considered time periods p (day and night) [2]:

EI_i = Σ_p Σ_u t_{p,u}^UL · (SAR_u / P_TX^ref) · P_{i,j}^UL,    (4)

where P_{i,j}^UL is the power emitted by the UE i to communicate with the serving eNB j, t_{p,u}^UL is the time spent in the usage u during the time period p, and the ratio SAR_u / P_TX^ref represents the whole-body averaged specific absorption rate (SAR) that characterizes an adult during the usage u and an incident reference power P_TX^ref (see Table I).
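To make the rate model concrete, the following minimal sketch evaluates B_{i,j}, k_j, and C_{i,j} for a given assignment. It is not the authors' simulator: the array names, the clipping of k_j at 1 when the backhaul is not limiting, and the toy dimensions are our assumptions.

```python
import numpy as np

def dl_rates(eta, x, B, c_bh):
    """Per-link DL rates C_{i,j} under proportional-to-eta bandwidth
    sharing with backhaul scaling, following (2) and the k_j formula.

    eta[i, j]: spectral efficiency of UE i on eNB j (bit/s/Hz)
    x[i, j]:   0-1 assignment matrix; B: total band (Hz)
    c_bh[j]:   backhaul capacity of eNB j (bit/s)
    """
    n_ue, n_enb = eta.shape
    rates = np.zeros((n_ue, n_enb))
    for j in range(n_enb):
        served = x[:, j] == 1
        load = eta[served, j].sum()
        if load == 0:
            continue
        # Backhaul scaling; k_j = 1 when the backhaul is not the bottleneck
        k_j = min(1.0, (c_bh[j] / B) * load / np.sum(eta[served, j] ** 2))
        band = k_j * B * eta[:, j] / load          # B_{i,j}
        rates[:, j] = x[:, j] * band * eta[:, j]   # C_{i,j} = B_{i,j} * eta_{i,j}
    return rates

# Toy usage: 4 UEs, 2 eNBs, 10 MHz band, 40 Mbit/s backhaul per eNB
rng = np.random.default_rng(0)
eta = rng.uniform(0.5, 6.0, (4, 2))
x = np.array([[1, 0], [1, 0], [0, 1], [0, 1]])
print(dl_rates(eta, x, B=10e6, c_bh=np.array([40e6, 40e6])).round(0))
```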
Finally, in 3GPP LTE, power control is used in the UL to mitigate interference and increase the device battery life. Accordingly, the power required by the UE i to communicate with the eNB j can be modelled as [8]

P_{i,j}^UL = min{ P_max, P_0 + 10 · log₁₀(N_RB^UL) + λ · L_{i,j} },    (5)

where P_max is the maximum transmission power (23 dBm) at the UE, P_0 is a UE-specific parameter (-78 dBm), N_RB^UL models the number of allotted resource blocks in the UL, λ is a path-loss compensation factor (0.8), and L_{i,j} is the path loss between the UE i and the eNB j.

III. PROBLEM STATEMENT

In this work, we investigate whether it is possible to reduce the EMF exposure due to UL transmissions while increasing the number of UEs that meet their DL data rate target. On the one hand, due to the vicinity of the UE to the body, the EMF exposure is mainly determined by the UL; on the other hand, in current mobile technologies, the load is strongly asymmetric and enhancing the DL capacity is the main goal of operators. For a given service matrix X, let us define the user satisfaction ratio S(X) as the function that measures the fraction of UEs for which the DL capacity requirement (C^min) is met:

S(X) = (1/|U|) · Σ_{i∈U} Σ_{j∈B} x_{i,j} · s_{i,j},    (6)

where s_{i,j} is a step function whose value is 1 if C_{i,j} ≥ C_i^min and 0 otherwise. Moreover,

EI(X) = Σ_{i∈U} Σ_{j∈B} x_{i,j} · EI_{i,j}    (7)

is the aggregate EMF due to the UL. Then, our optimization problem is given as follows:

min_{X ∈ X*} EI(X),    (8)

where X* = {X_k ∈ X | S(X_k) = max_{X∈X} S(X)}. (9)

Note that (9) ensures that X* contains all the service matrices that maximize (6).

Proposition: The above defined problem is NP-hard. The GAP is a combinatorial problem in which each of n tasks is optimally assigned to m machines, given the profit and the cost of each task as well as the resources available at each machine [9]. Accordingly, part of our assignment problem, i.e., finding the subset X*, can be mapped to a GAP, where:
• the UEs and the eNBs are mapped to the tasks and to the machines, respectively;
• the user satisfaction s_{i,j} and the data rate C_{i,j} are mapped to the profit and the cost of each task, respectively;
• the backhaul capacity C^BH is mapped to the resource constraint at each machine.
The GAP is known to be NP-hard, while deciding if a feasible solution exists is NP-complete; therefore, the overall described assignment problem is NP-hard.

IV. PROPOSED SOLUTION

In this section, we propose two centralized algorithms (named Max Sat. and EMF-Aware) to deal with the EMF-aware cell selection problem presented in Section III. These schemes require coordination among eNBs: a distributed approach is feasible but it increases the required overhead. A practical implementation is to find a solution at the MeNB by gathering information from the nearby SCeNBs. Note that the proposed process can be seen as a self-organizing network (SON) functionality that does not require fast adaptation to, e.g., mobility and fast fading [10]; in fact, SINR reporting can be exchanged on a large time scale (i.e., seconds), which limits the overhead and the latency requirements. The proposed algorithms start from a given solution of the cell selection problem (e.g., based on the RSRP), and iteratively evolve towards a more beneficial association. At each iteration, they evaluate every possible single change in the current association (first step) and then select the most beneficial change (second step). The algorithms stop after a limited number of iterations, when the achievable gain becomes lower than a small non-negative value ε.
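The UL power model in (5) is easy to evaluate numerically. A minimal sketch, assuming the letter's parameter values and an explicit path-loss input in dB (an assumption on our part, since the letter folds path loss into the channel gain Γ_{i,j}):

```python
import numpy as np

# Parameters quoted in the text around Eq. (5)
P_MAX_DBM, P0_DBM, LAMBDA = 23.0, -78.0, 0.8

def ul_tx_power_dbm(n_rb_ul, pl_db):
    """Open-loop UL transmit power (dBm) of a UE, capped at P_max,
    given the number of allotted resource blocks and the path loss."""
    return min(P_MAX_DBM, P0_DBM + 10 * np.log10(n_rb_ul) + LAMBDA * pl_db)

print(ul_tx_power_dbm(n_rb_ul=10, pl_db=110.0))  # 20 dBm, below the 23 dBm cap
```

Note how the fractional compensation factor of 0.8 makes cell-edge UEs (large path loss) hit the 23 dBm cap first, which is exactly why associating UEs to nearby eNBs lowers the exposure.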
Let X_n be the user assignment that maximizes P_j · Γ_{i,j} ∀ (i, j) ∈ U × B.

(1) First Step: Initialization
• Calculate EI(X_n) and S(X_n);
• For all (i, j) such that a_{i,j} = 1, compute (6) and (7) if we change X_n by associating (respectively, deassociating) the user i to (respectively, from) j; then, compute the gains Δ_S and Δ_EI with respect to the reference association, due to the possible reassignments.

(2) Second Step: One-user reassignment
• IF Max Sat., find the set X* that maximizes Δ_S;
• ELSE IF EMF-Aware, find the assignment set for which Δ_EI ≥ 0; then find its subset X* that maximizes Δ_S;
• IF max Δ_S ≤ ε, exit (the algorithm outputs the current user assignment);
• ELSE find X_k ∈ X* that maximizes Δ_EI and update the user assignment accordingly;
• Set X_n = X_k, then go to step (1).

Proposition: In the proposed solutions the number of satisfied users is improved at each new iteration. Hence, the algorithms converge when a further improvement is not possible by a new reassignment of one single user.

V. SIMULATION RESULTS

In this section, we assess the effectiveness of the proposed solutions, which attempt to limit the EI while considering the side constraint of maximizing the user satisfaction. We compare their performance with the approach where each UE is served by the eNB associated with the strongest RSRP and with the scheme where each UE is associated with the closest eNB (Min Path Loss). We also consider a CRE bias of 6 dB to increase the macro cell offloading. Our evaluation scenario is composed of a tri-sectorial macro cell (3 MeNBs) and 60 UEs. Moreover, three small cell hotspots, each one composed of 4 neighbouring SCeNBs (see Figure 1), are deployed in the macro cell. The eNBs are characterized by a C^BH of 40 Mbps. 80% of the UEs are indoor, 2/3 of them are located in the small cell hotspots, and the remaining UEs are uniformly distributed in the macro cell. Other relevant parameters follow 3GPP TR 36.872 [7]. The results are averaged over 10³ independent runs. At the beginning of each run, the clusters of SCeNBs and the UEs are randomly deployed in the macrocell area. Finally, we consider a stopping parameter ε = 10⁻⁶.

Figure 2 shows the user satisfaction ratio with respect to the DL requirements. Note that S(X) can be seen as the DL rate complementary cumulative distribution function. When implementing the classic RSRP scheme, most of the UEs are associated with the MeNBs, which limits the resources allotted to the UEs with poor SINR and achieves the worst performance. CRE enhances the user satisfaction by offloading UEs from overloaded MeNBs to lightly loaded SCeNBs. Implementing CRE is particularly beneficial in the region with high rate requirements, where it achieves 2.8X the user satisfaction of the RSRP scheme. By associating the UEs to the closest eNBs, the system does not suffer from the power unbalance between SCeNBs and MeNBs, which makes it possible to effectively share the network load and to achieve up to 6.5X the performance of the RSRP solution. It is worth recalling that, in a co-channel deployment, the Min Path Loss scheme might lead to limited performance due to the strong macro cell interference [4]. The proposed schemes further enhance the user satisfaction through an optimized load balancing that associates UEs characterized by poor SINR to eNBs with large resource availability, and UEs with high SINR to loaded eNBs. Accordingly, UEs at the cell edge may meet the data rate requirements at the cost of a lower throughput experienced by UEs located near the eNBs. As expected, the Max Sat. outperforms the EMF-Aware approach, since the latter avoids those associations leading to high EI.
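For concreteness, the one-user-at-a-time loop listed in Section IV above can be sketched as follows. This is an illustrative reading of the EMF-Aware variant, not the authors' code: the data structures, the callables S and EI, and the tie-breaking rule are our assumptions (the Max Sat. variant is obtained by dropping the exposure filter).

```python
def emf_aware_reassign(assoc, candidates, S, EI, eps=1e-6):
    """Greedy one-user reassignment loop in the spirit of Section IV.

    assoc       dict: UE -> serving eNB (start e.g. from the RSRP solution)
    candidates  dict: UE -> list of eNBs with a_{i,j} = 1
    S, EI       callables scoring a full association (satisfaction, exposure)
    """
    while True:
        s0, ei0 = S(assoc), EI(assoc)
        best_move, best_ds, best_dei = None, eps, 0.0
        for ue, enbs in candidates.items():
            for enb in enbs:
                if enb == assoc[ue]:
                    continue
                trial = dict(assoc)
                trial[ue] = enb
                ds, dei = S(trial) - s0, ei0 - EI(trial)  # dei > 0: less EMF
                # EMF-Aware: keep only moves that do not raise the EI; among
                # them prefer the largest Delta_S, tie-breaking on Delta_EI
                if dei >= 0 and (ds > best_ds or (ds == best_ds and dei > best_dei)):
                    best_move, best_ds, best_dei = (ue, enb), ds, dei
        if best_move is None:         # max gain fell below eps: converged
            return assoc
        assoc[best_move[0]] = best_move[1]
```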
The Max Sat. and the EMF-Aware yield up to 8.8X and 7.4X the user satisfaction of the reference scheme, respectively.

Figure 3 shows the network utilization efficiency (the ratio between the achieved HetNet capacity and the aggregate backhaul capacity) for the different approaches. When using the RSRP scheme, most of the UEs are served by the MeNB and a high number of small cells are idle, which leads to the worst performance. CRE increases the usage of SCeNB resources through macrocell offloading; however, Max Sat., EMF-Aware, and Min Path Loss achieve up to 2.6X the resource utilization of the RSRP solution. Although these three algorithms are characterized by similar resource utilization, the proposed solutions, which fairly distribute resources to increase the user satisfaction, show slightly lower performance than the Min Path Loss, which blindly allocates resources to the UEs with larger SINR.

Figure 4 shows the daily EI due to UL with respect to the DL rate requirement. In these simulations, N_RB^UL in (5) is computed such that each UE achieves 1 Mbit/s in the UL. As expected, associating UEs to nearby eNBs limits the required UL power per resource block; accordingly, the CRE and Min Path Loss schemes are beneficial in terms of exposure with respect to the reference solution based on the RSRP. However, better performance can be achieved through the EMF-Aware solution, where the load balancing reduces the required N_RB^UL (e.g., by offloading to the MeNB the small cell UEs characterized by high uplink interference). On the other hand, by inspecting Figures 2 and 3, we can state that, to further increase S(X), it is necessary to implement cell selection patterns that increase the EI. In fact, the Max Sat. results in the highest EI in the range of low-medium rate requirements, where it is possible to satisfy most of the UEs. On the contrary, in the high rate requirements region, only few UEs can meet the rate target and the average EI can be strongly reduced.

Finally, to assess the complexity of the proposed schemes, we evaluate the average number of iterations and the associated running time. To obtain these results, we have used an Intel i5 3.2 GHz, equipped with 4 GB of RAM. Table II shows the results for the Max Sat. scheme, which is more time consuming than the EMF-Aware; the results show that it has a linear dependency on the number of UEs. Moreover, they confirm that the algorithm converges in a time range satisfying the requirements of 3GPP load balancing functions [10].

VI. CONCLUSION

Most of the current cell selection schemes for HetNets are based only on the radio link quality, which limits the macro cell offloading, leads to congestion at cells with capacity-limited backhaul, and increases the required UL power. To solve this problem, we have proposed two schemes that enhance the user satisfaction in terms of both the DL data rate and the EMF exposure; furthermore, they increase at the same time the utilization efficiency of the network resources. Our analysis underlines that, to improve the HetNet performance through load balancing, it is necessary to simultaneously take into account the users' requirements, the cell load and signal strength, the interference level, and the backhaul capacity. Moreover, our results show that satisfying the data rate requirements of UEs with poor SINR comes at the cost of increasing the average EMF exposure.
To appropriately deal with this trade-off, in our future work we will integrate dual connectivity schemes, where DL and UL traffic is served by distinct eNBs, and we will develop problem formulations to identify solutions with guaranteed performance.
4,202.8
2015-02-01T00:00:00.000
[ "Business", "Computer Science" ]
ESCAPE DYNAMICS FOR INTERVAL MAPS

We study the structure of the escape orbits for a certain class of interval maps. This structure is encoded in the escape transition matrix Â_f of an interval map f, extending the traditional matrix A_f which considers the transitions among the Markov subintervals. We show that the escape transition matrix is a topological conjugacy invariant. We then characterize the 0-1 matrices that can be fabricated as escape transition matrices of Markov interval maps f with escape sets. This shows the richness of this class of interval maps.

1. Introduction.

In this paper we further investigate the class of open dynamical systems, see [1,3,4,13], that arise from Markov interval maps with non-trivial escape set, see [10,11]. For a Markov interval map f : I → I with non-trivial escape set, we introduce a 0-1 matrix Â_f that encapsulates not only the transitions among the Markov intervals of f (which is the usual transition matrix) but also the transitions from the Markov intervals to the escape intervals. This leads us naturally to study the influence of Â_f on the dynamics of (I, f); in particular, to study and classify different interval maps, with non-trivial escape sets, whose restrictions to the respective maximal invariant Cantor sets are topologically conjugate. The traditional transition matrix A_f that encodes the transitions of the Markov subintervals in the partition of the domain of f was called the Stefan matrix in [8] and later used as a (Markov) transition matrix, see [12,10,11].

Besides the intrinsic interest in open dynamical systems, the other motivation to further the study of this escape matrix Â_f arises from the undesirable non-faithfulness of the representations ν_x of the Toeplitz algebra T_{A_f} on the orbits of points x in the underlying open dynamical systems (provided by interval maps with trivial escape sets) that we constructed in [6]. Two interval maps f and g can have the same Markov transition matrix A_f = A_g but different escape matrices, such that the representation ν_x of T_{A_f} is faithful for the open dynamical system (I, f) but non-faithful for (I, g). In this paper we characterize the 0-1 matrices X that can be realised by interval maps f (a realisation which is non-unique), thus Â_f = X, and then use this in [7] as one of the key ingredients to successfully find a graph algebra for which ν_x is indeed a faithful representation. The construction of these representations shows one clear advantage of considering Â_f instead of A_f. We leave this representation theory viewpoint as it is and concentrate on the framework of the problems we tackle in this paper.

More precisely, in [6] we considered the interval maps f where the orbit of a point x can be in the escape set of f (see [9]), namely, f^k(x) ∈ I does not belong to the domain of f for a certain k ∈ N. Besides this, we defined a 0-1 transition matrix Â_f that captures not only the transitions among the n Markov subintervals (the traditional transition matrix A_f) but also the transitions from the Markov subintervals to the m escape subintervals (giving rise to B_f). If we gather the Markov subintervals first and then the escape subintervals, then there exists a permutation matrix P such that

P Â_f P^T = ( A_f  B_f ; 0_{m×n}  0_{m×m} ),    (1)

where the lower blocks are matrices with all entries equal to zero. With this transition matrix Â_f defined, we tackle two natural problems.
The first one concerns topological conjugacy, where we indeed prove that two Markov interval maps f, g are topologically conjugate whenever Â_f = P Â_g P^T for a certain permutation matrix P, which is in principle unrelated to the permutation that appears in Eq. (1). The second problem we tackle is to characterize the 0-1 matrices in the block shape ( A  B ; 0_{m×n}  0_{m×m} ) that can actually be realised as an escape transition matrix Â_f for some Markov interval map f (up to permutation, as explained in Eq. (1)). If such a realisation exists, then it is far from unique. The rows of Â_f which are entirely null detect the positions of the escape sets of f. This is crucial for the study of the dynamics. The related problem, with no escape sets, was studied in [5], and the answer was that in every row the entries equal to one must be filled in consecutive positions; we call such a matrix an interval type matrix, see Definition 2.5. Our analysis also shows that such realisations f are restrictions of Markov interval maps g without escape sets.

The plan for the rest of the paper is as follows. In Sect. 2 we first review the dynamical systems background for the interval maps with escape sets, emphasizing the notion of escape transition matrix as in Definition 2.3. Then in Proposition 2.8 we prove that the escape transition matrix is a topological conjugacy invariant. With the help of the notion of m-configuration, in Sect. 3 we characterize the block matrices ( A  B ; 0_{m×n}  0_{m×m} ) that can be realised as escape transition matrices of interval maps (up to row and column permutations), see the main Theorem 3.7 as well as Propositions 3.4 and 3.5. We also conclude that any such interval map f (with escape points) is a restriction of a certain interval map without escape points (see Corollary 3.10).

2. Dynamics of interval maps with escape sets.

Given n ∈ N, let Γ be an ordered set {c_0, c_1^-, c_1^+, ..., c_{n-1}^-, c_{n-1}^+, c_n} of (at most) 2n real numbers such that

c_0 < c_1^- ≤ c_1^+ < c_2^- ≤ c_2^+ < ... < c_{n-1}^- ≤ c_{n-1}^+ < c_n.    (2)

Given Γ as above, we define the collection of closed intervals C = {I_1, ..., I_n}, with

I_j = [c_{j-1}^+, c_j^-]  (with the conventions c_0^+ = c_0 and c_n^- = c_n).    (3)

We also consider the collection of open intervals {E_1, ..., E_{n-1}}, with

E_j = (c_j^-, c_j^+),    (4)

in such a way that I := [c_0, c_n] = ∪_{j=1}^n I_j ∪ ∪_{j=1}^{n-1} E_j. We now consider the interval maps for which we can construct partitions of the interval I as in (2), (3) and (4). The minimal partition C satisfying Definition 2.1 is denoted by C_f. We remark that the Markov property (P2) allows us to encode the transitions between the intervals in the so-called (Markov) transition n × n matrix A_f = (a_{ij}), defined as follows: a_{ij} = 1 if I̊_j ⊆ f(I̊_i) and a_{ij} = 0 otherwise, where J̊ denotes the interior of a set J. A map f ∈ M(I), together with the minimal partition C_f = {I_1, ..., I_n}, uniquely determines the set

Ω_f := {x ∈ I : f^k(x) ∈ dom(f) for every k ∈ N}.

Note that Ω_f is colloquially called the survivor set of f: it is the set of points that remain in dom(f) under iteration of f, and is usually called a cookie-cutter set, see [9]. The open set E_f, consisting of the points that eventually leave dom(f) under iteration of f, is usually called the escape set. Every point in E_f will eventually fall, under iteration of f, into the interior of some interval E_j (where f is not defined) and the iteration process ends. We may say that x is in the escape set E_f of f if and only if f^k(x) ∈ E_j for some k and some j. If c_j^- = c_j^+ for some j, then E_j = ∅ and c_j is a singular point, either a critical point or a discontinuity point of f.

We will consider the equivalence relation R_f defined by the grand orbits: x R_f y if and only if f^k(x) = f^l(y) for some k, l ≥ 0. The relation R_f is a countable equivalence relation in the sense that the equivalence classes R_f(x) are countable. If x ∈ Ω_f then R_f(x) has a graph structure without a preferable vertex.
If x ∈ E_f then R_f(x) has a natural structure of a rooted tree. The root of R_f(x) is the point e(x) with no outgoing edge, so f^{-1}(e(x)) ⊆ dom(f) but e(x) ∉ dom(f). For every y ∈ E_f there is a least natural number τ(y) such that f^{τ(y)}(y) ∉ dom(f), which means that f^{τ(y)}(y) ∈ E_j for some j such that E_j ≠ ∅. The final escape point, for the orbit of y, is then denoted by e(y) := f^{τ(y)}(y), and the final escape interval index is denoted by ι(y); that is, if f^{τ(y)}(y) ∈ E_j then ι(y) = j.

In order to describe symbolically the escape orbits, we extend the symbol space by adding a symbol for each escape interval E_j, which will represent an end for the symbolic sequence. For each escape interval E_j we associate a symbol j̄, to distinguish it from the symbol j associated with the partition interval I_j. That is, we consider the symbols ordered by interleaving the escape symbols with the interval symbols, following the order of the real line (8). If E_j is not an interval, that is E_j = ∅, then there is no symbol j̄. Moreover, we define the escape symbol set Σ_{E_f} := { j̄ : E_j ≠ ∅ }. The address map ad (see Definition 2.2) is extended to the escape set E_f with ad(x) := j̄ ∈ Σ_{E_f} if x ∈ E_j. Therefore, the address map is defined for all points of the interval I except for the points of the boundary of the subintervals I_1, ..., I_n, see (2) and (3). The itinerary of a point x ∈ E_f is always a finite word terminating in a symbol j̄ ∈ Σ_{E_f}. An admissible escape word is a word occurring as the itinerary of an escape point x ∈ E_f. These words are of the form ξ_1 ξ_2 ... ξ_k j̄, such that a_{ξ_i ξ_{i+1}} = 1 for i = 1, 2, ..., k-1, terminating on an escape symbol j̄.

2.1. The escape transition matrix and topological conjugacy.

Let f ∈ M(I). As in the last section, we thus have an index set {1, ..., n} ∪ Σ_{E_f}, which is ordered as in (8) and (9). In order to deal with the possible transitions from Markov transition intervals to escape intervals, we define the escape transition matrix Â_f as follows (Definition 2.3): the entries indexed by pairs of Markov symbols coincide with those of A_f; the entry indexed by (i, j̄) equals 1 if and only if E_j ⊆ f(I̊_i); and the rows indexed by escape symbols are entirely null. For row and column labeling, the matrix Â_f respects the order given in (8).

Example 2.4. Let f be the map depicted in Figure 1, with Markov transition matrix A_f and escape transition matrix Â_f written in the order of (8). Note that if we use the row and column labeling order in which the Markov subintervals come first, we obtain P Â_f P^T = ( A_f  B_f ; 0  0 ), where A_f is the Markov transition matrix of f and B_f is the transition matrix from the Markov subintervals to the escape subintervals. We write 0_{p×q} for the p × q matrix with zeros everywhere, whereas 1_{p×q} is the p × q matrix with ones everywhere.

In order to understand the relation between the escape matrices Â_f and Â_g of two interval maps f, g, we put forward the following definition (Definition 2.5) in the context of generic 0-1 matrices.
1. The matrix X is said to be of interval type if in every row the entries equal to 1 are all consecutive (cf. Def. 2 in [5]).
2. If π is a permutation of the rows of an interval type matrix X and P_π is its permutation matrix, then we say that π preserves interval type if the matrix P_π X P_π^T is an interval type matrix.

We can rewrite the above definition as follows. Let X = (x_{ij}). The follower set of i is

F_X(i) := { j : x_{ij} = 1 }.    (13)

If no confusion arises, we denote F_X(i) by F(i). Then the matrix X is of interval type if and only if the follower set F_X(i) is a set of consecutive numbers (for every i).
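The follower-set characterization of interval type matrices, together with the permutation criterion of Lemma 2.6 stated next, is straightforward to check computationally. The sketch below is our own illustration; the permutation convention is chosen so that the follower-set identity F_{P_πXP_π^T}(s) = π(F_X(π^{-1}(s))) used in the proof holds.

```python
import numpy as np

def follower_set(X, i):
    """F_X(i) = { j : x_ij = 1 }, cf. Eq. (13)."""
    return np.flatnonzero(X[i])

def is_interval_type(X):
    """True iff in every row the entries equal to 1 are consecutive."""
    for i in range(X.shape[0]):
        F = follower_set(X, i)
        if F.size and F[-1] - F[0] + 1 != F.size:
            return False
    return True

def conjugate(X, pi):
    """Relabel rows and columns by pi; the follower sets then satisfy
    F(conjugate(X, pi))(s) = pi(F_X(pi^{-1}(s)))."""
    n = X.shape[0]
    Y = np.zeros_like(X)
    for i in range(n):
        for j in range(n):
            Y[pi[i], pi[j]] = X[i, j]
    return Y

X = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 1, 1]])
print(is_interval_type(X))                        # True
print(is_interval_type(conjugate(X, [1, 0, 2])))  # False: this pi breaks it
```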
Lemma 2.6. Let X be an interval type matrix. A permutation π preserves the interval type of X if and only if π(F_X(i)) is a set of consecutive numbers for every i ∈ α_X.

Proof. Given i ∈ α_X, the set F_X(i) corresponds to the positions of the entries equal to 1 in the row i of X. The set π(F_X(i)) corresponds to the row π(i) ∈ α_{P_π X P_π^T} of the matrix P_π X P_π^T. The matrix P_π X P_π^T is of interval type if and only if, for every row s ∈ α_{P_π X P_π^T} of P_π X P_π^T, the follower set F_{P_π X P_π^T}(s) is a set of consecutive numbers. Since F_{P_π X P_π^T}(s) = π(F_X(π^{-1}(s))), setting i = π^{-1}(s) concludes the proof.

Now the following result is straightforward.

Corollary 2.7. Let X be an interval type matrix. If π leaves the follower sets of X invariant then it preserves interval type.

We now derive the relation between the escape transition matrices of topologically conjugate interval maps, by considering an interval type matrix X as an escape matrix. Let f• and g• denote the restrictions of each map to the finite union of the interiors of the intervals in the domain of f and g, respectively, see (P1) in Definition 2.1. Let f, g ∈ M(I). Let D_1, ..., D_{n_f} and G_1, ..., G_{n_g} be the intervals in the partition of the domains of f and g, respectively, as in Definition 2.1. We recall that f• and g• are topologically conjugate if there exists a continuous bijective map φ, with continuous inverse, such that g• = φ^{-1} ∘ f• ∘ φ. Note that escape intervals for f are escape intervals for f•, and there is a transition between I_i and I_j through f if and only if there is a transition between the corresponding intervals through f•. Note that Â_f and Â_g are necessarily of interval type.

Proposition 2.8. Let f ∈ M(I_f) and g ∈ M(I_g), for certain intervals I_f and I_g. The maps f•, g• are topologically conjugate if and only if there is a permutation π such that Â_f = P_π Â_g P_π^T.

Proof. Let D_1, D_2, ..., D_r, ..., D_{n_f+m_f} be the intervals partitioning I_f, including escape intervals, in the order of the real line, and let G_1, G_2, ..., G_r, ..., G_{n_g+m_g} be the intervals partitioning I_g. We assume that n_f of them are Markov subintervals (say D_{m_1}, ..., D_{m_{n_f}}) and m_f are the escape intervals for f, and similarly for g. Suppose that f• and g• are topologically conjugate. Then there is a bijective and bicontinuous map φ sending the intervals of one partition onto the intervals of the other; consequently n_f = n_g = n, and there is a permutation π ∈ S_{n+m} so that φ(D_i) = G_{π(i)}. This means that a(f)_{ij} = 1 implies a(g)_{π(i)π(j)} = 1. The same argument applied to φ^{-1} shows the converse implication. Finally, let us see that the permutation π preserves interval type. The image of any D̊_i through f must be a union of intervals whose labels consist of consecutive natural numbers; that is, there are two natural numbers r_i and t_i so that f(D̊_i) is covered by the intervals labelled r_i, ..., r_i + t_i - 1. The number r_i is the position of the block of 1's in the row i and t_i is the length of the block. On the other hand, the fact that these labels are consecutive means that f(D̊_i) is also an interval, and so g(φ(D̊_i)) = φ(f(D̊_i)) is an interval. Consequently, the permutation preserves interval type.

Conversely, let us assume that we are given Markov maps f and g such that their escape matrices Â_f, Â_g are conjugated by a permutation matrix P_π, with a permutation π which preserves interval type and the escape state set. Since P_π preserves escape states, it also preserves regular states. The matrices have the same dimension, and this dimension is partitioned into m + n (m the number of escape states and n the number of regular states). Next, we show that the permutation matrix directly determines an invertible map φ preserving the topological structure. Each nonzero entry in the permutation matrix is in the entry (i, π(i)). The map φ is defined as the linear map sending the interior of each interval D_i to the interior of the interval G_{π(i)}. If D_i is an escape interval then the sign of φ on D_i is arbitrary. On the other hand, let t = π(j) and recall that a(g)_{π(i)t} = a(f)_{ij}.
Since the matrices are conjugated precisely by the permutation π, we obtain g(φ(x)) = φ(f(x)) for every x in the interior of each D_i, and therefore for every point in the interior of the domain of f. Therefore, by construction, φ is invertible and continuous on the interior of the domain of f. Moreover, if we start the above construction using Â_g and g, then we obtain the topological conjugacy in the other direction.

The first permutation considered satisfies the referred property (of preserving interval type): the permuted transition matrix is of interval type and can be associated with a certain interval map g_1 (see the graph of g_1 in Fig. 3), which is conjugate to f through a certain φ_1 with g•_1 = (φ_1)^{-1} ∘ f• ∘ φ_1. The graph of φ_1 can be determined by the permutation matrix P_{π_1}. A second permutation does not preserve interval type: it is associated with a certain interval map g_2 which is not conjugate to f through φ_2. A third permutation preserves the follower sets (which are invariant sets for π_3). Example 2.11 exhibits one further matrix whose conjugate is, naturally, of interval type.

Note that m(i) ≠ m(j) for i ≠ j, as a consequence of Definition 3.1. In view of (3) and (4), an m-configuration provides us a way to order the Markov and escape subintervals of an interval map f, as in (14). We note that, for every i = 1, ..., m, e(i) ± 1 ∈ {m(j)}_{j=1,...,n}, so that there exists j_i ∈ {1, ..., n} such that e(i) - 1 = m(j_i) (thus e(i) + 1 = m(j_i + 1)). From the viewpoint of symbolic dynamics, this is the natural order for the rows and columns of the escape transition matrix Â_f in Definition 2.3, see also (8). When m = n - 1, there is only one m-configuration of n + m, with e(i) = 2i for i = 1, ..., m, m(j) = 2j - 1 for j = 1, ..., n, and n + m = 2n - 1. If m = 0, there is no configuration (there is no e(i) and we set m(j) = j for all j = 1, ..., n).

Every m-configuration C_{m,n} of n + m gives rise to a choice of the relative locations of the m escape subintervals among the n Markov subintervals of I, as in (14), but it gives neither the points in (2) nor the interval map f. However, if such a map exists, then the escape symbols are precisely the e(i), see (9), and the interval map f is such that P Â_f P^T has the block form of Eq. (1), with P the permutation matrix determined by the following permutation:

π_{C_{n,m}} := ( 1  2  ···  n  n+1  n+2  ···  n+m  ;  m(1)  m(2)  ···  m(n)  e(1)  e(2)  ···  e(m) ).    (16)

Of course, given an interval map f, the ordering (8) naturally gives rise to an m-configuration of n + m, and therefore the above permutation matrix P can be denoted by P_f. Note that for every reducible non-negative matrix X there is a permutation matrix P so that P X P^T is block upper triangular with diagonal blocks X_{11}, ..., X_{nn}, where each X_{ii}, i = 1, ..., n, is either an irreducible block matrix or a 1 × 1 matrix equal to 0, see [2]. If X is itself irreducible we have X = X_{11} and in this case P = I. We then say that P X P^T is a normal form of X and we denote it by N_P(X) or simply by N(X). This form is not necessarily unique. For example, if the dimension of the irreducible block X_{11} is equal to n_1, greater than 1, we have n_1! different ways of representing X in irreducible blocks, considering the permutations which fix all the lines and columns except the lines and columns of the block X_{11}; these give equivalent representations. Different interval maps f and g may give rise to different permutation matrices P_f and P_g, but with the same normal form N_{P_f}(Â_f) = N_{P_g}(Â_g). In this case, A_f = A_g and B_f = B_g. Now we summarize in the following result what we discussed so far.

Proposition. Let f be a Markov interval map with n Markov subintervals and m escape subintervals, and let P be the permutation matrix associated with its m-configuration as in (16). Then N_P(Â_f) = ( A_f  B_f ; 0  0 ).

Proof.
Given such an interval map f, (8) gives rise to an m-configuration C_{n,m} of n + m. The row and column labeling of Â_f is that of (14), whereas that of ( A_f  B_f ; 0  0 ) is given by m(1), ..., m(n), e(1), ..., e(m). Now, if we consider the permutation matrix P associated to C_{n,m} as in (16), then we have P Â_f P^T = ( A_f  B_f ; 0  0 ). If we fix the labeling of the matrix Â_f, then the normal form of Â_f is unique (note that A_f is primitive because we assume that f is a Markov map, see Definition 2.1).

Given m, n ∈ N such that 0 ≤ m < n, we now study the conditions over pairs of matrices A and B with sizes n × n and n × m (respectively) such that there exists an interval map f for which

N(Â_f) = ( A  B ; 0_{m×n}  0_{m×m} ).    (17)

We note that if we fix an m-configuration C_{m,n}, and thus a permutation matrix P_{C_{m,n}} as in (16), then we may find a matrix Â such that N_{P_{C_{m,n}}}(Â) = ( A  B ; 0_{m×n}  0_{m×m} ), but this does not guarantee the existence of a solution for (17), as we will see in the sequel.

• The case m = 0 (so Σ_{E_f} = ∅) was answered positively in [5, Proposition 6] for any primitive interval type matrix A. We briefly explain the construction of an interval map f such that A_f = A and Σ_{E_f} = ∅. Let λ_A be the Perron eigenvalue of A and µ_A = (v_1, ..., v_n) the corresponding Perron eigenvector, normalised so that its entries add up to 1; the v_i then give the lengths of the Markov subintervals of a piecewise linear map with slope λ_A.

• We now recall [6, Example 3.7], which realizes the above 2-configuration of 5 (that was considered in [6, Example 3.6]); the underlying interval map is given there explicitly.

The case of realisability of 1-configurations of n + 1 by interval maps is of particular interest in the sequel as well. Given (u_1, ..., u_n) ∈ {0, 1}^n, we look for the existence of interval maps f such that N(Â_f) = ( A  u ; 0  0 ) for some A (which is the Markov transition matrix A_f of f), where u = (u_1, ..., u_n)^T. To keep it simpler, we work with e(1) = 2 (and so m(1) = 1, m(2) = 3, ..., m(n) = n + 1). Let now s be the least integer so that u_s = 1 and u_i = 0 for i < s. We build an (n+1) × (n+1) matrix K = (k_{ij}) from A and u; if k_{i2} = 0, with i = 1, 3, 4, ..., n + 1, then k_{ij} = 0 for every j with 1 ≤ j ≤ s, and k_{ij} = 1 for every j with s + 1 ≤ j ≤ n + 1. This implies that K² > 0, that is, every entry of K² is positive, and therefore K is a primitive matrix. Indeed, if the row i of K is a row of 1's, then clearly (K²)_{ij} ≥ 1 since every column of K is non-zero. If the row i of K is of the form 0...01...1 (with s the position of the last 0), then the entry (K²)_{ij} contains the term k_{i,s+1} · k_{s+1,j} = 1. Therefore (K²)_{ij} ≥ 1 for all i, j. We thus obtain an interval type matrix K such that K² > 0.

Let λ_K be the Perron-Frobenius eigenvalue of K and µ = (µ_1, ..., µ_{n+1}) the corresponding Perron-Frobenius eigenvector, normalised so that its entries add up to 1. [5, Proposition 6] shows that there is a piecewise linear map g ∈ PL(I) with slope λ_K (which is greater than 1 because K is primitive) such that A_g = K and J_1 = [0, c_1], ..., J_i = [c_{i-1}, c_i], ..., J_{n+1} = [c_n, 1] is the Markov partition of g (so g|_{J_i} is a piecewise linear map such that g(J_i) = ∪_{j : k_{ij}=1} J_j). Now, we define E_1 := J̊_2, I_1 := J_1, I_2 := J_3, ..., I_n := J_{n+1}, and f := g|_{∪_{i=1}^n I_i}. Then E_1 is the only escape subinterval of f, so that E_f = E_1, and f fulfills the requested property: there is a transition from I_1 to E_1 if u_1 = 1, and, for i > 1, there is a transition from I_i to E_1 if and only if u_{i+1} = 1. Note that if all the u_i's are one in the last proof, then we get the matrix K as a particular case (with m = 1) of Proposition 3.4. We illustrate the above constructive proof below. For any 0-1 matrix K = (k_{ij}), let K_i = {j : k_{ij} = 1} (using the associated calligraphic letter).
If we have a 0-1 matrix ( A  B ; 0_{m×n}  0_{m×m} ) of type (n+m) × (n+m) and an m-configuration C_{m,n} of m + n, then for every i = 1, ..., n we set

K_{m(i)} := { m(j) : j ∈ A_i } ∪ { e(j) : j ∈ B_i },

where A_i is the follower set as defined in Eq. (13) (and B_i is the analogous follower set of B). In that case, A and B must be interval type matrices. Since A is primitive and every column of B is non-zero, the matrix K is also primitive, because (using the definition of K)

(K^p)_{m(i)m(i')} ≥ (A^p)_{ii'},   (K^p)_{m(i)e(j)} ≥ (A^{p-1}B)_{ij},   (K^p)_{e(j)m(j')} ≥ 1,   (K^p)_{e(j)e(j')} ≥ 1,

for every p ∈ N, i, i' = 1, ..., n and j, j' = 1, ..., m. Note that if s is such that A^s > 0, then the above shows that K^{s+1} > 0; therefore K is a primitive matrix. Therefore we can apply [5, Proposition 6] and find a piecewise linear map g ∈ M(I) with slope the Perron-Frobenius eigenvalue λ_K of K, with escape set Σ_{E_g} = ∅, and with Markov partition C_g = {J_1, ..., J_{m+n}} such that A_g = K. Now we define E_{e(i)-i} := J̊_{e(i)} and I_j := J_{m(j)}, for i = 1, ..., m and j = 1, ..., n, and f := g|_{∪_{j=1}^n I_j}. Then it is clear that f does what we want, namely A_f = A and N(Â_f) = ( A  B ; 0_{m×n}  0_{m×m} ), as required.

If A and B are interval type matrices, then ( A  B ; 0_{m×n}  0_{m×m} ) might not be realisable by an escape transition matrix. This depends on the chosen configuration, as the following example shows.

Corollary 3.10. Let Â_f be an escape transition matrix of an interval map f. Then f is the restriction of an interval map g with escape set E_g = ∅.

Proof. We consider the matrix K as in the proof of Theorem 3.7, and thus [5] guarantees the existence of an interval map g ∈ M(I) such that A_g = K; f is indeed the restriction of g to the union of all the subintervals labelled by m(1), ..., m(n).
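The enlargement step of Theorem 3.7 and the primitivity test it relies on can be illustrated numerically. The sketch below is our own, under stated assumptions: in particular, filling the escape-indexed rows of K with ones is only one choice compatible with the estimates (K^p)_{e(j)m(j')} ≥ 1 and (K^p)_{e(j)e(j')} ≥ 1 quoted above, not necessarily the construction of the paper.

```python
import numpy as np

def build_K(A, B, m_pos, e_pos):
    """Assemble an enlarged (n+m) x (n+m) matrix K from A (n x n) and
    B (n x m), placing Markov symbols at positions m_pos and escape
    symbols at positions e_pos of an m-configuration.
    Escape rows are filled with ones (an assumption, see lead-in)."""
    n, m = A.shape[0], B.shape[1]
    K = np.zeros((n + m, n + m), dtype=int)
    for i in range(n):
        for j in range(n):
            K[m_pos[i], m_pos[j]] = A[i, j]
        for j in range(m):
            K[m_pos[i], e_pos[j]] = B[i, j]
    for i in range(m):
        K[e_pos[i], :] = 1
    return K

def is_primitive(K):
    """Test primitivity: some boolean power K^p must be everywhere positive."""
    n = K.shape[0]
    P = np.eye(n, dtype=int)
    for _ in range(n * n):          # Wielandt's bound is below n^2
        P = np.minimum(P @ K, 1)    # boolean-style matrix power
        if P.min() > 0:
            return True
    return False

A = np.array([[1, 1], [1, 0]])
B = np.array([[1], [0]])
K = build_K(A, B, m_pos=[0, 2], e_pos=[1])   # 1-configuration of 3
print(K, is_primitive(K))
```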
6,734
2019-08-22T00:00:00.000
[ "Computer Science", "Mathematics" ]
Property income from-whom-to-whom matrices: A dataset based on financial assets–liabilities stocks of financial instruments for Spain

A common problem in compiling and updating Social Accounting Matrices (SAM) or Input-Output tables is that of incomplete information. In the case of the submatrix 'Property Income' of the Account of Allocation of Primary Income, the information published by the National Bureau of Statistics of Spain (INE) is limited: it is not possible to build the set of from-whom-to-whom sub-matrices on interest income, dividends, securities, and rents with only the subtotals presented in the Integrated Economic Accounts (IEA). This is because the income distribution received and paid by each institutional sector, required for a financial SAM, is not available; i.e., the INE does not break down the data by institutional destination and source. In this sense, our contribution relies on estimating a complete series of from-whom-to-whom matrices of Property Income for the Spanish economy between 1999 and 2016, in which we have devoted special attention to staying in line with the Data Gaps Initiative (DGI-2) recommendations released by the Financial Stability Board (FSB) and the International Monetary Fund (IMF), claiming that more focus is needed on data sets that support the monitoring of risks in the financial sector in response to emerging regulatory and macro-financial policy needs.

Subject area: Economics
More specific subject area: Financial macroeconomics, Flow of Funds, System of National Accounts
Type of data:

Value of the data
• The dataset provides accurate estimates of the allocation of the primary income account (property income) in a from-whom-to-whom matrix scheme. This representation allows the question 'who is financing whom' to be answered, which permits a more detailed and complex analysis of the financial flows between sectors and their role in the economy.
• The novel approach of providing the stocks of asset and liability matrices in a from-whom-to-whom framework, by financial instrument, turns out to be very useful for analyzing the real-financial interconnectedness of the Spanish economy.
• The dataset provides the necessary elements to estimate the breakdown of the total income return, resulting in an outstanding source of information for investment analysis and for impact analysis of public policies.
• The set of submatrices results in a consistent accounting framework useful for improving and extending Social Accounting Matrices, sectoral-financial linkage analysis, and macroeconomic forecasting, and for enriching the scope of real-financial computable general equilibrium (CGE) models.

Data

The real-side information concerning the Account of Allocation of Primary Income (property income) was obtained from the statistics of the Integrated Economic Accounts (IEA) provided by the National Bureau of Statistics of Spain (INE). The financial-side information was retrieved from the financial statistics of the Flow of Funds (FoF) provided by the Bank of Spain (BdE). Both the INE and BdE data sets correspond to the yearly series from 1999 to 2016, owing to the constraints on using more recent official data sets to build a Property Income matrix for Spain. Given that both the real statistics of the INE and the financial statistics of the BdE shape the entire System of National Accounts (SNA93) [1], the estimation procedure proposed in this data research maintains and respects the statistical data provided by both agencies.
In this sense, the statistical compilation procedures follow the UN Manual of SNA93 for the construction of the matrices of Property Income, while contemplating the recommendations made by Shrestha et al. [2] to expand the statistical information within an integrated framework for financial stock positions and flows on a from-whom-to-whom basis, and the compilation guides suggested by Tsujimura and Mizoshita [3] and Jellema et al. [4] to integrate the financial matrices accounting used as a baseline. The statistical information from the INE is available in integrated structured tables separated by years, while the database from the BdE is expressed in quarterly time series. Since the data from the BdE are quarterly series, figures expressed as flows and figures expressed as balances required different treatments: the former were added up to form flows for the year, while the figures of the last quarter were taken as the closing balances of the respective year.

Experimental design, materials and methods

In the wake of the 2008 financial and economic crisis, the Group of Twenty economies (G-20) asked the Financial Stability Board (FSB) and the International Monetary Fund (IMF) "to explore gaps and provide appropriate proposals for strengthening data collection before the next meeting of the G-20 Finance Ministers and Central Bank Governors." In its Spring Meeting in April 2009, the FSB-IMF came up with 20 recommendations [5], now known as the Data Gaps Initiative (DGI-1), to address information gaps revealed by the global financial crisis. Recently, the FSB-IMF concluded the Second Phase of the G-20 Data Gaps Initiative (DGI-2), in September 2017 [6]. The DGI-2 recommendations maintain the continuity of DGI-1 but claim that more focus is needed on data sets that support the monitoring of risks in the financial sector and the analysis of the interlinkages across economic and financial systems. This data article focuses on two of these recommendations, both of which state that G-20 member economies should extend their national accounts by compiling financial and nonfinancial stocks and flows by economic sector.

Integrated approach for property income and financial instruments on a from-whom-to-whom basis

The integrated system of sector accounts in a from-whom-to-whom (or debtor/creditor) framework corresponds to the matrix-form representation that allows the analysis of the financial connections among institutional sectors in a national economy and abroad. As has been pointed out by Shrestha et al. [2], the integrated from-whom-to-whom representation of statistical information allows answering questions like "Who is financing whom, in what amount, and with which type of financial instrument?". As regards property income, it also permits tracing who is paying/receiving income (e.g., interest) to/from whom. The from-whom-to-whom compilation approach also enhances the quality and consistency of data by providing more cross-checking and balancing opportunities. The System of National Accounts 2008 (SNA2008) [7] presents from-whom-to-whom matrices as three-dimensional tables in which the flows from one sector to another are shown for each type of financial instrument. In this regard, to estimate the Property Income matrix, we based our approach on the definition provided by the SNA2008 manual, which states: "7.107 Property income accrues when the owners of financial assets and natural resources put them at the disposal of other institutional units.
The income payable for the use of financial assets is called investment income, while that payable for the use of a natural resource is called rent. Property income is the sum of investment income and rent. 7.108 Investment income is the income receivable by the owner of a financial asset in return for providing funds to another institutional unit…". Hence, we can use the balances of the financial account, relating financial assets and liabilities across the institutional sectors, as estimators of the shares of income received and paid by each institutional sector. Intuition suggests that the property income received and/or paid by each institutional sector should be directly proportional to its levels of assets and/or liabilities; thus, we used these balances as estimators of the shares of income received and paid by each of them.

Property income from-whom-to-whom matrix

The property income from-whom-to-whom matrix shows how income is received by the owner of a financial asset in return for providing funds to another institutional sector. The income payable for the use of financial assets is called investment income, while that payable for the use of a natural resource is called rent [7]. Formally, property income, as the total sum of investment income and rent, can be expressed as in expression (1), where PI_{m×m×p} corresponds to the Property Income matrix in double- and quadruple-entry matrix form, the subscript m denotes institutional sectors in the economy, and the subscript p denotes the income type of transactions. In this data article, we consider m equal to 5 institutional sectors (see Table 1 for more details). From expression (1) we can get from each row the total vector v_jp of total property income paid by each of the m institutional sectors (expression (2)). Similarly, from each column we get the total vector u_jp of total property income received by each of the m institutional sectors (expression (3)). In this sense, the integrated framework on a from-whom-to-whom scheme allows answering questions like "Who is paying/receiving income (e.g., interest) to/from whom, in what amount, and with which type of transaction?". Also, as pointed out by Shrestha et al. [2], this matrix representation approach enhances the quality and consistency of data by providing more cross-checking and balancing requirements, given that the following condition should hold: the total paid must be equal to the total received by the economy.

Table 1. Property income and type of payments. Source: System of National Accounts 2008 and own elaboration.
- D41, Interest: interest payable on loans and deposits; interest payable on debt securities. Income receivable by the owners of certain kinds of financial assets put at the disposal of another institutional unit.
- D42, Dividends: distributed income of corporations; withdrawals from incomes of quasi-corporations; investment fund shareholders. Investment income to which shareholders become entitled as a result of placing funds at the disposal of corporations.
- D43: reinvested earnings of foreign direct investment.
- D44, Securities: income payable on pension entitlements and to insurance policyholders. Other investment income and rents.

Financial Instruments from-whom-to-whom matrices

Financial instruments include the full range of financial contracts made between institutional sectors.
These contracts are the basis of creditor/debtor relationships through which asset owners acquire unconditional claims on the economic resources of other institutional sectors [7]. In this sense, the financial instrument matrix, defined as A_{m×m×q}, denotes a from-whom-to-whom representation of the net worth of the economy in terms of stocks (expression (5)), where A_{m×m×q} corresponds to the Assets-Liabilities matrix of stocks of financial instruments in double- and quadruple-entry matrix form. The financial instrument from-whom-to-whom matrices in expression (5) comprise the financial acquisitions in both claims (described as assets) and obligations (described as liabilities) by institutional sector. As before, the subscript m denotes institutional sectors in the economy, and the subscript q denotes financial instruments. The availability of information provided by the Bank of Spain allows us to consider seven (q = 7) financial instruments: AF.1 Monetary gold and Special Drawing Rights, AF.2 Currency and deposits, AF.3 Debt securities, AF.4 Loans, AF.5 Equity and investment fund shares, AF.6 Insurance, pension, and standardized guarantee schemes, and AF.7/8 Other assets (see Table 2 for more details).

GRAS estimation approach

Like Leung and Secrieru [8] and Aray, Pedauga and Velázquez [9], we estimated the Property Income matrix (PI_{m×m×p}), broken down by institutional sector and type of transaction and defined in expression (1), by using the information embedded in the assets and liabilities of each institutional sector compiled in the financial instruments matrix (A_{m×m×q}) represented in expression (5). In this sense, let A and PI be, respectively, the observed (prior) and the estimated (target) matrix, with their typical elements a_ijq (each financial instrument) and x_ijp (each property income component). Under the GRAS algorithm [10], the prior matrix A is used as a baseline to estimate the target matrix PI, satisfying simultaneously the row sums v_jp defined in expression (2) and the column sums u_jp expressed in expression (3). Thus, the programming model following the information-loss problem is such that the objective function (6) is minimized subject to constraints (7) and (8), the latter reading Σ_i x_ijp = v_jp for all j. Constraints (7) and (8) imply that the adjusted matrix PI should be consistent with the exogenously specified row and column totals. Moreover, constraint (9) introduces parameters α_i and β_j which set all cells to 0 in a row i or a column j of matrix PI when the corresponding cell in u_ip or v_jp is 0. This new constraint is set to remove financial assets/liabilities in matrix A which do not produce payments in matrix PI. In this sense, we are able to derive the breakdown of the Property Income by types of financial transactions for the Spanish economy, in which we have devoted special attention to staying in line with DGI-2 recommendation II.8, referring to the compilation of sectorial account flows and balance sheet data based on from-whom-to-whom matrices expressed in transactions and stocks to support balance sheet analysis, and recommendation II.9, which encourages the development and dissemination of distributional information on income allocation.

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Transparency document. Supporting information

Transparency data associated with this article can be found in the online version at https://doi.org/10.1016/j.dib.2018.05.018.

Table 2. Financial instruments and type of assets. Source: System of National Accounts 2008 and own elaboration.
- AF.1, Monetary gold and Special Drawing Rights: gold and reserves; Special Drawing Rights. Titles held as reserve assets, comprising gold bullion and supplementary reserves created only by the IMF.
- AF.2, Currency and deposits: currency, notes and coins issued or authorized by the central bank or government; saving deposits, fixed-term deposits and non-negotiable certificates of deposit. Amount of money in national or foreign currency that economic agents hold as assets and liabilities.
- AF.3, Debt securities: negotiable instruments that can serve as evidence of a debt.
- AF.4, Loans: short-term and long-term loans. Financial assets created when a creditor lends funds directly to a debtor, evidenced by a document that is not negotiable.
- AF.5, Equity and investment fund shares: listed and investment fund shares; unlisted and other equity shares. Assets with the particular feature that their holders obtain a residual claim on the institutional unit that issued the instrument.
- AF.6, Insurance, pensions and standardized guarantees: life and non-life insurance. All function as a form of redistribution of income or wealth mediated by financial institutions.
- AF.7/8, Other assets: trade credits, other accounts receivable and advances. All other kinds of financial assets linked to a specific financial instrument or destined for goods and services.
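As an illustration of the two data-handling steps described above (the quarterly-to-annual treatment of the BdE series, and the balancing of the prior matrix against the row and column totals), here is a minimal Python sketch. The series names, the toy numbers, and the plain RAS loop (a simplified stand-in for the full GRAS algorithm of [10], which additionally handles negative entries) are our assumptions for illustration only, not the authors' code.

```python
import pandas as pd

# Quarterly-to-annual treatment of a BdE-style series, assuming a quarterly
# DataFrame with hypothetical columns "flows" and "balances".
bde = pd.DataFrame(
    {"flows": range(8), "balances": range(8)},
    index=pd.period_range("1999Q1", periods=8, freq="Q"),
)
annual_flows = bde["flows"].groupby(bde.index.year).sum()          # sum the four quarters
closing_balances = bde["balances"].groupby(bde.index.year).last()  # Q4 stock = year-end balance
```

```python
import numpy as np

def ras_balance(A, v, u, tol=1e-10, max_iter=50_000):
    """RAS-type biproportional balancing of one nonnegative instrument layer
    of the prior matrix A so that row sums match v (income paid, expression 2)
    and column sums match u (income received, expression 3). Zero cells in A
    stay zero, which mimics constraint (9). Assumes v.sum() == u.sum(), i.e.
    the adding-up condition stated in the text."""
    X = np.asarray(A, dtype=float).copy()
    for _ in range(max_iter):
        rs = X.sum(axis=1)
        X *= np.divide(v, rs, out=np.zeros_like(rs), where=rs > 0)[:, None]
        cs = X.sum(axis=0)
        X *= np.divide(u, cs, out=np.zeros_like(cs), where=cs > 0)[None, :]
        if abs(X.sum(axis=1) - v).max() < tol and abs(X.sum(axis=0) - u).max() < tol:
            break
    return X

A = np.array([[0, 2, 1, 0, 1],     # toy 5-sector prior (made-up stocks)
              [1, 0, 2, 1, 0],
              [2, 1, 0, 1, 1],
              [0, 1, 1, 0, 2],
              [1, 0, 1, 2, 0]], dtype=float)
v = np.array([4.0, 4.0, 5.0, 4.0, 4.0])   # property income paid by sector
u = np.array([4.0, 4.0, 5.0, 4.0, 4.0])   # property income received by sector
PI = ras_balance(A, v, u)
```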
3,271.8
2018-05-22T00:00:00.000
[ "Economics" ]
Where do winds come from? A new theory on how water vapor condensation influences atmospheric pressure and dynamics

Phase transitions of atmospheric water play a ubiquitous role in the Earth's climate system, but their direct impact on atmospheric dynamics has escaped wide attention. Here we examine and advance a theory as to how condensation influences atmospheric pressure through the mass removal of water from the gas phase, with a simultaneous account of the latent heat release. Building from fundamental physical principles we show that condensation is associated with a decline in air pressure in the lower atmosphere. This decline occurs up to a certain height, which ranges from 3 to 4 km for surface temperatures from 10 to 30 °C. We then estimate the horizontal pressure differences associated with water vapor condensation and find that these are comparable in magnitude with the pressure differences driving observed circulation patterns. The water vapor delivered to the atmosphere via evaporation represents a store of potential energy available to accelerate air and thus drive winds. Our estimates suggest that the global mean power at which this potential energy is released by condensation is around one per cent of the global solar power; this is similar to the known stationary dissipative power of the general atmospheric circulation. We conclude that condensation and evaporation merit attention as major, if previously overlooked, factors in driving atmospheric dynamics.

Introduction

Phase transitions of water are among the major physical processes that shape the Earth's climate. But such processes have not been well characterized. This shortfall is recognized both as a challenge and a prospect for advancing our understanding of atmospheric circulation (e.g., Lorenz, 1983; Schneider, 2006). In A History of Prevailing Ideas about the General Circulation of the Atmosphere, Lorenz (1983) wrote: "We may therefore pause and ask ourselves whether this step will be completed in the manner of the last three. Will the next decade see new observational data that will disprove our present ideas? It would be difficult to show that this cannot happen. Our current knowledge of the role of the various phases of water in the atmosphere is somewhat incomplete: eventually it must encompass both thermodynamic and radiational effects. We do not fully understand the interconnections between the tropics, which contain the bulk of water, and the remaining latitudes. . . . Perhaps near the end of the 20th century we shall suddenly discover that we are beginning the fifth step." Lorenz (1967, Eq. 86), as well as several other authors after him (Trenberth et al., 1987; Trenberth, 1991; Gu and Qian, 1991; Ooyama, 2001; Schubert et al., 2001; Wacker and Herbert, 2003; Wacker et al., 2006), recognized that local pressure is reduced by precipitation and increased by evaporation. Qiu et al. (1993) noted that "the mass depletion due to precipitation tends to reduce surface pressure, which may in turn enhance the low-level moisture convergence and give a positive feedback to precipitation". Van den Dool and Saha (1993) labeled the effect as a physically distinct "water vapor forcing". Lackmann and Yablonsky (2004) investigated the precipitation mass sink for the case of Hurricane Lili (2002) and made the important observation that "the amount of atmospheric mass removed via precipitation exceeded that needed to explain the model sea level pressure decrease".
Although the pressure changes associated with evaporation and condensation have received some attention, the investigations have been limited: the effects remain poorly characterized in both theory and observations. Previous investigations focused on temporal pressure changes, not spatial gradients. Even some very basic relationships remain subject to confusion. For example, there is doubt as to whether condensation leads to reduced or to increased atmospheric pressure (Pöschl, 2009, p. S12436). Opining that the status of the issue in the meteorological literature is unclear, Haynes (2009) suggested that to justify the claim of pressure reduction one would need to show that "the standard approaches (e.g., set out in textbooks such as "Thermodynamics of Atmospheres and Oceans" by Curry and Webster (1999)) imply a drop in pressure associated with condensation".

Here we aim to clarify and describe, building from basic and established physical principles, the pressure changes associated with condensation. We will argue that atmospheric water vapor represents a store of potential energy that becomes available to accelerate air as the vapor condenses. Evaporation, driven by the sun, continuously replenishes the store of this energy in the atmosphere.

The paper is structured as follows. In Section 2 we analyze the process of adiabatic condensation to show that it is always accompanied by a local decrease of air pressure. In Section 3 we evaluate the effects of water mass removal and lapse rate change upon condensation in a vertical air column in approximate hydrostatic equilibrium. In Section 4 we estimate the horizontal pressure gradients induced by water vapor condensation to show that these are sufficient to drive the major circulation patterns on Earth (Section 4.1). We examine why the key relationships have remained unknown until recently (Section 4.2). We evaluate the mean global power available from condensation to drive the general atmospheric circulation (Section 4.3). Finally, we discuss the interplay between evaporation and condensation and the essentially different implications of their physics for atmospheric dynamics (Section 4.4). In the concluding section we discuss the importance of condensation as compared to differential heating as the major driver of atmospheric circulation. Our theoretical investigations strongly suggest that the phase transitions of water vapor play a far greater role in driving atmospheric dynamics than is currently recognized.

Adiabatic condensation

We will first show that adiabatic condensation is always accompanied by a decrease of air pressure in the local volume where it occurs. The first law of thermodynamics for moist air saturated with water vapor reads as Eq. (1) (Gill, 1982; Curry and Webster, 1999). Here p_v is the partial pressure of saturated water vapor, p is air pressure, T is absolute temperature, Q (J mol⁻¹) is molar heat, V (m³ mol⁻¹) is molar volume, L ≈ 45 kJ mol⁻¹ is the molar heat of vaporization, c_V = (5/2)R is the molar heat capacity of air at constant volume (J mol⁻¹ K⁻¹), and R = 8.3 J mol⁻¹ K⁻¹ is the universal gas constant. In processes not involving phase transitions the third term in (1) is zero. In such processes the partial pressure p_v changes proportionally to air pressure p, so that the function γ (2) does not change. The small value of γ < 0.1 under terrestrial conditions allows us to neglect the influence of the heat capacity of liquid water in Eq. (1).
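For concreteness, Eq. (1) itself is lost in this extraction, but the terms described around it (two terms rewritable via the equation of state, plus a latent-heat term that vanishes when γ is constant) are consistent with the following form; this is our plausible reconstruction, not a verbatim quotation of the paper:

```latex
\begin{equation}
  dQ \;=\; c_V\,dT \;+\; p\,dV \;+\; L\,d\gamma ,
  \qquad
  \gamma \equiv \frac{p_v}{p}, \qquad c_V = \tfrac{5}{2}R .
\end{equation}
```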
The partial pressure of saturated water vapor obeys the Clausius-Clapeyron equation (3), where p_v0 and ξ_0 correspond to some reference temperature T_0. Below we use T_0 = 303 K and p_v0 = 42 hPa (Bolton, 1980) and neglect the dependence of L on temperature. We will also use the ideal gas law (5) as the equation of state for atmospheric air. Using Eq. (6), the first two terms in Eq. (1) can be written in the form (7). Writing dγ in (1) with use of (2) and (3) as (8), and using the definition of ξ (3), we arrive at the form (9) for the first law of thermodynamics (1). In adiabatic processes dQ = 0, and the expression in braces in (9) turns to zero, which implies Eq. (10). Note that µ, γ and ξ are all dimensionless; γ and ξ are variables and µ is a constant, with ϕ(0, 0) = µ. This is a general dependence of temperature on pressure in an adiabatic atmospheric process that involves phase transitions of water vapor (evaporation or condensation), i.e. a change of γ. At the same time γ itself is a function of temperature, as determined by Eq. (8); this gives Eq. (11). One can see from Eqs. (10) and (11) that the adiabatic phase transitions of water vapor are fully described by the relative change of either pressure, dp/p, or temperature, dT/T. For the temperature range relevant for Earth we have ξ ≡ L/RT ≈ 18, so that ξµ − 1 ≈ 4.3. Noting that µ, γ and ξ are all positive, from (10), (11) and (12) we obtain Eq. (13). Condensation of water vapor corresponds to a decrease of γ, dγ < 0. It follows unambiguously from Eqs. (11) and (13) that if dγ is negative, then dp is negative too. This proves that water vapor condensation in any adiabatic process is necessarily accompanied by reduced air pressure.

Adiabatic condensation cannot occur at constant volume

Our previous result refutes the proposition that adiabatic condensation can lead to a pressure rise due to the release of latent heat (Pöschl, 2009, p. S12436). Next, we show that while such a pressure rise was implied by calculations assuming adiabatic condensation at constant volume, such a process is in fact prohibited by the laws of thermodynamics and thus cannot occur. Using (6) and (10) we can express the relative change of molar volume, dV/V, in terms of dγ/γ (Eq. 14). Putting dV = 0 in (14) we obtain Eq. (15). The denominator in (15) is greater than zero, see Eq. (13). In the numerator we note from the definition of ϕ (10) that the expression in square brackets lacks real roots (16). In consequence, Eq. (15) has a single solution, dγ = 0. This proves that condensation cannot occur adiabatically at constant volume.

Non-adiabatic condensation

To conclude this section, we show that for any process where entropy increases, dS = dQ/T > 0, water vapor condensation (dγ < 0) is accompanied by a drop of air pressure (i.e., dp < 0). We write the first law of thermodynamics (9) and Eq. (11) as Eqs. (17). Excluding dT/T from Eqs. (17) we obtain Eq. (18). The term in round brackets in Eq. (18) is positive, see (13), and the multiplier at dS is also positive. Therefore, when condensation occurs, i.e., when dγ/γ < 0 and dS > 0, the left-hand side of Eq. (18) is negative. This means that dp/p < 0, i.e., air pressure decreases.

Condensation can be accompanied by a pressure increase only if dS < 0. This requires that work is performed on the gas, such as occurs if it is isothermally compressed. (We note, too, that if pure saturated water vapor is isothermally compressed, condensation occurs, but the Clausius-Clapeyron equation (3) shows that the vapor pressure remains unchanged, being purely a function of temperature.)
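A quick numerical check of the Clausius-Clapeyron relation (3), using only the constants quoted above (T_0 = 303 K, p_v0 = 42 hPa, L ≈ 45 kJ mol⁻¹, R = 8.3 J mol⁻¹ K⁻¹); a sketch that, like the text, treats L as constant:

```python
import math

L = 45_000.0   # J mol^-1, molar heat of vaporization
R = 8.3        # J mol^-1 K^-1, universal gas constant
T0 = 303.0     # K, reference temperature
pv0 = 42.0     # hPa, saturated vapor pressure at T0 (Bolton, 1980)

def p_v(T):
    """Saturated vapor pressure from the integrated Clausius-Clapeyron
    equation (3), treating L as temperature-independent."""
    return pv0 * math.exp((L / R) * (1.0 / T0 - 1.0 / T))

for T in (283.0, 293.0, 303.0):
    xi = L / (R * T)   # dimensionless xi = L/RT, ~18 near 300 K
    print(f"T = {T:.0f} K: p_v = {p_v(T):5.1f} hPa, xi = {xi:4.1f}")
```

The roughly twofold growth of p_v per 10 K is what later drives the exponential temperature dependence of δp_s discussed in Section 3.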
3 Adiabatic condensation in the gravitational field

3.1 Difference in the effects of mass removal and temperature change on gas pressure in hydrostatic equilibrium

We have shown that adiabatic condensation in any local volume is always accompanied by a drop of air pressure. We will now explore the consequences of condensation for the vertical air column. Most circulation patterns on Earth are much wider than they are high, the ratio of height to length being in the order of 10⁻² for hurricanes and down to 10⁻³ and below in larger regional circulations. As a consequence of mass balance, vertical velocity is smaller than horizontal velocities by a similar ratio. Accordingly, the local pressure imbalances and resulting atmospheric accelerations are much smaller in the vertical orientation than in the horizontal plane, the result being an atmosphere in approximate hydrostatic equilibrium (Gill, 1982). Air pressure then conforms to Eq. (19). Applying the ideal gas equation of state (5), we have from (19) Eq. (20), which solves as Eq. (21). Here M is the air molar mass (kg mol⁻¹), which, like temperature T(z), in the general case also depends on z. The value of p_s in (19), the air pressure at the surface, appears as the constant of integration after Eq. (19) is integrated over z. It is equal to the weight of air molecules in the atmospheric column. It is important to bear in mind that p_s does not depend on temperature, but only on the amount of gas molecules in the column. It follows from this observation that any reduction of gas content in the column reduces surface pressure.

The latent heat released when water condenses means that more energy has to be removed from a given volume of saturated air than from dry air for a similar decline in temperature. This is why the moist adiabatic lapse rate is smaller than the dry adiabatic lapse rate. Accordingly, given one and the same surface temperature T_s in a column with rising air, the temperature at some distance above the surface will be on average higher in a column of moist saturated air than in a dry one.

However, this does not mean that at a given height air pressure in the warmer column is greater than air pressure in the colder column (cf. Meesters et al., 2009; Makarieva and Gorshkov, 2009c), because air pressure p(z) (21) depends on two parameters: temperature T(z) and surface air pressure (i.e., the total amount of air in the column). If the total amount of air in the warmer column is smaller than in the colder column, air pressure in the surface layer will be lower in the warmer column despite its higher temperature. In the following we estimate the cumulative effect of gas content and lapse rate changes upon condensation.

Moist adiabatic temperature profile

Relative water vapor content (2) and temperature T depend on height z. From Eqs. (10), (11) and (20) we obtain Eq. (22), which represents the well-known formula for the moist adiabatic gradient as given in Glickman (2000) for small γ < 0.1. At γ = 0 we have ϕ(γ, ξ) = µ and Γ_d = M_d g/c_p = 9.8 K km⁻¹, which is the dry adiabatic lapse rate, independent of height z; M_d = 29 g mol⁻¹. For moist saturated air the change of temperature T and relative partial pressure γ of water vapor with height is determined by the system of differential equations (22), (23).
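The contrast between the two kinds of column can be made concrete with a small numerical sketch of Eq. (21): integrating dp/dz = −pMg/(RT(z)) for two constant lapse rates with equal surface pressure. The values below (a constant M fixed at the dry-air value, and 6.5 K km⁻¹ standing in for a moist-adiabat-like gradient) are illustrative assumptions, not the paper's computation:

```python
import numpy as np

g, R, M = 9.8, 8.3, 0.029      # SI units; M is the dry-air molar mass
p_s, T_s = 1.013e5, 300.0      # equal surface pressure and temperature

def pressure_profile(lapse, z_max=10e3, dz=10.0):
    """Integrate the hydrostatic equation (19)-(20) upward for a
    constant lapse rate (K m^-1) and return heights and pressures."""
    z = np.arange(0.0, z_max, dz)
    T = T_s - lapse * z
    dlnp = -(M * g) / (R * T) * dz          # d(ln p) over each step
    lnp = np.log(p_s) + np.concatenate(([0.0], np.cumsum(dlnp[:-1])))
    return z, np.exp(lnp)

z, p_cold = pressure_profile(9.8e-3)   # dry adiabatic lapse rate
_, p_warm = pressure_profile(6.5e-3)   # smaller, moist-like lapse rate
# With equal p_s, the warmer (smaller-lapse-rate) column has higher
# pressure aloft; it is the vapor-mass removal of Eq. (27) that can
# reverse this below z_c.
```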
Differentiating both parts of the Clausius-Clapeyron equation (3) over z, we have, see (22), Eq. (24). The value of h_v represents a fundamental scale height for the vertical distribution of saturated water vapor. At T_s = 300 K this height h_v is approximately 4.5 km. Differentiating both parts of Eq. (2) over z with use of (20) and (24), and noticing that h_v = h/(ξϕ), we have Eq. (25). This equation is equivalent to Eq. (23) when Eqs. (22) and (24) are taken into account. The height h_γ represents the vertical scale of the condensation process. The height scales h_v (24) and h_γ (25) depend on ϕ(γ, ξ) (22) and, consequently, on γ. At T_s = 300 K the height h_γ ≈ 9 km, in close proximity to the water vapor scale height described by Mapes (2001).

Pressure profiles in moist versus dry air columns

We start by considering two static, vertically isothermal atmospheric columns of unit area, A and B, with temperature T(z) = T_s independent of height. Column A contains moist air with water vapor saturated at the surface; column B contains dry air only. Surface temperatures and surface pressures in the two columns are equal. In static air Eq. (19) is exact and applies to each component of the gas mixture as well as to the mixture as a whole. At equal surface pressures, the total air mass and air weight are therefore the same in both columns. Water vapor in column A is saturated at the surface (i.e., at z = 0) but non-saturated above it (at z > 0). The saturated partial pressure of water vapor at the surface, p_v(T_s) (4), is determined by surface temperature and, as it is in hydrostatic equilibrium, equals the weight of water vapor in the static column.

We now introduce a non-zero lapse rate to both columns: the moist adiabatic Γ (22) to column A and the dry adiabatic Γ_d to column B. (Now the columns cannot be static: the adiabatic lapse rates are maintained by the adiabatically ascending air.) Due to the decrease of temperature with height, some water vapor in column A undergoes condensation. Water vapor becomes saturated everywhere in the column (i.e., at z ≥ 0), with pressure p_v(z) following Eq. (24) and density following Eq. (26). Here h_n(z) is the scale height of the hydrostatic distribution of water vapor in the isothermal atmosphere with T_s = T(z).

The change in pressure δp_s in column A due to water vapor condensation is equal to the difference between the initial weight of water vapor, p_v(T_s), and the weight of saturated water vapor, Eq. (27). The inequality in Eq. (27) represents a conservative estimate of δp_s due to the approximation h_v(z) = h_v(T_s) made while integrating ρ_v(z) (26). As h_v(z) declines with height more rapidly than h_n(z), Fig. 1a, the exact magnitude of this integral is smaller, while the value of δp_s is larger. The physical meaning of estimate (27) consists in the fact that the drop of temperature with height compresses the water vapor distribution h_ns/h_vs-fold compared to the hydrostatic distribution (Makarieva and Gorshkov, 2007, 2009a).

The value of δp_s (27) was calculated as the difference between the weight per unit surface area of vapor in the isothermal hydrostatic column and the weight of water vapor that condensed when a moist adiabatic lapse rate was applied. This derivation can also be understood in terms of the variable conventionally called the adiabatic liquid water content (e.g., Curry and Webster, 1999, Eq.
6.41). We can represent the total mixing ratio of moisture (by mass) as q_t ≡ q_v + q_l = ρ_v/ρ + ρ_l/ρ, where ρ_v is the mass of vapor and ρ_l is the mass of liquid water per unit air volume; q_t ≪ 1. The total adiabatic liquid water content in the column equals the integral of q_l ρ over z at constant q_t, with q_l ρ = q_t ρ − q_v ρ = q_t ρ − ρ_v. The value of δp_s (27) is equal to this integral (mass per unit area) multiplied by the gravitational acceleration (giving weight per unit area), Eq. (28). The first integral in the right-hand part of this equation gives the mass of vapor in the considered atmospheric column if water vapor were a non-condensable gas, q_v = q_t = const. This term is analogous to the first term, p_v(T_s), in the right-hand side of Eq. (27), where a static isothermal column was considered. The second term is identical to the second term, g ∫₀^∞ ρ_v dz, in Eq. (27). Using the definitions of h_v(T_s) (24) and h_n(T_s) (26), and recalling that M_v/M_d = 0.62 and p_v(T_s) = γ_s p_s, see (4), we obtain expression (29) for the δp_s estimate (27), Fig. 1b. Note that δp_s/p_s is proportional to γ_s and increases exponentially with the rise of temperature. After an approximate hydrostatic equilibrium is established, the vertical pressure profiles for columns A and B become Eqs. (30) and (31), cf. (21), where T(z) and γ(z) obey Eqs. (22) and (23). In Fig. 1c the difference p_A(z) − p_B(z) is plotted for three surface temperatures, T_s = 10, 20 and 30 °C. In all three cases condensation has resulted in a lower air pressure in column A compared to column B everywhere below z_c ≈ 2.9, 3.4 and 4.1 km, respectively. It is only above that height that the difference in lapse rates makes pressure in the moist column higher than in the dry column.

4 Relevance of the condensation-induced pressure changes for atmospheric processes

Horizontal pressure gradients associated with vapor condensation

We have shown that condensation of water vapor produces a drop of air pressure in the lower atmosphere up to an altitude of a few kilometers, Fig.
1c, in a moist saturated, hydrostatically adjusted column. In the dynamic atmospheric context the vapor condenses and latent heat is released during the ascent of moist air. The vertical displacement of air is inevitably accompanied by its horizontal displacement. This translates much of the condensation-induced pressure difference into a horizontal pressure gradient. Indeed, as the upwelling air loses its water vapor, the surface pressure diminishes via hydrostatic adjustment, producing a surface gradient of total air pressure between the areas of ascent and descent. The resulting horizontal pressure gradient is proportional to the ratio of vertical to horizontal velocity, w/u (Makarieva and Gorshkov, 2009b). We will illustrate this point, regarding the magnitude of the resulting atmospheric pressure gradient, for the case of a stationary axisymmetric circulation developing above a horizontally isothermal oceanic surface. In cylindrical coordinates the continuity equations for the mixture of condensable (vapor) and non-condensable (dry air) gases can be written as Eqs. (32)-(34). Here N_d and N_v are the molar densities of dry air and water vapor, respectively; γ ≡ N_v/N, see (2); r is the distance from the center of the area where condensation takes place; and S(r, z) is the sink term describing the non-conservation of the condensable component (water vapor). The saturated pressure of water vapor depends on temperature alone. Assuming that vapor is saturated at the isothermal surface, we have ∂N_v/∂r = 0, so N_v only depends on z. (Note that this condition necessitates either that there is an influx of water vapor via evaporation from the surface (if the circulation pattern is immobile), or that the pressure field moves as vapor is locally depleted. The second case occurs in compact circulation patterns like hurricanes and tornadoes.) As the air ascends with vertical velocity w, vapor molar density decreases due to condensation and due to the expansion of the gas along the vertical gradient of decreasing pressure. The latter effect equally influences all gases, both condensable and non-condensable. Therefore, the volume-specific rate S(r, z) at which vapor molecules are locally removed from the gaseous phase is equal to w(∂N_v/∂z − (N_v/N)∂N/∂z), see (1), (2). The second term describes the expansion of vapor at a constant mixing ratio, which would have occurred if vapor were non-condensable like the other gases. (If vapor did not condense, its density would decrease with height as a constant proportion of the total molar density of moist air, as with any other atmospheric gas.) The mass of dry air is conserved, Eq. (32). Using this fact, Eq. (34) and ∂N_v/∂r = 0, one obtains Eq. (35). Now, expressing ∂N/∂r = ∂N_d/∂r + ∂N_v/∂r from Eqs. (32) and (33) with use of Eq. (35), we obtain Eq. (36). Using the equation of state for moist air, p = NRT, and for water vapor, p_v = N_v RT, we obtain from Eqs. (36) and (25) the pressure-gradient relation (37). Here the velocities w and u represent the vertical and radial velocities of the ascending air flow, respectively. The ascending air converges towards the center of the area where condensation occurs. The scale height h_γ is defined in Eq.
(25). A closely related formula for the horizontal pressure gradient can be applied to a linear two-dimensional air flow, with ∂p/∂r replaced by ∂p/∂x. Equation (37) shows that the difference between the scale heights h_v and h (25) of the vertical pressure distributions for water vapor and moist air leads to the appearance of a horizontal pressure gradient of moist air as a whole. This equation contains the ratio of vertical to horizontal velocity. By estimating this ratio it is possible to evaluate, for a given circulation, what sorts of horizontal pressure gradients are produced by condensation and whether these gradients are large enough to maintain the observed velocities via the positive physical feedback described by Eq. (37).

For example, for Hadley cells at T = 300 K, h_γ = 9 km, γ = 0.04 and a typical ratio of w/u ∼ 10⁻³, we obtain from Eq. (37) a pressure gradient of about 0.4 Pa km⁻¹. Over a distance of 3000 km such a gradient would correspond to a pressure difference of 12 hPa, which is close to the upper range of the actually observed pressure differences in the region (e.g., Murphree and Van den Dool, 1988, Fig. 1). This estimate illustrates our proposal that condensation should be considered one of the main determinants of atmospheric pressure gradients and, hence, air circulation.

Similar pressure differences and gradients, also comparable in magnitude to δp_s (27) and ∂p/∂r (37), are observed within cyclones, both tropical and extratropical, and in persistent atmospheric patterns in the low latitudes (Holland, 1980; Zhou and Lau, 1998; Brümmer et al., 2000; Nicholson, 2000; Simmonds et al., 2008). For example, the mean depth of Arctic cyclones, 5 hPa (Simmonds et al., 2008), is about ten times smaller than the mean depth of a typical tropical cyclone (Holland, 1980). This pattern agrees well with the Clausius-Clapeyron dependence of δp_s, Fig. 1b, which would predict an 8- to 16-fold decrease with mean oceanic temperature dropping by 30-40 degrees Celsius. The exact magnitude of the pressure gradient and the resulting velocities will depend on the horizontal size of the circulation pattern, the magnitude of friction and the degree of radial symmetry of the flow (Makarieva and Gorshkov, 2009a,b).

Regarding previous oversight of the effect

For many readers a major barrier to acceptance of our propositions may be to understand how such a fundamental physical mechanism has been overlooked until now. Why has this theory come to light only now, in what is widely regarded as a mature field? We can offer a few thoughts based on our readings and discussions with colleagues.

The condensation-induced pressure gradients that we have been examining are associated with density gradients that have been conventionally considered as minor and thus ignored in the continuity equation (e.g., Sabato, 2008). For example, a typical ∆p = 50 hPa pressure difference observed along the horizontally isothermal surface between the outer environment and the hurricane center (e.g., Holland, 1980) is associated with a density difference of only around 5%. This density difference can be safely neglected when estimating the resulting air velocity u from the known pressure difference ∆p. Here the basic scale relation is given by Bernoulli's equation, ρu²/2 = ∆p. The point is that a 5% change in ρ does not significantly impact the magnitude of the estimated air velocity at a given ∆p. But, as we have shown in the previous section, for the determination of the pressure gradient (37) the density difference and gradient (36) are key.
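Returning to the Hadley cell estimate above, the arithmetic is easy to verify. The scaling form ∂p/∂r ≈ γ(w/u)p/h_γ is our reading of Eq. (37), whose full form is not reproduced in this text, and the surface pressure value is assumed:

```python
gamma = 0.04       # relative vapor content at T = 300 K
w_over_u = 1e-3    # typical ratio of vertical to horizontal velocity
p = 1.0e5          # Pa, surface air pressure (assumed)
h_gamma = 9e3      # m, condensation scale height at 300 K

dp_dr = gamma * w_over_u * p / h_gamma             # Pa m^-1
print(f"{dp_dr * 1e3:.2f} Pa/km")                  # ~0.44 Pa/km
print(f"{dp_dr * 3e6 / 100:.0f} hPa per 3000 km")  # ~13 hPa
```

Both outputs reproduce the quoted 0.4 Pa km⁻¹ and ~12 hPa to within rounding.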
Considering the equation of state (5) for the horizontally isothermal surface, we have p = Cρ, where C ≡ RT/M = const. Irrespective of why the considered pressure difference arises, from Bernoulli's equation we know that u² = 2∆p/ρ = 2C∆ρ/ρ, with ∆ρ = ρ₀ − ρ. Thus, if one puts ∆ρ/ρ = ∆p/p equal to zero, no velocity forms and there is no circulation. Indeed, we have u² = 2∆p/ρ = 2C∆ρ/ρ = 2C(∆ρ/ρ₀)(1 + ∆ρ/ρ₀ + ...). As one can see, discarding ∆ρ compared to ρ does indeed correspond to discarding the higher-order term of the smallness parameter ∆ρ/ρ. But with respect to the pressure gradient, the main effect is proportional to the smallness parameter ∆ρ/ρ₀ itself. If the latter is assumed to be zero, the effect is overlooked. We suggest that this dual aspect of the magnitude of condensation-related density changes has not been recognized, and this has contributed to the neglect of condensation-associated pressure gradients in the Earth's atmosphere.

Furthermore, the consideration of air flow associated with phase transitions of water vapor has been conventionally reduced to the consideration of the net fluxes of matter. Suppose we have a linear circulation pattern divided into ascending and descending parts, with similar evaporation rates E (kg H₂O m⁻² s⁻¹) in both regions. In the region of ascent the water vapor precipitates at a rate P. This creates a mass sink E − P, which has to be balanced by water vapor import from the region of descent. Approximating the two regions as boxes of height h, length l and width d, the horizontal velocity u_t associated with this mass transport can be estimated from the mass balance equation ρ u_t h d = (P − E) l d (38). Equation (38) says that the depletion of air mass in the region of ascent, at a total rate of (P − E)ld, is compensated for by the horizontal air influx from the region of descent, which proceeds with velocity u_t via a vertical cross-section of area hd. For typical values in the tropics, with P − E ∼ 5 mm day⁻¹ = 5.8 × 10⁻⁵ kg H₂O m⁻² s⁻¹ and l/h ∼ 2 × 10², we obtain u_t ∼ 1 cm s⁻¹. For regions where precipitation and evaporation are smaller, the value of u_t will be smaller too. For example, Lorenz (1967, p. 51) estimated u_t to be ∼ 0.3 cm s⁻¹ for the air flow across latitude 40°S. With ρ ≈ ρ_d the value of u_t can be understood as the mass-weighted horizontal velocity of the dry air + water vapor mixture, the so-called barycentric velocity; see, e.g., Wacker and Herbert (2003) and Wacker et al. (2006). There is no net flux of dry air between the regions of ascent and descent, but there is a net flux of water vapor from the region of descent to the region of ascent. This leads to the appearance of a non-zero horizontal velocity u_t directed towards the region of ascent. Similarly, the vertical barycentric velocity at the surface is w_t ≈ (E − P)/ρ (Wacker and Herbert, 2003), which reflects the fact that there is no net flux of dry air via the Earth's surface, while water vapor is added via evaporation or removed through precipitation. The absolute magnitude of the vertical barycentric velocity w_t for the calculated tropical means is vanishingly small, w_t ∼ 0.05 mm s⁻¹.
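The barycentric estimates follow directly from the mass balance (38) with the numbers quoted in the text; only the air density is our assumption:

```python
P_minus_E = 5.8e-5   # kg H2O m^-2 s^-1 (~5 mm/day)
l_over_h = 2e2       # box length-to-height ratio
rho = 1.2            # kg m^-3, near-surface air density (assumed)

u_t = P_minus_E * l_over_h / rho   # from rho*u_t*h*d = (P-E)*l*d, Eq. (38)
w_t = P_minus_E / rho              # vertical barycentric velocity at the surface
print(f"u_t ~ {u_t * 100:.1f} cm/s, w_t ~ {w_t * 1e3:.2f} mm/s")  # ~1 cm/s, ~0.05 mm/s
```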
We speculate that the low magnitude of barycentric velocities has contributed to the judgement that water's phase transitions cannot be a major driver of atmospheric dynamics. However, barycentric velocities should not be confused (e.g., Meesters et al., 2009) with the actual air velocities. Unlike the former, the latter cannot be estimated without considering atmospheric pressure gradients (Makarieva and Gorshkov, 2009c). For example, in the absence of friction, the maximum linear velocity u_c that could be produced by condensation in a linear circulation pattern in the tropics constitutes u_c = (2∆p/ρ)^{1/2} ≈ 45 m s⁻¹ (39). Here ∆p was taken equal to 12 hPa, as estimated from Eq. (37) for the Hadley cell in Section 4.1. As one can see, u_c (39) is much greater than u_t (38). As some part of the potential energy associated with the condensation-induced pressure gradient is lost to friction (Makarieva and Gorshkov, 2009a), real air velocities observed in large-scale circulation are an order of magnitude smaller than u_c, but still nearly three orders of magnitude greater than u_t.

The dynamic efficiency of the atmosphere

We will now present another line of evidence for the importance of condensation-induced dynamics: we shall show that it offers an improved understanding of the efficiency with which the Earth's atmosphere can convert solar energy into the kinetic energy of air circulation. While the Earth on average absorbs about I ≈ 2.4 × 10² W m⁻² of solar radiation (Raval and Ramanathan, 1989), only a minor part, η ∼ 10⁻², of this energy is converted to the kinetic power of atmospheric and oceanic movement. Lorenz (1967, p. 97) notes that "the determination and explanation of efficiency η constitute the fundamental observational and theoretical problems of atmospheric energetics". Here the condensation-induced dynamics yields a relationship that is quantitative in nature and can be estimated directly from fundamental atmospheric parameters.

A pressure gradient is associated with a store of potential energy. The physical dimension of pressure gradient coincides with the dimension of force per unit air volume, i.e.
1 Pa m⁻¹ = 1 N m⁻³. When an air parcel moves along the pressure gradient, the potential energy of the pressure field is converted to kinetic energy. The dimension of pressure is identical to the dimension of energy density: 1 Pa = 1 N m⁻² = 1 J m⁻³. As the moist air in the lower part of the atmospheric column rises to height h_γ, where most of its water vapor condenses, the potential energy released amounts to approximately δp_s (27). The potential energy released per unit mass of water vapor condensed, π_v, with dimension J (kg H₂O)⁻¹, is given by Eq. (40). The global mean precipitation rate is P ≈ 10³ kg H₂O m⁻² year⁻¹ (L'vovitch, 1979), the global mean surface temperature is T_s = 288 K, and the observed mean tropospheric lapse rate is Γ_o = 6.5 K km⁻¹ (Glickman, 2000). Using these values and putting Γ_o instead of the moist adiabatic lapse rate Γ_s in (40), we can estimate the global mean rate Π_v = P π_v at which the condensation-related potential energy is available for conversion into kinetic energy. At the same time we also estimate the efficiency η = Π_v/I of the atmospheric circulation that can be generated by solar energy via the condensation-induced pressure gradients (41). Thus, the proposed approach not only clarifies the dynamics of solar energy conversion to the kinetic power of air movement (solar power spent on evaporation → condensation-related release of potential power → kinetic power generation); it does so in a quantitatively tractable manner, explaining the magnitude of the dissipative power associated with maintaining the kinetic energy of the Earth's atmosphere.

Our estimate of atmospheric efficiency differs fundamentally from a thermodynamic approach based on calculating entropy budgets under the assumption that the atmosphere works as a heat engine, e.g., Pauluis and Held (2002a,b); see also Makarieva et al. (2010). The principal limitation of the entropy-budget approach is that while the upper bounds on the amount of work that could be produced are clarified, there is no indication regarding the degree to which such work is actually performed. In other words, the presence of an atmospheric temperature gradient is insufficient to guarantee that mechanical work is produced. In contrast, our estimate (41) is based on an explicit calculation of mechanical work derived from a defined atmospheric pressure gradient. It is, to our knowledge, the only available estimate of the efficiency η made from the basic physical parameters that characterize the atmosphere.

Evaporation and condensation

While condensation releases the potential energy of atmospheric water vapor, evaporation, conversely, replenishes it. Here we briefly dwell on some salient differences between evaporation and condensation to complete our picture regarding how the phase transitions of water vapor generate pressure gradients.
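Before moving on, two consistency checks on the numbers in the preceding subsections; the air density and the unit conversions are our assumptions:

```python
import math

rho = 1.2                      # kg m^-3, near-surface air density (assumed)
dp = 1.2e3                     # Pa, condensation-induced difference (12 hPa)
u_c = math.sqrt(2 * dp / rho)  # Bernoulli limit of Eq. (39): ~45 m/s

I = 240.0                      # W m^-2, mean absorbed solar flux
P = 1e3 / (365.25 * 86400)     # kg m^-2 s^-1, from 10^3 kg H2O m^-2 yr^-1
eta = 1e-2                     # quoted efficiency
Pi_v = eta * I                 # W m^-2 released by condensation, ~2.4
pi_v = Pi_v / P                # implied J per kg of condensed vapor, ~7.6e4
print(f"u_c = {u_c:.0f} m/s, Pi_v = {Pi_v:.1f} W/m^2, pi_v = {pi_v:.2e} J/kg")
```

The implied π_v of roughly 7.6 × 10⁴ J kg⁻¹ is about 3% of the latent heat of vaporization (≈2.5 × 10⁶ J kg⁻¹), illustrating how small a fraction of the evaporative energy flux needs to appear as pressure-gradient potential energy to sustain the observed circulation.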
Evaporation requires an input of energy to overcome the intermolecular forces of attraction in the liquid water and free the water molecule to the gaseous phase, as well as to compress the air. That is, work is performed against local atmospheric pressure to make space for the vapor molecules that are being added to the atmosphere via evaporation. This work, associated with evaporation, is the source of potential energy for the condensation-induced air circulation. Upon condensation, two distinct forms of potential energy arise. One is associated with the potential energy of raised liquid drops; this potential energy dissipates to friction as the drops fall. The second form of potential energy is associated with the formation of a non-equilibrium pressure gradient, as the removal of vapor from the gas phase creates a pressure shortage of moist air aloft. This pressure gradient produces air movement. In the stationary case, total frictional dissipation in the resulting circulation is balanced by the fraction of solar power spent on the work associated with evaporation.

Evaporation is a surface-specific process. It is predominantly anchored to the Earth's surface. In the stationary case, as long as there is a supply of energy and the relative humidity is less than unity, evaporation adds water vapor to the atmospheric column without changing its temperature. The rate of evaporation is affected by turbulent mixing and is usually related to the horizontal wind speed at the surface. The global mean power of evaporation cannot exceed the power of solar radiation.

In contrast, condensation is a volume-specific, rather than an area-specific, process that affects the entire atmospheric column. The primary cause of condensation is the cooling of air masses as the moist air ascends and its temperature drops. Provided there is enough water vapor in the ascending air, at a local and short-term scale condensation is not governed by solar power but by stored energy, and can occur at an arbitrarily high rate dictated by the vertical velocity of the ascending flow, see (34).

Any circulation pattern includes areas of lower pressure where air ascends, as well as higher-pressure areas where it descends. Condensation rates are non-uniform across these areas, being greater in areas of ascent. Importantly, in such areas of ascent condensation involves water vapor that is locally evaporated along with often substantial amounts of additional water vapor transported from elsewhere. Therefore, the mean rate of condensation in the ascending region of any circulation pattern is always higher than the local rate of evaporation. This inherent spatial non-uniformity of the condensation process determines horizontal pressure gradients.
Consider a large-scale stationary circulation where the regions of ascent and descent are of comparable size. A relevant example would be the annually averaged circulation between the Amazon river basin (the area of ascent) and the region of the Atlantic Ocean where the air returns from the Amazon to descend, depleted of moisture. Assuming that the relative humidity at the surface, the horizontal wind speed and the solar power are approximately the same in the two regions, mean evaporation rates should be roughly similar as well (i.e., coincide at least in the order of magnitude). However, the condensation (and precipitation) rates in the two regions will be consistently different. In accordance with the picture outlined above, the average precipitation rate P_a in the area of ascent should be approximately double the average value of the regional evaporation rate E_a. The pressure drop caused by condensation cannot be compensated by local evaporation to produce a net zero effect on air pressure. This is because in the region of ascent both the local water vapor, evaporated from the canopy of the Amazon forest at a rate E_a ∼ E_d, and the imported water vapor, evaporated from the ocean surface at a rate E_d, precipitate: P_a = E_d + E_a. This is confirmed by observations: precipitation in the Amazon river basin is approximately double the regional evaporation, P_a ≈ 2E_a (Marengo, 2004). The difference between the regional rates of precipitation and evaporation on land, R = P_a − E_a ∼ E_a, is equal to regional runoff. Note that in the region of descent the runoff thus defined is negative and corresponds to the flux of water vapor that is exported away from the region with the air flow. Where runoff is positive, it represents the flux of liquid water that leaves the region of ascent for the ocean.

The fact that the climatological means of evaporation and precipitation are not commonly observed to be equal has been recognized in the literature (e.g., Wacker and Herbert, 2003), as has the fact that local mean precipitation values are consistently larger than those for evaporation (e.g., Trenberth et al., 2003).

The inherent spatial non-uniformity of the condensation process explains why it is condensation that principally determines the pressure gradients associated with water vapor. So, while evaporation adds vapor to the atmosphere and thus increases local air pressure, and condensation in contrast decreases it, the evaporation process is significantly more even and spatially uniform than condensation. Roughly speaking, in the considered example evaporation increases pressure nearly equally in the regions of ascent and descent, while condensation decreases pressure only in the region of ascent. Moreover, as discussed above, the rate at which air pressure is decreased by condensation in the region of ascent is always higher than the rate at which local evaporation would increase air pressure. The difference between the two rates is particularly marked in heavily precipitating systems like hurricanes, where precipitation rates associated with strong updrafts can exceed local evaporation rates by more than an order of magnitude (e.g., Trenberth and Fasullo, 2007).
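The regional water budget just described reduces to a line of arithmetic; a sketch in relative units, with E_a = E_d as the stated assumption:

```python
E_a = 1.0          # evaporation in the ascent region (relative units)
E_d = 1.0          # vapor imported from the descent region, E_d ~ E_a
P_a = E_a + E_d    # precipitation in the ascent region: ~2*E_a (Marengo, 2004)
R = P_a - E_a      # runoff from the ascent region: ~E_a
```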
We have so far discussed the magnitude of pressure gradients that are produced and maintained by condensation in the regions where the moist air ascends. This analysis is applicable to observed condensation processes that occur on different spatial scales, as we illustrated with the example of the Hadley cell. We emphasize that to determine where the ascending air flow and condensation will predominantly occur is a separate physical problem: for example, why the updrafts are located over the Amazon and the downdrafts over the Atlantic Ocean, and not vice versa. Here regional evaporation patterns play a crucial role. In Section 4.1 we showed that constant relative humidity associated with surface evaporation, which ensures that ∂N_v/∂r = 0, is necessary for condensation to take place. Using the definition of γ (2), equation (37) can be re-written such that the decrease of γ with height and, hence, condensation is seen to be possible only when γ grows in the horizontal direction, ∂ ln γ/∂r > 0. Indeed, surface pressure is lower in the region of ascent. As the air moves towards the region of low pressure, it expands. In the absence of evaporation, this expansion would make the water vapor contained in the converging air unsaturated, and condensation at a given height would stop. Evaporation adds water vapor to the moving air to keep the water vapor saturated and sustain condensation. The higher the rate of evaporation, the larger the ratio w/u at a given ∂γ/∂z and, hence, the larger the pressure gradient (37) that can be maintained between the regions of ascent and descent. A small but persistent difference in mean evaporation, ∆E < E, between two adjacent regions determines the predominant direction of the air flow. This explains the role of the high leaf area index of natural forests in keeping evaporation higher than evaporation from the open water surface of the ocean, such that the forests become the regions of low pressure that draw moist air from the oceans, and not vice versa (Makarieva and Gorshkov, 2007). On the other hand, where the surface is relatively homogeneous with respect to evaporation (e.g., the oceanic surface), the spatial and temporal localization of condensation events can be of a random nature.

5 Discussion: Condensation dynamics versus differential heating in the generation of atmospheric circulation

In Section 2 we argued that condensation cannot occur adiabatically at constant volume but is always accompanied by a pressure drop in the local air volume where it occurs. We concluded that the statement that "the pressure drop by adiabatic condensation is overcompensated by latent heat induced pressure rise of the air" (Pöschl, 2009, p. S12437) is not correct. In Section 3 we quantified the pressure change produced by condensation as dependent on altitude in a column in hydrostatic balance, to show that in such a column the pressure drops upon condensation everywhere in the lower atmosphere, up to several kilometers altitude, Fig. 1c. The estimated pressure drop at the surface increases exponentially with growing temperature and amounts to over 20 hPa at 300 K, Fig. 1b.
In Section 4 we discussed the implications of the condensation-induced pressure drop for atmospheric dynamics. We calculated the horizontal pressure gradients produced by condensation and the efficiency of the atmosphere as a dynamic machine driven by condensation. Our aim throughout has been to persuade the reader that these implications are significant in numerical terms and deserve serious discussion and further analysis. We will now conclude our considerations by discussing condensation-induced dynamics against the background of differential heating, a physical mechanism that, in contrast to condensation, has received much attention as a driver of air circulation.

Atmospheric circulation is only maintained if, in agreement with the energy conservation law, there is a pressure gradient to accelerate the air masses and sustain the existing kinetic energy of air motion against dissipative losses. For centuries, starting from the works of Hadley and his predecessors, the air pressure gradient has been qualitatively associated with the differential heating of the Earth's surface and the Archimedes force (buoyancy), which makes the warm and light air rise, and the cold and heavy air sink. This idea can be illustrated by Fig. 1c, where the warmer atmospheric column appears to have higher air pressure at some heights than the colder column. In the conventional paradigm, this is expected to cause air divergence aloft away from the warmer column, which, in its turn, will cause a drop of air pressure at the surface and a resulting surface flow from the cold to the warm areas. Despite the physics of this differential heating effect being straightforward in qualitative terms, the quantitative problem of predicting observed wind velocities from the fundamental physical parameters has posed enduring difficulties. Slightly more than a decade before the first significant efforts in computer climate modelling, Brunt (1944), as cited by Lewis (1998), wrote: "It has been pointed out by many writers that it is impossible to derive a theory of the general circulation based on the known value of the solar constant, the constitution of the atmosphere, and the distribution of land and sea . . . It is only possible to begin by assuming the known temperature distribution, then deriving the corresponding pressure distribution, and finally the corresponding wind circulation".

Brunt's difficulty relates to the realization that pressure differences associated with atmospheric temperature gradients cannot be fully transformed into kinetic energy. Some energy is lost to thermal conductivity without generating mechanical work. This fraction could not be easily estimated by theory in his era, and thus it has remained to the present. The development of computers and the appearance of rich satellite observations have facilitated empirical parameterizations to replicate circulation in numerical models. However, while these models provide a reasonable replication of the quantitative features of the general circulation, they do not constitute a quantitative physical proof that the observed circulation is driven by pressure gradients associated with differential heating. As Lorenz (1967, p. 48) emphasized, although "it is sometimes possible to evaluate the long-term influence of each process affecting some feature of the circulation by recourse to the observational data", such knowledge "will not by itself constitute an explanation of the circulation, since it will not reveal why each process assumes the value which it does".
In comparison to the temperature-associated pressure difference, the pressure difference associated with water vapor removal from the gas phase can develop over a surface of uniform temperature. In addition, this pressure difference is physically anchored to the lower atmosphere. Unlike the temperature-related pressure difference, it does not demand the existence of some downward transport of the pressure gradient from the upper to the lower atmosphere (i.e., the divergence aloft from the warmer to the colder column, as discussed above) to explain the appearance of low-altitude pressure gradients and the generation of surface winds.

Furthermore, as the condensation-related pressure difference δp_s is not associated with a temperature difference, the potential energy stored in the pressure gradient can be nearly fully converted to the kinetic energy of air masses in the lower atmosphere without losses to heat conductivity. This fundamental difference between the two mechanisms of pressure-difference generation can be traced in hurricanes. Within the hurricane there is a marked pressure gradient at the surface. This difference is quantitatively accountable by the condensation process (Makarieva and Gorshkov, 2009b). Meanwhile, the possible temperature difference in the upper atmosphere that might have been caused by the difference in moist versus dry lapse rates between the regions of ascent and descent is cancelled by the strong horizontal mixing (Montgomery et al., 2006). Above approximately 1.5 km the atmosphere within and outside the hurricane is approximately isothermal in the horizontal direction (Montgomery et al., 2006, Fig. 4). Therefore, while the temperature-associated pressure difference above height z_c, Fig. 1c, is not realized in the atmosphere, the condensation-associated pressure difference below height z_c apparently is.

Some hints on the relative strengths of circulation driven by differential heating compared to condensation-induced circulation can be gained from evaluating wind velocities in those real processes that develop in the lower atmosphere without condensation. These are represented by dry (precipitation-free) breezes (such as diurnal wind patterns driven by the differential heating of land versus sea surfaces) and dust devils. While both demand very large temperature gradients (vertical or horizontal) to arise, as compared to the global mean values, both circulation types are of comparatively low intensity and have negligible significance for the global circulation. For example, dust devils do not involve precipitation and are typically characterized by wind velocities of several meters per second (Sinclair, 1973). The other type of similarly compact rotating vortices, tornadoes, which are always accompanied by phase transitions of water, develop wind velocities that are at least an order of magnitude higher (Wurman et al., 1996). More refined analyses of the Hadley circulation (Held and Hou, 1980) point towards the same conclusion: theoretically described Hadley cells driven by differential heating appear to be one order of magnitude weaker than the observed circulation (Held and Hou, 1980; Schneider, 2006); see also Caballero et al. (2008). While the theoretical description of the general atmospheric circulation remains unresolved, condensation-induced dynamics offers a possible solution (as shown in Section 4.1).
Our approach and theory have other significant implications. Some have been documented in previous papers, for example with regard to the development of hurricanes (Makarieva and Gorshkov, 2009a,b) and the significance of vegetation and terrestrial evaporation fluxes in determining large-scale continental weather patterns (Makarieva et al., 2006; Makarieva and Gorshkov, 2007; Sheil and Murdiyarso, 2009; Makarieva et al., 2009). Other implications are likely to be important in predicting the global and local nature of climate change, a subject of considerable concern and debate at the present time (Pielke et al., 2009; Schiermeier, 2010). In summary, although the formation of air pressure gradients via condensation has not received detailed fundamental consideration in the climatological and meteorological sciences, here we have argued that this lack of attention has been undeserved. Condensation-induced dynamics emerges as a new field of investigation that can significantly enrich our understanding of atmospheric processes and climate change. We very much hope that our present account will provide a spur for further investigations, both theoretical and empirical, into these important, but as yet imperfectly characterized, phenomena.

Fig. 1. (a): scale height of saturated water vapor hv(z) (24), hydrostatic scale height of water vapor hn(z) (26), and scale height of moist air h(z) (20) in the column with moist adiabatic lapse rate (22) for three values of surface temperature Ts; (b): condensation-induced drop of air pressure at the surface (27) as dependent on surface temperature Ts; (c): pressure difference versus altitude z between atmospheric columns A and B with moist and dry adiabatic lapse rates, Eqs. (30) and (31), respectively, for three values of surface temperature Ts. Height zc at which pA(zc) − pB(zc) = 0 is 2.9, 3.4, and 4.1 km for 283, 293, and 303 K, respectively. Due to condensation, at altitudes below zc the air pressure is lower in column A despite it being warmer than column B.
11,880.4
2010-04-02T00:00:00.000
[ "Environmental Science", "Physics" ]
Characterization of Discrete Phosphopantetheinyl Transferases in Streptomyces tsukubaensis L19 Unveils a Complicated Phosphopantetheinylation Network

Phosphopantetheinyl transferases (PPTases) play essential roles in both primary and secondary metabolism via post-translational modification of acyl carrier proteins (ACPs) and peptidyl carrier proteins (PCPs). In this study, an industrial FK506-producing strain, Streptomyces tsukubaensis L19, together with Streptomyces avermitilis, was identified to contain the highest number (five) of discrete PPTases known among any species thus far examined. Characterization of the five PPTases in S. tsukubaensis L19 unveiled that stw ACP, an ACP in a type II PKS, was phosphopantetheinylated by three PPTases, FKPPT1, FKPPT3, and FKACPS; sts FAS ACP, the ACP in fatty acid synthase (FAS), was phosphopantetheinylated by three PPTases, FKPPT2, FKPPT3, and FKACPS; TcsA-ACP, an ACP involved in FK506 biosynthesis, was phosphopantetheinylated by two PPTases, FKPPT3 and FKACPS; and FkbP-PCP, a PCP involved in FK506 biosynthesis, was phosphopantetheinylated by all five PPTases, FKPPT1-4 and FKACPS. Our results indicate that the functions of these PPTases complement each other across ACP/PCP substrates, suggesting a complicated phosphopantetheinylation network in S. tsukubaensis L19. Engineering of these PPTases in S. tsukubaensis L19 yielded a mutant strain with improved FK506 production.

influenza 17-20. Most bacteria have one group I PPTase and at least one group II PPTase 12-15. In Streptomyces coelicolor and Streptomyces chattanoogensis L10, it has been reported that group I PPTases prefer to phosphopantetheinylate ACPs in type II FASs and type II PKSs, while group II PPTases prefer to phosphopantetheinylate ACPs in type I PKSs 12,13. Recently, antibiotic production has been improved by engineering of PPTases 13. FK506 (tacrolimus) is a clinical immunosuppressant widely used after allogeneic kidney, liver, and heart transplantations 21-24. Industrial production of FK506 is achieved by fermentation of several Streptomyces strains. Although FK506 is known to be biosynthesized by a PKS/NRPS hybrid and the biosynthetic pathway of FK506 has been partially elucidated 25-28, phosphopantetheinylation of the ACPs/PCPs in the FK506 biosynthetic PKS/NRPS has never been studied to date. An FK506-producing strain, Streptomyces tsukubaensis L19, was isolated from Yunnan, China by our group and is used for industrial production of FK506. In this study, we identified one group I PPTase and four group II PPTases in this industrial FK506-producing strain. Characterization of these PPTases unveiled that their functions complement each other, suggesting a complicated phosphopantetheinylation network in S. tsukubaensis L19. A mutant strain with higher FK506 production and a shorter fermentation time was also constructed by engineering of these PPTases.

Results and Discussion

Analysis of discrete PPTase genes and PKS gene clusters in Streptomyces tsukubaensis L19. The whole genome of S. tsukubaensis L19 was recently sequenced and contains more than 7.9 M base pairs and 7,000 open reading frames (ORFs). Analysis of the genomic sequence revealed five discrete PPTase genes: FKPPT1, FKPPT2, FKPPT3, FKPPT4, and FKACPS (GenBank accession numbers KT582112-KT582116). None of these genes is clustered with any secondary metabolite biosynthesis gene cluster.
Alignment of these PPTases with known PPTases showed that FKPPT1, FKPPT2, FKPPT3, and FKPPT4 each contain three conserved motifs, PRWP, GID, and FSAKESVYK, corresponding to the Sfp-type PPTase motifs P1, P2, and P3, while FKACPS contains only the last two conserved motifs found in ACPS-type PPTases. Thus, FKPPT1, FKPPT2, FKPPT3, and FKPPT4 belong to the Sfp-type PPTase group, and FKACPS belongs to the ACPS-type PPTase group (Fig. 1). Most bacteria contain two to three discrete PPTases, for example E. coli, S. coelicolor, and S. chattanoogensis L10 12,13. To date, Streptomyces avermitilis was known to harbor the highest number (five) of discrete PPTases 1,29. Together with S. avermitilis, S. tsukubaensis L19 therefore contains the highest number of discrete PPTases known among any species thus far examined.

Analysis of the genomic sequence of S. tsukubaensis L19 revealed about thirty proposed PKS, NRPS, and PKS-NRPS hybrid gene clusters. The FK506 biosynthetic gene cluster in S. tsukubaensis L19 showed 100% DNA identity with that in Streptomyces tsukubaensis YN06 27,28. A type II PKS was named stw PKS (Streptomyces tsukubaensis whiE), since its gene cluster contains genes homologous to those of the whiE PKS gene clusters involved in spore pigment biosynthesis in several Streptomyces strains (Fig. 2) 30-32. More than one hundred proposed ACPs and PCPs within these PKSs, NRPSs, and PKS-NRPS hybrids are potential substrates of these five PPTases.

In vitro phosphopantetheinylation system. To determine whether these five PPTases are active, an in vitro co-expression system was established. TcsA-ACP (the ACP domain in the allylmalonyl unit biosynthetic module of the FK506 biosynthetic PKS/NRPS), FkbP-PCP (the PCP domain in the NRPS module of the FK506 biosynthetic PKS/NRPS), stw ACP (the ACP in stw PKS), and sts FAS ACP (the ACP in the S. tsukubaensis L19 FAS) were selected as PPTase substrates. First, tcsA-ACP, fkbP-PCP (fused with the SUMO gene), stw ACP, and sts FAS ACP were individually cloned into pET28a, resulting in four ACP/PCP-containing plasmids, pET-AACP, pYY0081, pYY0082, and pYY0098, respectively. Second, FKPPT1, FKPPT2, FKPPT3, FKPPT4, and FKACPS were individually cloned into the NdeI/HindIII sites of pYY0040 16, in which both the His-tag and Nus-tag genes had been deleted from pET44a, resulting in five PPTase-containing plasmids, pYY0072, pYY0078, pYY0073, pYY0077, and pYY0074, respectively. Finally, four E. coli BL21(DE3) strains harboring both an ACP/PCP-containing plasmid and pYY0040 were induced with IPTG to overproduce His-tagged ACPs/PCP. Twenty E. coli strains harboring both an ACP/PCP-containing plasmid and a PPTase-containing plasmid were induced with IPTG to overproduce His-tagged ACPs/PCP together with the intact PPTases. His-tagged ACPs/PCP were then purified to homogeneity by affinity chromatography.

Phosphopantetheinylation of the ACP/PCP involved in FK506 biosynthesis. Regarding TcsA-ACP, when tcsA-ACP was co-expressed with pYY0040 in E. coli, HPLC data showed that purified TcsA-ACP eluted as two peaks, with an area ratio of the smaller, earlier-eluting peak to the larger, later-eluting peak of 0.67. MS data revealed that the large peak represented apo-proteins and the small peak represented holo-proteins, indicating that E. coli ACPS converts TcsA-ACP from the apo form to the holo form only incompletely. After co-expression of tcsA-ACP with FKPPT3 or FKACPS, all purified TcsA-ACP was in the holo form by HPLC analysis.
In contrast, when tcsA-ACP was co-expressed with FKPPT1, FKPPT2, or FKPPT4, the ratio of the apo form to the holo form did not change compared with co-expression with pYY0040 (Fig. 3A). These results indicated that FKPPT3 and FKACPS could phosphopantetheinylate TcsA-ACP, whereas FKPPT1, FKPPT2, and FKPPT4 could not under these conditions.

Regarding FkbP-PCP (fused with the SUMO protein), when FkbP-PCP was co-expressed with pYY0040 in E. coli, HPLC data showed that purified FkbP-PCP eluted as a single peak. MS analysis showed that this peak represented both apo-proteins and holo-proteins, with a holo-to-apo peak-height ratio of about 0.40. After co-expression of FkbP-PCP with each of the PPTase genes, HPLC data showed that purified FkbP-PCP still eluted as a single peak. When FkbP-PCP was co-expressed with FKPPT1 or FKPPT2, MS data showed that the holo-to-apo peak-height ratio increased significantly, to 2.58 and 2.34, respectively. When FkbP-PCP was co-expressed with FKPPT3, FKPPT4, or FKACPS, MS data showed that only the holo-protein peak remained and the apo-protein peak disappeared (Fig. 4). These results support that all five PPTases can phosphopantetheinylate FkbP-PCP.

Phosphopantetheinylation of the ACP in a type II PKS. When stw ACP was co-expressed with pYY0040 in E. coli, HPLC data showed that purified stw ACP eluted as a single peak, which MS analysis identified as apo-proteins. After co-expression of stw ACP with FKPPT1, FKPPT3, or FKACPS, a new peak with a shorter retention time than the apo-protein peak, and an area about 21%, 20%, and 34% of the apo-protein peak, respectively, appeared in the HPLC data; MS analysis identified it as holo-proteins. After co-expression of stw ACP with FKPPT2 or FKPPT4, HPLC data did not show any holo-proteins (Fig. 3B). These results suggest that stw ACP can be phosphopantetheinylated by FKPPT1, FKPPT3, and FKACPS but not by FKPPT2 or FKPPT4.

Phosphopantetheinylation of the ACP in FAS. When sts FAS ACP was co-expressed with pYY0040 in E. coli, HPLC data showed that purified sts FAS ACP eluted as two peaks, with an area ratio of the earlier-eluting to the later-eluting peak of 1.20. MS data revealed that the later-eluting peak represented apo-proteins and the earlier-eluting peak represented holo-proteins. After co-expression of sts FAS ACP with FKPPT2, FKPPT3, or FKACPS, the holo-to-apo ratio increased significantly, to about 2.93, 2.17, and 1.67, respectively. With FKPPT4, this ratio was essentially unchanged (about 1.01). Surprisingly, with FKPPT1 the ratio decreased significantly, to about 0.28 (Fig. 3C). These results support that FKPPT2, FKPPT3, and FKACPS can phosphopantetheinylate sts FAS ACP, whereas FKPPT1 and FKPPT4 cannot.

The above results strongly support that these five proteins are indeed PPTases. Their functions complement each other, since each of the ACPs/PCP could be phosphopantetheinylated by more than one PPTase, suggesting a complicated phosphopantetheinylation network in S. tsukubaensis L19 (Fig. 5). The PCP could be phosphopantetheinylated by all five PPTases, suggesting that the PCP is more flexible toward PPTases than the ACPs.

Inactivation of FKPPT1 or FKPPT3 decreased the FK506 yield in S. tsukubaensis L19.
To determine the influence of the activities of the group II PPTases on FK506 production in vivo, FKPPT1, FKPPT2, FKPPT3, and FKPPT4 were individually replaced with the apramycin resistance gene aac(3)IV in S. tsukubaensis L19 using Redirect technology, resulting in four mutant strains, sHJ0015-sHJ0018, respectively. Each of the mutant strains sHJ0015-sHJ0018 was fermented in triplicate in flasks in fermentation medium for 72 h, with S. tsukubaensis L19 as control. FK506 production was monitored by HPLC analyses. The FK506 yields of sHJ0015 (ΔFKPPT1) and sHJ0017 (ΔFKPPT3) decreased by ~33% and ~22%, respectively, compared to that of S. tsukubaensis L19, suggesting that FKPPT1 and FKPPT3 play important roles in FK506 biosynthesis. The FK506 yields of sHJ0016 (ΔFKPPT2) and sHJ0018 (ΔFKPPT4) showed no obvious change compared to that of S. tsukubaensis L19, supporting that the lack of FKPPT2 or FKPPT4 in FK506 biosynthesis can be complemented by other PPTases (Fig. 6A). These in vivo results show that none of the four group II PPTases is indispensable for FK506 biosynthesis, consistent with the in vitro results that the FK506 biosynthetic ACP and PCP can be phosphopantetheinylated by more than one PPTase.

Overexpression of FKPPT3 increased FK506 production and decreased the fermentation time in S. tsukubaensis L19. To study the effect of the expression level of the group II PPTase genes on FK506 production, four PPTase gene overexpression mutant strains were constructed. FKPPT1, FKPPT2, FKPPT3, and FKPPT4 under the control of the strong promoter ermEp* were individually introduced into S. tsukubaensis L19, resulting in four overexpression strains, sHJ0019-sHJ0022, respectively. Each of the mutant strains sHJ0019-sHJ0022 was fermented in triplicate cultures in flasks for 72 h, with S. tsukubaensis L19 as control. The FK506 yields of sHJ0020 (ermEp*-FKPPT2) and sHJ0022 (ermEp*-FKPPT4) showed no obvious change compared to that of S. tsukubaensis L19, suggesting again that neither FKPPT2 nor FKPPT4 is crucial for FK506 biosynthesis. However, the FK506 yields of sHJ0019 (ermEp*-FKPPT1) and sHJ0021 (ermEp*-FKPPT3) increased by ~20% and ~25%, respectively, compared to that of S. tsukubaensis L19, supporting again that FKPPT1 and FKPPT3 play important roles in FK506 biosynthesis (Fig. 6A). To confirm that sHJ0019 (ermEp*-FKPPT1) and sHJ0021 (ermEp*-FKPPT3) have enhanced FK506 production capacity, these two strains were fermented in triplicate in 100 L fermentors under rigorously identical conditions, with S. tsukubaensis L19 as control. The FK506 production curve of S. tsukubaensis L19 showed that the yield reached its highest level, 154 ± 34 mg/L, at 72 h. Notably, the FK506 yields of sHJ0019 (ermEp*-FKPPT1) and sHJ0021 (ermEp*-FKPPT3) reached their highest levels, 171 ± 17 mg/L at 72 h and 183 ± 13 mg/L at 48 h, respectively (Fig. 6B). Thus, sHJ0021 (ermEp*-FKPPT3) not only increased FK506 production by ~19% but also decreased the fermentation time by 24 h at fermentor scale, which may be beneficial for industrial FK506 production. Strain sHJ0021 was deposited in the China General Microbiological Culture Collection Center (CGMCC) under the name Streptomyces tsukubaensis L20, with CGMCC number 11252. Additionally, none of the eight mutant strains showed obvious morphological differences compared with S. tsukubaensis L19 during growth on solid or in liquid media, including sporulation, cell color, and mycelium length.
In summary, we identified that S. tsukubaensis L19 contains five discrete PPTases. Characterization of these PPTases showed that their functions complement each other, suggesting a complicated phosphopantetheinylation network in S. tsukubaensis L19. We also provide an example of improving antibiotic production by engineering PPTases.

Methods

Bacterial strains, plasmids, growth, and culture conditions. Bacterial strains and plasmids used in the present study are listed in Table 1, and primers are listed in Table 2. Spore preparations of Streptomyces tsukubaensis L19 were made on ISP4 agar after 10 days at 26 °C. DNA manipulations in S. tsukubaensis L19 and E. coli-S. tsukubaensis L19 conjugation were carried out according to standard procedures 33. For fermentation in fermentors, 20 mL of seed culture was inoculated into 10 L of seed medium containing 0.03% defoamer in a 20 L fermentor and grown at 28 °C for 1 day. Then 6 L of secondary seed culture was inoculated into 60 L of fermentation medium containing 0.2% defoamer in a 100 L fermentor and grown at 28 °C for 5 days. Fermentor cultivations were carried out using the same bioreactor system with pH, pO2, and temperature control. All flask and fermentor experiments were performed at least in triplicate.

Analysis of FK506 production. To assay FK506 production in the culture broths, a sample was withdrawn and extracted ultrasonically with an equal volume of methanol. The methanol layer was recovered by centrifugation at 12,000 rpm for 15 min. The concentration of FK506 was determined using an HPLC system (Agilent Series 1100, Agilent) equipped with an SB-C18 column (150 × 2.1 mm, Agilent). The column temperature was maintained at 60 °C and the UV detector was set at 215 nm. The mobile phase, at a flow rate of 1.0 mL/min, contained 0.02 M KH2PO4 (pH 3.5) solution and acetonitrile in a 40:60 ratio.

Genome sequencing and genome annotation. The nucleotide sequence of the S. tsukubaensis L19 genome was determined using massively parallel pyrosequencing technology (Roche 454 GS FLX). 160 contigs (>500 bp) with a total size of 9.0 Mb were assembled from 522,882 reads (average length 437 bp) using the Newbler software of the 454-suite package, providing 25.3-fold coverage. The relationships among contigs were determined by multiplex PCR, and gaps were filled by sequencing PCR products. The final sequence assembly was performed using the phred/phrap/consed package (http://www.phrap.org/phredphrapconsed.html), and low-quality regions were re-sequenced. Putative protein-coding sequences were determined by combining the predictions of glimmer 3.02 34 and the Z-Curve program 35. Functional annotation of CDSs was performed by searching the NCBI non-redundant protein database and the KEGG protein database. Protein domain prediction and COG assignment were performed by RPS-BLAST using the NCBI CDD library 36.

Production and purification of ACPs/PCP. The three ACP genes were amplified by PCR from genomic DNA of S. tsukubaensis L19 using the relevant primers. The resultant products were cloned directly into the pTA2 vector (Toyobo) and sequenced to confirm PCR fidelity. These genes were then digested with NdeI/HindIII and cloned into the same sites of pET28a (Novagen), yielding three ACP-containing plasmids, pET-AACP and pYY0081-pYY0082, respectively.
The PCP gene was cloned into pET28a-SUMO (Novagen), resulting in the PCP-containing plasmid pYY0098. Finally, these plasmids were introduced into E. coli BL21(DE3) to overproduce the proteins as N-terminally His6-tagged proteins. The ACPs were overproduced under standard conditions: BL21(DE3) harboring each expression plasmid was grown in LB medium with kanamycin at 37 °C until the OD600 reached 0.4; IPTG was then added to a final concentration of 0.4 mM and incubation continued at 37 °C for 4 h, resulting in overproduction of the proteins in soluble form with good yield. Purification of these proteins by affinity chromatography on Ni-NTA agarose (Qiagen) was performed under the standard conditions recommended by the manufacturer. The proteins were dialyzed against 20 mM Tris·HCl (pH 8.0), 25 mM NaCl, 1 mM DTT, and 10% glycerol.

Co-expression of ACP/PCP genes with PPTase genes. The five PPTase genes were amplified by PCR from genomic DNA of S. tsukubaensis L19 using the relevant primers. The resultant products were cloned directly into the pTA2 vector and sequenced to confirm PCR fidelity. These genes were then digested with NdeI/HindIII and cloned into the same sites of pET44a (Novagen), yielding five PPTase-containing plasmids, pYY0072-pYY0074 and pYY0077-pYY0078, respectively. BL21(DE3) harboring both an ACP/PCP-containing plasmid and a PPTase-containing plasmid was grown in LB medium with both kanamycin and ampicillin at 37 °C until the OD600 reached 0.4. IPTG was then added to a final concentration of 0.4 mM and incubation continued at 37 °C for 4 h. Purification and dialysis of the ACPs/PCP were performed as described above.

LC-MS analyses of ACPs/PCP. The ACPs produced in E. coli or from the phosphopantetheinylation reaction mixture were analyzed directly by LC-MS (Agilent 1200, Thermo Finnigan LCQ Deca XP MAX). LC separation was carried out on an Agilent SB-C18 column (3.5 μm, 80 Å, 2.1 × 150 mm, Agilent) at 35 °C. Solvent A consisted of 0.1% formic acid and solvent B of acetonitrile. The following binary gradient was used: 0-5 min, 5% B; 5-45 min, a linear gradient to 75% B; followed by 3 min of isocratic elution at 75% B and re-equilibration to the initial condition for 13 min, at a flow rate of 0.2 mL/min. UV detection was performed at both 254 nm and 280 nm. MS with an ESI source was operated as follows: positive mode; source voltage, 2.5 kV; capillary voltage, 41 V; sheath gas flow, 45 arb; aux/sweep gas flow, 5 arb; capillary temperature, 330 °C.

Construction of PPTase::aac(3)IV mutant strains. The PPTase gene disruption mutants were constructed using a modified PCR targeting system as follows 37. First, four cosmids, each containing one of FKPPT1, FKPPT2, FKPPT3, and FKPPT4, were identified by PCR amplification. Second, each disruption cassette aac(3)IV (Apra) was PCR amplified from pHY773 (Z. Qin, Institute of Plant Physiology and Ecology, Chinese Academy of Sciences, Shanghai, China, unpublished results), with the resulting product carrying 59 bp ends homologous to the corresponding region of the PPTase gene. Each PCR product was then introduced into E. coli BW25113 carrying pIJ790/cosmid, and transformed cells carrying the mutagenized cosmid were selected on LB agar containing apramycin. Each mutagenized cosmid, in which a PPTase gene was replaced with aac(3)IV, was confirmed by PCR analysis using the corresponding primers. Third, each mutagenized cosmid was transferred by conjugation from E. coli ET12567 carrying pUZ8002 into S. tsukubaensis L19.
Exconjugants were obtained by selection for apramycin resistance and were then inoculated onto ISP4 plates for two rounds of nonselective growth before selection, by replica plating, for thiostrepton-sensitive and apramycin-resistant colonies. Each resulting strain, in which a PPTase gene was replaced with aac(3)IV, was confirmed by PCR analysis using the corresponding primers.
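As a side note on the quantitation used in the HPLC/MS analyses above: the apo/holo scoring reduces to simple ratios of integrated peak areas (or peak heights). The following is a minimal illustrative sketch of that arithmetic; the numbers are placeholders, not data from this study.

```python
# Minimal sketch (placeholder numbers): fraction of a carrier protein
# converted to the holo form, from integrated HPLC peak areas.
def holo_fraction(holo_area: float, apo_area: float) -> float:
    """Fraction of total protein present as the phosphopantetheinylated (holo) form."""
    return holo_area / (holo_area + apo_area)

# e.g., a holo-to-apo area ratio of 0.67 corresponds to ~40% conversion:
print(round(holo_fraction(holo_area=0.67, apo_area=1.0), 2))  # 0.4
```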
4,606
2016-04-07T00:00:00.000
[ "Biology" ]
GTP Hydrolysis Is Essential for Protein Import into the Mitochondrial Matrix

Protein import into the innermost compartment of mitochondria (the matrix) requires a membrane potential (ΔΨ) across the inner membrane, as well as ATP-dependent interactions with chaperones in the matrix and cytosol. The role of nucleoside triphosphates other than ATP during import into the matrix, however, remains to be determined. Import of urea-denatured precursors does not require cytosolic chaperones. We have therefore used a purified and urea-denatured preprotein in our import assays to bypass the requirement for external ATP. Using this modified system, we demonstrate that GTP stimulates protein import into the matrix; the stimulatory effect is directly mediated by GTP hydrolysis and does not result from conversion of GTP to ATP. Both external GTP and matrix ATP are necessary; neither one can substitute for the other if efficient import is to be achieved. These results suggest a "push-pull" mechanism of import, which may be common to other post-translational translocation pathways.

Most (>95%) mitochondrial proteins are synthesized on cytoplasmic ribosomes. These proteins must be imported from the cytosol into the correct mitochondrial subcompartment (outer membrane, intermembrane space, inner membrane, or matrix). Import of preproteins into the innermost compartment (the matrix) is achieved by the coordinated action of two independent translocases, one in the outer and another in the inner membrane (1-4). A fundamental question is what drives the translocation of proteins into the matrix across two (outer and inner) membrane barriers. This process requires a membrane potential (ΔΨ) across the inner membrane and matrix ATP (5-18). The membrane potential (negative inside) is required for the initial insertion and partial translocation of the N terminus of the precursor containing a positively charged signal sequence across the inner membrane. Subsequent translocation is completed by ATP-dependent interactions in the matrix. The mechanisms of protein translocation across various membranes apparently share some key features (19, 20). One of the most striking examples is the participation of GTP in vesicular trafficking and in protein trafficking to the nucleus, endoplasmic reticulum, and chloroplasts (for review, see Refs. 21-24). However, so far there is no evidence that nucleoside triphosphates (NTPs) other than ATP have any direct role in targeting a precursor to the mitochondrial surface, providing energy for transmembrane movement, or for transient unfolding of preproteins en route to the matrix (5-10, 19). Mitochondrial import of preproteins synthesized in cell-free translation systems often requires cytosolic ATP-dependent interactions with chaperones (25-29). Consequently, the requirement of external ATP for import of native preproteins may not be bypassed. Due to the required presence of ATP, it has been difficult to determine the individual roles of NTPs during the import process. Import of denatured precursors, however, does not require cytosolic chaperones (30-33). Therefore, to investigate the GTP requirements for protein import into the mitochondrial matrix, we used a purified, urea-denatured precursor (delta-1-pyrroline-5-carboxylate dehydrogenase, pPut) in our assay. We demonstrate that the translocation of pPut into the matrix is stimulated by GTP.
The stimulatory effect of GTP is direct; it requires energy released by GTP hydrolysis and does not result from conversion of GTP to ATP. Our data strongly suggest that the mitochondrial translocation apparatus is powered by an external "push" mediated by GTP hydrolysis and an internal ATP-dependent "pull". Both are necessary and neither one can substitute for the other.

EXPERIMENTAL PROCEDURES

For the preparation of radiolabeled pPut, Escherichia coli BL21(DE3) cells carrying the plasmid pNYHM170 were induced with isopropyl-β-D-thiogalactopyranoside in the presence of [35S]Met, and the protein was subsequently purified as described (34). Mitochondria were isolated from Saccharomyces cerevisiae strain D273-10B (ATCC 24657) (28). Import assay buffer consisted of HSB (20 mM Hepes-KOH, pH 7.5, 0.6 M sorbitol, 0.1 mg/ml bovine serum albumin) containing 40 mM KOAc, 10 mM Mg(OAc)2, 5 mM unlabeled Met, and 1 mM dithiothreitol. Import was initiated by adding urea-denatured [35S]Met-labeled pPut (75 ng) to mitochondria (100 μg) in the assay buffer. The final urea concentration was 0.16 M; a final urea concentration as high as 0.6 M does not inhibit import of native precursors (30). Following import, reaction mixtures were treated with trypsin (0.1 mg/ml) for 30 min on ice. Samples were diluted with HSB containing 5 mg/ml soybean trypsin inhibitor, 100 units/ml Trasylol, and 1 mM phenylmethylsulfonyl fluoride. Mitochondria were sedimented and washed with 10% trichloroacetic acid. Samples were analyzed by SDS-polyacrylamide gel electrophoresis and autoradiography. Bands (pPut and mPut) were quantitated using the software NIH Image. Import of pPut (~64 kDa) into isolated mitochondria was assayed by the following criteria: (i) cleavage of the signal sequence (~3 kDa) by the matrix-localized signal peptidase, (ii) protection of the signal-less mature polypeptide (mPut, ~61 kDa) from digestion by an exogenous protease, and (iii) sedimentation of imported and protected molecules with mitochondria upon centrifugation. Import efficiency was calculated as the percent conversion of pPut to mPut as described (28). The ladders of bands in Figs. 1-4 are likely to result from the partial cleavage of molecules in transit to the matrix. These ladders are not the result of incomplete digestion of nonimported molecules by trypsin, since (i) when import was inhibited in the presence of valinomycin (which dissipates the ΔΨ across the inner membrane), pPut was completely digested by trypsin, and (ii) both precursor and mature forms of Put were completely digested by trypsin in the presence of Triton X-100 (not shown).

RESULTS

Kinetics of pPut Import-We examined the nucleotide requirements for import of urea-denatured pPut into the mitochondrial matrix at various temperatures. This is particularly important for two reasons. First, a wide range of import incubation temperatures and times have been reported in the literature; a thorough study of import kinetics, however, is lacking. Second, previous studies have concluded that GTP must be converted to ATP, presumably by the nucleoside diphosphate kinase (NDP kinase) located in the intermembrane space (6), to support protein import (9). Such an enzymatic exchange process is expected to be temperature-dependent.
The nucleotide specificity of mitochondrial protein import therefore needs to be tested at various temperatures. The data presented in Fig. 1 clearly demonstrate that, regardless of the temperature, GTP-mediated import was always much greater (2.4- to 50-fold) than the corresponding ATP-mediated import. Should GTP be converted to ATP to drive import, it is very unlikely that such a conversion would ever result in an import efficiency greater than that obtained with an equivalent amount of externally added ATP. Furthermore, the absolute import efficiencies in GTP-containing samples were significant, ranging from 14.3% (0 °C, 30 min, Fig. 1A, lane 11) to 36.1% (30 °C, 10 min, Fig. 1D, lane 11). On the other hand, ATP could drive import to levels above 5% only at 30 °C, but not at any lower temperature that we tested. Interestingly, after 5 min at 30 °C, GTP-mediated import reached a plateau, whereas ATP-mediated import was in the early stage of the ascending linear range (Fig. 1D). With longer incubation (~20-30 min) at 30 °C, the difference between GTP- and ATP-mediated import was much reduced (not shown). This could be due to the slow conversion of ATP to GTP (i.e., ATP + GDP ⇌ ADP + GTP), presumably by the NDP kinase located in the intermembrane space (6). This might explain why some earlier studies, in which import incubations were longer and carried out at higher temperatures, failed to detect the GTP stimulatory effect over that mediated by ATP (5-8). We conclude that (a) GTP is required for import of pPut into the matrix, and (b) GTP need not be converted to ATP to exert its stimulatory effect on import.

Fig. 1. GTP plays a direct and essential role in import into the matrix. Import of urea-denatured pPut was carried out for different time periods in the presence of ATP or GTP (1 mM each) at various temperatures. A, 0 °C; B, 5 °C; C, 10 °C; D, 30 °C. Samples were then subjected to trypsin treatment and analyzed. The quantitative data on import efficiency are presented below the autoradiograph in each panel.

Unlike preproteins synthesized in reticulocyte lysate, urea-denatured pPut was imported even on ice (Fig. 1A). Furthermore, import of denatured pPut did not require the addition of cytosolic chaperones. Similar results were previously reported using other precursors (30-33). The urea-denatured precursor circumvents at least one highly temperature-sensitive and rate-limiting step, which presumably represents the unfolding of the native precursor prior to import into isolated mitochondria (30). Subsequent experiments to monitor the nucleotide requirements for import of denatured pPut into the matrix were therefore performed at low as well as physiologically relevant temperatures.

GTP Is the Preferred Source of External Energy-We titrated (Fig. 2) the requirements for GTP (50 μM-2 mM, lanes 2-7) and ATP (100 μM-2 mM, lanes 8-12) for import of pPut. Considerable import was observed even on ice with physiological concentrations of GTP (~100 μM; Ref. 35), and little further dose-dependent stimulation was observed beyond 200 μM (Fig. 2A). Thus, the stimulation of mitochondrial import by GTP is physiologically significant. In contrast, ATP was much less effective; the amount of mPut that appeared in the presence of 2 mM ATP (Fig. 2A, lane 12) was only 65% of that obtained at one-fortieth that concentration of GTP (50 μM, Fig. 2A, lane 2). When reaction mixtures were shifted from ice to 30 °C (Fig. 2B), GTP-mediated import was further stimulated by only about 50% to 2-fold.
Under identical conditions, however, ATP-mediated import efficiency increased significantly (5- to 8-fold). Nevertheless, the GTP effect was greater than the corresponding ATP effect at all concentrations tested. Since the mitochondrial inner membrane is impermeable to GTP (36), these results suggest that GTP is the primary source of external energy, both at low (0 °C, Fig. 2A) and physiologically relevant temperatures (30 °C, Fig. 2B).

Efficient Import Requires External GTP and Matrix ATP-The complicated issue of the interconversion of NTPs by mitochondrial NDP kinase could perhaps best be resolved by using a specific inhibitor of the enzyme. Unfortunately, no such inhibitor is known. Alternatively, highly specific NTPases can be used to selectively deplete a particular NTP. For example, of all NTPs tested, only ATP can act as a phosphate donor substrate for E. coli glycerol kinase, which converts glycerol to glycerol-3-phosphate; the reaction is essentially irreversible (37). This enzyme has been used as a means of selectively degrading mitochondrial ATP while sparing other NTPs (8). This ATP-scavenging system is likely to degrade both surface-bound ATP and ATP in the intermembrane space, since the mitochondrial outer membrane is presumably not a barrier to the rapid equilibration of NTPs between the cytosol and the intermembrane space. We therefore used glycerol kinase to determine the NTP specificity of import in our assays. Mitochondria were pretreated with glycerol kinase plus glycerol, and the ATP trap was present during import as well. Whereas ATP-mediated import was completely abolished (Fig. 3, lanes 2 and 4), GTP-mediated import was essentially unaffected (Fig. 3, lanes 3 and 5), thereby confirming our earlier conclusion (Figs. 1 and 2) that GTP need not be converted to ATP to exert its effect.

To delineate the individual roles of matrix ATP and external GTP, we performed the following experiments. Mitochondria were preincubated with efrapeptin (a potent inhibitor of the F1 moiety of the mitochondrial ATPase) to block respiration-driven ATP synthesis, and also with glycerol kinase (8). Both efrapeptin and the ATP trap were present during the entire import reaction (Fig. 3, lanes 6-12). GTP-mediated import into these ATP-deprived mitochondria was reduced by about 60% (compare lanes 3 and 7), suggesting that both matrix ATP and external GTP are required for efficient import. Indeed, when ATP-deprived mitochondria were supplemented with α-ketoglutarate (αKG) to regenerate matrix ATP (10) through substrate-level phosphorylation via the tricarboxylic acid cycle, the inhibition was almost completely relieved (compare lanes 3, 7, and 10). In the absence of GTP, matrix ATP was completely ineffective (lane 12). How can we rule out the possibility that the generation of ATP from GTP by NDP kinase in the intermembrane space was extremely rapid, and that the ATP so generated immediately entered the matrix via the ADP/ATP carrier, thereby escaping the trap? This seems unlikely for two reasons: (a) carboxyatractyloside (which blocks the ADP/ATP carrier) did not interfere with the reversal of inhibition (compare lanes 10 and 11); and (b) no import was detected with ATP alone, αKG alone, or αKG plus ATP (lanes 6, 8, and 9, respectively). Thus, efficient import requires both matrix ATP and external GTP; neither can replace the other.

Fig. 2. GTP-mediated import is physiologically significant. Import of urea-denatured pPut was carried out in the presence of increasing concentrations of GTP (lanes 2-7) or ATP (lanes 8-12). One set (panel A) of reactions was incubated for 30 min on ice; the other set (panel B) was incubated for an additional minute at 30 °C. Samples were subjected to trypsin treatment and analyzed. The import efficiency was plotted against the concentration of ATP or GTP.
GTP Hydrolysis Is Essential-To determine whether mere GTP binding or GTP hydrolysis is necessary for import, translocation of pPut was measured (Fig. 4) in the absence or presence of the nonhydrolyzable analog GTPγS, either on ice (lanes 2-5) or with an additional incubation at 30 °C (lanes 6-9). Under both conditions, GTP-mediated import was strongly inhibited by GTPγS in a dose-dependent manner. GTPγS alone was completely ineffective at promoting import (not shown). These data indicate that GTP hydrolysis is essential for import into the matrix.

DISCUSSION

Although GTP plays an important role in protein trafficking across various membranes, thus far there has been no evidence that GTP plays a direct role in mitochondrial protein import (for review, see Ref. 19). Our data demonstrate that GTP hydrolysis does indeed play a direct, essential role in the translocation of proteins into the mitochondrial matrix, and that mitochondria can now be added to the list of trafficking systems in which GTP hydrolysis plays a critical role. Thus, the data presented here confirm earlier predictions that protein translocation across membranes shares certain key features (19, 20). GTP was previously found to substitute for ATP in promoting import of preproteins into the matrix (5-8). It was subsequently concluded that GTP must first be converted to ATP, presumably by the NDP kinase located in the intermembrane space (6), to support protein import (9); the ATP thus generated could enter the matrix via the ADP/ATP carrier and drive translocation. Those studies were performed using ATP-depleted mitochondria (9). We found that GTP-mediated import into apyrase-pretreated mitochondria was much lower than import into untreated mitochondria (not shown). It is possible that GTP is necessary, but not sufficient, for efficient protein import; specifically, NTPs might exert their effect only in the presence of adequate amounts of matrix ATP. Indeed, our data demonstrate that GTP need not be converted to ATP to exert its stimulatory effect on import. Both matrix ATP and external GTP are necessary, and neither one can substitute for the other if efficient import is to be achieved. These conclusions are derived from the following results. (i) Physiological concentrations of GTP can drive significant import (Fig. 2). (ii) GTP hydrolysis is essential for import to proceed (Fig. 4). (iii) GTP-mediated import is much greater than ATP-mediated import regardless of the NTP concentration, incubation temperature, or time that we tested (Figs. 1 and 2); depending on the conditions, GTP-mediated import is 2.4-50 times more efficient. Should GTP be converted to ATP to exert its effect on import, GTP-mediated import would be very unlikely to exceed that obtained with an identical concentration of externally added ATP. (iv) The absolute GTP-mediated import efficiency is comparable to that achieved by both ATP and GTP (not shown). (v) A specific ATP-scavenging system (such as glycerol kinase and glycerol) does not inhibit GTP-mediated import (Fig. 3). (vi) GTP-mediated import into ATP-deprived mitochondria is significantly reduced but not completely abolished.
Supplementing these mitochondria with αKG to regenerate matrix ATP quantitatively relieves the inhibition, even in the presence of an ATP trap. Matrix ATP alone is ineffective in driving import (Fig. 3).

What, then, are the energetics of mitochondrial import? Two models have been proposed to explain how ATP can assist in pulling a polypeptide into the mitochondrial matrix (for review, see Ref. 19). According to the Brownian ratchet model (38, 39), movement of the translocating chain is accomplished by Brownian motion acting in concert with the mt-Hsp70-Tim44 complex. This complex binds the incoming polypeptide chain as it emerges on the matrix side of the inner membrane channel (40-43). Since the translocating chain can oscillate randomly within the channel (44), binding of this complex on the matrix side prevents backward movement of the chain. In the translocation motor model (45), binding of the polypeptide chain to the mt-Hsp70-Tim44 complex stimulates ATP hydrolysis by mt-Hsp70. This leads to a conformational change in mt-Hsp70, which then actively pulls the bound precursor into the matrix. Neither of these models, however, explains the energy requirements of translocation before a sufficient length of the N terminus has reached the matrix. We propose that GTP hydrolysis shifts the equilibrium of polypeptide chains on the cis and trans sides of the inner membrane to one favoring a trans (matrix) location; subsequently, mt-Hsp70-Tim44 cycles trap the incoming polypeptide. In addition to a ΔΨ across the inner membrane, a push-pull mechanism might be critical to mitochondrial protein import: GTP hydrolysis on the cis side and ATP-dependent processes on the trans side of the inner membrane presumably constitute the respective push and pull. The energy of GTP hydrolysis could improve the fidelity of the translocation process by actively pushing the polypeptide-in-transit through the import channels, or it could help maintain the polypeptide in a translocation-competent conformation during import.

Fig. 3. Both external GTP and matrix ATP are necessary for efficient import. Mitochondria were preincubated at 10 °C for 10 min in the absence or presence of either glycerol kinase (GK) or glycerol kinase plus efrapeptin as described (8). The final concentrations of efrapeptin and GK in the import assay were 5 μg/ml and 2.8 units/ml (1.4 units/mg mitochondrial protein), respectively. All samples contained 10 mM glycerol. Where indicated, 100 μg/ml carboxyatractyloside (CAT) was included and incubated for 5 min on ice before the reaction mixtures were supplemented with ATP (1 mM), GTP (1 mM), αKG (5 mM), or a combination. Import of urea-denatured pPut was carried out at 30 °C for 4 min. Samples were subjected to trypsin treatment and analyzed. The quantitative data on import efficiency are presented below the autoradiograph.

Fig. 4. GTP hydrolysis is required for import into the matrix. Mitochondria were preincubated with different concentrations of GTPγS for 5 min on ice. GTP (100 μM) and urea-denatured pPut were then added. One set of reactions was incubated for 30 min on ice (lanes 2-5); the other set was incubated for an additional 2 min at 30 °C (lanes 6-9). Samples were then subjected to trypsin treatment and analyzed. The import efficiency was plotted against the concentration of GTPγS.
Recently, two GTP-binding proteins and an Hsp70 homolog with a large domain exposed to the intermembrane space have been identified as components of the outer membrane import machinery of chloroplasts (46, 47). These GTP-binding proteins have been suggested to be involved in regulating precursor binding at the chloroplast surface (46). It is also possible that one or both of these GTP-binding proteins, in concert with Hsp70, provide the respective push and pull for post-translational transmembrane movement of proteins across the outer membrane of chloroplasts. Although several constituents of the mitochondrial translocation machinery have been identified (for review, see Ref. 48), none is reported to have a GTP-binding motif. The precise role of GTP can be determined only after identification and characterization of novel GTP-binding proteins and/or GTPases involved in mitochondrial translocation. The GTP-dependent mitochondrial translocation system described here will facilitate the identification of unknown components of the translocation apparatus and assist in revealing the mechanisms and energetics involved in the insertion and translocation of preproteins into and across mitochondrial membranes.
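As an aside on the quantitation underlying the figures above: import efficiency was scored as the percent conversion of pPut to mPut from band intensities quantitated with NIH Image. A minimal sketch of one plausible reading of that calculation follows; the band intensities are placeholders, not data from the study.

```python
# Minimal sketch (placeholder intensities): import efficiency as the
# percent conversion of precursor (pPut) to processed mature form (mPut).
def import_efficiency(pput_band: float, mput_band: float) -> float:
    """Percent of total quantitated signal present as mPut."""
    return 100.0 * mput_band / (pput_band + mput_band)

# e.g., an mPut band carrying 30% of the total signal gives 30.0% efficiency:
print(f"{import_efficiency(pput_band=70.0, mput_band=30.0):.1f}%")
```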
4,690
1998-01-16T00:00:00.000
[ "Biology", "Computer Science" ]
Support vector machine with optimized parameters for the classification of patients with COVID-19

Introduction. The COVID-19 pandemic has had a significant impact worldwide, especially in health, where it is crucial to identify patients at high risk of clinical deterioration early. Objective. This study aimed to design a model based on the support vector machine (SVM) algorithm, optimizing its parameters, to classify patients with suspected COVID-19. Methodology. One thousand patient records from two health establishments in Peru were used. After data preprocessing and variable engineering, the sample was reduced to 700 records. The construction of the model followed a machine learning methodology, using the linear, polynomial, sigmoid, and radial kernel functions, along with their estimated optimal parameters, to ensure the best performance. Results. The results revealed that the SVM model with the linear and sigmoid kernels presented an accuracy of 95%, surpassing the polynomial kernel with 94% and the radial kernel (RBF) with 94%. In addition, a value of 0.92 was obtained for Cohen's kappa, which measures the degree of agreement between the predictions of the machine learning model and the actual results, indicating excellent agreement for the linear and sigmoid kernels. Conclusions. In conclusion, the SVM model with linear and sigmoid kernels could be a valuable tool for identifying patients at high risk of clinical deterioration in the context of the COVID-19 pandemic.

INTRODUCTION

On December 21, 2019, doctors in China reported cases of atypical pneumonia in dozens of patients in the city of Wuhan (Mojica Crespo & Morales Crespo, 2020). Taking into account the accelerated growth of infections and deaths that occurred in early 2020, the Chinese authorities submitted a report to the World Health Organization (Reyes Nuñez & Simón Domínguez, 2020). On March 11, 2020, the WHO, considering the spread dynamics and its great danger, declared the SARS-CoV-2 pandemic, mentioning that it was a highly lethal virus, virologically similar to SARS-CoV-1 (HCoV-229E) that appeared in 2009, and that it would change the course of human history (De León et al., 2020). The SARS-CoV-2 pandemic has modified the dynamics of development in different institutions worldwide (Zarei et al., 2022). In health, immediate attention is required in many activities, such as timely diagnosis and revitalizing early care in health establishments, as a fundamental task of organizations (Rodríguez Yago et al., 2020). In health facilities worldwide, a comprehensive clinical spectrum has been observed in patients with COVID-19, with levels of severity ranging from an asymptomatic course to acute respiratory distress syndrome (ARDS) and even death (Aljameel, Khan, Aslam, Aljabri, & Alsulmi, 2021). This has generated a complex problem in health establishments that did not have the necessary equipment and sufficient personnel to respond to timely patient care objectively (Martínez Chamorro, Díez Tascón, Ibañez Sanz, Ossaba Vélez, & Borruel Nacenta, 2021). The lack of an early diagnosis, and of acting on time in treatment, increases the clinical risk of the patient with COVID-19. As a consequence, there is a high percentage of deaths in many countries of the world, including Peru. In many cases, these patients present high comorbidity, multiple pathologies, and specific signs and symptoms (Rivera & del Pino Casado, 2020).
It is necessary to have mechanisms that allow the classification of patients in real time to optimize treatment time in a scenario of accelerated growth in SARS-CoV-2 infections (Mohammad, Aljabri, Aboulnour, Mirza, & Alshobaiki, 2022). In many countries, it has been suggested to use respiratory triage and severity identification scales, as well as mortality risk scores, in patients with suspected infection. The British Thoracic Society maintains that the National Early Warning Score (NEWS) makes it possible to detect patients with SARS-CoV-2 infection in emergency services to decide on admission or hospitalization, in addition to clinical judgment tools such as the pneumonia severity index (PSI) or the CURB-65 scale (confusion, urea, respiratory rate, blood pressure, and age ≥ 65 years) (Romero Hernández et al., 2020). It is necessary to apply efficient classification mechanisms in patients with COVID-19 who go through potentially complex and changing pathophysiological situations (Li et al., 2021). Knowing the clinical condition of the patient and classifying it according to clinical range is the first step for treatment and stabilization, which should be done as early as possible (Lalueza et al., 2015; Castellanos & Figueroa, 2023; Aljameel, Khan, Aslam, Aljabri, & Alsulmi, 2021). In hospitals where there is a continuous increase in patients with COVID-19, the classification process must be automated (Aftab et al., 2022) to serve as a support system for the health professional and provide a timely response (Dinar et al., 2022; Olusegun Oyetola et al., 2023).

Among the traditional alternatives, minimum distance and maximum likelihood are statistical classifiers that depend on a multivariate normal distribution of the data in each class. If the assumption of a normal distribution for each class is correct, then the classification has a minimal probability of error, and the maximum likelihood classifier is the appropriate choice (Kavzoglu & Colkesen, 2009). Indeed, the maximum likelihood method has limitations related to the assumption of normality and restrictions on the input data (Fuentes Marmolejo & Medina Parra, 2021; Simhan & Basupi, 2023).

An alternative is to design an automatic learning model for classifying patients according to clinical range with the support vector machine (SVM) (Jain, Shankar, & Devi, 2020), a machine that can train and learn (Sánchez Gómez, 2019). The importance of models based on machine learning lies in the possibility of making classifications and predictions with different models such as k-nearest neighbors, Bayes classifiers, decision trees, the support vector machine, etc. (Véliz Capuñay, 2020). In addressing non-linear and multivariate classification problems, the support vector machine efficiently solves classification and prediction tasks (Guhathakurata, Kundu, Chakraborty, & Banerjee, 2021). Its success is due to its solid foundation in mathematical optimization; in addition, it has powerful tools and algorithms to find solutions in a non-linear context. A support vector machine aims to find an optimal hyperplane that separates one class from another, maximizing the distance between the points of different categories (Pisner & Schnyer, 2020).
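To make the margin idea concrete, the following minimal sketch (not part of the original study) fits a linear SVM on synthetic two-class data with scikit-learn and reports the margin width, 2/||w||, for the learned hyperplane w·x + b = 0:

```python
# Minimal sketch (synthetic data): a linear SVM and its maximum margin.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Two well-separated toy classes stand in for two patient categories.
X, y = make_blobs(n_samples=100, centers=2, random_state=0)

clf = SVC(kernel="linear", C=1.0)
clf.fit(X, y)

w, b = clf.coef_[0], clf.intercept_[0]  # hyperplane parameters
margin = 2.0 / np.linalg.norm(w)        # distance between the two margin boundaries
print(f"margin width: {margin:.3f}, support vectors: {len(clf.support_vectors_)}")
```

Replacing kernel="linear" with "poly", "sigmoid", or "rbf" gives the non-linear variants compared later in this paper.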
In a real classification or regression application, the data are often not linearly separable, so it is necessary to design a non-linear support vector machine; this is possible by incorporating additional polynomial features (Ahmad, 2018; Campos Sánchez et al., 2022). Adding features is easy to implement and works well with all machine learning algorithms, but this method cannot handle complex data sets at a low polynomial degree, and at high polynomial degrees it creates many features, making the model too slow (Géron, 2020; Sebo et al., 2023). A viable alternative that avoids this combinatorial explosion is to apply the SVM with polynomial features added implicitly (Zohair et al., 2021). Another strategy, when a set of elements is not linearly separable, is to transform the original space by means of a non-linear function into a Hilbert space (Díaz-Chieng et al., 2022; Campo León, 2017; Rincon Soto & Sanchez Leon, 2022). The main objective of the kernel functions (i.e., linear, polynomial, radial basis, and sigmoid) is to maximize the margin between hyperplanes (Ahmad, 2018; Marinho de Sousa et al., 2022). The theory of reproducing kernel Hilbert spaces shows that kernel functions correspond to a dot product, which induces a linear space of greater dimension than the original space, possibly infinite (Aronszajn, 1944). This fact allows any linear algorithm to be reproduced in a Hilbert space or, equivalently, means that for any such algorithm there is a non-linear version. This is known as the kernel trick (Tharwat, 2019). To our knowledge, no SVM-based model has been reported that optimizes its parameters and compares kernel functions to accurately classify patients with suspected COVID-19. Hence, this research aims to design a model based on the SVM algorithm, optimizing its parameters, for the classification of patients with suspected COVID-19.

LITERATURE REVIEW

Significant advances have been made in classifying patients with suspected COVID-19, for example by identifying chest X-ray images using the support vector machine. Kesav & Jubukumar (2022) mention that machine learning has advanced to solve a wide range of biomedical problems with high precision. Their research uses a deep learning mechanism to identify chest X-ray images of patients with COVID-19; using the Bayesian optimization technique, the support vector machine was found to be the most accurate among several compared classifiers. From the analysis of research on the application of the support vector machine to the classification of patients with suspected COVID-19, studies were identified in 52 Scopus articles, 11 IEEE Xplore articles, and 17 Web of Science articles, covering classification from X-rays, comparison of models for predicting patients with COVID-19, and sentiment analysis on the management of the pandemic, among other studies divergent from the present investigation. Dilmi (2022) developed an approach for automatically diagnosing COVID-19 using chest X-ray images and the AlexNet, VGG16, and VGG19 deep learning architectures to extract useful and relevant features, which were then used as inputs to a support vector machine with two discrete outputs: COVID-19 or no findings. Furthermore, he used the Bayesian optimization (BO) algorithm to fit the parameters of the SVM classifier and choose the optimal ones. The study's results indicate that the VGG16-SVM-BO and VGG19-SVM-BO models give the best performance, with an accuracy of 99.47%.
Kesav & M.G. (2022) present an investigation that uses a deep learning mechanism to identify chest X-ray images of patients with COVID-19 and patients with pneumonia in two- and three-class scenarios. The proposed approach employs the GoogLeNet architecture to extract features that are fed into different classifiers. With the Bayesian optimization technique, the kernel SVM was the most accurate among several compared classifiers. The model showed an overall accuracy of 98.31% for the two-class classification between COVID-19 and non-COVID-19 chest X-ray images and 98.60% for the three-class classification problem between COVID-19, healthy, and viral pneumonia radiographs. The proposed system outperformed several existing architectures and was tested using smaller data sets to ensure robustness. Jeng & Hsieh (2021) used the SVM supervised machine learning algorithm to build a model to analyze and predict the presence of COVID-19 in a person based on the symptoms experienced. Hyperparameters such as degree, cost, and gamma, and kernels including linear, radial, polynomial, and sigmoid, were tuned using R Studio to achieve the best possible model performance. The model was tested by ten-fold cross-validation, and the results show that the polynomial kernel with optimized hyperparameters produced the best accuracy, 98.02%. Singh et al. (2020) aimed to produce real-time forecasts using the SVM model, investigating the prediction of confirmed, deceased, and recovered COVID-19 cases. Such predictions help in planning resources, determining government policy, providing survivors with immunity passports, and using survivors' plasma for care. Data, including attributes such as location, confirmed cases, deaths, recoveries, longitude, and latitude, were collected worldwide from January 22, 2020, to April 25, 2020. SVM was used to explore the impact on identification, deaths, and recovery. The research of Zoabi, Deri, & Shomron (2021) and Ramírez Moncada et al. (2022) has been oriented towards the effective detection of SARS-CoV-2 to allow a rapid and efficient diagnosis of COVID-19 and mitigate the burden on medical care, for which they developed prediction models that combine several characteristics to estimate the risk of infection. These models are intended to assist medical personnel worldwide in triaging patients, especially where healthcare resources are limited. They established a machine learning approach trained on the records of 51,831 people tested (of whom 4,769 were confirmed to have COVID-19). The model they developed predicted COVID-19 test results with high accuracy using only eight binary characteristics: sex, age ≥ 60 years, known contact with an infected individual, and the occurrence of five initial clinical symptoms. After reviewing the literature, no studies were found that designed a machine learning model based on a support vector machine that optimizes its parameters and compares the different kernels. Therefore, we aim to build a model based on the support vector machine algorithm to classify patients. The model was implemented in Python, a general-purpose programming language with a variety of packages for data science.

MATERIALS AND METHODS

This research follows the positivist paradigm, with a quantitative approach, an observational design without intervention, and a predictive level. For the development of the machine learning model, a methodology for machine learning design was applied (Géron, 2020).

3.1. Data collection.
One thousand records of patients diagnosed with SARS-CoV-2 infection admitted by the emergency service in health centers in Peru were collected. The variables considered were: age, gender, weight, height, respiratory rate, oxygen saturation, systolic blood pressure, heart rate, and temperature. Since the study is observational without intervention, informed consent was not requested, and the confidentiality of the data has been maintained. 3.2. Data preprocessing. Preprocessing is a fundamental phase performed before the analysis; it ensures that the data are suitable for training and testing the model. In the present investigation, the data were analyzed with the Pandas package of the Python programming language. The data and records were validated with respect to clinical risk to guarantee the adequate classification of the model; missing data were imputed; the characteristics were standardized; and the data imbalance issue was fixed. After analyzing, reducing the features, and balancing, a sample of 700 records was considered. 3.3. Selection of learning algorithm. Among the various machine learning tools, the support vector machine was chosen for this study, since it allows classification and prediction while optimizing the different kernels and comparing the results. 3.4. Implementation of the model and training. The program was developed using the Python package scikit-learn. A training set comprising 70% of the sample and a validation set comprising the remaining 30% were used, with a program designed to improve the results iteratively. In this way, the error is minimized in each training run, guaranteeing an incremental improvement in the model's efficiency. 3.5. Evaluation and validation. It was verified that the model performs an efficient classification, for which the confusion matrix was calculated. The performance of the model was optimized over the linear, polynomial, radial, and sigmoidal kernels, adjusting the cross-validation by incorporating the Grid Search algorithm (Rios, Ulloa, & Borello Gianni, 2019), which allowed the C and gamma parameters to be calculated automatically. To compare the performance of the different kernels, sensitivity, specificity, and Cohen's kappa metrics were used. We consider false positives (FP) to be observations erroneously classified as positive: people who do not have COVID-19 whom the model classified as COVID-19 (+). False negatives (FN) are observations erroneously classified as negative: people who have COVID-19 whom the model classified as COVID-19 (-). In addition, correctly classified results correspond to true positives (TP), people who have COVID-19 and whom the model classified as such. RESULTS During the data collection phase, 1,000 medical records of patients diagnosed with COVID-19 were identified in two health facilities in Peru. The data of patients who met the validation criteria regarding clinical risk were recorded. After performing the preprocessing and analysis of variables, which included reviewing missing values, selecting variables, and resolving the imbalance problem based on the clinical range, a sample of 700 records was considered. The sample presented a mean age of 59 years with a standard deviation of 15.90. In addition, it was found that 53.28% of the patients were male, and 63.71% were female. The sklearn package of the Python programming language was used to implement the model.
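A minimal sketch of the training and optimization pipeline described in Sections 3.4 and 3.5, using scikit-learn; the file name and column names are hypothetical placeholders, while the kernel list and the C and gamma grids follow the values reported in the Results section.

```python
import pandas as pd
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import classification_report, cohen_kappa_score

df = pd.read_csv("patients.csv")           # hypothetical file
X = df.drop(columns=["clinical_risk"])     # hypothetical label column
y = df["clinical_risk"]                    # mild / moderate / severe

# 70% training, 30% validation, as described in Section 3.4
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, random_state=42, stratify=y)

scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# Grids taken from the parameter ranges reported in the Results section
param_grid = {"kernel": ["linear", "poly", "rbf", "sigmoid"],
              "C": [0.1, 1, 10, 50, 100, 500, 1000, 10000],
              "gamma": [1, 0.1, 0.01, 0.001, 0.0001]}
search = GridSearchCV(SVC(), param_grid, cv=5).fit(X_train, y_train)

y_pred = search.best_estimator_.predict(X_test)
print(search.best_params_)
print(classification_report(y_test, y_pred))
print("Cohen's kappa:", cohen_kappa_score(y_test, y_pred))
```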
The model's training was carried out using 70% of the random samples from the clinical histories, while the remaining 30% were used for testing. The results of the evaluation show that, when using the sigmoid kernel, the support vector machine model obtained a high total success rate of 95%. When predicting patients with a severe clinical range, it presented an accuracy of 96% and a sensitivity (recall) of 91%. For patients with a moderate clinical range, it achieved an accuracy of 94% and a sensitivity of 93%. As for patients with a mild clinical range, it exhibited an accuracy of 94% and a sensitivity of 100%. In the case of the radial kernel (RBF), the model achieved a total hit rate of 94%. When predicting patients with a severe clinical range, it showed an accuracy of 93% and a sensitivity of 91%. For patients with a moderate clinical range, it achieved an accuracy of 94% and a sensitivity of 92%. As for patients with a mild clinical range, it exhibited an accuracy of 96% and a sensitivity of 100%. To guarantee the external validity of the classifying model and avoid its dependence on the number of characteristics of the data, different values of the C parameter (0.1, 1, 10, 50, 100, 500, 1000, 10000) and of gamma (1, 0.1, 0.01, 0.001, 0.0001) were explored using an optimization process. The cross-validation technique was applied to the training matrix, and the Grid Search algorithm was used to obtain the best parameters for each kernel. In order to make an adequate comparison, the model's accuracy was evaluated, and Cohen's kappa index was calculated to measure the agreement between the categorizations obtained and the reference labels. These measures make it possible to assess the accuracy and consistency of the model in its ability to classify cases correctly. A comparison was made using the accuracy metric, representing the percentage of cases the model classified correctly. The results showed that both the linear kernel and the sigmoid kernel obtained an accuracy of 95%, which is higher than the accuracy of the polynomial kernel (94%) and of the radial kernel (RBF), also 94%. In addition, Cohen's kappa index, which measures the degree of agreement between the predictions made by the machine learning model and the actual results, was calculated. A value of 0.92 was obtained for the linear and sigmoid kernels, which indicates an excellent agreement between the model predictions and the actual results (Table 2). Given that the model based on a support vector machine is applied in the health system, we are interested in keeping false negatives low, since it would be detrimental for the patient to receive a negative COVID-19 diagnosis when in reality it is positive; we must therefore opt for a higher sensitivity value, where false negatives are lowest among the implemented kernels. We opted for the linear and sigmoid kernels over the polynomial and radial kernels, as they have an accuracy of 95%, a higher sensitivity value, and a Cohen's kappa value of 0.92, higher than that of the polynomial and radial kernels.
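Since the kernel choice above is driven by sensitivity (keeping false negatives low), here is a short sketch of how per-class sensitivity and specificity can be derived from the confusion matrix; y_test and y_pred are assumed to come from a fitted model as in the earlier snippet.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

cm = confusion_matrix(y_test, y_pred)        # rows: true class, columns: predicted class
for i, label in enumerate(np.unique(y_test)):
    tp = cm[i, i]
    fn = cm[i, :].sum() - tp                 # missed cases of this class
    fp = cm[:, i].sum() - tp                 # other classes labeled as this one
    tn = cm.sum() - tp - fn - fp
    print(label,
          "sensitivity:", tp / (tp + fn),    # fewer FN -> higher sensitivity
          "specificity:", tn / (tn + fp))
print("Cohen's kappa:", cohen_kappa_score(y_test, y_pred))
```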
CONCLUSIONS The data for the proposed model were obtained from 1000 records of patients with suspected SARS-CoV-2 infection admitted by the emergency service in health centers in Peru. The variables considered were: age, gender, weight, height, respiratory rate, oxygen saturation, systolic blood pressure, heart rate, temperature, and diagnosis based on mild, moderate, and severe clinical risk. The data were analyzed with the Pandas package of the Python programming language. The data and records were validated with respect to clinical risk to guarantee the adequate classification of the model; missing data were imputed; the characteristics were standardized; and the data imbalance issue was fixed. After analyzing, reducing the features, and balancing, a sample of 700 records was considered. Different kernel functions were used for performance analysis, and the parameters for the linear, radial, sigmoid, and polynomial kernel functions were optimized. The comparison was made with the accuracy metric, which measures the percentage of cases that the model classified correctly: 95% for the linear and sigmoid kernels, and 94% for the polynomial and radial kernels. Likewise, Cohen's kappa was calculated to carry out an adequate evaluation of the prediction; a value of 0.92 was obtained, indicating excellent concordance of the forecast for the linear and sigmoid kernels, while the polynomial and radial kernels obtained a value of 0.90. The rapid triage process for patients with COVID-19 reduces the number of clinical consequences and healthcare costs. The timely classification of patients with COVID-19 will allow better management of hospital surveillance, prevention, and control strategies. Ethical aspects. The Office of the Vice President for Research at the José Faustino Sánchez Carrión National University approved this study (RCU No. 0334 2020-CU-UNJFSC). Informed consent was not requested; since the information is secondary, it was obtained directly from medical records, and the confidentiality of the data was respected. Acknowledgements. We thank the Office of the Vice President for Research of the José Faustino Sánchez Carrión National University for promoting the research and providing funding for this research. Financing. This work was supported with ordinary resources from the José Faustino Sánchez Carrión National University (Huacho) and by the authors. Conflicts of interest. We declare that we have no conflicts of interest in this study.
4,851
2023-06-20T00:00:00.000
[ "Medicine", "Computer Science" ]
Micro-Channel Oscillating Heat Pipe Energy Conversion Approach of Battery Heat Dissipation Improvement: A Review: The application of batteries has become more and more extensive, and the heat dissipation problem cannot be ignored. The Oscillating Heat Pipe (OHP) is a good means of heat dissipation. In this paper, the methods to improve the energy conversion and flow thermal performance of micro-channel OHP are studied and summarized. The working principle, heat transfer mechanism, advantages and applications of PHP are also introduced in detail in this study. Proper adjustment of the micro-channel layout can increase the heat transfer limit of PHP by 44%. The thermal resistance of two-diameter channel PHP is 45% lower than that of conventional PHP. The thermal resistance of PHP under uneven heating can be reduced to 50% of the original. PHP pulse heating can alleviate the phenomenon of dry-out. Different working fluids have different effects on PHP. The use of graphene nano-fluids as the work medium can reduce the thermal resistance of PHP by 83.6%. A work medium obtained by mixing different fluids has the potential to compensate for the defects of a single fluid while inheriting its advantages. Introduction The efficient and stable transfer of heat is the basis for the safe operation of batteries [1]. The OHP was proposed by Akachi in 1990 [2,3] and is also named the pulsating heat pipe (PHP). OHP has the advantages of high thermal efficiency, simple structure, easy miniaturization and a high degree of customization, and is considered to have broad prospects in the fields of battery cooling, energy-saving transmission [4] and superconducting cooling [5]. When an OHP works, the phase-change heat transfer is mainly carried out through the circulating flow of liquid plugs and vapor plugs randomly distributed in the pipeline [6]. Evaporation of the liquid film and the heat transferred and generated by the Taylor bubbles and the flowing fluid [7,8] drive the work fluid through the pipe. Micro-channel PHP consists of three parts: the evaporation section, the adiabatic section and the condensation section. The evaporation section is placed at the hot end to absorb heat; the condensation section is placed at the cold end to dissipate heat [9]. The structure of PHP is simple, but its heat and mass transfer mechanism is still in the exploratory stage [10,11]. PHP is generally studied by simulating the work-fluid flow in the tube [12] or by experiment [13,14]. The research on PHP mainly focuses on the heating conditions [15], the PHP structure [16,17], the effect of the work fluid on the properties of PHP [18,19] and the flow [20]. The work fluid has a great influence on the thermal performance of PHP. Nano-fluids have become a research hotspot in recent years due to their excellent physical properties and their ability to enhance the performance of PHP. The heat transfer function of the micro-channel PHP is mainly completed by heat convection and heat conduction. Geometry changes the flow of the work medium and the conduction path of the heat, which will affect the energy conversion of the micro-channel PHP. The heat transfer methods include convection heat transfer, phase change latent heat transfer, etc. The factors of pressure difference, friction, inertial force, capillary force and gravity play an important role in coupling [39]. At the same time, the heat conduction of the tube wall cannot be ignored [40].
The initial distribution of the work fluid in the tube is not uniform. The vapor and liquid plugs appear at random positions, which leads to different pressure distributions in various parts of the tube and causes random oscillation of the work fluid. Thermal Convection of the PHP During the operation of the micro-channel PHP, the pressure difference between the evaporation section and the condensation section pushes the work medium to the condensation section. The work medium flows back to the evaporation section after condensation, and there is convective heat transfer in this process. Heat exchange by liquid convection is the main heat transfer mode in the PHP. The convective heat transfer coefficient of the two-phase flow in the vertical pipeline is given in Equations (1) and (2) [41]. The latent heat transfer coefficient of the phase transition is given in Equation (3) [42]:

$$h_{e,lat} \approx h_{c,lat} = \frac{k_l \left(1 + 3.35\,Ca^{2/3}\right)}{0.67\,Ca^{2/3}\,D_h} \tag{3}$$

where h_{w-f} and h_{f-w} are the heat transfer coefficients of convective boiling and convective cooling, W/(m²·K); k_l is the thermal conductivity of the liquid, W/(m·°C); h_l is the single-phase heat transfer coefficient, W/(m²·K); q_in is the heat flow, J/s; P is the perimeter, m; P_cr is the perimeter of the section, m; Bo is the boiling number; x is the vapor mass of the evaporation section, kg; ρ_l and ρ_v are the liquid and vapor densities of the fluid, kg/m³; Ca is the capillary number of the micro-channel; D_h is the hydraulic diameter of the micro-channel, m; and h_{e,lat} and h_{c,lat} are the latent heat transfer coefficients of the evaporation and condensation sections, W/(m²·K). The flow of the work fluid inside the PHP causes the heat conduction rate along the inner wall to be out of sync with the heat convection rate. The temperature gradient along the wall is greater than the temperature gradient along the fluid, which results in convective heat transfer between the fluid and the wall [43]. The local convective heat flux density q between the fluid and the inner wall is shown in Equation (4), where k is the thermal conductivity of the micro-channel; T_env is the ambient temperature, °C; R_env is the thermal resistance between the channel and the environment, assumed to be equal to 0.1 (m²·K)/W; T is the temperature, K; r is the normalized cross-correlation function; c_p is the constant-pressure specific heat capacity, J/(kg·K); and z is the coordinate, m.
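For orientation, a small numerical sketch of Equation (3) as reconstructed above; the film-thickness form and the sample property values are assumptions, so the snippet illustrates the scaling with the capillary number rather than the authors' exact calculation.

```python
def latent_htc(k_l: float, Ca: float, D_h: float) -> float:
    """Latent heat transfer coefficient of Eq. (3): h = k_l / delta, with the
    liquid film thickness delta = 0.67*Ca**(2/3)*D_h / (1 + 3.35*Ca**(2/3))."""
    delta = 0.67 * Ca ** (2 / 3) * D_h / (1 + 3.35 * Ca ** (2 / 3))  # film thickness, m
    return k_l / delta  # W/(m^2*K)

# Illustrative water film in a 1 mm channel (assumed values)
print(latent_htc(k_l=0.68, Ca=1e-3, D_h=1e-3))  # on the order of 1e5 W/(m^2*K)
```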
Fluid convection will influence the choice of materials and dimensions when the evaporation section is designed [44]. The use of convection cooling in the condensation section can increase the cooling rate. A higher cooling air flow rate leads to an increase in the convective heat transfer coefficient, which accelerates the cooling rate of the evaporation and condensation sections [45]. When the heat input increases to a certain value, the evaporation section is likely to dry out completely, which leads the PHP to reach its heat transfer limit. An increase in cooling air flow rate can raise the heat transfer limit of the PHP. Heat Conduction of PHP The heat transfer performance of FP-OHP is weakened because the transverse heat conduction of adjacent channels reduces the temperature gradient needed for the self-excited oscillation of the work medium [46]. The heat transfer rate of the tube wall is listed in Equation (5) [47], where T_i is the average temperature at the beginning, °C; T_o is the average temperature at the end, °C; L is the length, m; A_s is the cross-sectional area, m²; k is the thermal conductivity of the material, W/(m·K); r_i is the inner radius, m; and r_o is the outer radius, m. Due to the presence of tube heat conduction, the work fluid within the PHP cannot enter a stable state without the generation of air bubbles. Although bubble generation may not be directly involved in the development of the first oscillations, it plays a crucial role in preventing the oscillations from stopping [48]. The pipe material and section have a great influence on PHP startup [49]. The heat diffusion equation of the liquid plug i in the pipe is given in Equation (6), and q″_W, the heat flux density at the pipe wall, is given in Equation (7) [50], where ρ_l is the density of the liquid plug i, kg/m³; c_{p,l} is the specific heat capacity of the liquid plug i, J/(kg·K); λ_l is the thermal conductivity, W/(m·K); T_i is the temperature of the liquid plug i, K; Nu is the Nusselt number; and T_H and T_L are the evaporator and condenser temperatures, K. PHP has significant advantages of high efficiency and energy saving, and thus wide application potential. PHP does not require the assistance of wicks or other structures; it relies on self-excited two-phase flow to function properly. Figure 2 shows the structure of PHP. Ji Y et al. [51] fabricated a polydimethylsiloxane (PDMS) PHP using an aluminum mold, consisting of only 5 turns of interconnecting channels bonded to a PDMS plate. Zhao et al. [52] designed a copper closed-loop pulsating heat pipe (CLPHP): a red copper tube bent five times and welded, which can be regarded as a copper tube containing no structures other than the bends. Wu et al. [53] designed and fabricated a PHP for cooling metal cutting tools, reducing the tool temperature by 5-15%. The PHP is a copper tube repeatedly bent through holes in the tool to absorb the heat generated by the tool; there is only a curved structure in the tube. Mahajan et al. [54] used PHP for waste heat recovery in ventilation systems with a recovery power of 240 W. The traditional heat pipe heat exchanger for waste heat recovery has an internal wicking structure, sintered screen or coaxial grooves; the PHP is made by bending and welding a single copper pipe and its structure is relatively simple.
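Equation (5) itself is not reproduced in the extracted text; the functions below are the standard axial and radial wall-conduction forms consistent with the symbols listed for it (an assumption, not necessarily the authors' exact expression), with illustrative copper-tube values.

```python
import math

def q_axial(k, A_s, T_i, T_o, L):
    """Axial conduction along the tube wall, W: q = k*A_s*(T_i - T_o)/L."""
    return k * A_s * (T_i - T_o) / L

def q_radial(k, L, T_i, T_o, r_i, r_o):
    """Radial conduction through the tube wall, W: q = 2*pi*k*L*(T_i - T_o)/ln(r_o/r_i)."""
    return 2 * math.pi * k * L * (T_i - T_o) / math.log(r_o / r_i)

# Illustrative copper tube (assumed values): k = 398 W/(m K), 20 K temperature difference
print(q_radial(k=398.0, L=0.1, T_i=80.0, T_o=60.0, r_i=1.0e-3, r_o=1.5e-3))
```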
Zhao et al. [55] used copper tube PHP with expanded graphite/graphite as the work fluid for thermal energy storage, which improved the safety of thermal management of power electronic equipment. The pipe body of the PHP is made of bent and welded copper pipes. Alizadeh et al. [56] conducted a numerical simulation of a single-turn CLPHP for cooling photovoltaic modules. The use of PHP can increase the power generation of photovoltaic panels by about 18%. The single-turn CLPHP is an end-to-end quartz glass tube with a liquid-filling hole and no additional complicated structure. Energy Saving with Excellent Heat Transfer Performance PHP has excellent heat transfer performance, and it can be used as a heat exchanger in a heat recovery system or a solar collector system to save energy. Figure 3 shows the experimental setup for AGPHP heat recovery. The use of PHP heat exchangers in air conditioning systems can reduce energy consumption by up to 14% [57]. Liu et al. [58] applied anti-gravity PHP to waste heat recovery. The test results showed that the heat recovery efficiency of anti-gravity PHP (AGPHP) was more than 1.66 times that of pure copper rods. Deng et al. [59] tested a high-temperature exhaust waste heat recovery device based on anti-gravity PHP; the experimental device is displayed in Figure 4. The measured heat absorbed by AGPHP is 228% of that of pure copper meandering strips, and the heat recovery efficiency is much better than that of the traditional copper medium. Monroe et al. [60] achieved power generation while transferring heat by means of magnets and coils coupled in series with the PHP work medium. The maximum and average power generation at a heat input of 200 W were 428 µW and 15.3 µW, respectively. This approach has broad development potential in remote areas without power coverage. Li et al. [61] studied a graphene/water-ethylene glycol nano-suspension PHP for low-temperature heat recovery. The measured minimum thermal resistance was 0.36 K/W, which can effectively improve the recovery efficiency of the low-temperature heat recovery system. Khodami et al. [62] designed a PHP-based waste heat recovery device to recover waste heat from stack exhaust gas; the energy conversion rate was up to 22% in the test. Xu et al. [63] integrated PHP into a solar collector for heat transfer, and the measured thermal resistance was as low as 0.26 °C/W. The thermal efficiency of the PHP-integrated solar collector was as high as 50%. High Efficiency for Heat Dissipation PHP can be used to cool objects with the high heat flux density of electronic components and keep them within a safe temperature range.
Experiments show that the heat transfer coefficient of multi-walled carbon nanotube nano-fluid PHP is 130% of that of conventional copper fins [64]. The thermal resistance of PHP at 800 rpm is 0.925 °C/W [65] when applied to cooling rotating equipment. Czajkowski et al. [66] measured the thermal resistance of a rotating flower-shaped PHP: it decreased to 0.012 °C/W with the increase in centrifugal acceleration. The structure is given in Figure 5, and it has good application prospects in the field of heat dissipation of high heat flux devices. Ji et al. [67] fabricated and tested a high-temperature liquid metal PHP using sodium-potassium alloy as the work fluid. The thermal resistance of the high-temperature liquid metal PHP was at least 0.08 °C/W at a working temperature above 500 °C. The low-temperature PHP with a cylindrical shell condenser studied by Sagar K R et al. [68] has an effective thermal conductivity of 16,350 W/(m·K) at a filling rate of 76%, which is about 32.7 times that of solid copper rods under the same conditions. Thompson et al. [69] tested multilayer Ti-6Al-4V PHP fabricated with a selective laser melting process; its effective thermal conductivity was improved by 400-500% compared to solid Ti-6Al-4V. Alizadeh et al. [70] conducted a numerical analysis of CLPHP heat dissipation for solar photovoltaic panels and found that the improvement rate of solar photovoltaic panels with CLPHP was 35.3%.
Extensive Application and Promotion Micro oscillating heat pipes (MPHP) can be fabricated by manufacturing micro-scale channels on silicon chips with microelectromechanical systems technology [71]. Liu et al. [72] tested the heat transfer performance of a silicon-based micro-oscillating heat pipe (MOHP) with an optimal filling rate of 53%. Dang et al. [73] carried out a numerical simulation of a PHP cooling rack used to cool a central processing unit (CPU). The results showed that under a load of 1380 W, the CPU temperature with the PHP cooling rack was not more than 60 °C. Figure 6 shows the cooling arrangement of the PHP. Qu et al. [74] measured a minimum thermal resistance of 5.5 °C/W for a silicon-based MPHP, with a startup time of less than 200 s. Kelly et al. [75] studied a radial PHP for the local heat dissipation of electronic equipment; in their experiment, the radial PHP reduced the hotspot temperature by 23 °C. Kearney et al. [76] studied the operation of PHP embedded in electronic equipment. The embedded PHP can operate normally under a heat flux density of at least 2.5 W/cm². Jang et al. [77] tested the heat transfer performance of an ultra-thin plate PHP for mobile electronic equipment. The thermal resistance of the ultra-thin plate PHP at inclination angles of 90° and 0° was 3 °C/W and 3.6 °C/W, which is 63% and 56% lower than that of a graphite sheet, respectively. Torresin et al. [78] tested a new type of PHP cooler; in the experiment, the influence of gravity was negligible and the lowest measured thermal resistance was 27 K/kW. Qu et al. [79] studied the standardization of PHP structures in the battery management systems of new energy vehicles based on a flexible PHP made of a fluororubber tube. The heat transfer performance of the PHP structures is in the order of "I" shape, "ladder" shape, "inverted U" shape and "N" shape. When a battery thermal management system is designed, the PHP can be selected according to this standard [80]. Chen et al. [81] tested a TiO2 nano-fluid PHP for lithium iron phosphate battery thermal management. They measured a maximum battery temperature of 35.86 °C with a temperature gradient of 1.15 °C; the improvement rate was 77% and the minimum thermal resistance was 0.098 °C/W. Ling et al. [82] proposed a cooling method for electronic devices which combines phase change material (PCM) with 3D PHP. The new cooling method can control the surface temperature of electronic devices below 100 °C, about 35 °C lower than the air-cooling method, with a thermal resistance reduction of 36.3%. Wang et al. [83] studied 3D OHP for photovoltaic cell cooling. The 3D OHP with sintered copper particles added in the evaporation section could keep the temperature of photovoltaic cells below 57 °C. Wang et al. [84] studied a thermal management system for a lithium-ion power battery pack based on PCM/OHP. The maximum energy saving rate was 81.8% after using the PCM/OHP battery management system. Wei et al. [85] tested a plug-in PHP for the thermal management of electric vehicle batteries. Under a power input of 56 W, the minimum thermal resistance of the PHP is 0.193 °C/W. The average temperature of the battery pack can be controlled below 46.5 °C and the maximum temperature difference is 1-2 °C. Mosleh et al. [86] used PHP instead of fins in an air-cooled heat exchanger.
The heat transfer coefficients of the air-cooled heat exchanger under natural and forced convection were increased by 310% and 263%, respectively, after using PHP instead of fins. Wang et al. [87] studied the application of PHP to LED heat dissipation based on PHP with sintered copper particles. The experimental setup is shown in Figure 7: Figure 7a is the LED heat sink, Figure 7b is the front side of the LED chip and Figure 7c is the back side of the PCB board. The addition of sintered copper particles is beneficial to the startup of PHP, since it promotes the oscillating movement. The maximum temperature of the LED can be controlled below 70 °C. Qian et al. [88] studied PHP for heat dissipation in the grinding wheel grinding area and showed that PHP can operate normally when the heat flux density is lower than 24,000 W/m². The application of PHP in space has made great progress in recent years. Radiation PHP used for space applications requires an appropriate amount of heat input to start working at a lower operating temperature [89]. Iwata et al. [90] tested a metal flexible PHP for spacecraft. The maximum thermal conductivity of the metal flexible PHP can reach 0.8 W/(m·K), and the dynamic stiffnesses of the Y-axis and Z-axis are not more than 0.2 N/mm, smaller than that of graphite. Slobodeniuk et al. [91] designed a PHP composed of molybdenum and a sapphire cover plate for parabolic flight activities. Based on the We and Ga numbers defined in Equation (8), the PHP was evaluated: the average We number obtained equaled the reference critical value (We_crit = 4), and the Ga number (1980) was much higher than the reference critical value (Ga_crit = 160). Here ρ_l, ρ_v, g, D_crit, σ, v, Re and µ_l are the liquid and vapor densities, gravitational acceleration, critical channel diameter, surface tension, liquid slug velocity, Reynolds number and liquid dynamic viscosity, respectively.
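Equation (8) is also lost to extraction; the sketch below uses common definitions of the Weber and Galileo numbers consistent with the symbol list (an assumption, not the authors' verified forms) and compares them with the critical values We_crit = 4 and Ga_crit = 160 quoted above.

```python
def weber(rho_l: float, v: float, D_crit: float, sigma: float) -> float:
    """We = rho_l * v^2 * D_crit / sigma (inertia vs. surface tension)."""
    return rho_l * v ** 2 * D_crit / sigma

def galileo(rho_l: float, rho_v: float, g: float, D_crit: float, mu_l: float) -> float:
    """Ga = rho_l * (rho_l - rho_v) * g * D_crit^3 / mu_l^2 (gravity vs. viscosity)."""
    return rho_l * (rho_l - rho_v) * g * D_crit ** 3 / mu_l ** 2

# Illustrative water/vapor slug in a 2 mm channel (assumed properties)
We = weber(rho_l=998.0, v=0.2, D_crit=2e-3, sigma=0.072)
Ga = galileo(rho_l=998.0, rho_v=0.6, g=9.81, D_crit=2e-3, mu_l=1.0e-3)
print(We, We >= 4.0, Ga, Ga >= 160.0)
```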
Methods for Improvement of Energy Conversion of Micro-Channel PHP Although the structure of micro-channel PHP is relatively simple, its heat and mass transfer mechanisms are not yet clear, and its heat transfer performance is affected by many factors. Improving the energy conversion efficiency of micro-channel PHP is an important way to enhance its heat transfer performance. Influence of Section When the inner diameter of the PHP section is too large, the surface tension effect on the work medium decreases; the work medium tends to stratify under gravity and cannot work stably. When the inner diameter is too small, the work medium cannot overcome the oscillating flow resistance of the liquid plug between the cold and hot ends, which leads to startup failure of the PHP [92]. Jiaqiang E et al. [93] proposed a new type of narrow-tube closed PHP with retraction that can enhance the heat transfer performance of the fixed-direction oscillation cycle. The average heat transfer coefficient of the new narrow-tube closed PHP was increased by 52.28% compared with that of the conventional PHP [94], and the average Prandtl number (the ratio of momentum diffusivity to thermal diffusivity of the fluid) was increased by 25.49% compared with that of the conventional heat pipe. Hua C et al. [95] found the thermal resistance of rectangular-channel PHP is only 30-40% of that of circular-channel PHP, and the temperature difference between the evaporation and condensation sections is 10-20 °C lower than that of circular-channel PHP. Figure 8 shows heat pipe structures with multiple elbows made from different materials. A variable-diameter structure reduces the sensitivity of PHP to gravity and enhances heat transfer performance by increasing the pressure difference [96]. Tseng C Y et al. [97] studied the influence of alternating pipe diameters on the heat transfer performance of CLPHP based on a CLPHP with 2.4 mm pipe diameter. Table 1 lists some studies on PHP cross-sectional forms. The thermal resistance and startup power of the CLPHP with alternating pipe diameters were lower than those of conventional CLPHP. Markal B [98,99] studied the influence of the dual section ratio on PHP based on tapered PHP. The thermal resistance of tapered PHP with a dual section ratio is reduced by 28.4% compared with conventional PHP, and it is not easily affected by gravity. The internal pressure fluctuation caused by the unequal hydraulic diameters of adjacent pipes gives the asymmetric micro pulsating heat pipe better heat transfer performance than the symmetric one. Micro-channel OHP has remarkable potential applications in battery heat management systems and electronic device cooling, as listed in Table 2; the minimum thermal resistance is 3.4 °C/W [100]. Kwon G H et al. [101,102] studied the flow and heat transfer characteristics of dual-diameter channel PHP. The thermal resistance of dual-diameter channel PHP is 45% lower than that of conventional PHP.
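The too-large/too-small diameter window described at the start of this subsection is usually quantified with the capillary (critical diameter) criterion; the formula below is that widely used rule of thumb, stated here as an assumption since the extracted text does not reproduce it.

```python
import math

def critical_diameter(sigma: float, rho_l: float, rho_v: float, g: float = 9.81) -> float:
    """Largest inner diameter, m, for which surface tension keeps distinct liquid
    slugs and vapor plugs instead of gravity-stratified flow:
    D_crit = 2 * sqrt(sigma / (g * (rho_l - rho_v)))."""
    return 2.0 * math.sqrt(sigma / (g * (rho_l - rho_v)))

# Water at room temperature (approximate properties): D_crit is about 5.4 mm
print(critical_diameter(sigma=0.072, rho_l=998.0, rho_v=0.6))
```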
When the pressure difference generated by the channel diameter difference is greater than the frictional pressure drop, the work medium can move without gravity, as displayed in Figure 9. Figure 9a shows the thermal conductivity strongly affected by gravity when the number of dual-diameter channels is 1; Figure 9b shows the thermal conductivity hardly affected by gravity when the number of dual-diameter channels is 3. Yang K S et al. [103] studied the flow characteristics of silicon-based MPHP pipes with different widths. Micro-channels with alternating widths introduce an unbalanced capillary force that promotes the movement of vapor and liquid slugs. Tseng C Y et al. [104] proposed a new type of double-pipe PHP whose thermal resistance can be as low as 0.0729 K/W. Characteristics of Turns Too many OHP turns easily lead to excessive flow resistance of the work medium in the pipe, while when the number of turns is too small, the oscillation of the work medium is more likely to stop. There is no recognized standard for the selection of the number of PHP turns, which hinders the large-scale application of PHP [105]. Qian N et al. [106] described the startup process of single-loop PHP through a second-order dynamic system control equation. The startup speed of single-loop PHP depends on the type of work medium and the heating power. Mameli M et al. [107] developed a numerical model for predicting the heat transfer performance of PHP: flow reversal prevented a 3-turn CLPHP from operating in the horizontal position, while a 9-turn CLPHP could operate horizontally. Spinato G et al. [108] found the thermal resistance of single-circuit PHP reached its lowest value under high heat load and low filling rate, with film evaporation being the main local heat transfer mechanism. Lee et al. [109] studied the influence of turns on the heat transfer limit based on MPHP with 5, 10, 15 and 20 turns; the results are given in Figure 10. The influence of gravity on the maximum allowable heat flux of MPHP decreases with the increase in turns. Noh H Y et al. [110] studied the characteristics of 2-turn PHP, whose heat transfer performance was affected by the mass flux of the work medium. Kim B et al. [111] tested single-loop, parallel and 2-turn PHP.
Under low heating power, the thermal resistance of the 2-turn PHP is smaller than that of the parallel PHP. Under high heating power, the influence of the pressure drop outweighs the increase in disturbance, which makes the thermal resistance of the 2-turn PHP larger than that of the parallel PHP. Figure 11 shows the pipeline structures of some PHPs. The pipeline structure of PHP affects the flow pattern and distribution of the work fluid. Kim W et al. [112] compared the influence of cavity size on heat transfer performance based on MPHP with cavities (10, 20, 30, 40 µm) and without cavities. The power required for startup of the MPHP with cavities was 50% lower than that without cavities. Kang Z et al. [113] studied a kind of PHP with partition walls using a numerical method. The heat transfer performance of PHP with partition walls can be improved by 14% compared with conventional PHP. The maximum equivalent thermal conductivity of PHP with the partition wall on the inner side of the channel is about 1194 W/(m·K), and about 1977 W/(m·K) when the partition wall is located in the middle of the channel. Qu J et al. [114] studied the heat transfer performance under vertical heating based on micro-groove PHP; the maximum effective thermal conductivity of the PHP was 41.8 kW/(m·°C) at a 40% filling rate. Lim J et al. [115] tested the influence of the channel arrangement on plate MPHP under local heating. The amplitude of oscillation of the liquid slug in randomly arranged channels is larger than in a uniform channel arrangement, which improves heat transfer performance by 32%. Kim J et al. [116] and Wang J et al. [117] studied the influence of the lengths of the evaporation and condensation sections on PHP. Heat Transfer Performance Improvement of Pipeline Structure As shown in Figure 12, the evaporation section is more likely to dry out with the increase in the length of the condensation section, while the heat exchange area of the MPHP improves with the increase in the length of the condensation section. Increasing the length ratio of the evaporation section to the condensation section helps the CLPHP start and also reduces thermal resistance. Sedighi et al. [118,119] manufactured an additional-branch PHP with a two-stage bubble pump in the evaporation section and compared its heat transfer performance with that of the conventional FP-PHP.
The bubble pump enhanced the flow cycle, which resulted in less temperature fluctuation of the additional-branch PHP. Kim et al. [120] carried out a visual study of the oscillatory motion of the work medium in asymmetric MPHP. Two flow phenomena were observed: an oscillatory eruption mode (periodic pressure change) and a circulation mode (the temperature rise in the evaporation section causes the expansion of the vapor plug and the generation of circulation). Chiang C M et al. [121] established a model for predicting the asymmetric MPHP oscillation motion. The stronger oscillation motion caused by the larger average temperature difference between the evaporation section and the condensation section enhanced the heat transfer performance. Okazaki et al. [122] compared the conventional serpentine PHP with a closed-loop ring PHP. The thermal resistances are almost the same, which indicates that the design ideas for PHP pipelines can be more diversified. Liu et al. [123] tested the heat transfer performance of a double serpentine channel flat plate OHP under multiple heat sources. The average equivalent thermal conductivity of the double serpentine channel flat plate OHP is 5.8 times that of a pure 6063 aluminum alloy plate, while the weight is only 83.6% of that of a pure 6063 aluminum alloy plate with the same geometry. Fonseca et al. [124] designed and tested a helium-based PHP comprising 3 sub-PHPs; the maximum effective thermal conductivity was 55,000 W/(m·K). Wang et al. [125] studied single-loop PHP with a corrugated structure at different positions. A corrugated structure in the evaporation section reduced the startup time by 28.96%. He et al. [126] promoted unidirectional flow in 3D CLPHP through tandem conical nozzles; the lower forward pressure drop alleviated the dry-out phenomenon, with a lowest thermal resistance of 0.87 K/W. Table 3 summarizes the improvement of the heat transfer performance of OHP by some pipeline structures. Yeboah et al. [127] designed an experiment for testing a copper spiral OHP with ethanol, methanol and deionized water as work fluids. Ebrahimi et al. [128] added interconnection channels in FP-PHP to enhance heat transfer and increased the working power range of the FP-PHP. Qu et al. [129,130] studied 3D OHP with 1-5 layers and reported that the thermal resistance of the four-layer 3D OHP is the smallest when the heating power is less than 100 W. A 3D OHP with fewer copper tube layers transfers less heat, and a 3D OHP with more layers has a higher demand for heat input. The thermal resistance of two- to five-layer 3D OHP is about 0.23 °C/W when the heating power is 100 W. The 3D OHP and 2D OHP were compared with paraffin as the work medium, as shown in Figure 13. Figure 13a is the structural diagram of the 2D OHP and 3D OHP.
Valves and Fins of the Work Medium The use of valves helps PHP to promote and maintain the oscillation cycle of the work medium, which improves heat transfer performance and stability. Ando et al. [131,132] investigated the effect of check valves on PHP startup and heat transfer performance. The effective thermal conductivity of the check-valve PHP under normal gravity is about 6000 W/(m·K), which is 30 times that of conventional aluminum alloy. No loss of work fluid was observed during operation of the PHP with a check valve, and it can operate stably in space for 4 years. The PHP achieves stable startup when the check valve is located near the condensation or insulation section. Fairley et al. [133] studied the effect of Tesla valves on PHP based on time-frequency analysis. The Tesla valves effectively reduced the occurrence of intermittent high-energy oscillations in the evaporation section of the PHP by promoting circulating flow. De Vries et al. [134] found that Tesla valves reduced the thermal resistance of PHP by about 14% by facilitating the circulation of the work fluid. Thompson et al. [135] observed the effect of Tesla valves on the internal flow of FP-PHP using neutron radiography. The Tesla valves promote circulating flow in the PHP, and the thermal resistance is reduced by about 15 to 25%. Feng et al. [136] studied the influence of the position of a spring-loaded check valve on the heat transfer performance of CLPHP; the experimental apparatus is shown in Figure 14. The thermal resistance of the CLPHP with a check valve is 25% lower than that of conventional CLPHP, and the influence of gravity is weakened. Bhuwakietkumjohn et al. [137] discovered that the flow pattern in a PHP pipe with a check valve changes from annular flow/segmented plug flow to segmented plug flow/bubble flow. Check valves, gravity and asymmetric heating all promote the flow cycle of the work medium; their synergy can enhance heat transfer, and when the promoters of the loops move in opposite directions, the heat tolerance of the PHP is enhanced [138]. Daimaru T et al. [139] simulated PHP with a check valve and observed localization of the liquid plug in the condensation section. The addition of fins helps to increase the heat transfer rate of PHP. Rahman et al. [140,141] studied the effect of fins on PHP; the use of fins in the condensation section can enhance the heat transfer effect significantly.
Qu et al. [142] introduced micro fins in PHP, which reduced the thermal resistance by up to 41.7%. The effective thermal conductivity could reach 86,262 W/(m·K), about 216 times that of bulk copper. Material Properties for Heat Transfer The pipe body of PHP plays a certain role in the heat transfer. Odagiri et al. [143] established a 3D heat transfer model of an aluminum flat PHP. In the simulation, the temperature difference in the thickness direction of the aluminum PHP was relatively small (0.1 °C), and the ratio of the maximum superheat of the hotspot to the average evaporation section temperature was between 9 and 11%. The equivalent thermal conductivity of polypropylene flat PHP is up to 6 times that of polypropylene sheets of the same size [144]. The effective thermal conductivity of polycarbonate PHP is up to 7000 W/(m·K) [145]. The residual sintered powder at the edge of Ti-6Al-4V PHP caused the work medium to exhibit wicking behavior. This wicking increases the capillary pumping capacity, which reduced the gravity sensitivity and startup power of the PHP [146]. Bhramara [147] analyzed the heat transfer characteristics of copper PHP, with results consistent with experimental data. Lim J et al. [148] tested the heat transfer performance and stability of a flexible OHP (FOHP) made of laminated film and low-density polyethylene.
Figure 15 shows the flexible OHP (FOHP) bent while vertical under heating. The thermal resistance of the FOHP is 2.41 K/W, which is 37% lower than that of copper OHP. The service life of the FOHP is equivalent to 306 days in the standard atmosphere, 18 times that of conventional polymer OHP. Qu et al. [149] tested the heat transfer properties of FOHPs of different structures consisting of fluoroelastomer materials and micro-slot copper tubes. Figure 16a is the schematic diagram of the different FOHP structures and Figure 16b shows photographs of them. The bending of the insulation section leads to pressure loss, which reduces the startup and heat transfer performance of the FOHP; the heat transfer performance of the FOHP structures is, from high to low, "i" shape, "step" shape, "inverted U" shape and "N" shape. PHP heat transfer performance can also be improved by adjusting the wettability of the inner walls of the pipe [150]. Hao et al. [151] found the amplitude, velocity and liquid film length in super-hydrophilic and hydrophilic pipe-wall PHP were higher than those in copper PHP. The thermal resistance of the four-circle hydrophilic pipe-wall PHP is reduced by about 5 to 15%, and the thermal resistances of the six-circle super-hydrophilic and hydrophilic PHP are reduced by 5 to 15% and 15 to 25%, respectively. Betancur-Arboleda et al. [152] studied the effect of surface treatment on the heat transfer properties of pipes based on copper PHP with different degrees of inner wall roughness. The thermal resistance of mixed-sanding PHP (which uses standard sandpaper Grit N100 and Grit N1200 grinding in the evaporation section and the condensation section) is 60% of that of conventional PHP. Xie et al. [153] conducted chrome-plating experiments on the inner wall of water-filled PHP aluminum tubes, which can reduce the thermal resistance of the PHP to about 30% of the original; the stable working time was more than 5 times that of untreated PHP.
Heat Source Impact on the Temperature

In contrast to continuous heating, pulse heating changes the output power by repeatedly switching the heat source on and off. Taft B S et al. [154] found that pulse-width-modulated (PWM) power input does not affect the thermal resistance of PHP. The "injection-shrinkage" phenomenon of the work medium during pulse heating causes pressure fluctuations in the tube that enhance the heat transfer capacity [155]. In practice, PHP is susceptible to uneven heating. Mangini et al. [156] tested a mixed PHP in uneven heating mode for space applications: uneven heating can promote work medium circulation and improve the overall heat transfer performance, reducing the thermal resistance of the PHP by up to 8.7% under normal gravity, although excessive unevenness tends to dry out the sections receiving the higher heating power. Jang D S et al. [157] used a dimensionless heat difference to express the degree of inhomogeneity, as displayed in Equation (9), where Q_1 and Q_2 are the heat inputs of the two heat sources and Q_total is the total heat input. The thermal resistance and temperature difference increase as the dimensionless heat difference increases.
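The equation itself did not survive extraction here. A dimensionless heat difference consistent with the stated definitions would plausibly read

\[ \Delta Q^{*} = \frac{Q_1 - Q_2}{Q_{\mathrm{total}}}, \qquad Q_{\mathrm{total}} = Q_1 + Q_2, \]

so that \( \Delta Q^{*} = 0 \) corresponds to uniform heating and \( |\Delta Q^{*}| \to 1 \) to fully one-sided heating. This is a reconstruction from the surrounding definitions; the exact form of Equation (9) in [157] may differ.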
Chen et al. [158] tested the heat transfer performance of a series two-channel plate PHP under uneven heating; the experimental setup is shown in Figure 17. Under uneven heating, the PHP shows better heat transfer performance at low heating power, with a thermal resistance about 15.3% of that of a pure 6063 aluminum alloy plate of the same size. At higher heating power, however, the heat transfer performance of the series two-channel flat plate PHP is even weaker than under uniform heating. Zhao et al. [159] studied the work medium motion and heat transfer mechanism of PHP under different heating modes based on mathematical models: the heat transfer performance of PHP increased by more than 6% under uneven heating, and when the heating cycle under uniform pulse heating is short, the fluid oscillation maintains stable alternate heating and the dominant heat transfer increases by 25%. Based on a topology optimization method, Lim et al. [160] proposed a channel layout design for a plate MPHP under local heating; experimental comparison showed the design can reduce the thermal resistance of the MPHP by 50%.

Pressure Fluctuations of the PHP

The fluctuation of pressure in PHP is closely related to the generation of bubbles and liquid film. Pipe pressure affects the lengths of the vapor plugs and liquid plugs, which in turn changes the heat transfer performance of the PHP. Nine et al. [161] estimated the heat transfer performance of PHP by means of a pressure spectrum between the evaporation and condensation sections; the PHP had the lowest thermal resistance (about 0.25 °C/W) and the maximum pressure fluctuation with 2 wt% Cu/water nano-fluid as the work medium.
Qu et al. [162] studied the effect of initial pressure on PHP: as the initial pressure increased from 0.007 MPa to 0.065 MPa, the thermal resistance at a heating power of 140 W increased by 493%. The average temperatures of the evaporation and condensation sections increased and decreased, respectively, as the initial pressure rose. The PHP filling rate affects the pressure fluctuations, and the start-up power increases as the filling rate increases [163]. Barua et al. [164] found that the heat transfer performance of PHP depends on the work medium, filling rate and heating power. Fonseca et al. [165] studied the effect of filling rate on heat transfer performance in a low temperature PHP, as given in Figure 18; the PHP has an effective thermal conductivity of 70,000 W/(m·K) at a filling rate of 20%. More heating power produces more bubbles, which increases the pressure fluctuations.
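The thermal resistance values quoted throughout this section (e.g., about 0.25 °C/W in [161]) follow the usual definition R = (T_evap - T_cond)/Q, evaluated from time-averaged wall temperatures. A minimal sketch in Python, with hypothetical thermocouple readings rather than data from any cited study:

# PHP thermal resistance from time-averaged wall temperatures:
#   R = (mean T_evaporator - mean T_condenser) / Q
# The readings below are hypothetical, not data from [161].
evap_temps = [78.2, 79.1, 77.8, 78.6]   # evaporator thermocouples, deg C
cond_temps = [53.4, 52.9, 53.7, 53.1]   # condenser thermocouples, deg C
Q = 100.0                               # steady heat input, W

t_evap = sum(evap_temps) / len(evap_temps)
t_cond = sum(cond_temps) / len(cond_temps)
R = (t_evap - t_cond) / Q               # deg C per W
print(f"R = {R:.3f} C/W")               # -> R = 0.252 C/W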
Limitations of Current Research

(1) Studies of micro-channel layout mainly focus on the thermal properties of PHP with a particular layout design and do not propose specific design specifications as a reference [166]; (2) studies of pipeline structure are still at the stage of relating geometry changes to heat transfer performance, and an in-depth description of the mechanism by which pipeline structure affects PHP heat transfer is lacking [167,168]; (3) studies of materials and work fluids are not linked to manufacturing and cost [169]. The work fluid is believed to be one of the factors with the greatest influence on PHP. Due to the complex hydrodynamic properties of the work medium, the heat and mass transfer mechanisms are difficult to study [170]. Certain work fluids, such as nano-fluids, have complex properties that are not yet fully understood [171], and the stability of nano-fluids is a major problem for PHP applications [172]; (4) current research on PHP work fluids mainly focuses on the heat transfer performance or flow of PHP with a particular work medium [173] and lacks selection criteria for the work medium under different conditions, so the characteristics of the work fluid can only be tested passively in experiments [174].

Future Trends

Optimizing the micro-channel layout is one future research direction for PHP, since appropriate adjustment of the layout can promote cyclic heat transfer [175]; Lee et al. [176] introduced a micro-stick array into the PHP micro-channel layout to increase the maximum permissible input power by 44%. (1) Prediction of PHP heat transfer performance. Qian et al. [177] used grey-system theory to predict the heat transfer performance of an axially rotating OHP with an error of 3.36 to 16% using only a small amount of data, improving the usability of PHP in industrial applications. Machine learning is also applied to predict the heat transfer performance of PHP [178,179], which reduces the cost of PHP design and is a reliable method for future PHP study (a minimal illustration follows after this list); (2) Model optimization of PHP. Chu et al. [180] proposed equations for the pressure difference and flow resistance of the work medium, which provide guidance for structural optimization of PHP. Min et al. [181] introduced PHP into battery thermal management and compared its heat transfer performance with other cooling methods by modeling. Kang et al. [182] introduced porous wick layers into PHP and established numerical models, providing new inspiration for PHP design; (3) Green environmental protection. Environmental impact must be taken into account to achieve sustainable development, and PHP can help reduce carbon emissions and resource consumption [183]. Monroe et al. [184] designed a thermoelectric PHP with magnets and solenoids to recover waste heat as electrical energy, which helps reduce the carbon emissions and environmental pollution caused by power generation. PHP can also be combined with PCM and applied to seawater desalination, a green, pollution-free technology that saves considerable energy [185,186];
(4) The relationship between the physical properties of the work fluid and the heat transfer properties of PHP, to gain an in-depth understanding of PHP mechanisms with the appropriate work medium [187]. Yasuda et al. [188] observed the flow of work fluids in PHP by neutron photography, which helped to explore the mechanism of work fluid flow. Wang J et al. [189] found, using numerical models, that hydrophilicity of the pipe surface can help reduce the thermal resistance of CLPHP and improve the stability of the circulating flow; (5) Gravity-assisted PHP, which raises the dry-out limit by enhancing reflux of the work fluid and improves the adaptability of PHP to different working environments [190]. Chen et al. [191] designed a tandem dual-channel FP-PHP for ultra-gravity environments, applicable to modern aerospace. Abela et al. [192] conducted experimental analysis and numerical simulation of PHP under microgravity; the prediction deviation was within 7%, which is helpful for studying the effect of gravity on PHP; (6) Exploration of industrial applications. Low temperature PHP has the significant advantage of high thermal conductivity when used for superconductor heat dissipation [193,194], and the application of PHP in industrial processes will be further explored [195]; (7) Miniaturization. The miniaturization of electronic equipment inevitably brings high heat flux density, and miniaturized heat dissipation systems have become one of the mainstream directions of product iteration. The compact structure of PHP makes it easy to miniaturize while maintaining good heat transfer performance. Silicon-based MPHP has micron-sized channels in which the fluid flow and heat transfer show new characteristics compared with conventional capillary OHP [196]. Kamijima C et al. [197] measured an effective thermal conductivity of up to 700 W/(m·K) for an MPHP with a pipe diameter of 350 µm; after miniaturization, the PHP worked stably with excellent thermal performance. Lin et al. [198] studied the effective operating range of miniature oscillating heat pipes by experiment. Sun et al. [199] studied the working range of PHP after miniaturization: the MPHP starts normally and operates stably, with an effective filling rate of 40 to 55% in the horizontal orientation and 30 to 75% in the vertical orientation; (8) Extended heating applications. PHP is usually used for heat dissipation because of its excellent heat transfer performance, but heat transfer is also the key step in refrigeration and heating processes, and PHP performs well in both. Aref L et al. [200] tested the thermal performance of flat-panel PHP solar collectors; the thermal efficiency reached 72.4% at a filling rate of 60% in sunny weather. Zhao J et al. [201] studied the heat transfer performance of a solar system with long-distance heat transmission PHP; the thermal resistance was as low as 0.0024 °C/W. Jin H et al. [202] used transparent PHP with nano-fluids as the work medium for the collection and transmission of solar energy, with a maximum energy conversion efficiency of 92%. Zhao J et al. [203] conducted experimental tests on PHP-based large-scale heat storage systems; using self-rewetting fluids as work fluids gives PHP greater heat transfer limits and longer heat transfer distances. Qu et al. [130,204] studied the thermal properties of 3D PHP for latent heat thermal energy storage (LHTES) devices; the efficiency of the 3D OHP LHTES devices increased by about 32% compared with conventional devices, and the heat storage was enhanced by PCM. Chen et al. [205] proposed an ethane PHP based on Stirling chillers. Xu et al. [206] designed PHP refrigeration equipment based on phase change energy storage technology, with a PCM utilization rate of 78.7%. Saw L H et al. [207] designed a PHP-based roof cooling system which can reduce the temperature of the top floor of a house by 13%.
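As noted in item (1) of the list above, machine learning has been applied to predict PHP heat transfer performance [178,179]. A minimal sketch of such a surrogate model in Python; the "measurements" here are synthetic placeholders generated on the spot, and the choice of input features (heating power, filling ratio, number of turns) is an assumption for illustration, not a prescription from the cited studies:

# Toy surrogate model for PHP thermal resistance, in the spirit of the
# machine-learning approaches cited in [178,179]. Training data are
# synthetic placeholders, not experimental measurements.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 200
power = rng.uniform(20, 200, n)     # heating power, W
fill = rng.uniform(0.2, 0.8, n)     # filling ratio, dimensionless
turns = rng.integers(2, 12, n)      # number of turns

# Synthetic "ground truth": resistance falls with power and turn count,
# with a mild optimum in filling ratio near 0.5, plus measurement noise.
R = 2.0 / np.sqrt(power) + 0.3 * (fill - 0.5) ** 2 + 0.5 / turns
R += rng.normal(0.0, 0.01, n)

X = np.column_stack([power, fill, turns])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, R)

# Predict thermal resistance at an unseen operating point, K/W.
print(model.predict([[120.0, 0.5, 6]]))

Once trained on real experimental data, such a model can interpolate thermal resistance across operating points far more cheaply than running new tests, which is the cost argument made for these approaches.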
Conclusions

In this paper, methods to improve the energy conversion and flow-thermal performance of micro-channel PHP were reviewed. Appropriate physical structures can improve the heat transfer performance, start-up performance, operating range and stability of PHP. The work fluid is the main carrier of PHP heat transfer, and researching and choosing the right work medium is key to achieving the desired PHP performance. (1) The choice of structure and material has an important impact on PHP performance. Proper adjustment of the micro-channel layout can increase the heat transfer limit of PHP by 44%. The thermal resistance of 2D channel PHP is 45% lower than that of conventional PHP, and the thermal resistance of FOHP can be as low as 63% of that of copper OHP; (2) In practical applications, PHP encounters different heating conditions. The thermal resistance of PHP under uneven heating can be reduced to 50% of the original, and pulse heating can alleviate dry-out; (3) Work fluids affect PHP in different ways. Graphene nano-fluids as the work medium can reduce the thermal resistance of PHP by 83.6%, and PHP with liquid nitrogen as the work medium can work at temperatures below 100 K. A work medium obtained by mixing different fluids has the potential to compensate for the defects of a single fluid while inheriting its advantages; adding self-rewetting nano-fluid to graphene oxide nano-fluid can enhance the heat transfer performance of PHP by 12% and inhibit dry-out.

Conflicts of Interest: The authors declare no conflicts of interest regarding the publication of this paper.