Estimating a Finite Population Mean Using Transformed Data in Presence of Random Nonresponse

Developing finite population estimators of parameters such as the mean, variance, and asymptotic mean squared error has been one of the core objectives of sample survey theory and practice. Sample survey practitioners need to assess the properties of these estimators so that better ones can be adopted. In survey sampling, the occurrence of nonresponse affects inference and the optimality of estimators of finite population parameters. It introduces bias and may cause samples to deviate from the distributions obtained by the original sampling technique. To compensate for random nonresponse, imputation methods have been proposed by various researchers. However, the asymptotic bias and variance of the finite population mean estimators are still high under this technique. In this paper, a data transformation weighting technique is suggested. The proposed estimator is observed to be asymptotically consistent under mild assumptions. Simulated data show that the proposed estimator is much better than its rival estimators for all the different mean functions simulated.

Introduction

A lot of significance is attached to efficient and cost-effective survey sampling designs while estimating a finite population mean; see for instance [1, 2]. Careful design of samples based on random selection with known probabilities of population elements should be considered. This gives a target sample of intended respondents, each of whom may provide responses to a set of survey questions that result in an array of responses. Van Buuren et al. [3] observed that nonresponse occurs if some of the expected responses are missing, for instance, where a whole vector of responses is missing for some sampled units or where responses are obtained for some questions and not for others in the sample selected. As noted in [4], nonresponse often occurs in human population surveys as people hesitate to respond, and it increases notably when sensitive issues are studied. Moreover, the presence of nonresponse increases the bias in estimates, ultimately reducing their efficiency, as observed in [4]. The basis for statistical inference is therefore formed by a sampling design that provides a link between a sample and the population. As observed in [1], good sample survey practice and efficient methods of compensating for nonresponse should be adopted. In sample surveys, nonresponse leads to biased results in the estimation of a finite population mean. It may force samples to deviate from the distributions originally established by the sampling design. The incorporation of regression models is acknowledged as one of the methods of minimizing bias resulting from nonresponse using auxiliary data; for details see [5]. In practice, knowledge of the study variables is unavailable for nonrespondents, whereas auxiliary data may be available. To minimize the bias and variance resulting from nonresponse, Liang and Zeger [6] noted that it is desirable to incorporate auxiliary data in the estimation process, where the probabilities of response are usually assumed to be correlated with certain characteristics, for instance, age, race, and income in human population surveys. Sanaullah et al.
[7] studied nonresponse under stratified two-phase sampling using generalized exponential chain-ratio and chain-product estimators and concluded that their estimator is more efficient than that in [8], one of the pioneering studies on the estimation of finite population parameters under nonresponse. Continuing the effort to address the problem of nonresponse in sample surveys, Javaid et al. [9] derived a modified ratio estimator in systematic random sampling. They proposed the use of a single auxiliary variable to estimate a finite population mean. However, as a departure from the work in [7, 9], this paper proposes the use of weights obtained from transformed data to compensate for nonresponse. The weighting method is highlighted in the following section. This paper is organized as follows: the introduction has been presented in Section 1; in Section 2, a review of the weighting method is given; the proposed estimator is derived in Section 3; in Sections 3.1-3.3, the bias, the variance, and the mean squared error of the estimator are derived, respectively; Section 4 presents results from the simulation experiment conducted; and the conclusion of the paper is presented in Section 5.

Review of Weighting Method

It has been observed by the authors in [10] that nonresponse leads to a reduced number of observations. Weighting thus implies that the weights of the responding elements are increased to compensate for the elements that do not respond in a survey. For instance, the authors in [11, 12] explored a modified Horvitz-Thompson estimator to correct the problem of nonresponse using the weighting technique. The estimator used was y_HT, where y_k, k ∈ U, is the survey value of the study variable for the kth unit taken from a sample s selected from a finite population U = {1, 2, . . . , N}, ϕ_k is the inclusion probability given by ϕ_k = Pr(k ∈ s), and c_k is the value of the kth respondent in the selected sample s. The estimator y_HT adjusts the weights by an unbiased estimator p_HT of the response probabilities. An approximate bias of the estimator y_HT can then be obtained; if the coefficient C_pY tends to zero, the bias is minimal, for more details see [13]. The adjusted Horvitz-Thompson estimator y_HT is an illustration of reweighting measurements of respondents without using auxiliary information. However, in this paper, auxiliary information is used in the estimation procedure. To compensate for nonresponse, weights obtained from transformed data are used.

Finite Population Mean Estimator Proposed

Suppose a finite population of size N consists of M clusters having N_i elements in the ith cluster. Let y_ij represent the survey value of the study variable Y_ij for the jth unit in the ith cluster, for i = 1, . . . , M; j = 1, . . . , N; for details see [14]. Let the mean of the finite population to be estimated be the average of the Y_ij over the population. Data are generated by a regression model used in [15, 16] and more recently in [17]; the model is Y_ij = m(X_ij) + e_ij, where m(·) is a function of the auxiliary data having continuous derivatives and e_ij is a residual variable with mean zero and nonnegative variance. Auxiliary information is assumed to be known in this study. To predict nonresponse values in the study variable, the following estimator based on transformed data is proposed.
The original data X_ij are first transformed to g(X_ij), i = 1, 2, . . . , n, j = 1, 2, . . . , m, where g(·) is a nonnegative, continuous, and monotonically increasing function from [0, ∞) to [0, ∞). The transformed data g(X_11), . . . , g(X_mn) are then reflected around the origin to obtain −g(X_11), . . . , −g(X_mn); for details see [18]. More recently, Bii et al. [14] used the same procedure while developing a boundary bias correction under nonresponse. However, the weights resulting from the current study are easier to implement and give better estimates than those in [14]. Hence, utilizing the transformed sample data (g(X_ij), −g(X_ij)), i = 1, . . . , n; j = 1, . . . , m, the population mean estimator is defined in equation (8), where y_ij estimates the nonresponding units. Similarly, y_ij can be represented in terms of w*_ij(x_ij), the weights obtained by transformation of the data, so that m(x_ij) in equation (9) can be rewritten accordingly. Hence, using equation (11), equation (8) takes its final form. The following section gives some properties of the proposed estimator.

Properties of the Estimator Proposed. Assumptions in [18] are used in deriving the bias and the variance of the proposed estimator. More recently, these assumptions have also been used in [14] for boundary bias correction in the presence of nonresponse.

The Bias of the Estimator. The derivatives m″(x_ij) and g″(x_ij) are assumed to exist and to be continuous; m^(0) = m and g^(0) = g denote the functions themselves, g^(−1) is the inverse function of g, and m^(i) and g^(i) are the ith derivatives of m and g, respectively. Besides, the kernel K is assumed to be a nonnegative and symmetric function with support [−1, 1] such that ∫K(w)dw = 1, ∫wK(w)dw = 0, and 0 < ∫w²K(w)dw < ∞. From these assumptions, an expansion of the expected value of the estimator of m(x_ij) is obtained in equations (14)-(17), from which it is observed that this expected value is approximately equal to m(x_ij) as b ⟶ 0 and mn ⟶ ∞ for all x_ij. Hence, the proposed estimator is asymptotically unbiased.

Asymptotic Variance of the Estimator Proposed. The variance of the estimator suggested in this study is obtained by a Taylor series expansion and simplification. As mn ⟶ ∞ the bandwidth b ⟶ 0 in such a way that mnb ⟶ ∞; hence, the variance of the estimator of m(x_ij) decreases in mnb. Thus, the larger the sample size, the smaller the variance.

Mean Squared Error of the Estimator Proposed. The mean squared error combines the variance and the squared bias of the estimator; therefore, combining equations (17) and (19) leads to the expression in equation (22). As can be observed in equation (22), the mean squared error approaches zero as mn ⟶ ∞ and as the bandwidth becomes sufficiently small, that is, as b ⟶ 0, where m represents the number of sampled clusters and n is the size of each sampled cluster. It is also noted that there is a trade-off between the variance and the bias of the estimator: as the bandwidth decreases, the variance increases, but the bias decreases. An optimal bandwidth ought to be developed to resolve the bias-variance trade-off in the estimation process; see [19] for details. The next section presents a simulation study conducted to compare the performance of the finite population mean estimator suggested in this paper with those that exist in the literature.

Simulation Study

The simulation experiment was done using R code. The mean functions in [20] were used for simulation; a sketch of one simulation replicate is given below, ahead of the detailed steps.
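The exact form of the transformed-data weights is given in equations (9)-(11), which are not reproduced here. As a purely illustrative companion to the simulation steps described next, the following minimal Python sketch builds reflection-type kernel weights from g(X_ij) and its reflection −g(X_ij) and uses them to impute nonresponding units in one replicate. The Gaussian kernel, the square-root transformation, the bandwidth of 0.1, the quadratic mean function, and the 70% response rate are assumptions made for illustration only, and the paper's own experiments were written in R rather than Python.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kernel(u):
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def reflected_weights(x0, x, b, g=np.sqrt):
    """Reflection-type weights built from the transformed data g(x) and its
    reflection -g(x) about the origin (illustrative form only, not eq. (10))."""
    k = gaussian_kernel((g(x0) - g(x)) / b) + gaussian_kernel((g(x0) + g(x)) / b)
    return k / k.sum()

# One simulation replicate: m clusters of size n, quadratic mean function.
m_clusters, n, b = 20, 25, 0.1
x = rng.uniform(0.0, 1.0, size=(m_clusters, n))
e = rng.normal(0.0, 1.0, size=(m_clusters, n))
y = x**2 + e                        # Y_ij = m(x_ij) + e_ij with m(x) = x^2

# Impose random nonresponse and impute the missing Y_ij from the weights.
respond = rng.uniform(size=y.shape) < 0.7
y_hat = y.copy()
for i in range(m_clusters):
    xr, yr = x[i, respond[i]], y[i, respond[i]]
    for j in np.where(~respond[i])[0]:
        w = reflected_weights(x[i, j], xr, b)
        y_hat[i, j] = np.sum(w * yr)

print("realized finite population mean:", y.mean())
print("estimate after imputation      :", y_hat.mean())
```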
Table 1 presents the different mean functions used for data simulation.

Mean Functions of m(x_ij) Simulated. The following steps were followed in the data simulation: (1) The auxiliary data X_ij were generated as independent and identically distributed uniform random variables. (2) A sample of m_i = 20 clusters was selected in stage one following the simple random sampling procedure with replacement. (4) Using the auxiliary data x_ij, for j = 1, 2, 3, . . . , m_k, nonresponse data were obtained from the regression equation Y_ij = m(x_ij) + e_ij using simple random sampling with replacement in the ith cluster, where m(x_ij) is a function of the auxiliary data obtained from the different mean functions given in Table 1, e_ij ∼ N(0, 1), and K(u) ∼ U[0, 1]. (5) This procedure was replicated to obtain the mean estimators Y_i1, . . . , Y_in in the ith cluster. (6) At the 95% level, confidence intervals were developed for the population mean estimators Y_i, i = 1, 2, 3, which correspond to the proposed estimator, the Nadaraya-Watson estimator in [21, 22], and the modified Nadaraya-Watson estimator in [23], respectively. (7) A Gaussian kernel together with a locally adaptive bandwidth in [19] was used in the simulation of the data. The results are discussed in the following section.

The simulated data for the proposed estimator, the estimator in [21, 22], and the estimator in [23] are given in Tables 2-4. Table 2 presents a summary of the bias results simulated from the mean functions in [20], as shown in Table 1. Negative values imply underestimation, while positive values of the bias indicate overestimation by the different estimators considered. The proposed estimator has smaller values of the bias than the rest of the estimators considered, as can be seen in Table 2, for all the mean functions simulated. For the exponential and quadratic mean functions, the Nadaraya-Watson estimator overestimates the finite population mean, while the proposed estimator overestimates the finite population mean only for the quadratic mean function. The modified Nadaraya-Watson estimator underestimates the finite population mean for all the mean functions simulated except the exponential function. Generally, from Table 2, the proposed estimator has smaller values of the bias for all the mean functions simulated. The mean squared error values presented in Table 3 were generated using the different mean functions indicated in Table 1. It can be noted that the proposed estimator has smaller mean squared error values than the estimator in [21, 22] and the modified Nadaraya-Watson estimator in [23]. The Nadaraya-Watson estimator has larger mean squared error values than any other estimator considered. Comparing the mean squared error values of these three estimators shows that the proposed estimator is better than the rest of the estimators considered in this paper. Confidence intervals are normally constructed around point estimators to provide properly calibrated measures of the variability associated with estimators of the population parameters of interest. In this paper, confidence intervals were obtained at the 95% level for the finite population mean estimators. Shorter confidence interval lengths indicate a more precise estimate of the true population parameter being predicted. From the results in Table 4, the estimator suggested in this paper is observed to have tighter confidence interval lengths than the other estimators considered.
This means that, at 95% coverage rates, the proposed estimator is better than the Nadaraya-Watson and the modified Nadaraya-Watson estimators.

Conclusion

The estimator proposed in this paper is noted to be more desirable than the estimator in [21, 22] and the modified Nadaraya-Watson estimator in [23]. In Table 2, smaller bias values are observed for the proposed estimator for all the mean functions simulated compared to the Nadaraya-Watson and the modified Nadaraya-Watson estimators. The mean squared error values in Table 3 indicate that the proposed estimator does much better than the rest of the estimators considered in this study. Tighter confidence interval lengths can also be observed for the proposed estimator than for the rest of the estimators, as given in Table 4. Hence, the proposed estimator provides a better estimate of the mean of a finite population than the estimators in [21, 22] and [23]. The sampling procedure used in this study and the finite population mean estimator derived can be used, for example, to estimate average health insurance coverage in a given population.

Data Availability

To support the theoretical findings, data were generated using the R statistical package.

Conflicts of Interest

The authors declare that they have no conflicts of interest.
Study the Effect of High Dialysate Potassium Solution in Comparison to Low Potassium Dialysate Solution in End Stage Renal Disease Patients Background: Nowadays cardiovascular diseases remain as the single most common cause of death in chronic dialysis patients; the aim of this study was to evaluate the effects of two different regimens of dialysis potassium removal in patients with a tendency to develop arrhythmias during haemodialysis (HD). Methods and Materials: There were 88 (36 men and 52 women) end stage renal disease (ESRD) patients recruited for the study. They received regular haemodialysis three times per week at the haemodialysis units of a university medical centre (Golestan hospital) during year 2011. We compared the arrhythmogenic effects of two dialysis techniques. Results: There was a tendency in the HD solution with constant (3 mEq/l) K for premature ventricular complex (PVC) appearance in to be reduced as compared with constant (2 mEq/l) K in the time of dialysis period, although this reduction was not statistically significant(P = 0.09). There was a significant reduction in SVC in the HD solution with constant (3 mEq/l) K as compared with constant (2 mEq/l) K. Discussion: In conclusion, the use of a model of intra-HD potassium that is more close to potassium serum concentration of ESRD patients can reduce the arrhythmogenic effect of HD in patients on regular HD treatment. Introduction Although these days a wide range of progressions take place in dialysis technology, cardiovascular diseases remain as the single most common cause of death in chronic dialysis patients [1].One of the major causes of death in end-stage renal disease (ESRD) patients under maintenance haemodialysis (HD) is ventricular arrhythmias [2]. Although uremic patients usually already have a decreased potassium pool [3] [4], in order to counterbalance the interdialysis potassium load and avoid life-threatening hyperkalaemia, potassium is removed by HD.Unlike HD sodium removal, which can even be exclusively convective [5] [6], HD potassium removal is almost exclusively diffusive.It is therefore necessary to create a gradient between plasma and dialysate potassium concentrations, which means greatly reducing plasma potassium concentrations during HD.However, very different amounts of potassium removal can be obtained with quite small differences in the dialysate plasma gradient since the potassium removal is mainly due to a decrease in the intracellular potassium pool. However, relatively few studies have examined patient and HD-specific factors that might be associated with a higher risk of developing cardiac arrhythmias.The arrhythmogenic effect of HD was considered not only because of its still discussed clinical importance, but also because it is a "marker" of the electrophysiological status of the cells of uremic patients, which are greatly modified during standard HD treatment three times a week for many years [7]. Hence, we have undertaken this study of arrhythmias in order to evaluate the role of one of the major factors in the genesis of HD arrhythmias, potassium (K) changes during dialysis. We have evaluated the effects of two different regimens of dialysis potassium removal in patients with a tendency to develop arrhythmias during HD.Our chief interest was to identify the regimen of the least susceptibility to cardiac rhythm disorders during the dialysis cycle. 
Methods End-stage renal failure patients There were 88 (36 men and 52 women) ESRD patients recruited for the study.They received regular haemodialysis three times per week at the haemodialysis units of a university medical centre (Golestan hospital) during year 2011.All the patients have received a written informed consent approved by Ahwaz Jondishapour research ethic board, and they all agreed to participant the research.Clinical indication for haemodialysis in the present study population is divided into two categories: 1) absolute indication with creatinine clearance rate (Ccr) < 5 ml/min or serum creatinine (Cr) > 8.0 mg/dl; 2) relative indication with Ccr < 15 ml/min or serum Cr > 6 mg/dl and with accompanying life threatening complications as congestive heart failure, lung edema, haemorrhage diathesis, consciousness change, cachexia or uncontrollable hyperkalaemia with drugs.Uremic patients on stable HD treatment were recruited.After they had given their oral consent, the patients fulfilling the entry criteria were selected for the experimental phase. Inclusion criteria Patients aged more than 18 years, who had been on chronic thrice-weekly HD for at least six months, and who were in stable clinical condition were admitted to the study.The continuous 24 hours electrocardiographic (ECG) recording made on the HD day was performed for each patient. Exclusion criteria Patients on antiarrhythmic treatment or receiving pace-maker cardiac stimulation, those whose dosage of digitalis was unstable, and those requiring dialysate K concentration of more than 3 mEq/litter were excluded from the study. Haemodialysis The patients had to undergo HD at the same time of day for both treatments in order to avoid any interference with the circadian rhythms of arrhythmia.In patient and throughout the three weeks of the study, only the concentration of dialysate K could change, and then only in accordance with the study protocol. Twenty-four hour EGG recording on the HD day, continuous 24-hour EGG recording was started at the time of HD.The 24-hour EGG signal was recorded and the tapes were centrally analysed for ventricular arrhythmias. The readers were blinded as to treatment. Treatment definitions All uremic patients that had inclusion criteria randomly divided into two groups.Group A (consist of 44 patients) Standard HD with a dialysate K concentration of 3 mEq/litter.Group B (consist of 44 patients) Experi-mental HD with dialysate K 2 mEq/lit concentration were performed for them. The calculated sample size of 88 patients was based on: a significance level (an error) of 0.05; a β error of 0.2; a power [1-β] of 0.8; after gathering data they were analyse by using SPSS soft ware version 16 and ANOVA and student T test was used for analysing data. A Holter recording selection some of the 24-hour Holter recordings were considered totally unsuitable for analysis because of the bad quality of the ECG signal (presence of superimposed noise, incorrect positioning of the electrodes) and the consequent inability to distinguish normal beats from pathological PVCs.The bad signal quality was sometimes present during only a part of some of the recordings, leading to temporary gaps in which the PVCs could not be detected. Results 88 ESRD patients (36 male) were recruited in the study.The mean age of patients was 58.6 ± 8.3 (Table 1). 
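The analysis described above relies on Student's t-test (run in SPSS) to compare the two dialysate groups. The following Python sketch shows the same kind of two-group comparison on synthetic PVC counts; the group sizes match the study (44 per arm) and the means and spreads loosely mirror the summaries reported later in the paper, but the generated data, the Welch variant of the test, and the random seed are illustrative assumptions, not the study's records.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical PVC counts for the two dialysate potassium groups (44 patients each);
# the locations loosely mirror the reported group summaries but are NOT the study data.
pvc_k2 = rng.normal(loc=37.7, scale=6.1, size=44)   # 2 mEq/l dialysate
pvc_k3 = rng.normal(loc=12.4, scale=1.6, size=44)   # 3 mEq/l dialysate

# Two-sample t-test (Welch's variant, not assuming equal variances).
t_stat, p_value = stats.ttest_ind(pvc_k2, pvc_k3, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```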
As its shown in Table 2 the mean age of patients and the electrolyte levels of two groups of ESRD patients [with dialysate potassium solution of 2 mEq/lit (k 2 ) and Dialysate potassium of 3 mEq/lit (k 3 )] were comparable and had not significant change (P > 0.05). As shown in Table 4 it was a trend for PVC, AF to increase with potassium solution of 2 mEq/lit (numbers of PVC, AF in patients with potassium solution of 2 mEq/lit was 37.7 ± 6.1, 13.28 ± 0.5 and in patients with potassium solution of 3 mEq/lit these were 12.4 ± 1.6, 9.2 ± 0.4).Significant change was seen with K 2 in the number of SVC. Table 5 shows an increase in the difference of potassium before and after dialysis lead to significant change in the number of PVC and SVC. Discussion In HD with dialysate potassium solution of 3 mEq/lit, a lower appearance of PVC as compared to dialysate potassium solution of 2 mEq/lit was apparent, even though the difference did not turn out to be statistically significant (P = 0.09), but SVC appeared more in group with dialysate potassium solution of 2 mEq/lit no significant difference was found between the two treatments in either isolated PVC, or pairs or runs.Again, there was no statistical difference between the two treatments in regarding to electrolytes.Studies on dialytic potassium removal [8] [9] show that the higher the plasma dialysate potassium gradient, the greater the potassium removal from intracellular fluid.The uraemia related inhibition of the Na-K pump produces a potassium shift from intra-to extracellular fluid and consequently induces hyperkalaemia; thus, the more advanced the cellular sickness, the bigger the cellular depauperation of Potassium during analysis.Consequently, there is a vicious circle: although it improves the function of the Na-K pump, dialysis treatment maintains and even increases cellular distress by reducing the intracellular potassium pool.Thus, in our opinion, the new approach explored in this study seems to be particularly suitable for patients prone to life-threatening hyperkalaemia [10]- [12]. It is known from physiology studies that the wholeness of the cellular potassium pool is necessary to many cellular functions: volume and PH regulation [13]. Given selective cell permeability to potassium (K), the resting electric membrane potential (REMP) of most cells is related to the diffusive passive fluxes of K ions [14].The level of REMP affects cell excitability: the greater the difference between REMP and the threshold level of the electric potential, the longer the depolarization time [15] [16]. In chronic haemodialysis (HD) patients, intradialytic K is removed by diffusion throughout the HD membrane, according to the concentration gradient between plasma and dialysate K levels.With constant dialysate K levels of 2 mEq/lit, this gradient decreases rapidly during the first hour of HD and slowly in the following hours.The diffusive fluxes of K through the HD membrane produce diffusive passive fluxes through the membrane cells, with greater negativization of membrane of cells and, consequently, less cell membrane excitability.This phenomenon is more evident during the first hour of HD, since the plasma-dialysate K gradient is higher. The changes in K fluxes induced by HD may therefore influence cardiac cell electrophysiology, and it may lead to cardiac arrhythmias. In HD with constant and low potassium (range 2 mEq/l) a large amount of potassium is abruptly removed from the extracellular space [17]. 
The depletion of the potassium reserves within the cells may have important repercussions on cardiac electrophysiology. Another mechanism by which potassium change lead to cardiac arrhythmia is that Potassium fluxes during HD have been associated with an increase in QT interval [18] [19], an increase in the dispersion of QT and in the inhomogeneous repolarisation revealed by the analysis of the spatial aspects of T-wave complexity [20].The resulting repolarisation heterogeneity allows for the onset of distinctive re-entrant arrhythmias, and hypokalaemia may act as a triggering factor in the genesis of premature ventricular depolarisations.The incidence of related ECG abnormalities during HD ranges from 18% to 76%, depending on the definition of the abnormal ventricular electrical activity [21]. An Italian study by Redaelli et al. [22] has demonstrated a 36% reduction in premature ventricular complexes using a profiling of potassium dialysate concentration in arrhythmic HD patients.Morrison et al. [23] demonstrated decreased ventricular ectopic activity in four out of six patients whose dialysate potassium concentration had changed from 2 to 3.5 mEq/lit.In our study, the serum potassium trends are significantly different with the use of constant K concentration (3 mEq/l) in comparison with (2 mEq/l) K dialysate concentration and in our study the number of PVC increased more than 3 times in dialysate solution with potassium 2 mEq/l that was very close to the result of Redaelli et al. on the other hand SVC numbers raised more than 5 times independently. Rombol`a et al. [24] found like us a greater fall in intra-erythrocyte potassium concentration in patients with an arrhythmic tendency, as compared to those without such a tendency [25] but he didn't exactly mention witch kind of arrhythmia are more prone to the potassium changes. In our opinion analysis of data support Rombol`a idea that some stressors such as fluid overload, increase in blood pressure and the changes in pH and bicarbonates, may act as trigger of ventricular arrhythmias [26].Obviously this may be particularly true for patients with cardiomyopathy, left-ventricular hypertrophy, coronary artery disease and cardiac heart failure, which provide the perfect backdrop for arrhythmias to occur [27]. Naturally, these demonstrate a higher incidence of sudden death in a group of patients treated by HD with constant 2 mEq/l potassium dialysate in comparison with another group treated with constant 3 mEq/l potassium solutions. Conclusion In conclusion, the use of a model of intra-HD potassium that is more close to potassium serum concentration of ESRD patients can decrease the arrhythmogenic effect of HD in patients on regular HD treatment.This result was obtained without adversely affecting pre-dialysis plasma potassium levels in comparison with the standard HD procedure.The results of this study not only showed that the amount of cardiac arrhythmia raised more in potassium concentration of 2 mEq/lit but also demonstrated that supra ventricular arrhythmias are more influenced by those changes.The results of this study clearly illustrate that the use of a model of HD potassium removal which is more close to blood concentration of potassium is capable of reducing the arrhythmogenic effect of standard HD in chronic uremic patients prone to this complication.This decrease was statistically and clinically significant for both PVC and SVC. Table 1 . Age and sex distribution of end stage renal disease patients. Table 2 . 
Electrolyte and age distribution in the different dialysate potassium solutions. Table 3. Electrolyte changes before and after haemodialysis in ESRD patients. Table 4. Numbers, standard deviations, and confidence intervals of PVC, AF, and SVC in the two different potassium solutions. Table 5. Mean plasma potassium differences in ESRD patients for the different dialysate potassium solutions and the numbers of PVC and SVC in them.
Understanding Teachers ’ Integration of Moodle in EFL Classrooms : A Case Study The study explores the integration and implementation of the Moodle platform at the English Language Center of the Salalah College of Technology. To achieve this purpose, a qualitative, interpretive approach with a case study research design was used to collect the data and to deepen our understanding of the phenomena and how it was constructed in social reality of the school.Two teachers have been chosen to be the interviewees, to give their opinions and views on the topic under study, and the factors affecting both the implementation and integration of the Moodle programme. It was evident from the narratives of the two interviewees that the integration of Moodle was successful, and that it has proven to be a useful tool in the teaching and learning processes of English. In spite of some existing factors that may hinder the working mechanisms of the implementation and integration of Moodle, it may be concluded that this platform could be recommended to be extended to the other skills of the English language that it currently does not support. Following this process will inevitably improve the comprehension and production of the English language and related materials, online and real, respectively. Theoretical Background Nowadays, technology has been transformed into becoming fundamental part of our daily lives as well as specifically in the field of education everywhere throughout the entire globe.Every several years, a high number of educational institutions are spending increasing amounts of money to reform their systems for the purpose of bridging the existing technological gaps in the curriculum (Buabeng-Andoh, 2012).This process of reform requires an effective implementation of technology in the affected curriculums to help teachers make a successful integration of different kinds of technologies (Tomei, 2005). In line with these rapid technological developments, the Salalah College of Technology (SCT) in Dhofar Governate of the Sultanate of Oman, has executed an ambitious programme to provide its students and teachers in the English Language Center with all of the needed facilities required for enhancing the instruction and acquisition of English (Alyafaei & Attamimi, 2018).The college has introduced online open-source learning software in teaching English, which has been universally referred to as the acronym of "Moodle".Moodle stands for "Modular Object-Oriented Dynamic Learning Environment".According to Brandle (2005), Moodle is designed to encourage teachers to create quality online instruction.Likewise, it is simultaneously used to grant students the freedom on deciding which activities they prefer to participate in and in what form, and to what degree, the participation will take place (Littlejohn & Pegler, 2007).Additionally, Moodle could be used to develop the learners' language skills, as well as to encourage students to interact online with their teachers and their classmates as well (Al-Ani, 2008). 
Since the spring of the academic year 2014-2015, courses offered within the English Language Centre have been systematically generated within the Moodle e-learning platform in an exact parallel direction with the face-to-face teaching method.This approach has aimed at providing Omani students with additional support for the four English language skills, namely, reading, writing, speaking and listening.The courses have been provided as a blended learning venue, where the classroom activities are supported by Moodle.The assignments, tests and other learning activities in Moodle are strongly interrelated with the lessons which are taught in the classrooms in a way that encourages students to gain a much deeper, more comprehensive understanding of their language courses. Factors Influencing Teacher's Integration of Moodle A bulk of studies have been undertaken to explore the factors that could affect the incorporation of the Moodle platform in many EFL contexts.Many studies in this field have previously revealed an inherent interest on the part of both teachers and students to incorporate Moodle in language education, as they believe that the use of it enhances the language learning process (Banerjee, 2011;Henderson, 2010;Alani, 2013).Additionally, the implementation of Moodle encourages a student-centered approach in which both teachers and students take part in the classroom, and the focus of instruction is shifted from the teachers to the students.There has been a range of research showing that Moodle usage enriches overall learning inside and outside the classroom (Alani, 2008;Govender, 2009;Georgouli, Skalkidis, & Guerreiro, 2008).In this respect, Moodle carries an importance in the universities as an essential part of the blended learning option which is generally described as a combination of face-to-face and online approaches to instruction. Besides the advantages of Moodle, some challenges should be acknowledged as they are likely to influence the implementation of Moodle in the language classroom.Banerjee (2011), for example, conducted a survey to study students' satisfaction with blended learning.He argued that students' satisfaction depends primarily on the difficulties presented by the topic, how much self-directed learning is needed and the effectiveness of the selected pedagogical methods that are employed in each given case.Similarly, MacKeogh and Fox (2009) qualitatively investigated the factors that may affect the motivation of the individual students to engage in blended learning.The results demonstrated that many teachers continue to prefer face-to-face lectures as they are doubtful about the potential for student learning online.The study illustrated other barriers such as lack of time, lack of technical support and fear of the loss of control over the classroom.Perez and Medallon (2015) addressed some obstacles like poor Internet connections and lack of knowledge of Moodle use on the part of the teachers.It can be seen that there are some intrinsic and extrinsic barriers which might affect the successful integration of Moodle, so teachers are required to tackle them and find solutions to overcome any obstacles. 
Concerning the context of the current study, two important studies by Al Busaidi and Tuzlukova (2013) and Alani ( 2013) have been carried out to explore the effectiveness of Moodle on students' learning at Sultan Qaboos University.The results indicated that the Moodle platform has tremendously enhanced language learning practices among students at the university due to the flexibility and facility of access.However, both studies made no attempt to offer adequate information or explanation on how EFL Omani teachers integrated Moodle in their teachings. Rationale of the Study Omani EFL teachers are highly encouraged by SCT to make use of the facilities available in the computer laboratories as an integral component of their teachings.Thus, some English language classes are taught in the computer laboratories in order to encourage students to make use of the Moodle platform.However, no study has been carried out in Oman to study the way in which teachers are integrating Moodle in EFL classrooms.My experience as a lecturer at the college suggests that many teachers might react negatively towards implementing a new teaching technique, especially when it comes to technology.They might argue that Moodle seems to be an extra burden on them imposed by the college for which they have no input on to what degree, or even whether or not, to deal with it.According to Cosh (1999), "unless they are accepted by the staff, the only relevance of those schemes is likely to be to accountability, rather than genuine teacher development" (p.23).To explore the extent to which this might be the case, this study investigates the way in which Omani EFL teachers incorporate Moodle in their teachings at the English Language Centre of SCT. Main Research Questions The purpose of the study is to qualitatively investigate how Omani EFL teachers integrate or implement Moodle in the teaching of the English language at the English Language Centre of SCT.Specifically, this research addresses the following questions: 1) How do Omani EFL teachers integrate Moodle in their classrooms? 2) What are the factors that influence their integration of Moodle in their teachings? Method The current study is interpretivist in its nature, as it aims to gain a deeper understanding of the use of the Moodle platform in Omani EFL classrooms, and not to generalize the results to the whole population.In order to optimally achieve these stated aims, a case study research design was employed to collect the data required to answer the research questions.The choice of this design was based on a set of reasons.First, the study was exploratory in nature as it sought to explore the phenomenon of the intergration and implementation of Moodle at the English Language Center of SCT where not much data was available about this phenomenon at the time when this study was being conducted.Second, such a case study design was deemed to be appropriate as it would help us gain more in-depth information on the topic under study and would inevitably widen our understanding of the phenomenon. 
In recent years, the interview has increasingly emerged as a common data method of data for research studies which seeks out a thorough and rich understanding of a specific issue (Kajornboon, 2005).Since this method happens to satisfy the aim of this study, interviews were chosen to be the method of my investigational inquiry.More precisely, semi-structured interviews were my source of data, by which I could understand the same phenomenon from different perspectives.A semi-structured mode of interviewing design was employed, as it was the most likely research vehicle by which the research questions could be most effectively answered.Both of my research questions were best answered by qualitative research methods that generated qualitative data.Epistemologically, semi-structured interviews were chosen as the meaning is here constructed between the researcher and the participants in the research site The Interviews The conducted interviews included two main sections.The first section was designed to understand the way in which the interviewees integrate Moodle into their teaching.For example, aspects of Moodle which are used, what activities are implemented, the role of the teachers and students inside and outside the classroom and so on, were all covered.The second section was developed to elicit the factors that are likely to influence the teacher's implementation and integration of Moodle in their classes. After the stage of researching, the next step was to prepare the interview questions.An important point to note is that the questions were formed in a way that led to open-ended answers.Asking a closed-end question may not help the interviewer gain an in-depth understanding of the topic.For the case of the current study, open-ended questions were the perfect instrument for my perceived data-collecting purposes.These were the key components that were considered during the preparation of the interview. A pilot version of the interview questions was sent to an academic colleague who is currently implementing a Moodle platform.He was consulted about the study and the questions of the interviews through email communications.He suggested deleting questions no.7 and 11 as they were parts of other questions which had already been included.Furthermore, he recommended making some modifications in the phrasing of questions no. 3, 5, 8 and 10 to avoid close-ended and neutral answers. Participants and Sampling The target population of this research included only the EFL teachers' academic year 2017-2018.Due to time limitations, a non-random convenience sampling was employed to include whoever was available at the time of conducting the research.Two EFL Omani teachers have been selected to be the interviewees.Both of them are currently teaching at the college, and they are implementing Moodle in their teachings.They have rich experience in this field since they have been using Moodle for more than five years.Therefore, they were ideal candidates/subject to seek more of their views regarding the implementation of Moodle through the interviews. Data Collection Procedure Prior to conducting this study, an e-mail with a copy of the interview questions was sent to the administration of SCT to gain their permission to conduct the interviews with two lecturers at the college.After that, the head of the English Language Centre sent another e-mail to the lecturers who are implementing Moodle in their classes requesting them to collaborate in the process. 
It was endeavored to conduct as many face-to-face interviews as possible as they have been widely used as an interview technique in the field of qualitative research.However, this was not possible due to temporal and financial constraints.Also, doing direct research on teachers in the physical location where direct face-to-face communication was not feasible, paved the way towards carrying out the interviews through WhatsApp video-calls and Skypes both interviewees were in favor of the substituted communication channel.As one interviewee mentioned "I prefer to do the interview by telephone even if it takes more than an hour.I think my colleagues will be in favor of it as well".This is reasonable, as telephone interviewing has become more common in the last two decades due to the rapid pace of developments in the applicable technology (Cachia & Millward, 2011).Any researcher can conduct interviews with people all over the world if they have access to a telephone.Thus, it saves time and reduces the cost of the face to face interview, as there is no need to travel to different places for conducting the interviews.Another important characteristic of the telephone interview is that it provides flexibility in setting up the appointment on a time that is most suitable to the interviewees.However, due to the more recent developments in the related technology, the interviewer can see the interviewees through the use of video-calls, to take one example, so social cues are still available as a source of extra information. Discussion The answers to these two given research questions by the two interviewees will be given following two different approaches of reporting results.In the case that both of the answers given were identical, they will be treated as one collective response.Whereas, in the case that the answers given were different or not strictly identical, they will be treated discretely, with the differences themselves explicitly highlighted.For the first type of question, as to how teachers integrate Moodle into their teachings, the responses of the two teachers were identical, as both teachers have reported that, in terms of integration and implementation, the two terms should be dealt with different levels of discretion.As teachers of the English language, who have been requested to use the Moodle platform to teach SCT students subjects such as grammar, or more instrumental subjects such as the class communication projects, the school has already taken the step of integrating this platform into the school curriculum.And, as teachers, according to the effective study plans, we will have the responsibility to implement all of the Moodle aspects into all classrooms.This proposed implementation, as well as the associated schedule and benchmarks of progress to be gleaned from such implementation, has several benefits and drawbacks, according to the best estimate of the two interviewees, there is a decided variation in the amount and degree of Moodle integration that currently exists within the fourth level of the English Language Center, which to this point is the only level required to make explicit use of the Moodle platform.The teachers who teach in the levels before the fourth level of instruction are recommended to make use of Moodle in only a piecemeal fashion, such as to help accommodate the learning of individual aspects of curriculum, namely grammar or reading, but nothing approaching the kind of systemic comprehension deemed to be necessary for inclusion in level four. 
Responding to the second research question of the study, the two interviewed teachers have given diverse factors that influence the integration of Moodle inside the classroom, both positively and negatively.The factors could be classified on three different stages or levels: existing infrastructure, the entry level of expertise carried by students who come with different educational outcome expectations and the level of expertise of the teachers who will be assigned to guiding and assessing their classwork that is to be generated by Moodle.One of the more interesting responses given by an interviewee in the interview was that infrastructure and computers are a baseline consideration for any successful implementation of Moodle.In this regard, the conditions that exist for Moodle reception, and computer and Internet coverage of any kind, are at best variable in several of the potential classrooms or dedicated computer laboratories where Moodle would be conceptually integrated.With regard to this one of the interviewees has said: "…It is very essential for me to make sure that all of the computers are available for the students before commencing the class…" The other interviewee did not rate the existent of infrastructure to be as important as the first interviewee, but has instead focused on the students' level of expertise in using computers and other software instruments.The second interviewee stated: "… The students' knowledge of using computers is the main factor to ensure a useful use of the Moodle platform, without their mastery of computer skills, nothing will be done in the Moodle…" He has further declared that the level of computer knowledge of students in level 4, as well as other levels in which he has taught and used Moodle, is far from optimal.Furthermore, he regarded this condition as an obstacle to the elementary implementation of Moodle, as well as the basic conduction of classes.Regarding this point, the first interviewee has added that students enrolled in IT-dedicated classes at the English Language Center should be given introductory exposure to computers, software and general computer literacy issues that will enable those students to more successfully use and integrate Moodle into their learning experiences.They both agreed that: "…students have to go through a training process prior to introducing any new approach, especially when it comes to using technology…" Finally, the two interviewees agreed that the level of teachers' expertise is of paramount importance to the success of Moodle implementation in the classrooms.They have stated that teachers involved in the programme have very diverse and wide-ranging differences in the levels of accommodation and knowledge of computers going into the classroom, which will naturally influence and effect their efforts to give Moodle a wider exposure within the curriculum that all are expected to deliver.From these responses, it has been found that younger teachers are better at using computers, whereas older teachers lack some basic skills that allow them to use computers readily and easily.For these reasons, both interviewees recommend further workshops and official exposure to Moodle and other forms of computer aided language learning.Also, both interviewers agree that Moodle can be used as an inspirational tool in motivating students to learn English.Moodle has been proven to be an effective online platform from which students can accomplish all of the assigned tasks.Students enjoy it because it can encourage team work and 
group work, and in so doing, increases the general level of motivation for all students to continue to learn and study the English language, as well as achieving the intended learning outcomes set by national and regional policymakers and stakeholders in the Sultanate of Oman. Conclusion It can be concluded that Moodle, as a platform, is a useful tool to be used in language learning classes in general, and it is evident within the context of the English Language Center of the Salalah College of Technology.The two teachers involved in the study have expressed their inherent interests regarding the integration of the Moodle platform in their classrooms, and have listed a number of reasons why the integration of Moodle has been the right decision to have been made.Despite the drawbacks that may be connected to some lacking aspects in the infrastructure of the school, as well as the lack of exposure and knowledge of this and other available online platforms by students and staff alike, these areas of concern may still be remedied and improved on.Above all, the Moodle platform, from the testimony of the two teachers in the study, different levels of students may be able to use the Moodle platform, as it enhances and improves the learning experience for students and teachers, and furthermore, it gives the institute at large another delivery system from which to gather information and culture, as well as constructive commentary and feedback from the people who use it. It is highly recommended that a large scale study on Moodle should be carried out in the future, with a larger and more diverse body of respondents.This study has been based upon the analysis of only two individual teachers, which could be considered to be a limitation, to the degree that the voices of students should also be included, in order to further elaborate upon the use, nature of integration of the Moodle platform within this English as a Foreign Language context, where we will obtain more insightful perspectives and contrastive feedback on the implications and integrations of Moodle into the larger educational framework, as well as factors which could be deemed as detrimental to a successful integration and adaptation of this secure and broad-based learning management system (i.e., Moodle).
Representation and reconstruction of covariance operators in linear inverse problems

We introduce a framework for the reconstruction and representation of functions in a setting where these objects cannot be directly observed, but only indirect and noisy measurements are available, namely an inverse problem setting. The proposed methodology can be applied either to the analysis of indirectly observed functional images or to the associated covariance operators, representing second-order information, and thus lying on a non-Euclidean space. To deal with the ill-posedness of the inverse problem, we exploit the spatial structure of the sample data by introducing a flexible regularizing term embedded in the model. Thanks to its efficiency, the proposed model is applied to MEG data, leading to a novel approach to the investigation of functional connectivity.

Introduction

An inverse problem is the process of recovering missing information from indirect and noisy observations. Not surprisingly, inverse problems play a central role in numerous fields such as, to name a few, geophysics (Zhdanov 2002), computer vision (Hartley and Zisserman 2004), medical imaging (Arridge 1999, Lustig et al 2008) and machine learning (De Vito et al 2005). Solving a linear inverse problem means finding an unknown x, for instance a function or a surface, from a noisy observation y, which is a solution to the model

y = Kx + ε,    (1)

where y and ε belong to an either finite or infinite dimensional Banach space. The map K is called a forward operator and is generally assumed to be known, although its uncertainty has also been taken into account in the literature (Arridge et al 2006, Golub and van Loan 1980, Gutta et al 2019, Kluth and Maass 2017, Lehikoinen et al 2007, Nissinen et al 2009, Zhu et al 2011). The term ε represents observational error. Problem 1 is a well-studied problem within applied mathematics (for early works in the field, see Adorf 1995, Calderón 1980, Geman 1990). Its main difficulties arise from the fact that, in practical situations, an inverse of the forward operator does not exist, or if it does, it amplifies the noise term. For this reason such a problem is called ill-posed. Consequently, the estimation of the function x in (1) is generally tackled by minimizing a functional which is the sum of a data (fidelity) term and a regularizing term encoding prior information on the function to be recovered (see, among others, Cavalier 2008, Hu and Jacob 2012, Lefkimmiatis et al 2012, Mathé and Pereverzev 2006, Tenorio 2001). For convex optimization functionals, modern efficient optimization methods can be applied (Beck and Teboulle 2009, Boyd et al 2010, Burger et al 2016, Chambolle and Pock 2011, Chambolle and Pock 2016). Alternatively, when it is important to assess the uncertainty associated with the estimates, a Bayesian approach could be adopted (Calvetti and Somersalo 2007, Kaipio and Somersalo 2005, Repetti et al 2019, Stuart 2010). The deep convolutional neural network approach has also been applied to this setting. In imaging sciences, it is sometimes of interest to find an optimal representation and perform statistics on the second order information associated with the functional samples, i.e.
the covariance operators describing the variability of the underlying functional images. This is, for instance, the case in a number of areas of neuroimaging, particularly those investigating functional connectivity. In this work, we establish a framework for reconstructing and optimally representing indirectly observed samples C_1, . . . , C_n that are covariance operators, expressing the second order properties of the underlying unobserved functions. The indirect observations are covariance operators generated by the model

S_i = K_i ∘ C_i ∘ K_i* + E_i,    (2)

where K_i* denotes the adjoint operator and the term E_i models observational error. The term K_i ∘ C_i ∘ K_i* represents the covariance operator of K_i X^(i), with X^(i) an underlying random function whose covariance operator is C_i. As opposed to more classical linear inverse problem formulations, problem 2 introduces the following additional difficulties:
• We are in a setting where each sample is a high-dimensional object, namely a covariance operator; it is important to take advantage of the information from all the samples to reconstruct and represent each of them.
• The elements {C_i} and {S_i} live on non-Euclidean spaces, as they belong to the positive semidefinite cone, and it is important to account for this manifold structure in the formulation of the associated estimators.
• In an inverse problem setting it is fundamental to be able to introduce spatial regularization; however, it is not obvious how to feasibly construct a regularizing term for covariance operators reflecting, for instance, smoothness assumptions on the underlying functional images.
More general non-Euclidean settings could also be accommodated. Specifically, the error term could be defined on a tangent space and mapped to the original space through the exponential mapping. Another setting of interest is the case of error terms that push the observables out of the original space. In our applications this is not an issue, as the contaminated observations are themselves empirical covariance matrices, which belong to the non-Euclidean space of positive semidefinite matrices. We tackle problem 2 by generalizing the concept of principal component analysis (PCA) to optimally represent and understand the variation associated with samples that are indirectly observed covariance operators. The proposed model is also able to deal with the simpler case of samples that are indirectly observed functional images belonging to a linear functional space.

Motivating application - functional connectivity

In recent years, statistical analysis of covariance matrices has gained a predominant role in medical imaging and in particular in functional neuroimaging. In fact, covariance matrices are the natural objects to represent the brain's functional connectivity, which can be defined as a measure of covariation, in time, of the cerebral activity among brain regions. While many techniques have been proposed to describe functional connectivity, almost all can be described in terms of a function of a covariance or related matrix. Covariance matrices representing functional connectivity can be computed from the signals arising from functional imaging modalities. The choice of a specific functional imaging modality is generally driven by the preference for high spatial resolution signals, and thus high spatial resolution covariance matrices, versus high temporal resolution, and thus the possibility of studying the temporal dynamics of the covariance matrices.
Functional magnetic resonance falls in the first category, while electroencephalogram (EEG) and magnetoencephalography (MEG) in the second. However, high temporal resolution does generally come at the price of indirect measurements and, as shown in figure 1 in the case of MEG data, the signals are in practice detected on the sensors space. It is however of interest to produce results on the associated signals on the cerebral cortex, which we will refer to as brain space. The signals on the brain space are functional images whose domain is the geometric representation of the brain and are associated with the neuronal activity on the cerebral cortex. We borrow here the notion of brain space and sensors space from Johnstone and Silverman (1990) and we use it throughout the paper for convenience, however it is important to highlight that the formulation of the problem is much more general than the setting of this specific application. The signals on the brain space are related to the signals on the sensors space by a forward operator, derived from the physical modeling of the electrical/magnetic propagation from the cerebral cortex to the sensors. This is generally referred to as the forward problem. For soft-field methods like EEG, MEG and functional near-infrared spectroscopy (Eggebrecht et al 2014, Ferrari and Quaresima 2012, Mosher et al 1999, Singh et al 2014, Ye et al 2009), the forward operator is defined through the solution to a partial differential equation of diffusion type. Such a mapping induces a strong degree of smoothing and consequently the corresponding inverse problem, i.e. the reconstruction of a signal on the brain space from observations in the sensors space, is strongly ill-posed. In fact, signals with fairly different intensities on the brain space, due to the diffusion effect, result in signals with similar intensities in the sensors space. In figure 1, we show an example of a signal on the brain space and the associated signal on the sensors space. From a practical perspective, it is crucial to understand how the different parts of the brain interact, which is sometimes known as functional connectivity. A possible way to understand these interactions is by analyzing the covariance function associated with the signals describing the cerebral activity of an individual on the brain space (Fransson et al 2011, Lee et al 2013, Li et al 2009). More recently, the interest has shifted from this static approach to a dynamic approach. In particular, for a single individual, it is of interest to understand how these covariance functions vary in time. This is a particularly active field, known as dynamic functional connectivity (Hutchison et al 2013). Figure 1. On the top left, head model of a subject and superimposition of the 248 MEG sensors positioned around the head, called 'sensors space'. On the top right, brain model of the same subject represented by a triangular mesh of 8k nodes, which represents the 'brain space'. On the bottom left, an example of a synthetic signal detected by the MEG sensors. The dots represent the sensors, the color map represents the signal detected by the sensors. On the bottom right, intensity of the reconstructed signal on the triangular mesh of the cerebral cortex. Figure 2. Covariance matrices of the signal detected by the MEG sensors from three different subjects of the human connectome project. The size of the matrices is 248 × 248. The dark blue bands represent missing data, which are due to the exclusion of some channels after a quality check of the signal.
Another element of interest is understanding how these covariance functions vary among individuals. In gure 2, we show the covariance matrices, on the sensors space, computed from the MEG signals of three different subjects. The remainder of this paper is organized as follows. In section 2 we give a formal description of the problem. We then introduce a model for indirectly observed smooth functional images in section 3 and present the more general model associated with problem 2 in section 4. In section 5, we perform simulations to assess the validity of the estimation framework. In section 6 we apply the proposed models to MEG data and we nally give some concluding remarks in section 7. Mathematical description of the problem We now introduce the problem using our driving application as an example. To this purpose, let M a be a closed smooth two-dimensional manifold embedded in R 3 , which in our application represents the geometry of the cerebral cortex. An example of such a surface is shown on the top right of gure 1. We denote with L 2 (M) the space of square integrable functions on M. De ne X to be a random function with values in a Hilbert functional space F ⊂ L 2 (M) with mean µ = E[X], nite second moment, and assume the continuity and square integrability of its dv, for all g ∈ L 2 (M). Mercer's lemma (Riesz and Szokefalvi-Nagy 1955) guarantees the existence of a non-increasing sequence {γ r } of eigenvalues of C X and an orthonormal sequence of corresponding eigenfunctions {ψ r }, such that As a direct consequence, X can be expanded 5 as X = µ + ∞ r=1 ζ r ψ r , where the random variables {ζ r } are uncorrelated and are given by The collection {ψ r } de nes the modes of variation of the random function X, in descending order of strength, and these are called principal component (PC) functions. The associated random variables {ζ r } are called PC scores. Moreover, the de ned PC functions are the best nite basis approximation in the L 2 -sense, therefore for any xed R ∈ N, the rst R PC functions of X minimize the reconstruction error, i.e. The case of indirectly observed functions In the case of indirect observations, the signal is detectable only through s sensors on the sensors space. Let {K l : l = 1, . . . , m} be a collection of s × p real matrices, representing the potentially sample-speci c forward operators relating the signal at p pre-de ned points {v j : j = 1, . . . , p} on the cortical surface M with the signal captured by the s sensors. The matrices {K l } are 5 More precisely, we have that = 0, i.e. the series converges uniformly in mean-square. discrete versions of the forward operator K introduced in section 1. Moreover, de ne the evaluation operator Ψ : F → R p to be a vector-valued functional that evaluates a function f ∈ F at the p pre-speci ed points {v j } ⊂ M, returning the p dimensional vector ( f (v 1 ), . . . , f (v p )) T . The operators Ψ and {K l } are known. However, in the described problem the random function X can be observed only through indirect measurements {y l ∈ R s : l = 1, . . . , m} generated from the model where {x l } are m independent realizations of X, and thus expandible in terms of the PC functions {ψ r } and the coef cients {ζ l,r } given by ζ l,r = M {x l (v) − µ(v)}ψ r (v)dv. The terms {ε l } represent observational errors and are independent realizations of an s-dimensional normal random vector, with mean the zero vector and variance σ 2 I p , where I p denotes the p-dimensional identity matrix. 
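To make the observation model above concrete, the following minimal sketch simulates it on a one-dimensional stand-in for the cortical surface. The dimensions, the sinusoidal PC functions and the random forward matrix are all illustrative placeholders, not the MEG quantities.

```python
import numpy as np

# Illustrative sketch of the observation model y_l = K Psi(x_l) + eps_l on a discretized
# domain: p evaluation points, s sensors, m samples, R underlying PC functions.
rng = np.random.default_rng(0)
p, s, m, R = 500, 60, 50, 4

v = np.linspace(0.0, 1.0, p)                       # stand-in for the mesh nodes {v_j}
psi = np.array([np.sqrt(2) * np.sin((r + 1) * np.pi * v) for r in range(R)])  # placeholder PC functions
gamma = np.array([3.0, 2.5, 2.0, 1.0]) ** 2        # PC variances gamma_r

K = rng.standard_normal((s, p)) / np.sqrt(p)       # placeholder forward operator
sigma_noise = 0.5

zeta = rng.standard_normal((m, R)) * np.sqrt(gamma)         # PC scores zeta_{l,r}
X = zeta @ psi                                               # x_l evaluated at the p points
Y = X @ K.T + sigma_noise * rng.standard_normal((m, s))      # indirect, noisy observations y_l
print(Y.shape)   # (m, s): one s-dimensional measurement per sample
```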
We consider the problem of estimating the PC functions {ψ r } in (5), and associated scores {ζ l,r }, from the observations {y l }. In gure 3 we give an illustration of the introduced setting. Note that it would not be necessary to de ne the evaluation operator if the forward operators were de ned to be functionals {K l : F → R p }, relating directly the functional objects on the brain space to the real vectors on the sensors space. It is however the case that the operators {K l } are computed in a matrix form by third party software (see section 6 for details) for a pre-speci ed set of points {v j } ⊂ M and it is thus convenient to take this into account in the model through the introduction of an evaluation operator Ψ. In the case of single subject studies, the surface M is the subject's reconstructed cortical surface, an example of which is shown on the right panel of gure 1. In this case, it is natural to assume that there is one common forward operator K for all the detected signals. In the more general case of multi-subject studies, M is assumed to be a template cortical surface. We are thus assuming that the individual cortical surfaces have been registered to the template M, which means that there is a smooth and one-to-one correspondence between the points on each individual brain surface and the template surface M, where the PC functions are de ned. However, notice that when it comes to the computation of the forward operators, we are not assuming the brain geometries of the single subjects to be all equal to a geometric template, as in fact the model in (5) allows for sample-speci c forward operators {K l }. The individual cortical surfaces could also have different number of mesh points, in that case the subject-speci c 'resampling' operator could be absorbed into the de nition of sample-speci c evaluation operators {Ψ l }. The estimation of the PC functions in (5) has been classically dealt with by reconstructing each observation x l independently and subsequently performing PCA. However, such an approach can be sub-optimal in particular in a low signal-to-noise setting, as when estimating one signal, the information from all the other sampled signals is systematically ignored. The statistical analysis of data samples that are random functions or surfaces, i.e. functional data, has also been explored in the functional data analysis (FDA) literature (Ramsay and Silverman 2005), however, most of those works focus on the setting of fully observed functions. An exception to this is the sparse FDA literature (see e.g. Yao et al 2005), where instead the functional samples are assumed to be observable only through irregular and noisy evaluations. In the case of direct but noisy observations of a signal, previous works on statistical estimation of the covariance function, and associated eigenfunctions, have been made, for instance, in Bunea and Xiao (2015) for regularly sampled functions and in Huang et al (2008), Yao et al (2005) for sparsely sampled functions. A generalization to functions whose domain is a manifold is proposed in Lila et al (2016) and appropriate spatial coherence is introduced by penalizing directly the eigenfunctions of the covariance operator to be estimated, i.e. the PC functions. In the indirect observations setting, Tian et al (2012) propose a separable model in time and space for source localization. 
The estimation of PC functions of functional data in a linear space and on linear domains, from indirect and noisy samples, has been previously covered in Amini and Wainwright (2012). They propose a regularized M-estimator in a reproducing kernel Hilbert space (RKHS) framework. Due to the fact that in practice the introduction of an RKHS relies on the de nition of a kernel, i.e. a covariance function on the domain, this approach cannot be easily extended to non-linear domains. In Katsevich et al (2015), driven by an application to cryo-electron microscopy, the authors propose an unregularized estimator for the covariance matrix of indirectly observed functions. However, a regularized approach is crucial in our setting, due to the strong ill-posedness of the inverse problem considered. In the discrete setting, also other forms of regularization have been adopted, e.g. sparsity on the inverse covariance matrix (Friedman et al 2008, Liu andZhang 2019). The case of indirectly observed covariance operators A natural generalization of the setting introduced in the previous section is considering observations that have group speci c covariance operators. In detail, suppose now we are given a set of n covariance functions {C i : i = 1, . . . , n}, representing the underlying covariance operators {C i : i = 1, . . . , n} on the brain space. In our driving application, each covariance function C i : M × M → R describes the functional connectivity of the ith individual or the functional connectivity of the same individual at the ith time-point. We consider the problem of de ning and estimating a set of covariance functions, that we call PC covariance functions, which enable the description of {C i } through the 'linear combinations' of few components. Such a reduced order description is of interest, for example, in understanding how functional connectivity varies among individuals or over time. We de ne a model for the PC covariance functions of {C i } from the set of indirectly observed covariance matrices, computed from the signals on the sensors space, and thus given where . . , p} are the sampling points associated with the operator Ψ. The forward operators {K i } act on both sides of the covariance functions {C i }, due to the linear transformation K i Ψ applied to the signals on the brain space before being detected on the sensors space. The term E T i E i is an error term, where E i is an s × s matrix such that each entry is an independent sample of a Gaussian distribution with mean zero and standard deviation σ. Model (6) could be regarded as an implementation of the idealized problem 2, where the covariance operators are represented by the associated covariance functions. An illustration of the setting introduced can be found in gure 4. The problem introduced in this section has not been extensively covered in the literature. In the discrete case, Dryden et al (2009) introduce a tangent PCA model for directly observed covariance matrices. An extension to directly observed covariance operators has been proposed in Pigoli et al (2014). Also related to our work is the setting considered in Petersen and Müller (2019), where the authors propose a regression framework for responses that are random objects (e.g. covariance matrices) with Euclidean predictors. The proposed regression model is applied to study associations between age and low-dimensional correlation matrices, representing functional connectivity, which have been computed from a parcellation of the brain. 
In section 4, we propose a novel PCA approach for indirectly observed high-dimensional covariance matrices. Principal components of indirectly observed functions The aim of this section is to de ne a model for the estimation of the PC functions {ψ r } from the observations {y l }, de ned in (5). Although the model proposed in this section is not the main contribution of this work, it allows us to introduce some of the concepts necessary to the de nition of the more general model for indirectly observed covariance functions in section 4. Model Now let z = (z 1 , . . . , z m ) T be an m-dimensional real column vector and H 2 (M) be the Sobolev space of functions in L 2 (M) with rst and second distributional derivatives in L 2 (M). From now on F is instantiated with H 2 (M). We propose to estimatef ∈ H 2 (M), the rst PC function of X, and the associated PC scores vector z, by solving the equation where · is the Euclidean norm and ∆ is the Laplace-Beltrami operator, which enables a smoothing regularizing effect on the PC functionf . The data t term encourages K l Ψf to capture the strongest mode of variation of {y l }. The parameter λ controls the trade-off between the data t term of the objective function and the regularizing term. The second PC function can be estimated by classical de ation methods, i.e. by applying model (7) on the residuals {y l −ẑ l K l Ψf }, and so on for the subsequent PCs. The proposed model can be interpreted as a regularized least square estimation of the rst PC function ψ 1 in (5), with the terms {z l } playing the role of estimates of the variables {ζ l,1 }. In the simpli ed case of a single forward operator K := K 1 = · · · = K m , the minimization problem (7) can be reformulated in a more classical form. In fact, xing f in (7) and minimizing over z gives which can then be used to show that the minimization problem (7) is equivalent to maximizing with Y an m × s real matrix, where the lth row of Y is the observation y T l . This reformulation gives further insights on the interpretation off in (7). In fact,f is such that KΨf maximizes (KΨf ) T 1 m Y T Y (KΨf ) subject to a norm constraint. The term 1 m Y T Y is the empirical covariance matrix in the sensors space. The term z T z in (7) places the regularization term λ M ∆ 2 M f in the denominator of the equivalent formulation (9). Thus,f is regularized by the choice of norm in the denominator of (9), in a similar fashion to the classic functional principal component formulation of Silverman (1996). Ignoring the spatial regularization, the point-wise evaluation of the PC function Ψf in (9) can be interpreted as the rst PC vector computed from the dataset of backprojected data [K T 1 y 1 , . . . , K T m y m ] T , similarly to what is proposed in Dobriban et al (2017) in the context of optimal prediction. Algorithm Here we propose a minimization approach for the objective function in (7), which we approach by alternating the minimization of z and f in an iterative algorithm. In (7), a normalization constraint must be considered to make the representation unique, as in fact multiplying z by a constant and dividing f by the same constant does not change the objective function. We optimize in z under the constraint z = 1, which leads to a normalized version of the estimator (8): For a given z, solving (7) with respect to f will turn out to be equivalent to solving an inverse problem, which we discretize adopting a mixed nite elements approach (Azzimonti et al 2014). 
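A minimal sketch of the alternating scheme just described, for a single forward operator and with a generic symmetric positive semi-definite penalty matrix P standing in for the discretized Laplace–Beltrami penalty constructed in the next subsection; the function and variable names are ours, not the paper's.

```python
import numpy as np

def pc_inverse_problem(Y, K, P, lam, n_iter=15, rng=np.random.default_rng(0)):
    """Sketch of the alternating minimization for model (7): Y is (m, s) data on the sensors
    space, K is the (s, kappa) forward matrix, P is a (kappa, kappa) penalty matrix standing
    in for the discretized Laplace-Beltrami penalty, lam is the regularization parameter."""
    m = Y.shape[0]
    z = rng.standard_normal(m)
    z /= np.linalg.norm(z)                       # normalization constraint ||z|| = 1
    for _ in range(n_iter):
        # f-step: penalized least squares for the evaluated PC function c = Psi(f)
        c = np.linalg.solve(K.T @ K + lam * P, K.T @ (Y.T @ z))
        # z-step: scores given c (the unconstrained update), then renormalized
        Kc = K @ c
        z = Y @ Kc
        z /= np.linalg.norm(z)
    return c, z
```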
Speci cally, consider now a triangulated surface M T , union of the nite set of triangles T , giving an approximated representation of the manifold M. We then consider the linear nite element space V consisting of a set of globally continuous functions over M T that are af ne where restricted to any triangle τ in T , i.e. This space is spanned by the nodal basis φ 1 , . . . , φ κ associated with the nodes ξ 1 , . . . , ξ κ , corresponding to the vertices of the triangulation M T . Such basis functions are Lagrangian, for all v ∈ M T . To ease the notation, we assume that the p points {v j } associated with the evaluation operator Ψ coincide with the nodes of the triangular mesh ξ 1 , . . . , ξ κ , and thus we have that the coef cients c are such that c = Ψf for any f ∈ V. Consequently, we are assuming the forward operators {K l } to be s × κ matrices, relating the κ points on the cortical surface of the ith sample, in one-to-one correspondence to ξ 1 , . . . , ξ κ , to the s-dimensional signal detected on the sensors for the ith sample. Let now M and A be the mass and stiffness κ × κ matrices de ned as Practically, ∇ MT φ j is a constant function on each triangle of M T , and can take an arbitrary value on the edges 6 . Let h = max τ ∈T (diam(τ )) denote the maximum diameter of the triangles forming M T , then the solutionf h of (7), in the discrete space V, is given by the following proposition. Proposition 1. The surface nite element solutionf h ∈ V of model (7), for a given unitary Equation (12) has the form of a penalized regression, where the discretized version of the penalty term is AM −1 A. The sparsity of the linear system (12), namely the number of zeros, depends on the sparsity of its components. The matrices M and A are very sparse, however M −1 is not, in general. To overcome this problem, in the numerical analysis of partial differential equations literature, the matrix M −1 is generally replaced with the sparse matrixM −1 , whereM is the diagonal matrix such thatM j j = j ′ M j j ′ (Fried andMalkus 1975, Zienkiewicz et al 2013). The penalty operator AM −1 A approximates very well the behavior of AM −1 A. Moreover, in the case of longitudinal studies that involve only one subject, we have a single forward operator K := K 1 = · · · = K m common to all the observed signals, and consequently Algorithm 1. Inverse problems-PCA algorithm 1: Initialization: (a) Computation of M and A (b) Initialize z, the scores vector associated with the rst PC function 2: PC function's estimation: compute c such that 4: Repeat steps 2 and 3 until convergence equation (12) can be rewritten as the sparse overdetermined system to be interpreted in a least-square sense. A sparse QR solver can be nally applied to ef ciently solve the linear system (13). In algorithm 1 we summarize the main algorithmic steps to compute the PC functions and associated PC scores for indirectly observed functions. The initializing scores z can be chosen either at random or, when there is a correspondence between the detectors of different samples (e.g. K 1 = · · · = K m ), with the scores obtained by performing PCA on the observations in the sensors space. Eigenfunctions of indirectly observed covariance operators Suppose now we are in the case of a single forward operator K. 
Combining steps 2 and 3 of algorithm 1, and moving the normalization step from (z l ) to f h , we obtain the iterations The obtained algorithm depends on the data only through m l=1 y l y T l that up to a constant is the covariance matrix computed on the sensors space. The proposed algorithm can thus be applied to situations where the observations {y l } are not available, but we are given only the associated s × s covariance matrix on the sensors space, computed from {y l }. This could be of interest in situations where the temporal resolution is very high and the spatial resolution is low, therefore it is convenient to store the covariance matrix rather than the entire set of observations. Reconstruction and representation of indirectly observed covariance operators Consider now n sample covariance matrices S 1 , . . . , S n , each of size s × s, representing n different connectivity maps on the sensors space. Three of such covariance matrices, associated with three different individuals, are shown in gure 2. Recall moreover that we denote with M the brain surface template and with {K i ∈ R s×p } the set of subject-speci c forward operators, relating the signal at the p pre-speci ed points {v j } on the cortical surface M with the signal detected on the s sensors. The aim of this section is to introduce a model for the reconstruction and representation of the covariance functions {C i }, on the brain space, associated with the actually observed covariance matrices {S i }, on the sensors space. The matrices {S i } are related to the covariance functions {C i } through formula (6) that we recall here being i is the spectral decomposition of S i and D 1/2 i denotes the diagonal matrix whose entries are the square-root of the (non-negative) entries of D i . Each square-root decomposition S 1/2 i can be interpreted as a data-matrix whose empirical covariance is S i . Another possible choice for the square-root decompositions is S The output of the proposed algorithms will not depend on the speci c choice of the square-root decompositions. In the most general setting, each covariance matrix S i is an indirect observation of an underlying covariance function C i , which can be expressed in terms of its spectral decomposition as where, for each i, γ i1 γ i2 · · · 0 is a sequence of non-increasing variances and {ψ ir } r a set of orthonormal eigenfunctions. Introduce now {f i ∈ H 2 (M)} and {ẑ i ∈ R s }, obtained by applying model (7) to each sample independently, i.e. with · F denoting the Frobenius matrix norm. Each estimatef i , from model (14), can be interpreted as a regularized estimate of the leading PC function of S 1/2 i and thus of the eigenfunction ψ i1 . The subsequent eigenfunctions can be estimated by de ation methods, i.e. by removing the estimated componentsẑ i (K i Ψf i ) T from S 1/2 i and reapplying model (14). This leads to a set of estimates {f ir } and {ẑ ir }. The unregularized version of model (14) is equivalent to a singular value decomposition applied to each matrix S 1/2 i independently, which would lead to a set of orthogonal estimates {ẑ ir } r ⊂ R s , for each i = 1, . . . , n. In the regularized model orthogonality is not enforced, however the estimated PC components can be orthogonalized post-estimation by means of a QR decomposition. De ne now the empirical variances to beγ ir = f ir 2 L 2 (M) and consider the L 2 (M)normalized version of {f ir }. 
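For completeness, here is a small sketch of one concrete square-root factor of the kind used above, built from the spectral decomposition; any factor with S_half.T @ S_half = S would do, since, as noted above, the output of the proposed algorithms does not depend on this choice.

```python
import numpy as np

def sqrt_factor(S, ):
    """One possible square-root factor of a PSD matrix S: a matrix S_half such that
    S_half.T @ S_half = S, built from the spectral decomposition S = V D V^T."""
    evals, evecs = np.linalg.eigh(S)
    evals = np.clip(evals, 0.0, None)            # guard against tiny negative eigenvalues
    return (evecs * np.sqrt(evals)).T            # D^{1/2} V^T

# quick check on a random PSD matrix
rng = np.random.default_rng(2)
B = rng.standard_normal((30, 8))
S = B.T @ B
S_half = sqrt_factor(S)
print(np.allclose(S_half.T @ S_half, S))         # True
```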
An approximate representation of S i = (S 1/2 i ) T S 1/2 i is thus given by and the associated approximate representation of C i , in terms of {γ ir } and {f ir }, is whereγ ir is an estimate of the variance γ ir andf ir is an estimate of ψ ir . The tensor prod- The regularizing terms in (14) introduce spatial coherence on the estimated {f ir } and thus on the estimated eigenfunctions of {C i }, fundamental in an inverse problems setting. The reconstructed covariance functions {C i } could be discretized on a dense grid, leading to a collection of covariance matrices (C i (v j , v l )) jl . Following the approach in Dryden et al (2009), a Riemannian metric could be de ned on the space of covariance matrices, followed by projection of (C i (v j , v l )) jl on the tangent space centered at the sample Fréchet mean. PCA could then be carried out on vectorizations of the tangent space representations. A related approach, for covariance functions, has been adopted in Pigoli et al (2014). However, the aforementioned approaches could be prohibitive in our setting. In fact, performing PCA on tangent space projections produces modes of variation that are geodesics passing through the mean, and whose interpretation in a high-dimensional setting is often challenging. Therefore, in the next section, we propose an alternative model that enables joint reconstruction, and representation on a 'common basis', of indirectly observed covariance functions. A population model Let {ẑ i } n i=1 ⊂ R s andf ∈ H 2 (M) be given by the following model: The newly de ned model, as opposed to model (14), has now a subject-speci c s-dimensional vector z i and a term f that is common to all samples. As in the previous model, the subsequent components can be estimated by de ation methods, leading to a set of estimatesf r andẑ ir . De ne now the empirical variances to beγ ir = ẑ ir 2 f r 2 L 2 (M) and consider the L 2 (M)normalized version of {f r }. The empirical term in model (16) suggests an approximate representation of S i , that is where each underlying covariance function C i is approximated by the sum of the product between a subject-speci c constantγ ir and a componentf r ⊗f r common to all the observations. The regularizing term in (16) introduces spatial coherence on the estimated functions {f r }. The covariance operators {C i } are said to be commuting if C i C i ′ = C i ′ C i for all i, i ′ = 1, . . . , n. This property can be equivalently characterized as with {γ ir } r subject-speci c variances and {ψ r } a set of common orthonormal functions. Thus, a collection of commuting covariance operators is such that its covariance operators can be simultaneously diagonalized by a basis {ψ r }. In this case, the functions {f r } can be regarded as estimates of {ψ r } and {γ ir } estimates of {γ ir }. On the one hand, model (16) constrains the estimated covariances to be of the form C i = rγ irf r ⊗f r and not of the more general form C i = rγ irf ir ⊗f ir . On the other hand, such a model takes advantage of all the n samples to estimate the components {f r ⊗f r }. Moreover, the associated variables {γ ir } give a convenient approximate description of the ith covariance, as they are comparable across samples, as opposed to the one computed from model (14). In fact, the ith covariance function can be represented by the variance vector (γ i1 , . . . ,γ iR ) T , for a suitable truncation level R, where each entry is associated with the rank-one component f r ⊗f r . 
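The following is a hedged sketch of how one component of the population model just described can be fitted by alternating updates; the joint normalization of the score vectors follows our reading of the estimation procedure summarized in the next subsection and may differ in detail from the published algorithm 2. P is again a dense placeholder penalty matrix.

```python
import numpy as np

def population_pc(S_half_list, K_list, P, lam, n_iter=20, rng=np.random.default_rng(3)):
    """Sketch of one component of the population model (16): a common evaluated function c
    and subject-specific score vectors z_i. S_half_list holds (s, s) square-root factors,
    K_list holds (s, kappa) forward matrices, P is a (kappa, kappa) penalty matrix."""
    s = S_half_list[0].shape[0]
    kappa = K_list[0].shape[1]
    n = len(S_half_list)
    z = [rng.standard_normal(s) for _ in range(n)]
    scale = np.sqrt(sum(np.dot(zi, zi) for zi in z))
    z = [zi / scale for zi in z]                 # constraint: sum of ||z_i||^2 equals one
    c = np.zeros(kappa)
    for _ in range(n_iter):
        # c-step: penalized least squares pooling all subjects
        lhs = sum(np.dot(zi, zi) * Ki.T @ Ki for zi, Ki in zip(z, K_list)) + lam * P
        rhs = sum(Ki.T @ (Si.T @ zi) for zi, Ki, Si in zip(z, K_list, S_half_list))
        c = np.linalg.solve(lhs, rhs)
        # z-step: per-subject unconstrained update, then a joint rescaling
        # (exactly the constrained update when all K_i coincide)
        z = []
        for Ki, Si in zip(K_list, S_half_list):
            Kc = Ki @ c
            z.append(Si @ Kc / np.dot(Kc, Kc))
        scale = np.sqrt(sum(np.dot(zi, zi) for zi in z))
        z = [zi / scale for zi in z]
    return c, z
```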
For each r, a scatter plot of the variances {γ ir } i , as the one in gure 14, helps understand what the average contribution of the rth components is and what its variability across samples is. Model (17) could also be interpreted as a common PCA model (Benko et al 2009, Flury 1984, as {f r } are the estimated regularized eigenfunctions of the pooled covariance C = 1 n n i=1 C i . Potentially, PCA could be performed on the descriptors (γ i1 , . . . ,γ iR ) T to nd rank-R components that maximize the variance of linear combinations of {γ ir } (i.e. the variance of the variances). However, results would be more dif cult to interpret, as they would involve variations that are rank-R covariance functions around the rank-R mean covariance function. Algorithm The minimization in (14), for each xed i, is a particular case of the one in (7) (see section 3.2), so we focus on the minimization problem in (16) which is also approached in an iterative fashion. We set n i=1 z i 2 = 1 in the estimation procedure. This leads to the estimates of {z i }, given f, that are The estimate of f given {z i }, in the discrete space V introduced in section 3.2, is given by the following proposition. , the scores of the rst PC 3: PC function's estimation from model (14): compute c such that : Scores estimation from model (14): Algorithm 2 contains a summary of the estimation procedure. From a practical point of view, the choice to de ne the representation basis to be a collection of rank one (i.e. separable) covariance functions, of the type F r =f r ⊗f r , is mainly driven by the following reasons. Firstly, rank-one covariance functions are easier to interpret due to their limited degrees of freedom. Secondly, on a rank one covariance function F r =f r ⊗f r spatial coherence can be imposed by regularizingf r , as in fact done for model (14), and this is fundamental in the setting of indirectly observed covariance functions. Finally, due to their size, it might not be possible to store the full reconstructions of the covariance functions {C i } on the brain space, instead, the representation model in (17) allows for an ef cient joint representation of such covariance functions in terms of rank-one components. Simulations In this section, we perform simulations to assess the performances of the proposed algorithms. To reproduce as closely as possible the application setting, the cortical surfaces and the forward operators are taken from the MEG application described in section 6. The details on the extraction and computation of such objects are left to the same section. For the same reason, the signals on the brain space considered here are vector-valued functions, speci cally functions from the brain space M to R 3 , as is the case in the MEG application. The proposed methodology can be trivially extended to successfully deal with this case, as shown in the following simulations. Indirectly observed functions We consider M T to be a triangular mesh, with 8k nodes, representing the cortical surface geometry of a subject, as shown on the left panel of gure 1. Each of the 8k nodes will represent a location v j associated with the sampling operator Ψ. The locations of the nodes {v j } on the brain space, the location of the 241 detectors on the sensors space and a model of the subject's head, enable the computation of a forward operator K describing the relation between the signal generated on the locations {v j }, on the brain space, and the signal detected on the 241 sensors in the sensors space. 
In practice, the signal on each node v j is described by a three dimensional vector, characterized by an intensity and a direction, while the signal detected on the sensors space is a scalar signal. Thus, the forward operator is a 241 × 24k matrix. We rst want to assess the performances of the proposed model in the case of indirect functional observations belonging to a linear space. To this purpose, we produce synthetic data following the generative model (5). Speci cally, on M T , we construct the four L 2 (M T ) orthonormal vector-valued functions {ψ r = (ψ r,1 , ψ r,2 , ψ r,3 ) : r = 1, . . . , 4}, with ψ r : M T → R 3 . These represent the PC functions to be estimated. In gure 5 we show the four components of {ψ r } and the associated energy maps { ψ r (v) 2 : v ∈ M T }, with · denoting the Euclidean norm in R 3 . We then generate m = 50 smooth vector-valued functions {x l } on M T by where {z lr } are i.i.d realizations of the four independent random variables {z r ∼ N(0, γ r ) : r = 1, . . . , 4}, with γ 1 = 3 2 , γ 2 = 2.5 2 , γ 3 = 2 2 and γ 4 = 1. The functions {x l } are sampled at the 8k nodes, and the forward operator is applied to the sampled values, producing a collection of vectors {y l } each of dimension 241, the number of active sensors. Moreover, on each entry of the vectors {y l }, we add Gaussian noise with mean zero and standard deviation σ, for different choices of σ, to reproduce different signal-to-noise ratio regimes. In the following, we compare the PC model (7) to an alternative approach that we call the naive approach. In fact, the individual functions {x l } could be estimated from {y l } by use of classical inverse problem estimators. Here, we adopt the estimates {x l } de ned aŝ where eachx l is de ned in such a way that it balances the tting term and the regularization term in (20). Due to the fact that f is vector-valued, ∆ M f 2 is de ned as denoting the components of f. The same penalty operator is also adopted to generalize to vector-valued functions the PC models introduced in sections 3 and 4. In this approach, the constant λ is chosen independently for each of the m functions by partitioning the 241 detectors in roughly equally sized K = 2 groups and applying K-fold cross-validation. The criterion for the optimal λ is the average reconstruction error, on the sensors space, computed on the validation groups. Once we obtain the estimates {x l } we can compute the estimated PC functions {ψ r } by applying classical multivariate PC analysis on the reconstructed objectsx l . The estimates are compared to those of the proposed PC function model, as described in algorithm 1, with 15 iterations. Note that, instead, a tolerance could be xed to test if the algorithm has converged. However, 15 iterations give satisfactory convergence levels in our simulations and application studies. We partition the m observations in equally sized K = 2 groups and perform K-fold cross-validation for the choice of the penalty. Speci cally, we choose the coef cient λ that minimizes the sensors space reconstruction error, on the validation groups. To evaluate the performances of the two approaches, we generate 100 datasets as previously detailed. The quality of the estimated rth PC function is then measured with E ψ r ,ψ r = 3 q=1 ∇ M (ψ r,q −ψ r,q ) 2 . The results are summarized in the boxplots in gure 6, for two different signal-to-noise ratios, where the Gaussian noise has standard deviation σ = 5 and σ = 10. 
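A sketch of the two-fold cross-validation over observations used above for the choice of λ, reusing the pc_inverse_problem sketch given earlier; the criterion is the sensors-space reconstruction error on the held-out fold, as in the text.

```python
import numpy as np

def choose_lambda(Y, K, P, lambdas, rng=np.random.default_rng(4)):
    """Two-fold cross-validation sketch: fit the first PC on one half of the observations,
    score the sensors-space reconstruction error on the other half, and keep the lambda
    with the smallest average error."""
    m = Y.shape[0]
    idx = rng.permutation(m)
    folds = [idx[: m // 2], idx[m // 2:]]
    errors = []
    for lam in lambdas:
        err = 0.0
        for hold in (0, 1):
            train, test = folds[1 - hold], folds[hold]
            c, _ = pc_inverse_problem(Y[train], K, P, lam)
            Kc = K @ c
            z_test = Y[test] @ Kc / np.dot(Kc, Kc)          # scores for held-out samples
            err += np.sum((Y[test] - np.outer(z_test, Kc)) ** 2)
        errors.append(err / 2)
    return lambdas[int(np.argmin(errors))]
```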
Figure 6. On the left, a summary of the results in a medium signal-to-noise ratio regime. On the right, a summary of the results in a low signal-to-noise ratio regime. Each boxplot displays the paired differences of the estimation errors E(ψ r , ψ̂ r ) between the estimates of the two-step naive method and those obtained by applying algorithm 1. A paired difference greater than 0 indicates that, for the dataset in question, algorithm 1 has performed better than the two-step naive approach. In figure 7 we show an example of a signal on the brain space corrupted with the specified noise levels. The boxplots highlight the fact that the proposed approach provides better estimates of the PC functions (i.e. lower estimation errors E(ψ r , ψ̂ r )) when compared to the naive approach. Differences in the estimation error are higher in a low signal-to-noise regime, as is the case for the estimation of the fourth PC function, where, intuitively, the low variance associated with the PC function makes it more difficult to distinguish this structured signal from the noise component. Also notable is the stability of the estimates of the proposed algorithm across the generated datasets, as opposed to the naive approach of reconstructing the functional observations independently, which instead returns multiple particularly unsatisfactory reconstructions. An example of such reconstructions is shown in figure 8. Indirectly observed covariance functions In this section, we consider M T to be an 8k node triangular mesh, this time representing a template geometry of the cortical surface, which is shown in figure 10. This contains only the geometric features common to all subjects. Moreover, each subject's cortical surface is also represented by an 8k node triangular surface, which is used, together with the locations of the 241 detectors on the sensors space and the head model, to compute a forward operator K i for the ith subject. The 8k nodes of each subject's triangular mesh are in correspondence with the 8k nodes of the template mesh M T . This allows the model to be defined on the template M T . As in the previous section, we construct four L 2 (M T ) orthonormal functions {ψ r = (ψ r,1 , ψ r,2 , ψ r,3 ) : r = 1, . . . , 4}. The energy maps of {ψ r } are shown in figure 9. We generate synthetic data from model (6) as follows: the covariance functions C i on the brain space are built from {ψ r }, with coefficients determined by z i1 , . . . , z i4 , which are i.i.d. realizations of the four independent random variables {z r ∼ N(0, γ r ) : r = 1, . . . , 4}, with γ 1 = 3^2 , γ 2 = 2.5^2 , γ 3 = 2^2 and γ 4 = 1. The matrix-valued form of the covariance functions arises from the fact that the observed functions on the brain space are vector-valued. Subsequently, we construct the point-wise evaluation matrices C i ∈ R^{24k×24k}, from which the corresponding covariance matrices on the sensors space are defined by applying K i Ψ on both sides and adding the error term E_i^T E_i , where E i is an s × s matrix with each entry an independent sample from a Gaussian distribution with mean zero and standard deviation 5. We then apply algorithm 2 with 15 iterations, feeding {S i } in input. The results are shown in figure 9, in terms of energy maps of the reconstructed functions ψ̂ r . These are a close approximation of the underlying functions {ψ r }. The fidelity measure (the sum over q = 1, 2, 3 of ||∇ M (ψ r,q − ψ̂ r,q )||^2) of such estimates is 6.8 × 10^{-2} , 6.1 × 10^{-1} , 6.8 × 10^{-1} and 7.4 × 10^{-1} for ψ 1 , . . . , ψ 4 respectively, which is comparable in terms of order of magnitude to the results obtained in the case of PCs of indirectly observed functions.
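A small sketch of the synthetic covariance observations just described: low-rank covariances on a discretized brain space, pushed through a placeholder forward matrix and contaminated with a Wishart-type error term E^T E. All dimensions, and the exact way the subject-specific coefficients enter, are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
p, s, n, R = 400, 60, 10, 4
Psi_pc = rng.standard_normal((R, p))
Psi_pc /= np.linalg.norm(Psi_pc, axis=1, keepdims=True)    # placeholder (roughly) orthonormal PC functions
gamma = np.array([3.0, 2.5, 2.0, 1.0]) ** 2                # component variances gamma_r
K = rng.standard_normal((s, p)) / np.sqrt(p)               # placeholder forward matrix
sigma_noise = 0.5

S_list = []
for i in range(n):
    g = rng.normal(0.0, np.sqrt(gamma)) ** 2               # subject-specific coefficients (squared Gaussian draws)
    C = (Psi_pc.T * g) @ Psi_pc                            # C_i = sum_r g_r psi_r psi_r^T on the brain space
    E = sigma_noise * rng.standard_normal((s, s))
    S_list.append(K @ C @ K.T + E.T @ E)                   # observed sensors-space covariance S_i
```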
Across the generation of multiple datasets, results are stable, with the exception of few situations where the cross-validation approach suggests a penalization coef cient λ that under-smoothes the solution, due to very similar associated signals on the sensors space of the under-smoothed solution and the real solution. However, the crossvalidation is only a possible approach to the choice of the penalization constant, and many other options have been proposed in the inverse problems literature (Vogel 2002). Some of these, however, involve visual inspection. Application In this section, we apply the developed models to the publicly available human connectome project (HCP) young adult dataset (Van Essen et al 2012). This dataset comprises multi-modal neuroimaging data such as structural scans, resting-state and task-based functional MRI scans, and resting-state and task-based MEG scans from a large number of healthy volunteers. In the following, we brie y review the pre-processing pipeline, applied to such data by the HCP, to ultimately facilitate their use. Pre-processing For each individual a high-resolution 3D structural MRI scan has been acquired. This returns a 3D image describing the structure of the gray and white matter in the brain. Gray matter is the source of large parts of our neuronal activity. White matter is made of axons connecting the different parts of the gray matter. If we exclude the sub-cortical structures, gray matter is mostly distributed at the outer surface of the cerebral hemispheres. This is also known as the cerebral cortex. By segmentation of the 3D structural MRI, it is possible to separate gray matter from white matter, in order to extract the cerebral cortex structure. Subsequently a mid-thickness surface, interpolating the mid-points of the cerebral cortex, can be estimated, resulting in a 2D surface embedded in a 3D space that represents the geometry of the cerebral cortex. In practice, such a surface, sometimes referred to as cortical surface, is a triangulated surface. Moreover, from the 3D structural MRI, a surface describing the individuals' head can be extracted. The latter plays a role in the derivation of the model for the electrical/magnetic propagation of the signal from the cerebral cortex to the sensors. An example of the cortical surface of a single subject, is shown on the right panel in gure 1, instead the associated head surface and MEG sensors positions are shown on the left panel of the same gure. Moreover, a surface based registration algorithm has been applied to register each of the extracted cortical surfaces to a triangulated template cortical surface, which is shown in gure 10. Post registration, the triangulated template cortical surface is sub-sampled to a 8k nodes surface. Moreover, the nodes on the cortical surface of each subject are also sub-sampled to a set of 8k nodes in correspondence to the 8k nodes of the template. For each subject, a 248 × 24k matrix, representing the forward operator, has been computed with FieldTrip (Oostenveld et al 2011) from its head surface, cortical surface and sensors position. Such a matrix relates the vector-valued signals in R 3 , on the nodes of the triangulation of the cerebral cortex, to the one detected from the sensors, consisting of 248 magnetometer channels. With the aim of studying the functional connectivity of the brain, for each subject, three 6 min resting state MEG scans have been performed, of which one session is used in our analysis. 
During the 6 min, data are collected from the sensors at 600k uniformly distributed time-points. Using FieldTrip, classical pre-processing is applied to the detected signals, such as removal of low quality channels and low quality segments. Details of this procedure can be found in the HCP MEG reference manual. Moreover, we apply a band pass filter, limiting the spectrum of the signal to the [12.5, 29] Hz range, also known as the beta waves. For the signal of each channel we compute its amplitude envelope (see figure B.1), which describes the evolution of the signal amplitude. The measure of connectivity between channels that we adopt in this work is the covariance of the amplitude envelopes. Other connectivity metrics, such as phase-based metrics, have been proposed in the literature (see Colclough et al 2016, and references therein). Analysis Here we apply the population model introduced in section 4.2 to the HCP MEG data. The first part of the analysis focuses on studying dynamic functional connectivity of a specific subject. For this purpose, we subdivide the 6 min session into n = 40 consecutive intervals. Each of these segments is used to compute a covariance matrix in the sensors space, resulting in n covariance matrices S 1 , . . . , S n . In this setting, we have one forward operator K = K 1 = · · · = K n . The aim is to understand the main modes of variation of the functional connectivity on the brain space of the subject. Thus, algorithm 2, with 20 iterations, is applied to S 1 , . . . , S n to find the PC covariance functions. A regularization parameter λ common to all the PC components is chosen by inspecting the plot of the regularity of the first R = 10 PC covariance functions (measured as the sum over r = 1, . . . , R of ∫ M ||∇ψ̂ r ||^2) versus the residual norm, for different choices of the parameter. This is a version of the L-curve plot (Hansen 2000) and is shown on the left panel of figure B.2. Here we show the results for λ = 10^2; in the appendices we show the results for λ = 10. The energy maps of the estimated ψ̂ 1 , ψ̂ 2 and ψ̂ 3 resulting from the analysis are shown in figure 11. These are associated with the first three PC covariance functions ψ̂ 1 ⊗ ψ̂ 1 , ψ̂ 2 ⊗ ψ̂ 2 and ψ̂ 3 ⊗ ψ̂ 3 . High intensity areas, in yellow, indicate which areas present high average interconnectivity, either by means of positive or negative correlation in time. In figure 12, we show the plot of variances associated with each time segment, describing the variation in time of the PC covariance functions, hence the variation in interconnectivity. The variance can be either defined on the sensors space, by normalizing the PC covariance functions {Kψ̂ r }, with K the forward operator, or on the brain space, by normalizing the PC covariance functions on the brain space {ψ̂ r }. Due to the presence of invisible dipoles, which are dipoles that display zero magnetic field on the sensors space, the two norms can be quite different, leading to different average variances for each PC covariance function. Due to the high sensitivity of the source space variances to the choice of the regularization parameter, we focus on the estimated variances on the sensors space. We have also applied our model to the covariances obtained by subdividing the MEG session into n = 80 segments. As expected, the PC covariance functions, shown in figure B.5, are very similar. However, the variances, in figure B.4, show higher variability in time, which can be partially explained by the fact that shorter time segments lead to covariance estimates that have higher variability.
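To illustrate the connectivity measure used throughout this section, here is a minimal sketch of the band-pass plus amplitude-envelope pipeline; the Butterworth/filtfilt filter choice and the sampling rate in the usage line are our assumptions, not the exact HCP pipeline settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def envelope_covariance(X, fs, band=(12.5, 29.0), order=4):
    """Band-pass each channel to the beta band, take the amplitude envelope via the Hilbert
    transform, and return the covariance of the envelopes.  X has shape (channels, time
    points); fs is the sampling frequency in Hz."""
    b, a = butter(order, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    X_band = filtfilt(b, a, X, axis=1)
    env = np.abs(hilbert(X_band, axis=1))        # amplitude envelopes, channel by channel
    return np.cov(env)                           # (channels, channels) connectivity matrix

# e.g. S = envelope_covariance(meg_segment, fs=1000.0)  # meg_segment: (248, T) array; fs hypothetical
```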
Figure 11. Top side and bottom side views of the estimated energy maps ψ̂ 1 , ψ̂ 2 and ψ̂ 3 obtained by applying algorithm 2 to the covariance matrices computed from the MEG resting state data of a single subject on n = 40 consecutive time intervals. On the right panel, the covariance functions associated with these energy maps. On the top right panel we highlight with red circles the areas with high average interconnectivity, which correspond to the neighborhoods of the red crossed vertices in the plot of the energy map of ψ̂ 1 . The second part of the analysis focuses on applying the proposed methodology to a multi-subject setting. Specifically, n = 40 different subjects are considered. For each subject, the 6 min scan is used to compute a covariance matrix, resulting in n covariance matrices S 1 , . . . , S n . The template geometry in figure 10 is used as a model of the brain space. Algorithm 2 is then applied to find the PC covariance functions on the template brain, associated with S 1 , . . . , S n . We run the algorithm for 20 iterations, and choose the regularizing parameter to be λ = 10^2 by inspecting the L-curve plot in the right panel of figure B.2. The results for λ = 10 are shown in the appendices. The energy maps of the estimated functions ψ̂ 1 , ψ̂ 2 and ψ̂ 3 and the associated first three covariance functions ψ̂ 1 ⊗ ψ̂ 1 , ψ̂ 2 ⊗ ψ̂ 2 and ψ̂ 3 ⊗ ψ̂ 3 are shown in figure 13. High intensity areas, in yellow, indicate which areas present high average connectivity. In figure 14, we show the associated subject-specific variances, both in the sensors space and the brain space. The presented methodology opens up the possibility to understand population level variation in functional connectivity, and indeed, whether, just as we need different forward operators for individuals (due to anatomical differences), we should also be considering both population and subject-specific connectivity maps when analyzing connectivity networks. In fact, it is of interest to note that in both the single and multi-subject settings, the areas with high interconnectivity, displayed in yellow in figures 11 and 13, seem to be at least partially overlapping with the brain's default network (Buckner et al 2008, Yeo et al 2011). The brain's default network consists of the brain regions known to have highly correlated hemodynamic activity (i.e. highest functional connectivity levels), and to be most active, when the subject is not performing any specific task. An image of the spatial configuration of the default network can be found, for instance, in figure 2 of Buckner et al (2008). Figure 12. Plots of the segment-specific variances of the first R = 10 PC covariance functions. On the left, the estimated variances on the sensors space; on the right, the estimated variances on the brain space. Figure 13. Top side and bottom side views of the estimated energy maps ψ̂ 1 , ψ̂ 2 and ψ̂ 3 obtained by applying algorithm 2 to the covariance matrices computed from the MEG resting state data of n = 40 different subjects. On the right panel, the covariance functions associated with these energy maps. Figure 14. Plots of the subject-specific variances associated with the first R = 10 PC covariance functions. On the left, the estimated variances on the sensors space; on the right, the estimated variances on the brain space.
From the plots of the associated variances in the sensors space (left panel of gures 12 and 14) we can see that these areas are also the ones that show high variability in connectivity across time or across subjects. This might suggest that the brain's default network is also the brain region that shows among the highest levels of spontaneous variability in connectivity. The plots of the variances on the brain space (right panel of gure 14), when compared to those on the sensors space (left panel of gure 14), demonstrate that these type of studies are highly sensitive to the choice of the regularization, not only in terms of spatial con guration of the results, but also in terms of estimated variances on the brain space. With a naive ' rst reconstruct and then analyze' approach, where the reconstructed data on the brain space replace those observed on the sensors space, this issue could go unnoticed, as the variability that does not t the chosen model is implicitly discarded in the reconstruction step and does not appear in the subsequent analysis. Also, importantly, our analysis deals with statistical samples that are entire covariances, overcoming the limitations of seed-based approaches, where prior spatial information is required to choose the seed. Seed locations are usually informed by fMRI studies and this comes with the risk of biasing the analysis when comparing electrophysiological networks (MEG) and hemodynamic networks (fMRI). In general, care should be taken when drawing conclusions from MEG studies. Establishing static and dynamic functional connectivity from MEG data remains challenging, due to the strong ill-posedness of the inverse problem. It is known that other variables, such as the choice of the frequency band or the choice of the connectivity metric can in uence the analysis. While the choice of the neural oscillatory frequency band could be seen as an additional parameter in MEG functional connectivity studies, there is no general agreement on the choice of the connectivity metrics (Gross et al 2013). It is important to highlight that in this paper we focus on methodological contributions to the speci c problem of reconstructing and representing indirectly observed functional images and covariance functions. Discussion In this work we introduce a general framework for the reconstruction and representation of covariance operators in an inverse problem context. We rst introduce a model for indirectly observed functional images in an unconstrained space, which outperforms the naive approach of solving the inverse problem individually for each sample. This model plays an important role in the case of samples that are indirectly observed covariance functions, and thus constrained to be positive semide nite. We deal with the non-linearity introduced by such constraint by working with unconstrained representations, yet incorporating spatial information in their estimation. The proposed methodology is nally applied to the study of brain connectivity from the signals arising from MEG scans. The models proposed here can be extended in many interesting directions. From an applied prospective, it is of interest to apply them to different settings, not necessarily involving neuroimaging, where studying second order information has been so far prohibitive. Direct examples are second order analysis of the dynamics of meteorological observations, such as temperature. 
Another possible application is the study of the dynamics of ocean currents, where the irregularity of the spatial domain, and its complex boundaries, can be easily accounted for thanks to the manifold representation approach in our models. From a modeling point of view, it is of interest to take a step further toward the integration of the inverse problems literature with the approach we adopt in this paper. For instance, penalization terms that have been shown to be successful in the inverse problems literature, e.g. total variation penalization, could be introduced in our models. Appendix B. Application-additional material Here we present further material complementing the analysis in section 6. In gure B.1 we show the amplitude envelope computed from a ltered version of a signal detected by an MEG sensor. The covariance of the amplitude envelopes across different sensors is the measure of connectivity used in this work. In gure B.2 we show the L-curve plots associated with the PC covariance models applied to the dynamic and multi-subject functional connectivity studies. In gures B.3 and B.4 we show respectively the plots of the estimated PC covariance functions and associated variances from the dynamic functional connectivity study on n = 40 segments with regularization parameter λ = 10. In gures B.5 and B.6 we show the estimated PC covariance functions and associated variances from the dynamic functional connectivity study on n = 80 time segments with regularization parameter λ = 10 2 . In gures B.7 and B.8 we show the estimated PC covariance functions and associated variances from the multi-subject functional connectivity study on n = 40 subjects with regularization parameter λ = 10. Plots of the regularity of the rst R = 10 PC covariance functions, measured as 10 r=1 M ∇ψ r 2 versus the residual norm in the data, for different choices of log(λ). On the left panel, the plot refers to the dynamic connectivity study, on the right panel the plot of the multi-subject connectivity study. Energy maps of the estimatedψ 1 ,ψ 2 andψ 3 obtained by applying algorithm 2, with lower regularization (λ = 10), to the covariance matrices computed from the MEG resting state data of a single subject on n = 40 consecutive time intervals. On the right panel, the covariance functions associated with these energy maps. Energy maps of the estimatedψ 1 ,ψ 2 andψ 3 obtained by applying algorithm 2, with λ = 10 2 , to the covariance matrices computed from the MEG resting state data of a single subject on n = 80 consecutive time intervals. On the right panel, the covariance functions associated with these energy maps. Energy maps of the estimatedψ 1 ,ψ 2 andψ 3 obtained by applying algorithm 2, with lower regularization (λ = 10), to the covariance matrices computed from the MEG resting state data of n = 40 different subjects. On the right panel, the covariance functions associated with these energy maps.
15,279.6
2018-06-11T00:00:00.000
[ "Mathematics" ]
On the Large Charge Sector in the Critical $O(N)$ Model at Large $N$ We study operators in the rank-$j$ totally symmetric representation of $O(N)$ in the critical $O(N)$ model in arbitrary dimension $d$, in the limit of large $N$ and large charge $j$ with $j/N\equiv \hat{j}$ fixed. The scaling dimensions of the operators in this limit may be obtained by a semiclassical saddle point calculation. Using the standard Hubbard-Stratonovich description of the critical $O(N)$ model at large $N$, we solve the relevant saddle point equation and determine the scaling dimensions as a function of $d$ and $\hat{j}$, finding agreement with all existing results in various limits. In $4<d<6$, we observe that the scaling dimension of the large charge operators becomes complex above a critical value of the ratio $j/N$, signaling an instability of the theory in that range of $d$. Finally, we also derive results for the correlation functions involving two"heavy"and one or two"light"operators. In particular, we determine the form of the"heavy-heavy-light"OPE coefficients as a function of the charges and $d$. Introduction and Summary Quantum dynamics often simplifies in the limit of large quantum numbers, and results which may be inaccessible within standard perturbation theory can be obtained by a semiclassical calculation. For example, in the context of the AdS/CFT duality, the expansion at large R-charge [1] and large spin [2] has provided many non-trivial tests and crucial insights on the gauge/string duality. Expansions in large quantum numbers have also proved useful in deriving various non-perturbative results in quantum field theory, for example in the context of conformal field theory (CFT), see e.g. [3][4][5]. Recently, the large charge expansion in CFTs with global symmetry was studied from a rather general viewpoint in [6] using effective field theory methods, see e.g. [7][8][9][10][11][12][13] for further developments, and [14] for a review and a more comprehensive list of references. In this note, we study large charge operators in the canonical example of the critical O(N ) model in dimension d. As it is well-known, this CFT can be described as the IR (1.1) The 1/N expansion of the CFT correlation functions can be developed by integrating out the fundamental fields φ i , which yields an effective action for σ where N acts as the coupling constant. In practice, this leads to a set of Feynman diagrammatic rules where one uses an induced σ propagator and the σφ i φ i vertex (see e.g. [15,16] for reviews). This standard 1/N perturbation theory works as long as one considers correlation functions of operators with quantum numbers that are finite in the large N limit. However, when the quantum numbers are of order N , the ordinary 1/N perturbation theory breaks down. This is because in this case the operator insertions are of the same order as the "classical" action, and hence the path integral is expected to be dominated by a non-trivial saddle point. In this paper we focus on observables involving scalar operators O j in the rank-j totally symmetric traceless representation of O(N ), in the limit (1. 2) In this limit, the scaling dimension of the operators are expected to take the form where the non-trivial function h(ĵ) can be determined by a semiclassical saddle point calculation. In d = 3, this problem was studied recently in [17] using a conformal map to R t × S 2 (the analogous problem in the -expansion, where one holds j fixed, was studied in [7,10,12]). 
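For orientation, the double-scaling limit and the expected form of the scaling dimensions referred to above can be summarized as follows; this is our reconstruction of the garbled displays (1.2)-(1.3), and the large-ĵ power quoted is the standard large-charge effective-field-theory scaling rather than a result specific to this paper.

```latex
j \to \infty,\quad N \to \infty,\quad \hat{j}\equiv \frac{j}{N}\ \text{fixed},
\qquad
\Delta_j \;=\; N\,h(\hat{j})\,,
\qquad
h(\hat{j}) \;\sim\; \hat{j}^{\,\frac{d}{d-1}} \quad (\hat{j}\gg 1)\,.
```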
Here we work in Euclidean $\mathbb{R}^d$ throughout, and find the scaling dimensions together with a large-$\hat{j}$ expansion of the form (1.4). This large-$\hat{j}$ behavior is precisely consistent with the effective field theory approach [6,7]. Note that, as is evident from (1.4), this semiclassical evaluation of the scaling dimensions in fact resums an infinite number of terms in the usual 1/N expansion, and hence provides an infinite number of checks on standard 1/N Feynman diagrams. Having the result for general d, we can also make contact with the $\epsilon$-expansion in the overlapping regime of validity with the large N expansion, and we find agreement with all existing results. To obtain the scaling dimension, we study directly the two-point function of the large charge operators on $\mathbb{R}^d$, and determine the semiclassical saddle point for the $\sigma$ field as a function of the insertion points of the "heavy" operators. This approach also allows us to extract without much further work the correlation functions involving two "heavy" operators and various "light" operators. In particular we will derive the expression for the three-point function coefficients in the "heavy-heavy-light" configuration. Similar results in the effective field theory and analytic bootstrap approaches were previously obtained in [7,9]. One interesting application of our results is to the O(N) model in d > 4. It is known that the standard 1/N perturbation theory can be formally continued above four dimensions [18], and it appears to be unitary and well-defined to all orders in 1/N (for operators with quantum numbers that do not scale with N). This matches onto the formal UV fixed point of the quartic theory in $d = 4 + \epsilon$, and onto the IR fixed point of a model with cubic interactions in $d = 6 - \epsilon$ [19]. However, as shown in [20], the theory in 4 < d < 6 is non-perturbatively unstable due to instanton effects, which lead to small imaginary parts in physical observables. The two-point function of the large charge operators is fixed by conformal invariance up to an overall factor, where $N_j$ is a normalization constant (in general, scheme dependent), and $\Delta_j$ is the scaling dimension that we want to determine. The operator $O_j$ is the lowest dimension operator in the sector with charge j, and is not expected to undergo mixing. Thus, we can determine the scaling dimension by computing the two-point function in the path integral with the auxiliary Hubbard-Stratonovich field, dropping the term in (1.1) proportional to $\sigma^2/\lambda$, which is irrelevant in the critical limit. Since the action is quadratic in the $\phi_i$ fields, we may evaluate the two-point function by Wick contractions. Here $G(x_1, x_2; \sigma)$ denotes the Green's function of the differential operator $-\partial^2 + \sigma$. We are interested in the limit $j \to \infty$, $N \to \infty$ with $\hat{j} = j/N$ fixed, and hence we may write the result in a form which highlights the fact that the insertion of the large charge operators contributes a term of order N to the $\sigma$ effective action. In the large N limit, the path integral over $\sigma$ is expected to be dominated by a saddle point which extremizes the effective action (2.5). In the absence of the insertion (i.e., $\hat{j} = 0$), the saddle point on $\mathbb{R}^d$ is simply $\sigma = 0$. However, in the presence of the large charge operators, we expect the saddle point to be at $\sigma = \sigma_*$, with $\sigma_*$ a non-trivial profile which depends on the insertion points of the large charge operators. To proceed, we make an ansatz for the form of the saddle point profile $\sigma_*$. The key observation is that we may view $\sigma_*$ as the one-point function of $\sigma$ in the presence of the large charge operators. In other words, this is related to the 3-point function $\langle O_j(x_1, u_1)\, O_j(x_2, u_2)\, \sigma(x) \rangle$.
Following steps similar to the ones above, and recalling that $\sigma$ in the critical O(N) model is an operator of scaling dimension $\Delta = 2 + O(1/N)$, we may use the form of the three-point function of scalar operators fixed by conformal invariance to deduce that $\sigma_*$ must take the form (2.8), where $c_\sigma$ is an undetermined constant that should be fixed by solving the saddle point equation. Explicitly, the saddle point equation is obtained by extremizing the effective action in (2.5); computing the functional derivative and combining the resulting expressions, we may write the saddle point equation as (2.12). In order to solve for the constant $c_\sigma$ in (2.8), we will need to evaluate explicitly the Green's function $G(x, y; \sigma_*)$. This is a non-trivial calculation, which we carry out in the next subsection. The Green's function The Green's function is the solution to (2.15), where $\sigma_*$ is given in (2.8). This equation may be solved as a power series in $\sigma_*$. Here $G^{(0)}$ is the well-known free-field massless propagator (2.16). Solving (2.15) iteratively, one then finds a series of conformal integrals; integrals of precisely this kind were evaluated in arbitrary d in [23], exploiting a connection to conformal quantum mechanics. Using the results obtained there, we find (2.20). (Our variables $\eta$, $\xi$ are denoted u, v in [23].) Introducing now the conformal cross ratios and integrating by parts, we may write the result for $L \geq 1$ in closed form. Plugging this into (2.18) yields (see [24] for a similar calculation in d = 4) an expression in which we use the shorthand $\log(Y/X)$, and $J_k(x)$ denotes the standard Bessel function. After an integration by parts, we finally arrive at (2.25). This is the final result for the Green's function in the presence of the non-trivial profile $\sigma_*$. Note that the fact that it depends on the conformal cross ratios in (2.22) is expected from conformal invariance. Indeed, by an argument similar to the one in eq. (2.6), one can see that the Green's function is related to the four-point function of two "light" scalar operators in the presence of the two large charge operators. We will come back to this point in section 3 below. In order to solve the saddle point equation (2.12), we need to evaluate the Green's function (2.25) in various limits. Let us first consider the coincident point limit $G(x, x; \sigma_*)$. From (2.22), we see that this limit corresponds to $X \to \infty$, $Y \to \infty$ and $X/Y \to 1$, or equivalently $\log(Y/X) \to 0$. Then we find (2.26), or, after an integration by parts, (2.27). Next, we consider the case when $x \to x_1$, or $y \to x_2$, or both. In all of these cases, the Green's function (2.25) may also be written as (2.28). Taking the limit $X \to 0$, $Y \to 1$ (leaving the remaining variable fixed for now), the integral can be evaluated using the identity from [25] for $\int_0^\infty dx\, e^{-\alpha x} J_0\big(\beta\sqrt{x^2 + 2\gamma x}\big)$, which yields (2.31). Now we may plug in the explicit form of the cross ratios in this limit, where we have introduced a small regulator $\delta$ to deal with the short-distance singularity that appears when x collides with $x_1$. Plugging this into (2.31), we obtain (2.33). Similarly, we obtain (2.34). Finally, when both $x \to x_1$ and $y \to x_2$, we get (2.35). This is one of our main results, and it will allow us to obtain the scaling dimensions of the large charge operators by evaluating (2.4) at the saddle point. The only missing ingredient is the functional determinant, which we obtain in the next subsection. The functional determinant Similarly to the Green's function, the functional determinant may be evaluated as a power series in $\sigma_*$. We have (2.37), where for brevity we have omitted the dependence of $\sigma_*$ on the insertion points $x_1$, $x_2$ of the heavy operators.
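The expansion of both the Green's function and the functional determinant in powers of $\sigma_*$ can be illustrated with a simple numerical toy model. The sketch below is a hedged, one-dimensional lattice analogue (not the continuum calculation of the text): the grid, the boundary conditions, and the profile sigma(x) are illustrative assumptions, chosen only so that the Neumann series for $G = (-\partial^2 + \sigma)^{-1}$ and for $\log\det(-\partial^2 + \sigma) - \log\det(-\partial^2)$ converges and can be compared against the exact lattice result.

```python
import numpy as np

# 1D lattice toy: G = (-d^2/dx^2 + sigma)^{-1} and the log-determinant shift,
# both expanded in powers of sigma. All numbers are illustrative choices only.
n, dx = 200, 0.05
x = dx * np.arange(1, n + 1)

# Free lattice Laplacian with Dirichlet boundary conditions.
lap = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1)
       + np.diag(np.ones(n - 1), -1)) / dx**2
K0 = -lap                                              # free operator -d^2/dx^2
S = np.diag(0.1 * np.exp(-(x - 5.0) ** 2))             # assumed weak, smooth profile sigma(x)

G0 = np.linalg.inv(K0)                                 # free Green's function
G_exact = np.linalg.inv(K0 + S)                        # exact Green's function with sigma

# Neumann (Born) series: G = G0 - G0 S G0 + G0 S G0 S G0 - ...
# (converges here only because sigma was chosen weak enough).
G_series, term = G0.copy(), G0.copy()
for _ in range(10):
    term = -term @ S @ G0
    G_series += term
print("max |G_series - G_exact| =", np.abs(G_series - G_exact).max())

# log det(K0 + S) - log det(K0) = tr log(1 + G0 S) = sum_k (-1)^{k+1}/k tr[(G0 S)^k]
_, logdet_exact = np.linalg.slogdet(np.eye(n) + G0 @ S)
M, logdet_series, power = G0 @ S, 0.0, np.eye(n)
for k in range(1, 11):
    power = power @ M
    logdet_series += (-1) ** (k + 1) / k * np.trace(power)
print("log det: series =", logdet_series, " exact =", logdet_exact)
```

The continuum calculation of the text proceeds along the same lines, except that the expansion is resummed in closed form with $\sigma_*$ fixed by conformal invariance, and the short-distance divergences require the analytic regulator discussed below.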
Expanding (2.27) in powers of $c_\sigma$, we can read off the expansion coefficients. Plugging this result into (2.37) and using (2.8), after performing the sum we find (2.39). The integral over x is divergent and needs to be regularized. We will adopt an analytic regulator, where $\delta \to 0$ and $\mu$ is a mass scale introduced on dimensional grounds. (The term of order zero in $\sigma$, i.e. $\log\det(-\partial^2)$, is naturally regulated to zero in flat space.) Using (2.41), we find an expression whose pole in the regulator should be removed as part of the renormalization of the composite operator $O_j$. We will drop it in the following and just keep track of the dependence on $\log|x_1 - x_2|$, which is sufficient to extract the scaling dimensions. Our final result for the functional determinant is then (2.43). The scaling dimension We can now evaluate the two-point function of the large charge operators in the large N limit with $\hat{j} = j/N$ fixed. Using (2.4), the leading large N result is obtained by evaluating the $\sigma$ effective action (2.5) at the saddle point. Let us define the quantity in (2.45). Then, using (2.35) and (2.43), we find the scaling dimension (2.48), where $c_\sigma(\hat{j})$ is the solution of the saddle point equation (2.36), or equivalently of (2.49). Small $\hat{j}$ expansion In the small-$\hat{j}$ limit, we may solve (2.49) in powers of $\hat{j}$. The resulting coefficients are given in (2.52) and (2.53), and the higher order coefficients are straightforward to obtain, though they become rather lengthy. Note that, recalling that $\hat{j} = j/N$, the expression in (2.52) contains an infinite number of terms from the point of view of the usual 1/N expansion, namely those with the highest power of j at each order in 1/N. The expression for $h_2(d)$ can be seen to be in agreement with the known result for the anomalous dimension of the charge-j operators to order 1/N [26], which can be computed by standard Feynman diagram methods. The correction of order $1/N^2$ was also computed in [27], and one can check that the term of order $j^3/N^2 = N\hat{j}^3$ in the result obtained there precisely matches the function $h_3(d)$ in (2.53). One also finds (2.57), in precise agreement with the result obtained long ago in [28]. Large $\hat{j}$ expansion To obtain the expansion of the scaling dimension at large $\hat{j}$, one may rescale the integration variable in (2.45) and expand in inverse powers of $c_\sigma$. Using this expansion and solving (2.49) order by order in $1/\hat{j}$, we get (2.60). Plugging this expansion back into equation (2.48), we get (2.62). Note that the large j behavior $\Delta_j \sim j^{\frac{d}{d-1}}$ agrees with the prediction of the effective field theory approach [6,7]. This can also be seen to be in agreement with the result obtained in [12], taking the appropriate large N limit. Complex dimensions in 4 < d < 6 Note that if we try to continue eq. (2.64) to $d = 4 + \epsilon$, then, due to the fractional power involved, the scaling dimension in the large-$\hat{j}$ limit becomes complex, acquiring a phase $e^{\pm i\pi/3}$ with the expected $\hat{j}^{4/3}$ scaling, signaling an instability of the CFT. More generally, for 4 < d < 6 the scaling dimension becomes complex above a critical value $\hat{j}_{\rm crit}(d)$ of the ratio j/N. Interestingly, the function $\hat{j}_{\rm crit}(d)$ appears to be qualitatively similar to the function f(d) controlling the instanton-induced imaginary parts $\sim e^{-N f(d)}$ that were found in [20]. It would be interesting to clarify the relation between these quantities. Correlation functions at large charge Having obtained the Green's function (2.25) as a function of the insertion points of the large charge operators, it is relatively straightforward to derive the correlation functions of two "heavy" and an arbitrary number of "light" operators.
Below we focus on three-point functions, from which we can extract the OPE coefficients in the "heavy-heavy-light" configuration, and on the four-point functions in the "heavy-heavy-light-light" configuration. Three-point functions The three-point function of scalar operators with charges $j_1$, $j_2$, $j_3$ (in the totally symmetric traceless representation of O(N)) is fixed by conformal symmetry and O(N) symmetry to take the form (3.1). The O(N) symmetry requires this 3-point function to vanish unless the charges satisfy the triangle inequalities $j_i + j_j \geq j_k$, and $\sum_i j_i$ = even. Let us now consider the heavy-heavy-light configuration with charges $(j+q, j, j_3)$. Note that O(N) symmetry requires $-j_3 \leq q \leq j_3$ (and $j_3 + q$ = even). Now using the explicit form of the operators $O_j(x, u) = (u \cdot \phi(x))^j$, we obtain (3.3), where we used the shorthand $G_{ij} = G(x_i, x_j; \sigma)$, and the $n_{j+q,j,j_3}$ factor comes from the combinatorics of Wick contractions. Now we note that in (3.3), the only term that affects the calculation of the saddle point at large N is the factor $G_{12}^{j} = \exp(N\hat{j}\log(G_{12}))$, which is the same as in the two-point function calculation in section 2. Therefore, the $\sigma$ path integral is dominated by the same saddle point as found there, and we simply have to evaluate all factors in (3.3) at $\sigma = \sigma_*$. Stripping off the position-dependent and polarization-dependent factors which are fixed by symmetry, this yields for the 3-point function coefficient the expression (3.5), where N is the normalization factor coming from the Green's function, see eqs. (2.33)-(2.35). To obtain the 3-point coefficient for unit-normalized operators, which we denote by $a_{j+q,j,j_3}$, we may divide (3.5) by the square root of the two-point function normalization factors, which expresses $a_{j+q,j,j_3}$ in terms of $C_{j+q,j,j_3}$. Recalling that we are working in the large N limit with $j/N = \hat{j}$ fixed, we obtain the final result (3.7), where $c_\sigma$ is fixed in terms of $\hat{j}$ by (2.49). In particular, in the limit of large $\hat{j}$, we get a simple closed form, with $C_0$ given in (2.60). The leading large-$\hat{j}$ scaling $a_{j+q,j,j_3} \sim j^{\frac{(d-2)j_3}{2(d-1)}} = j^{\Delta_{j_3}/(d-1)}$ agrees in d = 3 with the EFT result obtained in [7] (see also [9]). Four-point functions For simplicity, let us specialize to the case of four-point functions of two large charge operators and two fundamental (charge 1) fields. Also, let us split $\phi_i = (\phi_1, \phi_2, \varphi_a)$, $a = 1, \ldots, N-2$, and take the "heavy" operators to be $Z_j = (\phi_1 + i\phi_2)^j$ and $\bar{Z}_j = (\phi_1 - i\phi_2)^j$. Then we can consider two kinds of heavy-heavy-light-light 4-point functions: $\langle Z_j \bar{Z}_j \varphi_a \varphi_b \rangle$ and $\langle Z_j \bar{Z}_j Z \bar{Z} \rangle$. In the former case, we get (3.9). Here we have used (2.25), evaluated on the saddle point profile $\sigma_*$, and we have defined a function of the conformal cross ratios. Note that in the limit we consider, the first term in the square bracket is subleading compared to the second term, due to the extra factor of $j = N\hat{j}$ in front of the latter. From (2.33)-(2.35), we have (3.13). So the four-point function, to leading order at large N with j/N fixed, is given by (3.14). Let us make a consistency check of this result with the OPE expansion, in the channel $13 \to 24$. To extract the OPE data in this limit, it is convenient to recast (3.14) as (see e.g.
[31]), where, comparing with (3.14), remembering $\Delta_\phi = d/2 - 1 + O(1/N)$, and using the definition of the cross ratios in (3.11), we obtain the function appearing in (3.16). This function should have the OPE expansion $\mathcal{G}(X, Y) = \sum_{\Delta,s} a^2_{\Delta,s}\, X^{(\Delta-s)/2}\, g_{\Delta,s}(X, Y)$, where the sum is over operators of dimension $\Delta$ and spin s that appear in the $13 \to 24$ channel, $a^2_{\Delta,s}$ are squared OPE coefficients, and $g_{\Delta,s}(X, Y)$ are the conformal blocks (normalized such that $g_{\Delta,s}(X, Y) = 1 + \ldots$ for $X \to 0$, $Y \to 1$). In the limit $X \to 0$, $Y \to 1$, the leading contribution should come from a scalar operator of charge j + 1 that appears in the OPE of $Z_j$ and Z. Comparing (3.16) with the OPE expansion in this limit, we see that the dimension of the exchanged operator of charge j + 1 should satisfy $\Delta_{j+1} - \Delta_j = \sqrt{(d/2-1)^2 + c_\sigma}$ (3.17). This is precisely as expected. Indeed, writing $\Delta_j = N h(\hat{j})$, we have in the large N limit $\Delta_{j+1} - \Delta_j \simeq h'(\hat{j})$. On the other hand, from (2.48) and (2.49), we see that $h'(\hat{j})$ reproduces exactly this combination, (3.19), in agreement with (3.17). From (3.16) we can also read off the squared OPE coefficient; it is in precise agreement with the result (3.7) derived earlier, setting $j_3 = 1$, $q = 0$. Conclusion In this paper we have studied large charge operators in the large N critical O(N) model in general d, in the limit where the charge j goes to infinity with $\hat{j} = j/N$ fixed. In particular, we have obtained the scaling dimensions to leading order at large N and arbitrary $\hat{j}$, as well as the 3-point and 4-point functions involving two large charge operators. In the range 4 < d < 6, we have observed an interesting transition from real to complex scaling dimensions at a critical value of the ratio j/N, which we view as a manifestation of the instability of the interacting O(N) model in 4 < d < 6 that is not captured by the ordinary 1/N perturbation theory. There are several extensions of our results that would be worth pursuing. For example, a natural further step would be to compute the subleading corrections to the scaling dimensions and other observables in the large charge limit we considered. For instance, the order $N^0$ correction can be computed by including the one-loop determinant arising from the quantum fluctuations around the semiclassical saddle point we found in Section 2. It would be interesting to evaluate such a correction to $\Delta_j$ explicitly for arbitrary $\hat{j}$ and d. It would also be useful to extend the calculation of correlation functions to the case of more than two heavy operators. For instance, deriving the 3-point function coefficients in the "heavy-heavy-heavy" configuration would be an interesting and non-trivial problem. It would also be interesting to further investigate the instability of the theory in 4 < d < 6, and in particular to understand the relation between the complex dimensions of the large charge operators that we found here and the imaginary part of the thermal free energy computed in [20,21]. Another natural direction would be to see if the methods we used in this paper can be extended to other kinds of operators with large quantum numbers. For example, one could consider other large representations of O(N), or operators with spin s in the large N limit with s/N fixed. In the case of the critical O(N) model, or its generalizations involving Chern-Simons gauge theory, this may have interesting applications to the duality [16,32,33] with Vasiliev higher spin theory in AdS [34,35].
Since the bulk coupling constant is identified with 1/N , the CFT states with quantum numbers of order N should be related to non-trivial classical solutions of the bulk higher-spin theory.
5,308.8
2020-11-23T00:00:00.000
[ "Physics" ]
The gene expression and protein profiles of ADAMTS and TIMP in human chondrosarcoma cell lines induced by insulin: The potential mechanisms for skeletal and articular abnormalities in diabetes Background: The delay in wound healing, the decrease in the resilience of long bones to fracture, and the delay in fracture healing are among the common complications in DM patients, and they still remain challenging issues to be solved. The mechanism is not yet fully understood, but high blood glucose and/or insulin deficiency or unresponsiveness to insulin are potential causes. Extracellular matrix degradation/remodeling is one of the important mechanisms whereby cell differentiation, bone remodeling and wound repair can be regulated. ADAMTS proteins play important roles in cartilage/bone metabolism. This study aimed to determine whether ADAMTS/TIMP proteins were affected by insulin application in OUMS-27 cells. Material and methods: OUMS-27 cells were induced with 10 μg/mL insulin for 1, 3, 7, and 11 days. Cells were harvested, and mRNA and protein extractions were performed. mRNA/cDNA levels were measured by qRT-PCR and protein levels were detected by WB. Results: ADAMTS1, 5, and 7 levels were significantly decreased, while TIMP-3 levels were increased (mRNA/protein concentrations). Conclusions: Pathologies and disturbances of cartilage/bone metabolism in patients with DM, delayed fracture healing in particular, may result from insulin deficiency. ADAMTS genes that play a role in the healing process are increased during insulin deficiency, which consequently interrupts the healing process by causing cartilage ECM degradation. INTRODUCTION Glucose is an essential nutrient for the metabolic and structural needs of the growth centers of hyaline cartilage and bone. When chondrocytes are left in an anaerobic environment, they not only use glucose as a primary substrate for the production of ATP (1,2); glucose is also the source of glucosamine sulfate, which is important for the development, protection, repair and remodeling of the cartilage. Wound healing, bone resilience, healing of fractured bones, and bone remodeling in procedures performed before tooth implant surgery are abnormal in patients with type-2 diabetes mellitus (DM). After the gradient PCR experiment ended, the samples were run and visualized on a 2% agarose gel to assess the results. The optimal annealing temperature and primer combination was taken to be the one giving the band with the highest intensity (yield) and no nonspecific products. The Tm values calculated by gradient PCR are given in Table 1. Protein isolation, Immunoblotting and Antibodies: Anti-ADAMTS1, Anti-ADAMTS4, Anti-ADAMTS5, Anti-ADAMTS7, Anti-ADAMTS15, Anti-ADAMTS18, Anti-TIMP-3, Anti-TIMP-4, and Anti-GAPDH primary antibodies were purchased from Santa Cruz Biotechnology, Inc., CA and used at a 1:1000 dilution. Cross-reactivity was confirmed before the study to match the data described on the manufacturer's data sheet. The cells were washed once with phosphate-buffered saline and scraped from the plates. Cells were solubilized in 300 μL of CelLytic M (Sigma Aldrich, St. Louis, MO) with a protease/phosphatase inhibitor (Pierce Protease and Phosphatase Inhibitor Mini Tablets, EDTA Free, Thermo Fisher Sci.). After incubation at 4 °C for 15 min, the samples were centrifuged (15 min, 12,000 × g). The protein concentration of cell extracts was determined using the Thermo Scientific Bradford Assay kit with BSA as standard.
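As a minimal illustration of this quantification step, a linear BSA standard curve can be fitted and used to interpolate sample concentrations. The absorbance readings and concentrations below are invented placeholders for illustration, not data from this study.

```python
import numpy as np

# Hypothetical Bradford standard curve: absorbance at 595 nm versus known BSA
# concentrations (mg/mL). All numbers are placeholders, not measured values.
bsa_conc = np.array([0.0, 0.125, 0.25, 0.5, 0.75, 1.0])
a595_std = np.array([0.00, 0.07, 0.14, 0.27, 0.40, 0.52])

# Least-squares linear fit A = slope * C + intercept within the linear range.
slope, intercept = np.polyfit(bsa_conc, a595_std, 1)

def protein_conc(a595, dilution_factor=1.0):
    """Interpolate sample protein concentration (mg/mL) from its A595 reading."""
    return dilution_factor * (a595 - intercept) / slope

# Example: a lysate read at A595 = 0.33 after a 1:2 dilution.
print(round(protein_conc(0.33, dilution_factor=2.0), 3), "mg/mL")
```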
Protein samples were boiled at 95 °C in Laemmli sample buffer (BioRad) with β-mercaptoethanol for 8 min. Ten micrograms of total protein per sample were run for Western blotting (WB). Briefly, 10 μL of each sample, together with the protein marker (BioRad Precision Plus Protein Western C Standard), were loaded onto the WB gel (BioRad Mini-PROTEAN TGX Stain-Free Gels, 4-15%, 15-well comb, 15 μL) in BioRad 1X Tris/Glycine/SDS running buffer and run at 250 V for 20 min. After electrophoresis, proteins were transferred onto PVDF membranes (BioRad Trans-Blot Turbo Transfer Pack, 0.22 μm pore size) (BioRad, Singapore). Membranes were blocked for one hour in 2.5% nonfat dried skim milk in Tris-buffered saline containing 0.05% Tween-20. The membranes were incubated overnight with primary antibodies (Table 2) diluted in blocking buffer. After washing three times with Tris-buffered saline/Tween-20 for 8 min each at room temperature, the membranes were incubated for one hour with the secondary antibodies (Table 2). Following three successive washes, immunoreactive bands were visualized using an enhanced chemiluminescence system (BioRad Immun-Star Western C kit) for 90 s. Signals were detected with the BioRad ChemiDoc MP Imaging System (Singapore), and densitometry was performed with ImageJ software; band intensities were normalized to the signal intensity of GAPDH as the equal-loading control for each sample in each experiment. For all densitometry analyses, this quantification was performed within the linear range of the standard curve defined by GAPDH. Statistical Analysis: Statistical Package for the Social Sciences (SPSS) version 16.0 was used for statistical tests. The nonparametric Kruskal-Wallis test was applied. Pairwise differences between groups were tested by the Mann-Whitney U test. P < 0.05 was accepted as significant. RESULTS In order to assess the effects of insulin on ADAMTS gene expression in cultured OUMS-27 cells, qRT-PCR was performed on mRNA eluted from cell lysates. Quantitation, standard curves, and the other qRT-PCR instrument data were regular and without anomalies. In addition, in order to achieve a better understanding of the effect of insulin on cartilage matrix degradation, we analyzed ADAMTS proteins in cultured OUMS-27 cells in vitro. WB analysis of ADAMTS proteins was performed using antibodies that recognize the specific ADAMTS enzymes. In the WB graphics, the x-axis shows insulin application time intervals (in days), and the y-axis shows band densities of ADAMTS and TIMP proteins normalized to GAPDH densities; this ratio is dimensionless. The ADAMTS1/GAPDH graphic is shown in Figure 1A. From the quantitative nucleic acid concentration graphics, the cDNA concentrations of the D1, D7 and D11 groups were considerably lower than that of the control group (P = 0.047, 0.009, and 0.009, respectively). This generally means that insulin application led to a decrease in ADAMTS1 amplicon levels. There was also a statistically significant difference in amplicon concentrations between the D1 and D7 groups (P = 0.028) (Figure 1A). ADAMTS1 protein production also appeared to decrease upon insulin induction in OUMS-27 cells. The immunoreaction between the antibodies and the membrane-bound proteins revealed two bands, at 110 kDa and 70 kDa. Similar bands have been suggested to arise after protein degradation. According to the manufacturer's instructions and the related scientific publications, the second band was expected at around 85 kDa.
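All group comparisons reported below follow the nonparametric scheme described in the Methods. A minimal sketch of that workflow is shown here; the GAPDH-normalized band densities are hypothetical placeholders, not the measured values of this study.

```python
import numpy as np
from scipy import stats

# Hypothetical GAPDH-normalized band densities (arbitrary units) for one target
# protein in control and insulin-treated groups; replicate values are placeholders.
groups = {
    "Control": np.array([1.00, 0.95, 1.05]),
    "D1":      np.array([0.80, 0.76, 0.85]),
    "D7":      np.array([0.40, 0.45, 0.38]),
    "D11":     np.array([0.55, 0.60, 0.50]),
}

# Overall comparison across all groups (nonparametric Kruskal-Wallis test).
h_stat, p_overall = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis: H = {h_stat:.2f}, P = {p_overall:.3f}")

# Pairwise comparisons against the control group (Mann-Whitney U test).
for name, values in groups.items():
    if name == "Control":
        continue
    u_stat, p_val = stats.mannwhitneyu(groups["Control"], values, alternative="two-sided")
    print(f"Control vs {name}: U = {u_stat:.1f}, P = {p_val:.3f}")
```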
The bands shown on 110 kDa and 70 kDa were identified as preform (inactive) and proform (active), respectively. Protein levels at first day after insulin induction were the same compared to control group. There was a dramatic decrease on D3, then the decrease continues up to D7. Although ADAMTS1 level increases on D11, the level was still about one fourth of the beginning level ( Figure 1B). The results of ADAMTS4 nucleic acid concentration are given in Figure 2A. Nucleic acid concentrations were decreased in D1 group and increased in D3, D7, and D11 groups compared to control group. However, as seen from high-rise SD bars, there were no significant differences between study and control groups (Figure 2A). Figure 1: (A) qRT-PCR test results of A disintegrin and metalloproteinases with thrombospondin type 1 motif 1 (ADAMTS1) in OUMS-27 chondrosarcoma cells. Values were standardized and normalized by proportioned with GAPDH values. Bars and error bars represent mean values and standard deviations, respectively. There were statistically significant differences between Control-Day 1 study group (C-D1), C-D7, C-D11, and D1-D7 (P = 0.047, 0.009, 0.009, and 0.028 respectively). (B) The electrophoretic image of ADAMTS1 protein of OUMS-27 cells by Western blotting technique and their graphics (two different bands) obtained by the calculation of band densities by CCD camera and ImageJ program. The x-axis shows insulin application time intervals (in days) and y-axis shows band densities of ADAMTS1. The densities for ADAMTS1 were normalized with GAPDH densities. Although the antibody used for ADAMTS5 protein showed two bands at 105 kDa/75 kDa, the second one could not be obtained properly. Thus, only 105 kDa band was interpreted. The production of ADAMTS5 protein was decreased upon insulin induction in OUMS-27 cells. Protein levels were decreased gradually during the experiments up to D7. Band density was increased to some extent on D11 ( Figure 3B). Figure 4A shows the quantitation of nucleic acid concentration of ADAMTS7. Concentrations of D1 and D3 groups are quite similar to those of control group. Although there is an increase upon insulin induction in D1 group, it was not expected to be significant because of high SD. However, nucleic acid concentrations of D7 and D11 groups were significantly decreased (P= 0.009). It probably means that insulin application affected mRNA levels of ADAMTS7 proteins at least in two groups. It is not valid for D1 and D3 groups of cells. In addition to that, there were statistically significant differences in cDNA concentrations between D1-D7, D1-D11, D3-D11 (P= 0.016, 0.009, and P= 0.050, respectively) ( Figure 4A). (B) The electrophoretic image of ADAMTS5 protein of OUMS-27 cells by Western blotting technique and its graphic (one band) obtained by the calculation of band density by CCD camera and ImageJ program. The x-axis shows insulin application time intervals (in days ) and y-axis shows band densities of ADAMTS5. The density for ADAMTS5 was normalized with GAPDH densities. ADAMTS7 protein level was decreased dramatically during the experiment in OUMS-27 cells (there was a 40 % decrease in protein amount of 70 kDa band and 60 % decrease for 55 kDa at D1 of experiments compared to controls). There was a gradual decrease in both bands up to around 90 % in D11 ( Figure 4B). The qRT-PCR results of ADAMTS15 is presented in Figure 5A. 
Although there were some differences in quantitative nucleic concentrations of groups D1 (increase), D3 (increase), D7 (decrease), and D11 (decrease) compared to control, they were statistically insignificant. As a conclusion, insulin application did not affect ADAMTS15 mRNA concentrations. There were two bands for ADAMTS15. The first band was anticipated at around 103 kDa according to prospectus whereas it was found at around 70 kDa. Two possibilities arise from this discrepancy: This band may originate from degradation of ADAMTS15 or it may be accepted as a nonspecific band. ADAMTS15 protein levels were affected by insulin in OUMS-27 cells. At the first and subsequent days, protein amounts were decreased gradually up to D7. After that day, 55 kDa band continued to decrease, but 70 kDa band showed an increase up to those levels of D1 ( Figure 5B). There was a gradual increase in ADAMTS18 nucleic acid concentrations of groups D1, D3, D7, and D11 in OUMS-27 cells induced by insulin ( Figure 6A). Because there were great differences between the individual results, statistical analyses gave no difference between study groups and control group. The antibodies used for detection of ADAMTS18 in OUMS-27 cells gave three bands in WB: two (100 kDa and 75 kDa) of them were considered to be nonspecific bands. The smallest band, around 45-50 kDa, was considered to be the active form of enzyme. The decrease in the protein amount of ADAMTS18 upon insulin induction was found to be negligible when compared to other ADAMTS proteins. There was a 30% decrease on the first day of insulin induction, and then it remained stable around this percentage during 11 days of experiment ( Figure 6B). Although OUMS-27 cells produced 6 times more TIMP-3 protein in D1 after insulin induction compared to controls, it was not statistically significant because of a huge gap between lowest and highest values of instrument analyses. There was serious decrease in TIMP-3 cDNA levels after D3. The result of D3 was not statistically significant although a prominent (20%) decrease was observed compared to control group. However, the decrease in nucleic acid concentration of groups D7 and D11 compared to control group was statistically significant (P= 0.017 and 0.047, respectively). Therefore, the insulin application led to the decrease in TIMP-3 mRNA levels in OUMS-27 cells. This is not valid for D1 and D3 groups. Besides, there were significant differences in cDNA concentrations between D1-D7, D1-D11, and D3-D7 groups (P= 0.016, 0.021, and P= 0.028, respectively) ( Figure 7A). Figure 6: (A) qRT-PCR test results of A disintegrin and metalloproteinases with thrombospondin type 1 motif 18 (ADAMTS18) in OUMS-27 chondrosarcoma cells. Values were standardized and normalized by proportioned with GAPDH values. Bars and error bars represent mean values and standard deviations, respectively. There were no statistically significant differences between the groups. (B) The electrophoretic image of ADAMTS18 protein of OUMS-27 cells by Western blotting technique and its graphic (one band) obtained by the calculation of band density by CCD camera and ImageJ program. The x-axis shows insulin application time intervals (in days) and y-axis shows band densities of ADAMTS18. The density for ADAMTS18 was normalized with GAPDH densities. The bands of TIMP-3 were detected around 25 kDa. Insulin induction led to a marked increase in TIMP-3 protein amount in OUMS-27 cells. 
When compared to control group, protein levels were decreased 49% on D1, 63% on D3, and 22% on D7 and finally protein amount suddenly showed a 2.75 fold increase compared to control values ( Figure 7B). TIMP-4 bands were detected around 22 kDa. The effect of insulin on TIMP-4 proteins in OUMS-27 cells was limited. Insulin induction during all 11 days led to an increase of merely 10 %. The levels of TIMP-4 protein on other harvesting days were almost the same as those of control group (Figure 8). 0.016, 0.047, 0.016, 0.021, and 0.028 respectively) . (B) The electrophoretic image of TIMP-3 protein of OUMS-27 cells by Western blotting technique and its graphic (one band) obtained by the calculation of band density by CCD camera and ImageJ program. The x-axis shows insulin application time intervals (in days) and y-axis shows band densities of TIMP-3 protein. The density for TIMP-3 was normalized with GAPDH densities. At a glance, the WB and qRT-PCR results overlap at some points and show differences in others. For example, while mRNA and protein measurements show parallelism in terms of ADAMTS1, 5 and 7 concentrations, the mRNA and protein levels show differences in terms of TIMP-3. It has been observed that some changes in the protein levels of other studied ADAMTS enzymes have not occurred in mRNA levels. For example, ADAMTS4 protein decreased gradually after the application of the insulin. This gradual reduction is present in both observed bands. If we only compare the control and D11 groups, we can see a half reduction on the first and a nearly 3-fold reduction on the second band. No difference is found in the qRT-PCR results between the control and insulin-induced group. It is the same for ADAMTS15. In fact, the results are more dramatic here. The reduction on the 55 kDa band is more than 10 times in the D11 insulin-induced group. A 40% reduction in the D1 group, a 3-fold reduction in the D2 group and a 5-fold reduction in the D7 group are present. Although not as severe as that in 70 kDa band, when the D11 group is considered, the two-fold reduction attracts attention. If we compare the qRT-PCR results of ADAMTS15 of the control and insulin groups, the differences appear to be statistically insignificant. In other words, no significant difference is observed in the mRNA levels. In addition to this, the ADAMTS18 results of the WB and qRT-PCR analyses have again revealed to be parallel. In both methods, no significant differences have been found between the control and insulin groups. Because TIMP-4 has only been examined with the WB method, it has not been possible to compare the results. As for the differences of ADAMTS4 and 15 between the two methods; the cells have maintained their mRNA levels in the insulin and non-insulin groups up to the protein phase, but after that the protein stayed intact in the insulin-free environment whereas the level decreased due to digestion of the protein because of insulin induction. This result actually brings a different aspect of the original agenda: Insulin may be acting as a controlling regulator of the digestion of these proteins. DISCUSSION In this experimental study, we investigated the effects of insulin in chondrosarcoma cells in vitro. Since the study was performed in an immortal cancerous cell line in vitro, it is essential to refrain from overinterpretations and to be cautious when conducting and extrapolating these findings in vivo and disease conditions. 
However, it is also essential to connect the basic findings with real disease conditions in order to give an objective estimation to the readers. The obtained positive and/or negative findings of PCR and WB are important in several aspects and might be qualified as original: 1) Because our experiment was conducted in a way that the induction of insulin to the cells was investigated and the absence/ineffectiveness of insulin in DM was considered as a precondition, the exact opposite of our findings should be considered. In this case, there should be two changes; the intracellular matrix degrading enzymes would be activated and their inhibitory enzymes would become passive. These two findings make this study unique, because molecular causes of fracture healing problems in DM have a potential for further research. There might be some clues in our findings to reveal the molecular mechanisms of the fracture healing problems in DM. Figure 8: The electrophoretic image of the tissue inhibitors of metalloproteinases-4 (TIMP-4) protein of OUMS-27 chondrosarcoma cells by Western blotting technique and its graphic (one band) obtained by the calculation of band density by CCD camera and ImageJ program. The x-axis shows insulin application time intervals (in days) and y-axis shows band densities of TIMP-4 protein. The density for TIMP-4 was normalized with GAPDH densities. 2) The main reason for increased fragility of long bones in patients with uncontrolled DM may be absence and/or ineffectiveness of insulin. The reducing and/or inhibitory effects of insulin on ADAMTS proteins, ensure that the cartilage and bone tissues prefer to go to anabolic direction. In insulin absence, it can be anticipated that an increase in ADAMTS activity may accelerate the destruction of ECM in cartilage and bone tissues, leading consequently to a reduction in the resilience to bone fractures. Little is known about the effect of DM on the cartilage phase of fracture healing. This phase plays an important role in the reparation process because the cartilage acts as a template for the subsequent bone formation (24). Our study has proven the reducing effect of insulin on three ADAMTS and the increasing effect on TIMP protein, and also sheds light on the molecular basis of deterioration of cartilage and bone repair in diabetes. Previous studies have shown that diabetic animals create a fracture callus, but the volume compared to that in normal animals is much smaller (25). Clinical and case studies have reported that diabetes delays fracture healing and dramatically increases recovery time (26). Animal studies have shown that DM causes the formation of small calluses resulting in the reduction of bone and cartilage development, reduces the proliferation and differentiation of osteoblastic cells and chondrocytes and that the mechanical strength is reduced two times during fracture reparation (9,16). An important decrease of collagen content in the callus has been identified in diabetic animals compared to normoglycemic animals (16). In an experimental study, during the healing of a tibial fracture, an unidentified catabolic effect and accelerated cartilage loss in the diabetic group was determined (27). 
When insulin is induced into SRC chondrocyte cells, the insulin receptors have increased and this is not interpreted as an increase in the de novo receptor synthesis or a translocation of other intracellular receptors located in the cell compartments to the cell surface, but most likely as a decrease in the rate of receptor degradation (28). Chondrocytes synthesize and secrete the macromolecules forming the ECM of cartilage cells. SRC chondrocytes respond in a very special way to physiological concentrations of porcine insulin at in vitro conditions and produce plenty of cartilage-like proteoglycans (29), type-II-collagen (30), hyaluronic acid (31) and other secreted proteins (32). Rat chondrosarcoma chondrocytes are insulin-and IGF-1-dependent tumors. This tumor, shrinks significantly in hypophysectomized rats and also shows a distinct decrease in rats with streptozotocin-induced diabetes (33). ADAMTS4/5 undertakes important roles in the degradation of the cartilage matrix proteoglycan (34). These enzyme levels drastically increase in animals with DM (35) and proteoglycans are removed very rapidly from the ECM. However, this process is slowed down with insulin therapy. Parallel to that, when insulin has been applied into chondrocyte cells as in our study, the levels/activity of these proteins have decreased and the removal of the proteoglycan groups have slowed down. We can provide a similar effect by directly introducing insulin into chondrosarcoma cells, so when viewed in terms of protein, we see that ADAMTS4 and 5 decrease drastically. To understand the disorders occurring during increased fragility of long bones and delay in healing of fractured bones, the endochondral ossification process needs to be well analyzed. The cartilage matrix is digested during endochondral ossification. Studies in recent years related to the proteolytic enzymes, which could be responsible for the digestion, are based on two important main structures of the cartilage matrix proteins, type-II collagen and aggrecan digesting proteinases. Fibrillary collagen and aggrecan digesting MMP13 inside the growing cartilage are especially expressed in hypertrophic chondrocytes (36). The chondrocytes undergo regular hypertrophy in MMP13 knockout mice, but the end of hypertrophic chondrocytes expands and the leading ossification invasion delays (37). The extinction of the MMP13 protein as a result of a mutation in the MMP13, results in a growth defect called Missouri-type spondylometaphyseal dysplasia (38). Contrary to MMP13, MMP9 does not digest native fibrillar collagen, but can digest denatured collagen and aggrecan (36). The deletion of both MMP9 and MMP13 genes in mice results in huge delay during ossification compared to the situation in which only one deletion in either MMP9 or MMP13 genes takes place (37). ADAMTS1, 4 and 5 knockout mice can grow up normal and do not show any defect in the cartilage (39,40). Because these are the most important aggrecanases found in the cartilage, none of them takes part alone in the removal of cartilage aggrecan in a significant amount. Various enzymes belonging to the MMP and ADAMTS families are able to function interchangeably, thus if any single ADAMTS gene undergo knockout, another can fill its place to avoid growth defects. An alternative interpretation is the following: for a normal invasion premise of ossification, the removal of the fibrillar collagen is required, but the aggrecan removal is not important to this process. 
One of the most important proteins in our study, the ADAMTS1 gene, has initially been cloned as a gene that is responsive to inflammation (41). It is not expressed in normal tissues but induced by lipopolysaccharide stimulation. It has been shown that ADAMTS4 together with ADAMTS1 play a key role in versican proteolysis (42). When compared with MMPs, ADAMTS proteases recognize substrates such as procollagen and proteoglycans with a broader specificity (42). It has also been reported that ADAMTS1 acts with a different function as angiogenesis inhibitor (43). One interesting fact is that the matrix around the cells does collide to allow chondrocyte hypertrophy. The fact that neither MMP nor ADAMTS proteins prevent this process of genetic manipulation, leads us to think that other proteinases are responsible for this process. Cathepsins and calpains are candidates for such a role because they are expressed by chondrocytes and have matrix destructive activities (36). TIMPs are important for various physiological process controls. These can be classified as cell invasion, angiogenesis, digestion of articular cartilage, trophoblast implantation, inner fold formation of mammary gland, and wound healing (44,45). According to our results, the TIMP-3 levels were increased. qRT-PCR results do not support this finding. According to our WB results, the TIMP-3 level was increased. A significant decrease has been observed by D7 and 11. This increase in protein level presents a parallelism in the decrease of the ADAMTS level. TIMP-3 blocks the tissue proteinases, the cartilage tissue ECM cannot be cleaved, and the structure remains intact. No study was found about the levels of TIMP-3 and 4 after insulin induction to compare present results. In this respect, our findings are original. Upon insulin application, the TIMP-3 level will rise, trigger the anabolic processes in cartilage and healing bone tissue, contributing to tissue strength. TIMPs are natural inhibitors of MMPs. Factors like cytokines, hormones, and many other factors, including growth factors, affect the metalloproteinase activity (45). There are a lot of biological factors that affect metalloproteinase activities, including cytokines, hormones, growth factors, and many others (45). Aggrecan proteoglycan gives the cartilage features, such as strength against the pressure force, dynamic weightbearing function and an osmotic feature (46). The aggrecan, which is an incredibly complex macromolecule, is specific to the cartilage and intervertebral disc, and is an extremely hydrophilic proteoglycan that consists of glycosaminoglycan chains, containing about 100-chondroitin sulfate and 30-keratin sulfate chains (47). The breaking down of the aggrecan is one of the important indicators of osteoarthritis. In the analysis of synovial fluid of arthritis patients, it has been shown that the aggrecan is pathologically cut from the aggrecanase cutting field (48). Many members of the ADAMTS family cut the molecule in vitro at this cutting point (49): most effective ones are ADAMTS4 and 5. Both of them are expressed in the normal and osteoarthritic cartilage and the synovium (50). Their inhibition prevents the collapse of aggrecan in vitro (51). ADAMTS4/5 knockout mice phenotypically differ from wild-type rats. The results obtained from this study have lots of data that shed light on mechanisms of many joint diseases, especially osteoarthritis. 
The loss of aggrecan from the cartilage in joint pathologies is mainly a proteolytic process conducted by aggrecanases. In vitro studies with human recombinant ADAMTS4 (52) and ADAMTS5 (53) enzymes, have shown that aggrecan was preferably cleaved from the CS-2 part. According to our findings, insulin behaves in a direction that might be classified totally as cartilage protective. These effects of insulin on the chondrosarcoma cells, which we have been discussing in details, whether it is turning hyperglycemia to its normal state or its direct effect on cells, can be listed as following: accelerating fracture healing, improving the resilience to bone breakage/fractures, providing the growth and development of skeletal structures in children and ensuring success with pre-operational processes in dental implant surgery. To further investigate the subject and to clarify the exact mechanism(s), it would be helpful to investigate the ADAMTS and TIMP in terms of gene, mRNA, and protein phases in animals with experimentally induced DM.
6,147.4
2020-01-16T00:00:00.000
[ "Medicine", "Biology" ]
Rippled Graphene as an Ideal Spin Inverter : We analyze a ballistic electron transport through a corrugated (rippled) graphene system with a curvature-induced spin–orbit interaction. The corrugated system is connected from both sides to two flat graphene sheets. The rippled structure unit is modeled by upward and downward curved surfaces. The cooperative effect of N units connected together (the superlattice) on the transmission of electrons that incident at the arbitrary angles on the superlattice is considered. The set of optimal angles and corresponding numbers of N units that yield the robust spin inverter phenomenon are found. Introduction Since the discovery of spin transport in graphene, far-reaching consequences for fundamental aspects of spintronics and its potential applications were soon realized [1]. It is well understood that, in this case, applications in spintronics are sensitive dependent on the strength of spin-orbit coupling (SOC). In particular, the form of the SOC suggested by Kane and Mele [2], and by other authors [3][4][5], in freestanding graphene is too weak for practical applications. Furthermore, this form relies on the presence of external fields, which introduces additional constraints. Noteworthy is the fact that a graphene sheet is corrugated naturally due to intrinsic strains. It is predicted that a corrugation (ripple) in a tight-binding approximation could create electron scattering in graphene, caused by the change in nearest-neighbor hopping parameters by the curvature [6,7]. It is notable that the lattice deformation changes the relative orientation of the orbitals of the corrugated graphene sheet, leading to hybridizations of the π-and σ-bonds [8]. As a result, it is shown that a one-dimensional periodic rippled nanostructure produces a strong focusing effect of ballistic electrons due to Klein tunneling. More importantly, in the low-energy physics of graphene, the mean curvature generates a curvature-induced SOC [9] without any external field. Based on this fact, it was demonstrated that curvature-induced SOC [9,10] could produce a chiral transport [11,12]. In this case, the transport of ballistic electrons through periodically repeated ripples is subject to selection rules: Depending on the direction of motion, the system is transparent only for one spin polarization. Moreover, the polarization changes to the opposite when the flow direction changes. A similar phenomenon has been discussed recently in metalhalide semiconductors (for a review, see [13]). All these predictions imply that it might be possible to control the electronic and transport properties of a graphene sheet by altering its curvature. Experimental achievements in a spatial variation of graphene provide a sufficient basis for such reasoning. For example, ripples can be formed by means of electrostatic manipulation without any change in doping [14]. Periodically rippled graphene is fabricated by the epitaxial technique [15], and by means of the chemical vapor deposition [16]. It is discovered that ripples, acting as potential barriers, yield the localization of charged carriers [17]. Indeed, the effect of the SOC in graphene, in conjunction with the ability to control its geometry, allow for rich spin physics. We recall that a consistent approach to introduce curvature-induced spin-orbit coupling in the low-energy physics of the carbon nanotubes (CNTs) have been developed by Ando [9] (see also [3,18,19]) in the framework of effective mass and tight-binding approximations. 
Experiments in ultra-clean CNTs [1,20] confirm the importance of SOC for the interpretation of the energy spectra in nanotubes. Indeed, the measured shifts are compatible with theoretical predictions [9]. On the other hand, the role of different spin-orbit terms in metallic and non-metallic CNTs is still debatable (see, for example, discussions in [21][22][23][24][25][26]). We should nevertheless point out that, at least for armchair CNTs, one obtains two SOC terms: one preserves the spin symmetry (a spin projection on the CNT symmetry axis), while the second one breaks this symmetry [9,10,24,25]. Note that the contribution of the second term was underrated [3,9,24,25]. In this paper, we will demonstrate how the both SOC terms could be used to invert a polarized spin current with a high efficiency in a rippled graphene system. One of the goals of this paper is to figure out symmetries, as well as elucidate the transport properties, of periodic rippled graphene nanostructures, and allow the prediction of various remarkable properties. In particular, we focus on the most general case, when a beam of ballistic electrons, propagating from a flat graphene sheet, incidents at arbitrary angle on a periodically rippled graphene structure (superlattice), and exits from the opposite side to the flat graphene sheet. The main result of the present paper is that geometrical properties of this superlattice can be used as an effective mechanism of a spin-flip phenomenon for spin-polarized current traveling between non-magnetic flat graphene contacts. The Model Hamiltonian and the Eigenvalue Problem We model a corrugated graphene structure with a curved surface with periodically repeated N elements (the superlattice). The cross section of each element consists of the curved surface perpendicular to the y axis (our quantization axis), which has the form of the direct arc of a circle (the concave surface) connected to the inverse arc (the convex surface) (see Figure 1). The first element is connected from the left side to a flat graphene sheet. Further, this structure (the first element) is repeated N times, and the last element is connected to the flat graphene sheet on the right side of our graphene structure. Hereafter, we consider a wide enough graphene sheet, keep the translational invariance along the y axis, and neglect the edge effects. A unit cell of a honeycomb lattice contains two sublattices, called the A or B site, respectively. The effective model Hamiltonian of the flat graphene in the nearest-neighbor tight-binding model [27] has the following form: where the Pauli matrices τ x,y act on the sublattice degrees of freedom, and I s is the identity matrix of rank 2, acting in the spin space. The eigenvalues and eigenstates of the flat graphene Hamiltonian are well known (see, e.g., textbooks [28,29]): where e −iϕ = (k x − ik y )/ k 2 x + k 2 y , k k k = (k x , k y ), r r r = (x, y). Hereafter, we choose the up and down spins as eigenstates of the Pauli matrixσ y . For the sake of convenience, we introduce the following equivalent definitions: σ = (+/−) ⇔ σ = (↑ / ↓). The sign κ = +(−) corresponds to the conductance (valence) band. These bands touch at two nonequivalent Dirac points (the Fermi level E = 0) or valleys K and K , which are at the corners of the hexagonal Brillouin zone in reciprocal space. Thus, each state is four-fold degenerate, i.e., two spin and two valley degenerate. The region I (the concave arc) is a part of a nanotube of radius R, defined as 0 < x < 2R cos θ 0 . 
At θ_0 = 0, the upper surface is half of the nanotube, while at θ_0 = π/2 there is no curvature. For the sake of analysis, we introduce the angle φ = π − 2θ_0. The region II (the convex arc with the radius R) is characterized by parameters similar to those of region I. Here, we have −∞ < y < ∞. To describe the scattering phenomenon, one has to define wave functions in the different regions: the flat (L, R) and curved (I, II) graphene surfaces. The solution for a curved graphene surface can be expressed in terms of the results obtained for armchair CNTs in the effective mass approximation, when only the interaction between nearest-neighbor atoms is taken into account [10]. We assume that the curvature is smooth enough on the lattice scale of graphene and does not induce inter-valley scattering. Therefore, for the time being, we restrict our analysis to the K point. Let us recapitulate the major results of [10] in the vicinity of the Fermi level E = 0 for the K point in the presence of the curvature-induced spin-orbit interaction in an armchair CNT. In this case, the eigenvalue problem is defined by Equation (4), with the definitions in Equation (5). Here, σ̂_x,y,z are the standard Pauli matrices, and the spinors are built from the two sublattice components. The following notation is used: V^σ_pp and V^π_pp are the transfer integrals for σ and π orbitals, respectively, in flat graphene. a ≈ 2.46 Å is the length of the primitive translation vector, equal to √3 times the distance between neighboring atoms in the unit cell. For numerical illustration, we assume that γ_0 ≈ 3 eV and γ_1 ≈ 8 eV (see, e.g., Ref. [9]). Note that by means of a similar method, we can find the solution for the K′ point. The intrinsic source of the SOC, δ = Δ/(3ε_πσ), is defined in terms of the atomic potential V, with ε_πσ = ε_π2p − ε_σ2p. The energy ε_σ2p is the energy of the σ-orbitals, localized between carbon atoms. The energy ε_π2p is the energy of the π-orbitals, directed perpendicularly to the curved surface. By means of a unitary transformation (where I is the 2 × 2 identity matrix), one removes the θ dependence in the Hamiltonian (4), transformed to the intrinsic frame, and obtains the Hamiltonian (9). Here, the operators τ̂_x,y,z are the Pauli matrices that act on the wave functions of the A- and B-sublattices (the pseudospin space), and λ_x, λ_y are the strengths of the SOC terms. The term ∼ λ_x conserves, while the term ∼ λ_y breaks, the spin symmetry in the Hamiltonian (9) of the armchair CNT. The operator Ĵ_y is an integral of motion, [Ĥ, Ĵ_y] = 0; it can be written both in the laboratory frame and in the intrinsic frame. Finally, we obtain the eigenvalues of Equation (4), given by Equation (13), where κ = +1(−1) is associated with the conductance (valence) band, and the energies E_± are defined in Equation (14). Here, t_m = mγ/R, t_y = γk_y, and the magnetic quantum number m = ±1/2, ±3/2, . . . is an eigenvalue of the angular momentum operator Ĵ_y. The eigenfunctions of the Hamiltonian (4) take the form (15), with coefficients A_±, B_±, C_±, D_± and the normalization constant N_±. In general, the relations |A_±| = |D_±| and |B_±| = |C_±| are fulfilled. The obtained results for the armchair CNT will be used to describe the properties of the concave and convex surface Hamiltonians. In principle, the scattering problem that we are faced with can be considered as the scattering of a ballistic electron on two potential barriers: a scattering problem at the first barrier (the concave surface), followed by scattering at the second barrier (the convex surface).
Therefore, in order to resolve the eigenvalue problem for our unit, it is convenient to solve this problem first for the concave surface, and then for the convex surface. It is convenient to describe the rippled graphene region with the concave surface ripple in the laboratory frame as half of the nanotube, with the Hamiltonian of the form (22) (see Equations (4) and (5)), where ξ_x = 2δγp/R, and its eigenfunction is determined by Equation (15). By virtue of the approach developed by Ando [9], we obtain the Hamiltonian Ĥ_d, Equation (23), associated with the convex surface ripples (details will be given elsewhere), where for the K valley we have to use η = 1, while for the K′ valley, η = −1. Note that the transformation θ → −θ and ξ_x → −ξ_x in Hamiltonian (23) yields Hamiltonian (22). We found that the symmetry transformation (24), where I_p is the 2 × 2 identity matrix acting in the pseudospin (sublattice) space, yields a simple relation between the two Hamiltonians. Once the eigenproblem for the Hamiltonian (4) is solved, by virtue of the transformation (24) we can define the eigenstates of the Hamiltonian (23). Taking into account that the electron energy should be the same on the concave and convex surfaces of the unit, we obtain the corresponding relations between the solutions. The above-described eigenfunctions (15) and (26) are used to calculate the electron transmission through the curved graphene system (the superlattice) connected to the planar graphene sheets; see Section 3 below. At a fixed value of the carrier flow energy E ⇐⇒ ±E_± (Equations (13) and (14)), there are four possible values of the quantum number m. In the corrugated graphene system, the angular momentum is no longer an integral of motion. As a result, we have to consider a mixture of the eigenfunctions with all possible m values at a given energy. Hereafter, we consider the positive solutions E > 0 only, since the negative solutions are obtained by symmetry. As an example of the spectrum (14), a few positive energy branches are shown in Figure 2 as a function of the quantum number m. For the sake of illustration, the positive energies (14) are crossed by two horizontal lines that mimic the incoming electron energies. The crossing points determine quantum numbers m that take nonquantized values when the curved surface is connected to the flat one. There is an anticrossing effect between energy states characterized by the same quantum number m, which yields an energy gap. This anticrossing is caused by the term λ_y in the Hamiltonians (22), (23), which creates the energy gap 2λ_y near the energy E = λ_x at k_x = 0, k_y = 0 (see [10][11][12]). Let us analyze the upper and the lower limits of the energy gap, in which the evanescent modes exist, in the case k_x ≠ 0, k_y ≠ 0. Since the energy of the incoming electron is E = γ√(k_x² + k_y²), the condition for the existence of evanescent modes, associated with imaginary values of t_m, is subject to the equation E_+ = E_− (see Equation (14)). In this case, the common energy E_× is given by Equation (28). This equation generalizes the mid-gap position of the energy gap to the case k_x ≠ 0, k_y ≠ 0. We recall that at k_y = 0 this position is determined by E_× = λ_x solely (see Figure 2 in Ref. [11]). Further, let us consider the situation when the energy E of the incoming electron is equal either to E_+ or to E_−.
In virtue of the relation t_y = E · sin ϕ → E_± · sin ϕ, we obtain the following from Equation (14): Squaring the above equation, and squaring once more, leads to a biquadratic equation whose roots are defined by the following expressions. Once the energy E_− = 0, one finds that t_m² = λ_x² − λ_y², which does not depend on ϕ. Thus, for the energy branch E_− (the dashed line in Figure 2), we obtain the following expressions as a function of t_m (⇔ m). By means of Equation (32), it is possible to define the middle of the energy gap: the "central" energy is constant for |t_m| ∈ [0, √(λ_x² − λ_y²)]. The energy gap between the energy branch E_+ and the energy branch E_− is an increasing function for t_m ≥ 0; indeed, the gap grows from its minimum at t_m = 0. It is notable that the energy gap at m = 0 becomes much larger with an increase in the ratio k_y/k_x, in comparison to the case k_y = 0. For larger t_m values, the gap remains constant. The quantum numbers m_± are determined by the equation in which E = γ√(k_x² + k_y²) is the energy of the incoming electron. Transmission through the Superlattice As was mentioned above, we assume that the incident ballistic electron moves from the left planar graphene sheet (L) through the superlattice to the right planar graphene sheet (R) along the x axis, and its energy is an integral of motion (see Figure 1). Hereafter, we consider a graphene sheet whose width W along the y axis is much larger than its length M along the x axis, i.e., W ≫ M. In other words, we keep the translational invariance along the y axis and neglect edge effects. By means of the continuity condition of the wave functions at the boundaries between the flat and corrugated graphene regions, we determine the unknown reflection and transmission amplitudes r_α^σ, t_α^σ (α, σ = ↑, ↓). In these amplitudes, the upper (lower) index denotes the spin polarization of the incoming (outgoing, i.e., reflected or transmitted) electron. More specifically, we have the following condition at the boundary between the region L (the flat graphene sheet) and the concave arc (the region I, the concave surface): The boundary condition between the concave arc (I) and the convex arc (II) provides the following equation: Thus, regions I and II characterize two subelements of the superlattice unit, which repeats N times. Note that we also have to consider the boundary condition between region II and the next unit. Consequently, the boundary condition between the convex arc (II) and the next concave arc has the following form: Taking into account that the last (Nth) block ends with the convex surface (arc) connected to the right flat graphene sheet (the region R), we obtain the key equation, after eliminating the unknown coefficients a, b, c, d from Equations (36)-(39), where the matrix M_0(ϕ) is defined below. The matrix transformation X has the following structure, with the corresponding definitions introduced there. In the definition (45), the following conditions hold, and the energy E_× is defined by Equation (28). We consider the following situations for ballistic electrons (moving from the left flat graphene sheet and described by Equation (3)) that are incident on the superlattice with a given polarization in Equation (40). The matrix X (Equation (42)) can be diagonalized, where λ_k, v_k are the eigenvalues and the eigenvectors of the matrix X, respectively. Using this fact, we transform the matrix X^N into the form where the matrix U consists of the eigenvectors {v}, and U U^{−1} = I.
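As a rough computational illustration of the last step, the N-th power of the transfer matrix can be evaluated through its eigendecomposition, X^N = U diag(λ_1^N, ..., λ_4^N) U^{−1}. The sketch below, in Python/NumPy, uses a random 4 × 4 matrix as a stand-in for the physical X of Equation (42), which is not reproduced here; only the eigendecomposition bookkeeping is illustrated, not the model itself.

```python
import numpy as np

def matrix_power_by_eigendecomposition(X: np.ndarray, N: int) -> np.ndarray:
    """Compute X**N as U diag(lambda_k**N) U^{-1}, with lambda_k, v_k the
    eigenvalues and eigenvectors of X (the columns of U)."""
    lam, U = np.linalg.eig(X)
    return U @ np.diag(lam ** N) @ np.linalg.inv(U)

# Placeholder 4x4 matrix standing in for the transfer matrix X of Eq. (42).
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

N = 10
X_N = matrix_power_by_eigendecomposition(X, N)
assert np.allclose(X_N, np.linalg.matrix_power(X, N))

# Writing lambda_k = a_k * exp(i psi_k), the dependence on the number of units
# enters only through a_k**N and the phases N*psi_k, which is why the reflection
# and transmission probabilities depend periodically on N.
lam = np.linalg.eigvals(X)
a, psi = np.abs(lam), np.angle(lam)
print(a ** N * np.exp(1j * N * psi))
```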
Evidently, the eigenvalues {λ} can be written in a very general form as λ k = a k exp iψ k , k = 1, . . . , 4. Consequently, the amplitudes r σ α , t σ α (α, σ =↓, ↑) of Equation (40) become the functions of the eigenfunctions λ N k = a N k e iNψ k . It results in probabilities that will depend periodically on the number of units in superlattice through the functions a ±N k a ∓N j , Discussion From the analysis of ballistic electron transport through a superlattice that consists of concave arcs (semiripples) interconnected by flat graphene sheets [30] it was shown that a periodically repeated rippled graphene structure leads to the suppression of the transmission of electrons with one spin orientation in contrast to the other, depending on the direction of the incoming electron flow. In this case, it was assumed that electrons are injected to the curved surface in a perpendicular direction, i.e., k y = 0. In contrast to the above case, our superlattice unit contains both the convex surface connected continuously with the concave surface, and k x = 0, k y = 0. To gain a better insight into the effect of the superlattice on ballistic transport, we numerically study its dependence on: (i) the number N of the superlattice units; (ii) the incident angle ϕ of ballistic electrons; and (iii) the radius of the curved surface of the unit (see Figure 1). While our approach enables us to analyze the effect for the arbitrary unit angle, in this paper, all calculations are performed for the unit angle φ = π. The calculation of the spin-flip probabilities are performed on the N × ϕ mesh with ∆N = 1 for N = 1, . . . , 500 units, and ∆ϕ = 0.01 • for ϕ = 0 • , . . . , 50 • . The results are shown in Figure 3 for various values of the number of N units at different values of ϕ = arctan k y /k x of incident electrons with an energy 0 < E ≤ 1 eV. Hereafter, we consider only the results that provide the maximal probability P N,ϕ ≥ 0.9999. It appears that our device operates most efficiently at the incident beam energy, defined in the intervals 0 ≤ E ≤ 0.18 eV, and 0.5 ≤ E ≤ 1.0 eV (see Figure 3). In particular, at energy E = 0.1 eV (see Figure 4), we find a set of bands that provide the spin-flip effect for incident ballistic electrons for minimal N units of the superlattice. Each band is limited by the boundaries with the probability P = 0.99. Each solid point in the band is characterized by a set {N, ϕ} variables that corresponds to P ≥ 0.9999. For example, at the incident energy E = 0.1 eV of the electron that enters to the superlattice at the angle ϕ = 20 • , the latter must consist of N = 163 units to invert the polarized beam (↑ or ↓) of ballistic electrons to the opposite polarization. It is notable that, with an increase in the incident electron energy, the separate points transform to the dense points (see the right panel, Figure 4). The higher the incident electron energy, the wider the set of {N, ϕ} that yields the inversion effect of entrance electrons with a given polarization. The functional dependence of the bands leads us to conclude that there is a remarkable relation that allows to determine the number of N units to obtain the maximal spin flip effect for all considered energies. The index i characterizes the band number, namely that the lowest band has the index i = 1, etc. The least squares fitting of our results provide another interesting result with high accuracy, where the constant c 1 = 59.149. 
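The band search described above can be organised as a simple scan over the N × ϕ mesh. In the sketch below, spin_flip_probability is a hypothetical placeholder for the full transfer-matrix calculation of P_{N,ϕ} (it is not part of the original text); only the mesh resolution and the selection rule P ≥ 0.9999 follow the procedure described above.

```python
import numpy as np

def spin_flip_probability(N: int, phi_deg: float, energy_eV: float) -> float:
    """Hypothetical placeholder for P_{N,phi}: in the actual calculation this is
    obtained from the amplitudes t_alpha^sigma of Eq. (40) for N units."""
    raise NotImplementedError("supply the transfer-matrix result here")

def scan_bands(energy_eV: float, p_min: float = 0.9999):
    """Scan the N x phi mesh used in the text (Delta N = 1 for N = 1..500,
    Delta phi = 0.01 deg for phi = 0..50 deg) and keep points with P >= p_min."""
    hits = []
    for N in range(1, 501):
        for phi in np.arange(0.0, 50.0 + 1e-9, 0.01):
            if spin_flip_probability(N, float(phi), energy_eV) >= p_min:
                hits.append((N, float(phi)))
    return hits

# At E = 0.1 eV the text reports that electrons entering at phi = 20 deg need
# N = 163 units for a full spin flip, so (163, 20.0) should appear among the hits.
```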
It is notable that, at a given ϕ, it is possible to relate the number N i (ϕ) of the ripple units in the band i with the aid of the number of units N j (ϕ) in the band j and, consequently, to exclude the constant c 1 . Indeed, in virtue of relations (50)-(51), we can formulate the following result As a result. we obtain the units number periodicity between the position of the maximum probabilities in different bands: Thus, the knowledge of the minimal number of ripples in the first band provides the number of the superlattice units that yields the effect of the periodicity of the spin-flip phenomenon at a fixed value of the angle ϕ of the incident beam. We recall that the results discussed above are valid at R = 12 Å of the ripple radius (see Figure 1b) for all considered energies. It is noteworthy that our results remain true for various values of the ripple radius as well (see Figure 5). The presence of the band structure is found for the set of different ripple radii. Although the band structures manifest themselves for a particular choice of {R, E} in the panels (a-d) in Figure 5, the results hold for all energy intervals considered in our analysis (see Figure 3) at the fixed values of the radii. Conclusions The curvature-induced spin-orbit coupling in rippled graphene structures opens a broad avenue for spintronic applications in graphene based nanodevices. In this paper, we consider the most general case of the incident angle ϕ = arctan k y /k x of a ballistic electron beam, injected from the plane graphene sheet on the superlattice that consists of the curved graphene units. In contrast to semiripple configurations (a concave arc) considered in [11,12,30], our superlattice consists of the concave surface continuously connected to the convex surface. This unit is repeated N times (see Section 2). The cooperative effect of our superlattice leads to almost perfect spin inversion phenomenon for the injected through this superlattice ballistic electrons with a chosen spin polarization (see Section 4) without any external field. We found the optimal set of angles and the minimal number of corresponding N ripples (see Figures 4 and 5) that yield the spin-flip operation. Such an operation (without use of the magnetic field) may be useful for production of spin-based logic elements (see [31] for a review). In particular, at a fixed energy of the injected ballistic electrons, one can choose the fixed number of ripples and obtain the spin-flip operation at specific angles of the injected electrons, i.e., the angle ϕ (see Figure 5d). The obvious advantages are low switching energies and low power dissipation. On the other hand, once the set {N 1 , ϕ i } (which provides the spin-flip operation in the first band) is chosen, this phenomenon can take place with the periodicity of 2N 1 in the other bands (see Equation (53)). It is notable that this effect holds for all specific intervals of energies that create a type of conductance bands in the superlattice, which are independent of the ripple radius (see Figure 5). We hope that presented results could be useful for various spintronic devices once nanotechnology provides rippled graphene structures with a controlled periodicity.
Formation dynamics of black- and white-hole horizons in an analogue gravity model We investigate the formation dynamics of sonic horizons in a Bose gas confined in a (quasi) one-dimensional trap. This system is one of the most promising realizations of the analogue gravity paradigm and has already been successfully studied experimentally. Taking advantage of the exact solution of the one-dimensional, hard-core, Bose model (Tonks-Girardeau gas) we show that, by switching on a step potential, either a sonic (black-hole-like) horizon or a black/white hole pair may form, according to the initial velocity of the fluid. Our simulations never suggest the formation of an isolated white-hole horizon, although a stable stationary solution of the dynamical equations with those properties is analytically found. Moreover, we show that the semiclassical dynamics, based on the Gross-Pitaevskii equation, conforms to the exact solution only in the case of fully subsonic flows while a stationary solution exhibiting a supersonic transition is never reached dynamically. Introduction Analogue models of gravity are useful tools bridging gravitation to other branches of physics and they have been intensively investigated since their proposal [1] in order to study effects whose experimental achievement is hardly possible in the cosmological context. In the past years a great scientific effort was dedicated to the observation and characterization of the Hawking effect [2,3] but several different phenomena can be investigated within the analogue gravity paradigm [4]. In particular, the physics of sonic horizons [5] has been vastly studied, both from a theoretical and an experimental point of view and analogue black-hole and white-hole horizons have been experimentally achieved in optics [6][7][8], in Bose-Einstein condensates (BECs) [9][10][11][12][13], in water [14][15][16][17][18], in superfluid Helium [19] and a in few other setups. In the context of BECs, a few recent experiments [10][11][12][13] successfully recreated sonic horizons in order to probe the elusive Hawking radiation but the results are still under debate in the scientific community [20][21][22][23][24][25][26][27][28]. Nonetheless, these experiments represent a remarkable achievement and they have stimulated a variety of theoretical studies on sonic black holes in 1D BECs. One feature, though, which has not been deeply investigated in these kind of configurations is the formation dynamics of an analogue black/white hole; this may be due to the fact that analogue models can hope to reproduce only the kinematical aspects of the gravitational phenomena, as the analogy breaks down for the dynamical properties. For this reason, often theoretical studies start from an "eternal" (analogue) black-hole, i.e., from a configuration which already shows a sonic horizon, neglecting the question of how it may have formed. In this work we will study the dynamics of formation of a sonic horizon for a simple, realistic model of BEC which mimics existing experimental configurations [9][10][11][12][13]. In particular, we will exploit a recently-proposed exact model [26,27] in order to study the dynamics of the analogue of a gravitational collapse in the absence of external confining traps and to investigate whether the system only forms black-hole-like horizons or if a dynamical process could, in principle, lead to a formation of a white-hole horizon too. 
Furthermore, we will compare the results with those obtained with a semiclassical approach -commonly adopted in the theoretical interpretations of the experiments -in order to highlight possible inadequacies of the semiclassical paradigm. This will give insights on the formation process of analogue horizons and, interestingly, will also lead to challenge the "eternal" black-hole configuration as a tool to study the Hawking effect. Tonks-Girardeau gas We consider a one-dimensional Bose fluid of hard-core, point-like particles in an external potential V (x). The model hamiltonian is written, in first quantization, as where the hard-core constraint corresponds to the strong interaction limit c → ∞. In the absence of external potential V (x) = 0, eigenvalues and eigenfunctions of this hamiltonian have been found for any value of c by Lieb and Liniger [29] using Bethe Ansatz techniques. In the hard-core limit, the Lieb-Liniger solution can be more easily obtained by performing a Jordan-Wigner transformation which maps the hard core Bose fluid into a non-interacting, spinless, fermion model [30]. A simple argument allows to understand this exact result: in one dimension the spatial ordering of hard-core bosons cannot be modified by a local hamiltonian. Therefore, the different phases acquired by bosons and fermions upon particle exchange do not come into play and the matrix element of the hamiltonian between two arbitrary bosonic configurations coincides with the corresponding matrix element of the free spinless Fermi gas. Interestingly, this result remains valid also in the presence of an arbitrary external potential, but clearly the argument fails in more than one dimension, where a natural ordering cannot be defined. Due to this mapping, the full spectrum of the hamiltonian (1) and the many-particles dynamical density correlations coincide with those of a non-interacting Fermi fluid in the same external potential V (x). Note that, instead, the momentum distribution of the Bose gas differs from that of the corresponding Fermi system due to the non-locality of the Bose-Fermi mapping defined by the Jordan-Wigner transformation. Exactly solvable models represent a unique tool as a test ground for approximate theories, which are the only possible option in most cases. Among the approximate descriptions of a weakly interacting Bose fluid, the celebrated Gross-Pitaevskii equation (GPE) [31] stands out as a simple and accurate method to portray the dynamics of the condensate wavefunction. The inclusion of a cubic non-linearity in the Schrödinger equation suffices to account for weak interparticle interactions, providing a satisfactory description of many experiments in cold atom physics [31]. The Tonks-Girardeau (TG) gas is a strongly interacting Bose fluid and therefore the usual semiclassical approaches for the approximate description of the dynamics are not expected to provide accurate results. However, it is known that a simple generalization of the Gross-Pitaevskii equation does indeed faithfully reproduce the physics of a Tonks-Girardeau gas [32,33], at least in homogeneous systems. Such a "strong coupling limit" of the usual (cubic) GPE displays a quintic non-linearity with a specific value of the coupling constant g = 2 π 2 2m : In the Gross-Pitaevskii approximation, the global properties of the flowing Bose gas are identified with those derived from the single particle wavefunction ψ(x, t). In particular, in the homogeneous case (V (x) = 0), the phonon excitation spectrum derived from Eq. 
(2) coincides with the exact one, defined by a sound velocity c, related to the fluid density n by the same expression valid for a one dimensional Fermi gas: c = π n m . The dynamics described by the Gross-Pitaevskii equation will be compared with the exact results in order to verify the accuracy of the widely adopted semiclassical approximation in the context of analogue gravity experiments in BECs. The model The Tonks-Girardeau gas model has been recently studied in the framework of analogue gravity [26,27] because it allows to describe, from a microscopic point of view, the dynamics of formation of a sonic horizon and to investigate the physical conditions leading to the expected Hawking-like phonon flux emerging from the horizon. The analytical and numerical studies of Ref. [26,27] proved that a precise correspondence between the physics of the analogue model and that of the Hawking effect can be obtained only when the TG gas flows against an extremely smooth potential barrier. The rationale for this result is that the identification of the quasiparticles (phonons) as a free quantum field is possible only for very low excitation energies and (almost) homogeneous states. Instead, sharp potentials, like those usually adopted in the experiments, give rise to a steady state characterized by a rapidly varying density profile, whose elementary excitations cannot be expressed as a free quantum field with a locally varying sound velocity, as requested by the analogy to the gravitational problem. Remarkably, a sharp external potential does not seem to suppress the Hawking effect: in the steady state a phonon flux may indeed be present but it is not described by a thermal distribution and therefore it cannot be associated to a well-defined Hawking temperature. We stress that these results have been obtained [26,27] for the Tonks-Girardeau gas in one dimension, where the occurrence of thermalization after a quantum quench is hampered by the integrability of the model [34]. On the other hand, at variance with the case of a potential barrier, an extremely smooth potential step does not lead to the formation of a sonic horizon, making such a configuration unsuitable for the investigation of the Hawking effect [26,27]. In this work we will examine the formation dynamics of black-and white-hole horizons when a flowing TG gas is perturbed by the sudden switching-on of a sharp step potential. In the experiments, this procedure has been adopted in quasi one-dimensional BECs in order to create either a sonic black-hole horizon [11][12][13] or a black/white hole pair in order to trigger the so-called "black-hole laser" effect [10]. According to the experimental configurations and to our previous theoretical studies [26,27], we choose a left-moving gas flowing over a "waterfall" potential as a preferred setup but we will generalize our investigation to also describe the opposite flow. Thus, we start from a flowing TG gas in a stationary state which is described, via the Bose-Fermi mapping, by a Slater determinant of plane waves 1 √ 2π e ipx ; the wavevectors p belong to the interval −k F − k 0 ≤ p ≤ k F − k 0 , where k F > 0 is the Fermi momentum in the co-moving reference frame and k 0 is the momentum shift due to the fluid flow; these two parameters, characterizing the initial state of the system, are related to the uniform particle density n = k F π , sound speed c = k F m and fluid velocity v = − k0 m . We therefore adopt the convention that a positive value of k 0 corresponds to a left-moving flow. 
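For orientation, the relations quoted above (n = k_F/π, c = ħk_F/m, v = −ħk_0/m) can be packaged into a few lines of code; the snippet below is a minimal sketch in units ħ = m = 1 and is not part of the original analysis.

```python
import numpy as np

hbar = m = 1.0  # illustration units

def tg_initial_state(k_F: float, k_0: float) -> dict:
    """Uniform Tonks-Girardeau flow described, via the Bose-Fermi mapping, by a
    Fermi sea occupying -k_F - k_0 <= p <= k_F - k_0."""
    n = k_F / np.pi           # particle density
    c = hbar * k_F / m        # sound speed
    v = -hbar * k_0 / m       # fluid velocity; k_0 > 0 means a left-moving flow
    return {"density": n, "sound_speed": c, "velocity": v, "Mach": abs(v) / c}

# Example: Q = 1, k_F = (pi/2) Q and k_0 = (3 pi/10) Q, one of the cases discussed below.
Q = 1.0
print(tg_initial_state(np.pi / 2 * Q, 3 * np.pi / 10 * Q))
# The initial flow is subsonic precisely when |k_0| <= k_F, i.e. Mach <= 1.
```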
Then, at t = 0, we suddenly turn on the step potential where V 0 is the potential height which is conveniently parametrized through the wavevector Q: V 0 = 2 Q 2 2m . Immediately after the quench the fluid is not in a stationary state any more and its many particle wavefunction evolves in time. Adopting the fermionic representation, at any time t > 0 the wavefunction is given by the Slater determinant of the time evolution of the initial plane waves: where φ k (x) are the exact single particle eigenfunctions in the presence of the external potential, k = ω k the corresponding energy, and η → 0 + is the usual convergence factor. For a step potential k = 2 k 2 2m and the eigenfunctions φ k (x) are easily obtained by elementary methods: • for k > Q The explicit expressions of the reflection and transmission coefficients are reported in [27]. Steady states While the full dynamics after the quench requires a numerical analysis starting from Eqs. (4)(5), the long time behavior can be analytically evaluated. As shown in Ref. [27], at any given fixed position x, the local physical properties of the model approach those given by the stationary state built upon the wavefunctions ψ p (x, t) given by: As a consequence, for each choice of the two parameters (k F , k 0 ) a steady state exists, characterized by the number density profile (10) n and by the mass current which is constant in space and time. By direct substitution of the scattering eigenfunctions in Eq. (9) we get the explicit results: for x < 0 and for x > 0, where q = p 2 − Q 2 and k = p 2 + Q 2 . The occupation number f (p) identifies the region in momentum space occupied by fermions: f (p) = 1 for −k F − k 0 ≤ p ≤ k F − k 0 and f (p) = 0 elsewhere. These expressions allow to evaluate several important quantities in the framework of the analogue gravity: the local fluid and sound velocity, v(x) and c(x) respectively, together with their ratio, i.e. the Mach number: The analytical expressions show that in the limiting cases k 0 ≥ k F the density profile is rigorously flat in the region x < 0, leading to a uniform fluid velocity. The same can be said for the region x > 0 when −k 0 ≥ Q + k F . At first, tough, we will consider only the cases of |k 0 | ≤ k F as the condition ensures that the flow is initially subsonic. The integrals can be easily evaluated for both the particle current and the asymptotic densities at x → ±∞ and allow to evaluate the range of parameters leading to a steady state characterized by a single supersonic transition. Figure 1 summarizes the results: the blue curve refers to a left moving fluid (k 0 > 0) with subsonic velocity at x → +∞ which gets accelerated by "falling into the waterfall" and becomes supersonic at x → −∞ if k F − k 0 lies below the blue line. Instead, the red curve shows that a right moving fluid (k 0 < 0), which starts subsonic at x → −∞, can get supersonic flowing against the step potential if k F + k 0 belongs to the region below the red line. In this case, however, k F must exceed the minimum value Q 2 . In both cases, a black-hole horizon forms close to x = 0, only for sufficiently high initial velocities: |k 0 | k F . The steady states corresponding to an initially still Fermi gas, i.e. a vanishing k 0 , display a finite particle current due to the sudden appearance of the waterfall potential, leading to a leftwards motion (i.e. with negative velocity) of the fluid. 
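Once a steady-state density profile n(x) and the constant current j are available (from the steady-state expressions above or from the numerical evolution), the local velocity, sound speed and Mach number follow from v(x) = j/n(x) and c(x) = ħπn(x)/m. The sketch below illustrates this bookkeeping and locates sub-/supersonic transitions on a toy density profile that is not a solution of the model; it assumes ħ = m = 1.

```python
import numpy as np

hbar = m = 1.0

def mach_profile(n: np.ndarray, j: float):
    """Local velocity, sound speed and Mach number of a steady state with
    density profile n(x) and spatially constant mass current j = n(x) v(x)."""
    v = j / n
    c = hbar * np.pi * n / m          # 1D Fermi-gas relation c = hbar*pi*n/m
    return v, c, np.abs(v) / c

def sonic_horizons(x: np.ndarray, mach: np.ndarray) -> np.ndarray:
    """Positions where the Mach number crosses 1 (sub-/supersonic transitions)."""
    sign = np.sign(mach - 1.0)
    return x[np.where(np.diff(sign) != 0)[0]]

# Toy density profile dropping across the step (not a solution of the model):
# n ~ 0.7 for x << 0 and n ~ 0.3 for x >> 0, used only to illustrate the bookkeeping.
x = np.linspace(-20.0, 20.0, 2001)
n = 0.5 - 0.2 * np.tanh(x)
j = -0.35                             # left-moving current
v, c, mach = mach_profile(n, j)
print(sonic_horizons(x, mach))        # one crossing near x ~ 1.2
```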
For this case, two velocity profiles are shown in Figure 2 for k F = π 2 Q and k F = π 4 Q: while in the former case the steady state is always subsonic, in the latter a supersonic flow sets in a finite region around x ∼ 0, giving rise to a pair of black-hole/white-hole horizons. When we start from a fluid which is already moving (i.e. k 0 = 0), the long time characteristics of the flow velocity and of the sound speed show different possible behaviors. Illustrative plots of the stationary velocity profiles for a few representative choices of k 0 at fixed k F = π 2 Q are shown in Figure 3 for both cases of a fluid falling into the waterfall (k 0 > 0, left panels) and flowing against the step potential (k 0 < 0, right panels). While for k 0 = 0 Figure 2 shows that the fluid motion is subsonic, when k 0 is increased (i.e. when the fluid falls from the potential step) a small supersonic region appears close to the step (see the case k 0 = 3π 10 Q). Further increasing k 0 (see k 0 = 9π 20 Q) the flows becomes supersonic in the whole downstream (i.e. left) region. At k 0 = k F = π 2 Q the velocity becomes constant for all x < 0, as previously discussed. The case of a fluid flowing against the step potential (right panels) is qualitatively similar, showing a fully subsonic flow for k 0 = − 3π 10 Q, a black-hole/white-hole pair at k 0 = − 9π 20 Q and a single supersonic transition at k 0 = −k F = − π 2 Q. In the latter case, however, the fluid velocity is not constant in the downstream (i.e. right) region. In all the cases, the undulations present both in the upstream and downstream regions are due to the presence of a sharp Fermi surface. Note that the steady state solutions discussed here are not invariant by time reversal. Being the hamiltonian real, the complex conjugate of the single particle wavefunctions (9) are still exact eigenfunctions for the free Fermi gas; the time reversal of a scattering state, however, is not a scattering state but is written as a particular linear combination of the two scattering states. Therefore, the Slater determinant of the time-reversed states describes a different stationary state of the system with the same density profile and opposite current j, as shown by Eqs. (10,11): wherever the original solution displays a black-hole (BH) horizon, the time-reversed state predicts a white-hole (WH) horizon. However, we stress that, according to the analysis leading to Eq. (9), the time reversed state does not describe the stationary solution spontaneously reached by the system after a quench. It is interesting to compare these exact results with the stationary states of the corresponding Gross-Pitaevskii equation, to check whether the semiclassical approach is able to capture the variety of behaviors displayed by the TG gas model. Looking for solutions to Eq. (2) of the form ψ(x, t) = Φ(x) e − i µt we obtain a non-linear differential equation whose solutions can be found analytically in the case of a step potential V (x) = V 0 Θ(x), where Θ(x) is the Heaviside function. The generic stationary solution depends on two parameters: the chemical potential µ and the current j. The equation has real coefficients and then each stationary solution has a time reversal partner with opposite current: the direction of the particle flow is reversed by keeping the same density profile. Restricting our analysis to solutions leading to density profiles asymptotically flat 1 for x → ±∞, we generally find only monotonic decreasing density profiles and fully subsonic flows. 
Only for a special relationship between µ and |j| a different solution appears, displaying a supersonic transition, which might represent either a BH or a WH horizon according to the sign of j. The density (and velocity) profile is flat for x < 0 while is monotonically increasing for x > 0: this solution corresponds to the "half soliton" profile in full analogy with the known results valid for the standard (cubic) GPE [35]. The analytical form of the half soliton for the quintic GPE is Φ(x) = n(x) e i 2 ϕ(x) where the modulus and the phase are expressed in terms of the unique parameter δ Along the positive semi-axis x > 0 we have 1 A different class of solutions of the cubic GPE, characterized by an oscillating density profile, was discussed in Ref. [35]. Although this class of solutions is also present in the quintitc GPE, we do not discuss their properties in detail. where α = (4 − 3 δ 2 ) 1 2 (2 − 3 δ 2 ) −1 and the asymptotic density n ∞ is expressed in terms of the parameter δ by the algebraic equation: For x < 0, as previously stated, the density is constant and the phase linear in x. This analysis shows that the GPE is not able to reproduce, even qualitatively, the exact solution when a supersonic transition is present and the supersonic flows have non-monotonic profiles with a well defined limit for x → ±∞ (see Figure 3). Only the special case k 0 = k F appears to be related to the "half soliton" solution of the GPE equation, although the agreement is not quantitative (see Figure 4): if the parameter δ is chosen so that the x → +∞ limit of n(x) coincides with the exact value, the uniform density in the supersonic region is underestimated and the profile close to the transition point is not correctly reproduced. Black hole formation We now examine the formation dynamics of an event horizon in this analogue model. We first note that all the steady state solutions we have found, together with their time reversal, are dynamically stable because the exact dynamics is linear, being that a free Fermi gas. A direct numerical integration of the Schrödinger equation starting from the analytical steady state solution confirms this expectation: a small random noise superimposed to the initial condition does not grow in time. This result might appear puzzling because we have seen that, for suitable choices of the parameters (k F , k 0 ) a black-hole/white-hole pair appears (see, for instance, the case k 0 = 3π 10 Q in Figure 3). Under these circumstances a dynamical instability is generally expected due to the black-hole laser mechanism [36,37], although detailed studies [38][39][40] found that additional requirements must be met in order to trigger the instability. The stability of the exact stationary states is shared by the half soliton solution of the GPE: a Bogoliubov-de Gennes analysis shows that, analogously to the known result valid for the dark soliton solution in the cubic GPE [31,35], no complex eigenfrequency is present. The stability of the half soliton solution can be also easily checked numerically by direct integration of the GPE equation. Exact dynamics. Given the previous considerations, we might expect that a uniformly flowing TG gas perturbed by the sudden switching on of a step potential should evolve spontaneously towards one of the steady state solutions investigated in Section 4. 
Starting with a left moving fluid, this is indeed the case for the exact dynamics in all the different regimes previously identified as reported in Figure 5, where a few snapshots of the evolving density profile are reported for k F = π 2 Q and k 0 ≥ 0. As argued in our previous studies [26,27] and shown in Figure 5, the exact dynamics approaches the stationary state which we have determined by analytical means in the previous Section: i.e., the Slater determinant of the scattering states identified by the same parameters (k F , k 0 ) of the initial condition. The time scale necessary to reach the steady state depends on the size of the region we are interested in; we see that a few tens of the natural time unit (τ = m Q 2 ) are enough to converge to the analytic solution in the range |xQ| 10 and no shock waves appear during the exact time evolution. Thus, the dynamics drives a homogeneous, left moving TG gas to a known steady state which may exhibit the presence of sonic horizons, depending on how the initial density and velocity of the gas are chosen. Note again that the particular case of k 0 = k F shows a constant density downstream and the half-soliton shape discussed in the previous Section. Semiclassical dynamics. Remarkably, the case is quite different for the semiclassical dynamics, as shown in Figure 6. We know that only the fully subsonic stationary states and the half soliton solution have a direct analogue in the GPE, while there is no semiclassical solution corresponding to non monotonic density profiles and a well defined asymptotic limit at |x| → ∞. For an initial condition leading to a subsonic steady state (see for instance the k 0 = 0 case), the GPE evolution does indeed proceed smoothly, converging to a stationary state as expected. Furthermore, if we compare Figure 6 with Figure 2 and 5, we see that the agreement between the semiclassical solution and the exact one is also quantitative, as the values of the density, of the fluid velocity and of the speed of sound show good agreement. Instead, when the initial velocity increases, the semiclassical evolution does not approach a steady state even in the case k 0 = k F (which is shown in Figure 6), where we have shown that the half soliton solution is stable and closely mimics the exact result; here, instead, a sonic transition is formed during the evolution, as expected, but the density profile shows oscillations in space and time in the supersonic region and the flattening of the profile does not occur. Moreover, by carefully examining the dynamics, it can be noticed that the envelopes constituting this pattern are emitted one at the time, they have equal width, they move downstream with a constant velocity and their shape does not change during the evolution: they thus resemble the soliton trains present in the cubic GPE [41,42]. This may not be evident from Figure 6, where the downstream behavior looks more like a regular oscillation than a collection of solitons, but the structure becomes much clearer if we take lower values of k 0 : in these cases the single envelopes are well separated and the succession of solitons is well defined. 
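A minimal sketch of the semiclassical evolution is given below: a second-order split-step Fourier integration of the quintic GPE of Eq. (2) after the quench, starting from a uniform flow ψ ∝ e^{−ik_0 x} and switching on the step V(x) = V_0 Θ(x) at t = 0. Periodic boundaries, the grid sizes and the coupling g = π²ħ²/(2m) are assumptions made for illustration; a production calculation would need larger boxes or absorbing layers to suppress boundary effects.

```python
import numpy as np

hbar = m = 1.0

def split_step_quintic_gpe(psi0, x, V, g, dt, n_steps):
    """Second-order split-step Fourier evolution of the quintic GPE
    i hbar dpsi/dt = [-hbar^2/(2m) d^2/dx^2 + V(x) + g|psi|^4] psi
    with periodic boundary conditions."""
    dx = x[1] - x[0]
    k = 2.0 * np.pi * np.fft.fftfreq(x.size, d=dx)
    half_kinetic = np.exp(-1j * hbar * k ** 2 / (2.0 * m) * dt / 2.0)
    psi = psi0.astype(complex)
    for _ in range(n_steps):
        psi = np.fft.ifft(half_kinetic * np.fft.fft(psi))
        psi = psi * np.exp(-1j * (V + g * np.abs(psi) ** 4) * dt / hbar)
        psi = np.fft.ifft(half_kinetic * np.fft.fft(psi))
    return psi

# Quench protocol: uniform flow psi ~ sqrt(n) exp(-i k_0 x), step switched on at t = 0.
Q, k_F, k_0 = 1.0, np.pi / 2.0, np.pi / 2.0
x = np.linspace(-200.0, 200.0, 4096, endpoint=False)   # k_0 is commensurate with the box
V = (hbar ** 2 * Q ** 2 / (2.0 * m)) * (x > 0)          # V(x) = V_0 * Theta(x)
g = np.pi ** 2 * hbar ** 2 / (2.0 * m)                  # assumed quintic coupling, cf. Eq. (2)
psi0 = np.sqrt(k_F / np.pi) * np.exp(-1j * k_0 * x)
psi = split_step_quintic_gpe(psi0, x, V, g, dt=0.01, n_steps=2000)   # t = 20 in units hbar*m/Q^2
density = np.abs(psi) ** 2
```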
On the other hand, when we increase k 0 the interval between the emissions decreases but we recover the same qualitative behavior for all the intermediate cases k 0 < k F which feature a sonic horizon: there, in fact, the semiclassical equation does not allow for stationary solutions and indeed the soliton emission prevents to attain a steady state up to the k 0 = k F limiting case, where a stationary solution of the GPE equation (the half soliton) is present but is not reached by the GPE dynamics. In search of the white hole A white hole is the time reversal of the black hole [43] and we already noted that the corresponding stationary state is indeed a stable solution of the exact dynamical equations, which is in agreement with previous studies on analogue white holes in BECs [44,45]. So one might expect that for each initial condition leading to the formation of a black-hole horizon, starting from a time reversed initial state, the dynamical evolution of a free uniform TG gas would give rise to a white hole. In other words, if an initial condition defined by the two parameters (k F , k 0 ) leads to the formation of a black hole, does the time evolution of the state (k F , −k 0 ) give rise to a white-hole horizon? 6.1. Exact dynamics. In order to test this claim, we have to investigate whether the dynamics drives a right moving fluid flowing against a step potential towards a white-hole configuration. This is not the case, as already shown in Figure 3, where the properties of the steady states identified by the pairs (k F , ±k 0 ) are compared. To better illustrate this point, in Figure 7 we plot the exact dynamical evolution of a TG gas starting from different initial conditions defined by k F = π 2 Q and k 0 < 0, i.e. a right moving uniform fluid. When the potential step is switched on the density profile smoothly converges towards the analytical steady state shown in the right panels of Figure 3. The stationary flow may then be fully subsonic (see the case of k 0 = − 3π 10 Q), may display a finite supersonic region (see k 0 = − 9π 20 Q) or give rise to a black-hole horizon (see k 0 = −k F = − π 2 Q). We never observe the spontaneous formation of a white-hole horizon, i.e. a flow going from supersonic to subsonic when passing through the potential step: the external potential always impresses an acceleration on the flow, irrespective of the direction of motion. The observed dynamics can be understood in terms of scattering states: a white hole is formed by taking the time reversal of a black hole, which, due to the dynamics induced by the step, is a particular superposition of different states. Thus, a initially homogeneous fluid (i.e. a single "incoming" state) can never lead to a white-hole configuration. 6.2. Semiclassical dynamics. Also in this case, the GPE dynamics of a right moving fluid differs significantly from the exact result. In Section 4 we showed that, while the GPE admits a class of stable subsonic stationary solutions, only the "half soliton" wavefunction displays a sonic horizon, which may correspond either to a BH or a WH horizon according to whether the mass current is negative or positive. Figure 8 shows the main features of the dynamics according to the GPE: when the initial condition corresponds to a generic velocity in the range −k F < k 0 < 0, the evolution proceeds smoothly. Density waves are ejected towards x → −∞ and a subsonic stationary flow is reached at long times (see left panels). 
In the limiting case k 0 = −k F (right panels) the stationary flow becomes asymptotically sonic, with v = c at x → +∞. It is interesting to notice how, also in this case, the semiclassical picture agrees with the exact one also from a quantitative point of view only in the subsonic regimes: indeed, by comparing Figure 8 with Figure 3 and 7 it is clear how the values of the density, the gas velocity and the sound speed are comparable whenever the flow is subsonic. Yet, when a supersonic transition sets in, the quantitative agreement is spoiled. Finally, we want to address an important point regarding the semiclassical treatment: recalling Eq. (2), the question may arise whether the dynamical behavior of the gas in presence of a sonic transition may be due to the form of the non-linearity, while an approach based on the canonical GPE would have given a different picture. Figure 9 shows that this is not the case, as the main features (horizon formations, oscillations etc.) observed in the semiclassical dynamics are not a peculiarity due to the precise form of Eq. (2). Indeed, the same qualitative behavior is retrieved when the above analysis is repeated with the standard (cubic) Gross-Pitaevskii equation and the quantitative differences between the two cases are irrelevant to the conclusions drawn in this study. This is important as theoretical studies on sonic horizons in BECs usually rely on the standard GPE and a description of the dynamical behavior of a condensate in these regimes has been deemed useful several times in this context [35,46,47]. To the best of our knowledge, only two other works investigated the formation dynamics of the sonic horizons [48,49] but the results only partially agree with ours due to the particular configurations chosen 2 . Figure 9. The two figures compare the long time behavior of the density when the usual (cubic) GPE is used (blue curves) in contrast with the modified (quintic) form of Eq. (2) (red curves). In both cases, the same dimensionless coupling constantg = π 2 has been used. The profiles are qualitatively the same for both plots. The top plot shows a subsonic flow with k 0 = 0 and k F = π 2 Q; the bottom plot, instead, is the case in which a sonic horizon form, i.e. k 0 = k F = π 2 Q. Similar results are obtained in the case of a right moving fluid. All the curves refer to a time t = 20 m Q 2 . Further possible configurations The analysis carried forward in the previous Sections starts from the assumption that |k 0 | ≤ k F , which means that initially, when the potential is switched off, the fluid has a subsonic (at most sonic) velocity. For completeness, we will now relax this hypothesis and investigate what happens if the gas initially flows at supersonic speeds. Eqs. (12,13,14) provide the expected density profile and the particle current at stationarity also for |k 0 | > k F . Following the previous discussion, we can study the dynamics of the gas after the quench numerically and we can investigate the possible formation of a sonic horizon both within the exact dynamics and according to the semiclassical (GPE) approximation. One would expect that a supersonic homogeneous gas subject to an external waterfall potential would not give rise to a sonic transition, but, instead, the switching-on of the step potential introduces various interesting configurations as we will now describe in some detail. Let us start with a left moving gas flowing over the potential step. 
If we choose a value of the initial gas velocity which is slightly bigger than the sound speed (k 0 k F ), the exact evolution drives the system towards Figure 10. Absolute value of the fluid velocity (red) and of the sound speed (blue) according to the GPE dynamics. The initial state is a homogeneous fluid with k F = π 2 Q and different values of k 0 . Left panels correspond to a left moving fluid (k 0 > 0), right panels to right moving fluid (k 0 < 0, i.e. against the step). Velocities are in unit of Q m . The snapshots are taken at a time t = 60 m Q 2 for the right moving cases, where the GPE slowly drives the system towards a stationary state. For a left moving fluid, the snapshots correspond to t = 15 m Q 2 . In these cases, the system reaches a steady state in the upstream region, while the velocity profiles oscillate in time (and space) downstream. Snapshots taken at a time t = 14 m Q 2 are also shown (magenta and cyan lines) to emphasize the temporal oscillations. The black dashed line represents the stationary state reached by the exact evolution in each case. a stationary state displaying a sonic horizon. By further increasing k 0 this region shrinks until the fluid is supersonic on the whole axis. The dashed line of Figure 10 illustrates this behavior: indeed, the upper left panel shows a value of the initial velocity for which the subsonic region consists in one point, while in the central and bottom left panel the flow is always supersonic. Instead, the long time behavior of the semiclassical (GPE) solution shows an oscillation pattern in the upstream region and a train of solitons in the downstream region, thus never leading to the exact stationary state. This is found to be true for any value of k 0 > k F , as Figure 10 shows. Also, as noted in Section 4, under this circumstances the exact solution is always flat in the downstream region. The right panels of Figure 10, instead, report the long time evolution of a right moving fluid. The exact results (dashed lines) show that in a range of velocities |k 0 | > k F a BH horizon is formed: the flow upstream is subsonic and a supersonic transition occurs near the position of the step. By further increasing the value of the initial velocity the subsonic region becomes finite until it disappears, leaving a supersonic flow everywhere. Interestingly, it can be seen that both the exact and the semiclassical solution tend to a finite value in the downstream region while the two are significantly different in the upstream region: while the exact solution has smooth behavior, the semiclassical one develops spatial oscillations. Also, as already noted in Section 4, the exact solution is always flat in the downstream region if −k 0 > Q + k F (bottom panel). Conclusions We have studied, both analytically and numerically, an exactly solvable model describing a flowing onedimensional Bose gas, in order to shed light on the physics underlying the analogue gravity paradigm. This model of hard-core bosons enables to follow the formation dynamics of black-and white-hole horizons after an external step potential is switched on: the analogue of a gravitational collapse event. Indeed, the fluid starts from a situation of constant density and velocity and, after a quench, it reaches another stationary configuration. From a curved spacetime point of view, this process is the exact analogue of Hawking famous gedankenexperiment [3]. 
Furthermore, the shape of the external potential represents a realistic reproduction of existing experimental setups [9][10][11][12][13] and a model used in various theoretical works (for example [46][47][48][49]). We analytically found the stable stationary states of this model which allows for the existence of both blackand white-hole horizons, as well as black/white hole pairs. By numerical integration of the exact evolution equations we found that if a horizon is formed after the quench, either an isolated black-hole horizon or a tight black/white hole pair is formed but a white-hole horizon can never be reached dynamically. This somewhat conforms with the analogue gravity paradigm: the positive black-hole entropy allows for the spontaneous formation of an horizon, while the negative white-hole entropy does not. Moreover this study suggests that a sufficiently tight black/white hole pair is not necessarily unstable, at least in analogue models, while our solutions confirm that both a single black/white hole is a stable solution of the dynamical equations. Furthermore, we compared these findings with the solutions of a semiclassical equation usually adopted to describe this class of models: i.e., the Gross-Pitaevskii equation, here suitably generalized to deal with a strongly interacting Bose gas. We found that only a subset of all possible stationary states are described by the GPE, including the subsonic solutions (with no event horizons) and the single black hole case, for a specific value of the mass current. However, the dynamics described by the GPE does not lead to the formation of a stationary solution whenever a sonic transition is present: indeed, starting from a uniformly flowing Bose gas, the semiclassical evolution gives rise to density waves propagating outwards which are not damped in time, preventing the convergence towards the stable stationary solution of the GPE. This behavior, which is a general feature also for the usual cubic GPE, suggests that results based on the integration of non-linear Schrödinger equations should be critically considered, at least as far as the long time dynamics is concerned. Finally, to complete the description, we show other possible configurations leading to the formation of one (or more) sonic horizon which can be achieved by further increasing the initial velocity of the gas. On a final note, the results obtained in this work provide an important step for future possible investigations as they give insights for new possible experimental configurations, they represent a key ingredient for the study of the hydrodynamical instability (and gravitational effect) known as black-hole laser and they help understanding the bounds on validity of semiclassical methods. To this extent, a question which arises from these results is what the optimal black-hole configuration should be in order to study the analogue of the Hawking effect. In fact, on one hand, as Hawking already pointed out [2], the dynamical history of the gravitational collapse should not affect the emitted radiation, but, on the other hand, his line of thought heavily rests on the comparison between a precise initial and final stationary state, which we are able to identify through the exact solution of the model but not by means of semiclassical approaches.
Some decision problems can be formulated as sorting models, which consist in assigning alternatives evaluated on several criteria to ordered categories. The implementation of a multiple criteria sorting model requires setting the values of the preference parameters used in the model. Rather than fixing the values of these parameters directly, a usual approach is to infer them from assignment examples provided by the decision maker (DM), i.e., alternatives for which he/she specifies a required category or an interval of acceptable categories. However, the judgments expressed by DMs through assignment examples can be inconsistent, i.e., they may not match the sorting model. In such situations, it is necessary to support the DMs in the resolution of this inconsistency. In this paper, we propose algorithms that compute different ways to modify the set of assignment examples so that the information can be represented in the sorting model. The modifications considered are either the removal of examples or the relaxation of existing assignments. These algorithms incorporate information about the confidence attached by the DMs to each assignment example, and they aim at finding and ranking the ways of resolving the inconsistency that the DMs are most likely to accept. Introduction Many real-world decision problems can be represented by a model stating explicitly the multiple points of view from which the alternatives under consideration should be evaluated, through the definition of n_crit criterion functions g_1, g_2, ..., g_j, ..., g_{n_crit}. Given a set A = {a_1, a_2, ..., a_i, ..., a_{n_alt}} of potential alternatives evaluated on the criteria, the analyst conducting the decision aiding study may formulate the problem in different terms. B. Roy [14] distinguishes among three problem statements, i.e., problem formulations (choosing, sorting and ranking) that may guide the analyst in structuring the decision problem (see also [1]). Among these problem statements, a major distinction concerns relative vs. absolute judgments of alternatives. This distinction refers to the way alternatives are considered and to the type of result expected from the analysis. In the first case, alternatives are directly compared to each other and the results are expressed using the comparative notions of "better" vs. "worse". Choosing (selecting a subset of the best alternatives) or ranking (defining a preference order on A) are typical examples of comparative judgments. The presence (or absence) of an alternative a_i in the set of the best alternatives results from the comparison of a_i to the other alternatives. Similarly, the position of an alternative in the preference order depends on its comparison to the others. In the second case, each alternative is considered independently from the others in order to determine its intrinsic value by means of comparisons to norms or references; this consists in assigning each alternative to one of the pre-defined categories C_1, C_2, ..., C_k, ..., C_{n_cat}. The assignment of an alternative a_i results from its intrinsic evaluation on all criteria with respect to the norms defining the categories. Several methods have been proposed to handle multiple criteria sorting problems (MCSP), e.g., Trichotomic Segmentation [9], N-TOMIC [8], ORCLASS [6], ELECTRE TRI [15], PROAFTN [2], UTADIS [16] and a general class of filtering methods [12].
One of the main difficulties that an analyst must face when interacting with a decision maker (DM) in order to build a sorting model is the elicitation of various preference parameters used by the method.Even when these parameters can be interpreted, it is difficult to fix directly their values and to have a clear understanding of the implications of these values in terms of the output of the model.In order to avoid direct elicitation of the parameters, several authors have designed disaggregation procedures which allow to infer parameter's values from holistic judgments (such a disaggregation approach was first introduced in the UTA method [4]).Such procedures have been defined for MCSP e.g., [16] for UTADIS and [11] for ELECTRE TRI. The holistic judgments required to infer sorting models are called assignment examples and correspond to alternatives (actual or fictitious) for which the DM can express a desired assignment, e.g., "a i should be assigned to C 3 " (a i → C 3 ), or "a i should be assigned to i.e., imprecise assignment examples can be considered).In some sorting methods (namely UTADIS and ELECTRE TRI when only the weights of criteria are inferred) such assignment examples define linear constraints on the model parameters. In order to minimize the differences between the assignments made by the method and the assignments made by the DM, a mathematical program infers the values for these parameters that best restore the DM's judgments.Such a methodology requires from the DM much less cognitive effort than a direct elicitation of parameters (the elicitation of parameters is done indirectly using holistic information given by the DM) and provides a factual justification for the values assigned to the parameters. Inference procedures are usually not designed as a problem to be solved only once, but rather several times in an interactive learning process, where the DM continuously revises the information they provide as they learn from the results of the inference programs (see [3]).At each iteration, the DM has the opportunity to revise assignment examples.This interactive process stops when the DM is satisfied with the values of the parameters and when the results of the model (i.e., assignment of alternatives to categories) match their view of the decision problem. During this interactive process, the DM might provide inconsistent judgments, i.e., a set of assignment examples that cannot be satisfied simultaneously by the sorting model.Such inconsistencies can arise for several reasons (cognitive limitations, evolution of preferences during the process, ...).In such a situation, it is not always easy for the DM to identify the reasons for inconsistencies.Moreover, there usually exists more than one way to restore consistency.Hence, the DM need support in inconsistency analysis. Consider a problem in which a DM has interactively specified assignment examples inducing linear inequalities on the preference parameters.This is namely the case with UTADIS [16] and ELECTRE TRI [3] when only the weights of criteria are inferred.Let x 1 , x 2 , . . ., x j , . . ., x n denote the n parameters of the considered sorting model.The assignment examples define a polyhedron of possible values for the parameters, T = {x ∈ R n : n j=1 α ij x j ≥ β i , i = 1, . . ., m}; when an inconsistent set of assignment examples is provided by the DM, this polyhedron is empty.There exist various ways by which the set of assignment examples can be modified so that the polyhedron T becomes non empty. 
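Whether the polyhedron T is empty can be checked with a standard feasibility linear program. The sketch below (using scipy.optimize.linprog and a small hypothetical system of three constraints on two parameters) is only meant to illustrate the test; in practice the constraints induced by UTADIS or ELECTRE TRI assignment examples would be used.

```python
import numpy as np
from scipy.optimize import linprog

def polyhedron_is_empty(A: np.ndarray, b: np.ndarray) -> bool:
    """Feasibility test for T = {x : A x >= b} via a zero-objective LP.
    linprog expects A_ub x <= b_ub, so the constraints are passed as -A x <= -b;
    the parameters x are left unbounded."""
    n_vars = A.shape[1]
    res = linprog(c=np.zeros(n_vars), A_ub=-A, b_ub=-b,
                  bounds=[(None, None)] * n_vars, method="highs")
    return res.status == 2  # status 2 means the LP was reported infeasible

# Hypothetical toy system on two parameters: x1 + x2 >= 1, -x1 >= 0.2, -x2 >= 0.2
# cannot hold simultaneously, so the corresponding assignment examples clash.
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 0.2, 0.2])
print(polyhedron_is_empty(A, b))  # True
```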
The problem is then to identify all the "minimal" subsets (minimal in the sense of the inclusion) that resolve inconsistency, i.e., subsets among which the DM must choose in order to make his/her information consistent.In [10] two algorithms are proposed to identify all the minimal subsets S q , q = 1, . . ., Q to be deleted (sorted by cardinality) that resolve inconsistency and whose cardinality is lower than (or equal to) maxcount (maxcount is an input to the algorithms that states the maximum number of solutions to be computed). In this paper we propose alternative ways to resolve inconsistencies stemming from a set of assignment examples; namely, instead of deleting assignment examples, we consider relaxing them, i.e., enlarging the interval of the possible assignments for an alternative.Moreover, we consider that the DM may provide confidence levels associated with the assignment examples; such information can be exploited to find a way to solve inconsistency that best them. The paper is organized as follows.Section 1 defines inconsistency relaxation and shows that the algorithms proposed by [10] still apply when considering constraints relaxation rather than constraints deletion.The section 2 considers the case where the DM is able to provide confidence levels associated to the assignment examples.We provide two ways to account for such information in order to rank the solutions according to the confidence levels provided by the DM.Section 3 provides an illustrative example within the context of the ELECTRE TRI. Inconsistency resolution via constraints relaxation Resolving the inconsistencies can be performed by deleting a subset of constraints.Let us denote I = {1, 2, ..., m} the set of indices of the constraints and T ∅ = {x ∈ R n : n j=1 α ij x j ≥ β i , ∀i ∈ I} the initial empty polyhedron, i.e., with all the constraints.Let S ⊆ I denote a subset of indices of constraints.We will say that S resolves the inconsistency if and only if the polyhedron In [10] two algorithms are proposed to compute alternative ways to restore consistency by constraints deletion.We consider here the case where consistency can be solved by relaxing constraints rather than deleting them.Considering an infeasible system of linear inequalities (that can correspond to assignment examples), relaxing constraints rather than deleting them (in order to restore feasibility) has already been studied in the general case by (e.g., [13] and [7]).The relaxations considered by these authors are continuous and deal with the right-hand-side of the constraints only.In our case, we will define the relaxations differently: • the relaxations will be performed by changing the technical coefficients of the constraints rather than the right-hand-side; • a discrete set of relaxations will be considered which have a meaning in the sorting model, namely increasing the interval of categories to which an alternative can be assigned. 
Suppose the DM has specified a set of assignment examples, i.e., a subset of alternatives A* ⊆ A such that each a_i ∈ A* is associated with max(a_i) (respectively min(a_i)), the index of the maximum (respectively minimum) category to which a_i should be assigned according to the DM's holistic preferences (a_i → [min(a_i), max(a_i)], a_i ∈ A*). From the DM's perspective, max(a_i) represents the statement "a_i should be assigned at most to category C_max(a_i)", and min(a_i) expresses that "a_i should be assigned at least to category C_min(a_i)". For each a_i ∈ A*, min(a_i) and max(a_i) induce (when considering UTADIS and ELECTRE TRI) two linear constraints. Note that trivial constraints such as min(a_i) = C_min and/or max(a_i) = C_max do not need to be taken into account.

Let us consider the assignment example a_i → [min(a_i), max(a_i)]. A relaxation of this assignment example is an assignment example a_i → [min'(a_i), max'(a_i)] such that min'(a_i) ≤ min(a_i) and max'(a_i) ≥ max(a_i), with at least one strict inequality. Let us consider the system of inequalities containing the constraints corresponding to all the possible relaxations of the assignment example a_i → [min(a_i), max(a_i)] (it also contains the constraints corresponding to the original assignment example). It should be noticed that all the constraints corresponding to a relaxation of one of the two initial constraints are redundant. Consider S, the set of all indices of constraints induced from a set of assignment examples and from the relaxations of these initial assignment examples. From the preceding remark, S contains many redundancies.

If we apply the algorithms proposed in [10] (i.e., inconsistency resolution via constraint deletion) to the set S, the solutions correspond to constraint relaxation and/or deletion. It follows from the preceding remark that it is possible to use the algorithms proposed in [10] to resolve inconsistencies by relaxation (rather than deletion) of assignment examples. Hence, in the rest of the paper, we will speak in terms of constraint deletion, knowing that it embraces the case of constraint relaxation.

Attributing confidence levels to assignment examples

In the course of the interactive process that aims at inferring the parameters of a sorting model, the DM provides assignment examples. For each assignment example, the DM might be more or less confident in their statements. Let us suppose that the DM is able to express such confidence judgments during the interactive process. These confidence judgments can be taken into account when an inconsistency arises. More precisely, the algorithms that identify alternative ways of resolving inconsistencies may use such information. Intuitively, these algorithms should provide solutions in an order such that the least confident constraints are relaxed/deleted with a higher priority, i.e., solutions relaxing the least confident statements should be proposed before solutions relaxing more confident ones.
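The discrete set of relaxations of a single assignment example can be made explicit with a small helper. The function below is purely illustrative (category indices 1..K and the interval representation are assumptions, not notation from the paper); it enumerates all enlarged intervals, each of which would induce an "at least"/"at most" pair of constraints.

```python
def relaxations(min_cat, max_cat, n_categories):
    """Enumerate all relaxations of the assignment example a_i -> [min_cat, max_cat]:
    intervals [l, u] with l <= min_cat, u >= max_cat and at least one strict inequality.
    Categories are assumed to be indexed 1 (worst) .. n_categories (best)."""
    out = []
    for l in range(1, min_cat + 1):
        for u in range(max_cat, n_categories + 1):
            if (l, u) != (min_cat, max_cat):   # exclude the original assignment example
                out.append((l, u))
    return out

# Hypothetical example with 5 categories: a_1 -> [C2, C2] (precise assignment to C2).
print(relaxations(2, 2, 5))
# -> [(1, 2), (1, 3), (1, 4), (1, 5), (2, 3), (2, 4), (2, 5)]
```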
From the DM's perspective, a_i → max(a_i) represents the statement "a_i should be assigned at most to category C_max(a_i)", and a_i → min(a_i) expresses that "a_i should be assigned at least to category C_min(a_i)". These two statements induce two constraints. The DM can attach a confidence level to each of the above-mentioned statements. This information will be interpreted as confidence levels attached to the corresponding constraints (for example, a_1 → C_2 implies "a_1 should be assigned at least to C_2" and "a_1 should be assigned at most to C_2", and the DM may have different confidence levels concerning these two statements, e.g., they may say that if a_1 is not assigned to C_2, then it is more likely to be assigned to a higher category than to a lower one). For each relaxed constraint, the attached confidence level corresponds to the confidence level of the original constraint from which it was derived (unless the DM provides specific information).

A lexicographic ranking procedure

Let us consider an inconsistent set of assignment examples provided by the DM and the set of linear constraints associated with these examples. Any relaxation of these assignment examples (see §1) will also be considered here. Following the notation introduced previously, m denotes the total number of constraints and I = {1, 2, ..., i, ..., m} denotes the set of indices of these constraints. The resulting polyhedron is $T_\emptyset = \{x \in \mathbb{R}^n : \sum_{j=1}^{n} \alpha_{ij} x_j \ge \beta_i,\ \forall i \in I\}$, which is empty. Suppose the DM expresses confidence on an ordinal scale ψ_0 ≺ ψ_1 ≺ ... ≺ ψ_τ, from the least to the most confident level. Let I_p denote the subset of constraints whose confidence level is equal to ψ_p. Hence, I_0, I_1, ..., I_p, ..., I_τ define a partition of I. Furthermore, we will denote by $I_{\le p} = \bigcup_{l=0}^{p} I_l$ the set of constraints whose confidence level is lower than or equal to ψ_p. Now, consider S_l ⊆ I_{≤l}, a subset of indices of constraints whose confidence level is lower than or equal to ψ_l. We will say that S_l resolves the inconsistency at confidence level ψ_l if and only if the polyhedron $\{x \in \mathbb{R}^n : \sum_{j=1}^{n} \alpha_{ij} x_j \ge \beta_i,\ \forall i \in I \setminus S_l\}$ is non-empty.

A simple way to account for the confidence level attached to each constraint is to proceed as follows:

1. Identify (by increasing order of cardinality) all minimal subsets S^0_1, S^0_2, ..., S^0_{q_0} that resolve the inconsistency at level ψ_0 (i.e., relaxations whose confidence level is equal to ψ_0 that make the original system of inequalities feasible).
2. Then identify, again by increasing order of cardinality, all minimal subsets S^1_1, S^1_2, ..., S^1_{q_1} that resolve the inconsistency at level ψ_1.
3. Proceed in the same way until finding minimal subsets S^τ_1, S^τ_2, ..., S^τ_{q_τ} that resolve the inconsistency at level ψ_τ, or until the total number of subsets found equals maxcount.

The program $P^{M,0}_1$ identifies the smallest set of constraints S^0_1 whose confidence level is equal to ψ_0:

$(P^{M,0}_1):\quad \min \sum_{i \in I_{\le 0}} y_i \quad \text{s.t.} \quad \sum_{j=1}^{n} \alpha_{ij} x_j + M y_i \ge \beta_i,\ \forall i \in I_{\le 0}; \quad \sum_{j=1}^{n} \alpha_{ij} x_j \ge \beta_i,\ \forall i \in I \setminus I_{\le 0}; \quad y_i \in \{0,1\},\ \forall i \in I_{\le 0},$

where M is a large positive number and the variables y_i, i ∈ I_{≤0}, are the binary variables assigned to each constraint index whose confidence level is lower than or equal to ψ_0. The indices of constraints for which y*_i = 1 (at the optimum of $P^{M,0}_1$) constitute the subset S^0_1.
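The big-M 0-1 program can be sketched with a generic MILP solver. The following is a minimal illustration in the spirit of $P^{M,0}_1$, not the authors' implementation: `alpha`, `beta`, the `relaxable` flags (standing in for the constraints of lowest confidence) and the value of M are assumptions.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def smallest_relaxation_set(alpha, beta, relaxable, M=1e3):
    """Big-M 0-1 program: find the smallest subset of the 'relaxable' constraints
    (e.g., those with the lowest confidence level) whose deactivation makes the system
    alpha @ x >= beta, 0 <= x <= 1 feasible.  Returns the selected indices or None."""
    m, n = alpha.shape
    k = int(relaxable.sum())
    # Decision vector z = (x_1..x_n, y_1..y_k); the objective counts relaxed constraints.
    c = np.concatenate([np.zeros(n), np.ones(k)])
    # Build  alpha @ x + M * y_i >= beta_i  (a y_i only for the relaxable rows).
    A = np.zeros((m, n + k))
    A[:, :n] = alpha
    A[np.where(relaxable)[0], n + np.arange(k)] = M
    cons = LinearConstraint(A, lb=beta, ub=np.inf)
    integrality = np.concatenate([np.zeros(n), np.ones(k)])   # y variables are integer
    bounds = Bounds(lb=np.zeros(n + k), ub=np.ones(n + k))
    res = milp(c=c, constraints=cons, integrality=integrality, bounds=bounds)
    if not res.success:
        return None
    y = np.round(res.x[n:]).astype(int)
    return [int(i) for i, flag in zip(np.where(relaxable)[0], y) if flag == 1]

# Hypothetical example: the toy system from before, where only the two contradictory
# constraints on x1 carry the lowest confidence level and may be relaxed.
alpha = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0]])
beta = np.array([0.8, -0.2, 0.5])
relaxable = np.array([True, True, False])
print(smallest_relaxation_set(alpha, beta, relaxable))   # -> e.g. [0] or [1]
```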
$P^{M,0}_2$ is defined in order to compute S^0_2. This new program is derived from $P^{M,0}_1$ by adding the single constraint $\sum_{i \in S^0_1} y_i \le |S^0_1| - 1$. This constraint makes it impossible to find S^0_1 (the optimal solution of $P^{M,0}_1$) again, or any solution that includes this set. A third program $P^{M,0}_3$ is then defined by adding the constraint $\sum_{i \in S^0_2} y_i \le |S^0_2| - 1$, and so on, until we reach an infeasible program, meaning that there are no more solutions in I_{≤0}. When all the solutions in I_{≤0} are found, the algorithm starts to search for solutions in I_{≤1}. The first solution S^1_1 is found by solving $P^{M,1}_1$, which is derived from the previous program by introducing binary variables y_i for all constraints in I_{≤1} rather than I_{≤0} only, while keeping the constraints that exclude the solutions already found. The algorithm continues until maxcount solutions are found (or no more solutions exist).

Begin
  p ← 0
  count ← 0
  While (p ≤ τ) and (count ≤ maxcount)
    q ← 1
    moresol ← true
    While moresol
      Solve $P^{M,p}_q$
      If ($P^{M,p}_q$ has no solution) or (count > maxcount) Then
        moresol ← false
      Else
        S^p_q ← {i ∈ I_{≤p} : y*_i = 1}
        count ← count + 1
        q ← q + 1
      End If
    End While
    p ← p + 1
  End While
End

This algorithm requires solving several 0-1 programs. Note that it is possible to design an algorithm for the same purpose using only linear programming (see the second algorithm presented in [10]).

Defining a penalty function

Another approach to our problem consists of defining a penalty function π(S) associated with each subset of constraint indices S ⊆ I, and of ranking the subsets that resolve the inconsistency by increasing penalty: the larger π(S), the greater the dissatisfaction of the DM in removing S from I. This approach generalizes the lexicographic ranking, as it is possible to define the penalty function π in such a way that the penalty ranking coincides with the lexicographic ranking.

Given a subset S ⊆ I, S ∩ I_p denotes the subset of S corresponding to constraint indices whose confidence level is equal to ψ_p, p = 0, ..., τ, and |S ∩ I_p| denotes the cardinality of this subset. In order to define the semantics of the penalty function π, we impose a few suitable conditions on it. These conditions express natural properties for a function π to define a consistent penalty:

• Condition 2.1 (non-negativity) states that π has a lower bound (arbitrarily set to 0). Although it is not necessary, it would also be possible to impose an upper bound on the penalties, for instance π(I) = 1. In such a case, the penalty function π could be understood as a "disutility" function related to a utility function u(.) = 1 − π(.). This would be of interest in that the questioning techniques used to elicit multi-attribute utility functions [5] from the DM could be used in this context.

• Condition 2.2 (anonymity) states that the penalty of a set S only depends on the number of constraints of each confidence level contained in S, regardless of the constraints' "labels".

• Condition 2.3 (confidence monotonicity) states that, considering a solution S, if the confidence level of one constraint in S decreases, then π(S) should also decrease.

• Condition 2.4 (cardinality monotonicity) states that if a solution S contains fewer (or equally many) constraints than another solution S′ at each confidence level, the penalty should be lower (or equal) for S than for S′.
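The sequential procedure — solve, record the optimal subset, add the cut $\sum_{i \in S} y_i \le |S| - 1$, and re-solve — can be sketched as follows. This is a simplified stand-in (all constraints are treated as relaxable, i.e., a single confidence level), not the two-level algorithm above, and the data are hypothetical.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def enumerate_relaxation_sets(alpha, beta, maxcount=5, M=1e3):
    """Sequential 0-1 programs with 'no-good' cuts: after each optimal subset S is found,
    the cut sum_{i in S} y_i <= |S| - 1 is added so that S (and any superset of it)
    cannot be returned again."""
    m, n = alpha.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])            # minimise number of y_i = 1
    integrality = np.concatenate([np.zeros(n), np.ones(m)])
    bounds = Bounds(np.zeros(n + m), np.ones(n + m))
    # Base constraints: alpha @ x + M * y_i >= beta_i.
    A = np.hstack([alpha, M * np.eye(m)])
    lb, ub = beta.copy(), np.full(m, np.inf)
    solutions = []
    while len(solutions) < maxcount:
        res = milp(c=c, constraints=LinearConstraint(A, lb, ub),
                   integrality=integrality, bounds=bounds)
        if not res.success:
            break                                             # no further solutions exist
        S = [i for i in range(m) if round(res.x[n + i]) == 1]
        solutions.append(S)
        cut = np.zeros(n + m)
        cut[[n + i for i in S]] = 1.0                         # sum_{i in S} y_i <= |S| - 1
        A = np.vstack([A, cut])
        lb = np.append(lb, -np.inf)
        ub = np.append(ub, len(S) - 1)
    return solutions

alpha = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0]])
beta = np.array([0.8, -0.2, 0.5])                             # x1 >= 0.8, x1 <= 0.2, x2 >= 0.5
print(enumerate_relaxation_sets(alpha, beta))                 # -> [[0], [1]] (in some order)
```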
Among the possible penalty functions, one of the simplest can be defined by considering that each constraint in S of a given confidence level ψ_p contributes to increase π(S) by an amount ∆_p (the values ∆_p are to be defined by the DM):

$\pi(S) = \sum_{p=0}^{\tau} \Delta_p\, |S \cap I_p|.$

Considering Condition 2.3, the amounts ∆_0, ..., ∆_τ should be such that u < v ⇒ ∆_u < ∆_v. This simple model can be generalized by considering that the penalty does not increase linearly with respect to the number of constraints:

$\pi(S) = \sum_{p=0}^{\tau} \pi_p(|S \cap I_p|),$

where π_p(n) is a function (to be defined by the DM) denoting the penalty of removing n constraints of confidence level ψ_p (given the assumed conditions, π_p must be an increasing function). More sophisticated, non-additive models may be envisaged, namely those taking into account preference dependencies among different confidence levels.

Ranking solutions according to the penalty function

Given a penalty function π, it is necessary to define an algorithm that ranks, by increasing penalty, the subsets of I that, if removed, yield a consistent system. In order to design an algorithm that identifies the maxcount subsets of constraints that resolve the inconsistency and ranks them according to the penalty function π, we will adapt the algorithms presented in [10], which rank by increasing cardinality all minimal subsets that resolve the inconsistency: S_1, S_2, ..., S_q, ...

The algorithms presented in [10] provide the set of solutions ordered by cardinality without any consideration of confidence levels. In our case, we want to provide the set of (at most) maxcount solutions ordered by increasing penalty. It should be noticed that the solutions of smallest cardinality might not correspond to those of smallest penalty. Therefore, we can proceed by computing the solutions by increasing cardinality and stop when we are sure that the solutions of higher cardinality have a greater penalty than the ones already obtained.

Let S_{x,p} denote an arbitrary set of x constraints, all of confidence level equal to ψ_p (by Condition 2.2, π(S_{x,p}) is well defined). Proposition 2.1 states that the penalty of any solution found after the q-th one is not lower than π(S_{|S_q|,0}), i.e., the penalty that would be awarded to the q-th solution if all the constraints indexed by S_q were of the lowest confidence level.

In the following algorithm, TOP-N denotes a list of solutions of at most maxcount elements, ordered by increasing penalty, and S_tail denotes the last solution (i.e., the solution with the highest penalty) of the list TOP-N. At the end of the algorithm, TOP-N contains the maxcount first solutions ordered by increasing penalty.

Begin
  q ← 0
  TOP-N ← empty list
  S_tail ← first solution
  repeat
    q ← q + 1
    S_q ← q-th solution provided by the algorithm in [10]
    If (q ≤ maxcount) or (π(S_q) < π(S_tail)) Then
      S_q enters TOP-N
    End If
  Until (S_q = ∅) or (q ≥ maxcount and π(S_tail) ≤ π(S_{|S_q|,0}))
End

In this algorithm, when S_q enters TOP-N, it does so respecting the penalty ranking. If the list is full (when it contains maxcount elements), this implies removing the highest-penalty element (S_tail), and the variable S_tail is updated. Proposition 2.1 allows us to define the stopping condition π(S_tail) ≤ π(S_{|S_q|,0}).
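A minimal sketch of the additive penalty $\pi(S) = \sum_p \Delta_p |S \cap I_p|$ and of ranking candidate solutions by increasing penalty is given below. It simply ranks an already-computed list of solutions and does not implement the early-stopping condition of Proposition 2.1; the confidence levels and ∆ values are hypothetical.

```python
def additive_penalty(S, confidence, delta):
    """pi(S) = sum_p delta[p] * |S ∩ I_p|, where confidence[i] gives the confidence
    level p of constraint i and delta[p] grows with the confidence level."""
    return sum(delta[confidence[i]] for i in S)

def top_n_by_penalty(solutions, confidence, delta, maxcount=5):
    """Rank candidate solutions (subsets of constraint indices) by increasing penalty
    and keep at most maxcount of them -- a simplified stand-in for the TOP-N list."""
    ranked = sorted(solutions, key=lambda S: additive_penalty(S, confidence, delta))
    return ranked[:maxcount]

# Hypothetical data: 5 constraints with confidence levels psi_0..psi_2 and penalties.
confidence = {0: 2, 1: 0, 2: 1, 3: 0, 4: 2}   # constraint index -> confidence level p
delta = [1.0, 3.0, 10.0]                       # delta_0 < delta_1 < delta_2
solutions = [[0], [1, 3], [2], [1, 4]]
print(top_n_by_penalty(solutions, confidence, delta, maxcount=3))
# -> [[1, 3], [2], [0]]   (penalties 2.0, 3.0, 10.0; [1, 4] has penalty 11.0)
```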
Illustrative example

Let us consider a situation in which a set of 40 alternatives has to be assigned to 5 categories using the ELECTRE TRI pessimistic method. Each alternative is evaluated on the basis of a set of 7 criteria (see Appendix A). The limits of the categories are known, but the criteria importance coefficients are to be defined (see Appendix B). Suppose the DM provides assignment examples with an associated level of confidence on a scale (absolutely confident ≻ quite confident ≻ not so confident), where a_i → [C_k, C_k′] means that alternative a_i must be assigned to a category between C_k and C_k′ (k ≤ k′).

From these assignment examples, it is possible to define relaxations as described in Section 1. In this example, we will suppose that the relaxations have the same confidence level as their corresponding assignment examples. For instance, the widest relaxation of an assignment example amounts to removing that assignment example altogether (e.g., for a_1 → C_5).

These assignment examples and their relaxations generate a set of 41 constraints on the criteria weights w_j, j = 1, ..., 7, and on the cutting level λ, which are presented in Appendix C. The first 17 constraints correspond to the original assignment examples and the remaining ones correspond to the relaxations. The linear system associated with the assignment examples is infeasible, which means that the information provided by the DM is inconsistent, i.e., there is no way to represent this information in the ELECTRE TRI sorting model.

Considering this infeasible linear system, there exist 11 minimal subsets of constraints that resolve the inconsistency, where I = {1, 2, ..., 41}. These 11 subsets (ordered by cardinality) are listed below. Let us remark that, due to the limited size of this example, we have computed all the solutions. Such a way of proceeding is time consuming when dealing with real-world problems of large size.

When interacting with the DM to resolve an inconsistency, it is not reasonable to propose a large number of alternative solutions. It is convenient to propose a limited number of solutions that might be interesting for the DM. In our case, we wish to propose approximately 5 solutions (maxcount = 5).

Conclusion

In this paper, we considered the problem of supporting the DM in the resolution of inconsistent judgments expressed in the form of assignment examples in a multiple criteria sorting model. We have proposed the concept of relaxation of an assignment example, which is helpful in this context. To resolve the inconsistency, it is useful to obtain from the DM confidence statements associated with the assignment examples. We have proposed procedures that account for this information to assist the DM in finding the most relevant ways to restore consistency.

An illustrative example has been provided to show how the proposed procedures can be used within the context of the ELECTRE TRI sorting method. However, our procedures are general and apply to any sorting method for which assignment examples generate linear constraints on the preference-related parameters.

An interesting extension of this work consists in considering the possibility of associating different confidence levels with the original assignment example constraints and their corresponding relaxations. This extension amounts to considering that the various relaxations of an assignment example are not judged as equivalent as regards their confidence levels.

Appendices

Appendix A: Evaluation matrix
6,025.6
2004-01-01T00:00:00.000
[ "Computer Science", "Economics" ]
A Review of Approaches for Mitigating Effects from Variable Operational Environments on Piezoelectric Transducers for Long-Term Structural Health Monitoring Extending the service life of ageing infrastructure, transportation structures, and processing and manufacturing plants in an era of limited resources has spurred extensive research and development in structural health monitoring systems and their integration. Even though piezoelectric transducers are not the only sensor technology for SHM, they are widely used for data acquisition from, e.g., wave-based or vibrational non-destructive test methods such as ultrasonic guided waves, acoustic emission, electromechanical impedance, vibration monitoring or modal analysis, but also provide electric power via local energy harvesting for equipment operation. Operational environments include mechanical loads, e.g., stress induced deformations and vibrations, but also stochastic events, such as impact of foreign objects, temperature and humidity changes (e.g., daily and seasonal or process-dependent), and electromagnetic interference. All operator actions, correct or erroneous, as well as unintentional interference by unauthorized people, vandalism, or even cyber-attacks, may affect the performance of the transducers. In nuclear power plants, as well as in aerospace, structures and health monitoring systems are exposed to high-energy electromagnetic or particle radiation or (micro-)meteorite impact. Even if environmental effects are not detrimental for the transducers, they may induce large amounts of non-relevant signals, i.e., coming from sources not related to changes in structural integrity. Selected issues discussed comprise the durability of piezoelectric transducers, and of their coupling and mounting, but also detection and elimination of non-relevant signals and signal de-noising. For long-term service, developing concepts for maintenance and repair, or designing robust or redundant SHM systems, are of importance for the reliable long-term operation of transducers for structural health monitoring. 
Introduction

Structural Health Monitoring (SHM) [1] is, roughly, the periodic or continuous application of technical methods implemented in a structure or structural element with the aim of assessing its integrity, fitness for use, or remaining service life under specified operating conditions, or of optimizing the maintenance required for this. In an era of limited resources, SHM is gaining in importance for extending the service life of ageing infrastructure and transportation structures. Standardization of SHM applications in guidelines or test procedures has been rather slow, but has recently been increasing, especially in the construction industry, see, e.g., [2]. A specific type of SHM is Condition Monitoring (CM) of machinery, for which several standard guidelines have been developed, see, e.g., [3][4][5][6][7]. There are different sensor technologies for acquiring SHM data, see, e.g., [8][9][10]. For structural wave- or vibration-based SHM, piezoelectric transducers and fiber optics are widely investigated and implemented [11,12]. For piezoelectric transducers, PZT (lead-zirconate-titanate) is one of the main transducer materials, see, e.g., [13]. The integration of SHM transducers of any kind into the monitored objects is a challenge and requires careful design of the SHM system. Integration of piezoelectric transducers into structures may benefit from special types of planar, thin transducers. These comprise so-called Active and Macro Fiber Composites (AFC and MFC, respectively) that are flexible [14,15] (see Figure 1), piezoelectric patches [16], piezoelectric wafer active sensors (PWAS) [17,18], or piezoelectric disks that are commercially available.

Figure 1. Photos from the authors' laboratory: (left) AFC transducer with actuating and sensing capability manufactured according to the design described by [19] compared with a commercial 150 kHz resonant acoustic emission sensor; (right) AFC with three independently controllable electrode segments on one device.

The durability of such devices has been investigated and discussed by, e.g., [18,20]. Mechanical strains of the order of typical engineering strains (up to about 0.25%) or impact of foreign objects may be detrimental to the performance of piezoelectric transducers. Hence, various "packaging" approaches for AFC, MFC, or PWAS may improve their durability when integrated into structures. Advantages and disadvantages of different embedding methods (Figure 2) are discussed in detail by, e.g., [11,12].

Depending on the application, performing SHM with different sensor types, often with complementary measurement principles, is advantageous, see, e.g., [21]. Data fusion for the analysis of signals from different transducer types, reviewed, e.g., by [22], looks promising for future SHM. Artificial intelligence is also playing an increasingly important role in the related signal and data analysis, see, e.g., [23][24][25].

Effects from different operational environments on piezoelectric transducers are reviewed in the following papers: buildings by [26], vibrational condition monitoring by [27], wind turbines by [28], railway vehicles by [29], hydrogen pressure vessels by [30], and structures in low earth orbits by [31]. The aspects of long-term operation of SHM systems and transducer durability, however, have so far received scant attention at best [32]. Variable operating environments lead to different damage mechanisms in the components of the SHM system, i.e., the transducers, signal transmission, data acquisition, data storage and analysis, often acting on different time scales. Identifying potential synergistic effects, both positive and negative for the service life of SHM systems, can hence be quite challenging.

Depending on the demand for specific types of transducers, e.g., some Acoustic Emission sensors made in low numbers (as shown by their serial numbers), manufacturing may still be mostly manual rather than fully or partly automated [33,34]. Besides the manufacturing processes, the quality of the piezoelectric material is another important factor. Overall, both may result in some variability in transducer properties that, in principle, could affect their long-term durability. However, to the best knowledge of the author, no information on such effects exists in the public domain.

This review consists of four parts: Section 2 summarizes different operational environments and identifies critical influences for piezoelectric transducers. In Section 3, a short review summarizes published experience with piezoelectric transducers for long-term SHM. Section 4 first presents effects from the ambient climate (temperature and humidity) and from mechanical loads and the respective mitigation approaches in detail, complemented by a discussion of selected special operational environments. Section 5 briefly summarizes the main aspects and adds a brief outlook. Section 6 then provides conclusions.

Figure 2. Examples of flexible AFC or MFC mounted directly on curved surfaces without waveguides from the authors' laboratory: (top left) AFC mounted on a CFRP strut for compensation of thermal expansion; (top right) AFC mounted on an aluminum pipe for leak monitoring [35]; (bottom) MFC mounted on a glider plane landing wheel gear for electro-mechanical impedance (EMI) monitoring [36].
Relevant Operational Environments

Operational environments for piezoelectric transducers comprise a variety of service conditions. Primary factors determining the service life are (1) ambient temperature, elevated or low and either roughly constant or variable, and (2) mechanical loads, either constant, variable (e.g., structural vibrations), or short-term (e.g., impact of foreign objects), or variations in ambient pressure, acting on the objects to which the transducers are coupled and/or on the transducers and their mounting devices directly. Secondary factors are (3) ambient humidity, both in gaseous and liquid form, (4) other chemical species present as fluids or gases in the operating environment, e.g., seawater or oxidizing media, (5) alternating electromagnetic fields with different frequencies (energies), (6) particle radiation, and, of course, (7) combinations of these factors.

Variable temperature environments can affect piezoelectric transducers in different ways. Elevated temperatures and large temperature variations may change the polarization of piezoelectric materials. The Curie temperature (T_C) is the upper temperature limit for operating piezoelectric transducers, but in practice, a significantly lower temperature limit is necessary for long-term operation. Short, ten-minute annealing at temperatures between 30% and 80% of the Curie temperature [37] on two types of piezo-material samples (one "soft", one "hard", i.e., with higher and lower domain wall mobility, respectively [38]) yielded clear indications of performance degradation for both types. The d_33 piezoelectric charge coefficient showed an increasing drop in value with increasing annealing temperature compared with the material before annealing. The drop was more significant for soft than for hard piezoelectric materials. This behavior was interpreted as an increasing ageing effect of the piezoelectric charge constant d_33, i.e., the polarization generated in a piezoelectric material per unit of mechanical stress applied parallel to the polarization direction. Simultaneously, relaxation times for the polarization decreased with increasing annealing temperature as well (Figure 3).

Figure 3. Data from [37], indicating increased ageing rates and decreasing relaxation times with increasing annealing temperatures, respectively.

In operational environments without climate control, there are daily and seasonal temperature and humidity variations. For aircraft, as an example, operational temperatures vary between about −50 °C and +70 °C within a few hours, and the respective relative humidity ranges between a few and 100 percent. In space, operational temperatures vary even more, but there are no humidity effects. On the one hand, temperature variations may yield thermal stresses in the transducers from differences in the coefficient of thermal expansion, leading to fatigue damage; specifications for commercial transducers define the allowable operating temperature range in order to mitigate this. On the other hand, temperature variations may directly affect the sensitivity of the PZT element. There are two mechanisms. The first is temperature-induced depolarization of the PZT, resulting in lower sensitivity. The second mechanism is pyroelectricity, inducing spurious signals in the PZT elements directly [13]. Pyroelectricity usually does not degrade the transducer performance, but non-relevant signals in the SHM data may yield "big" data sets requiring higher computational power, more data storage, and more elaborate analysis.

Variable mechanical loads may cover a broad range of frequencies and amplitudes. Load types can be tensile, compressive, shear, or combinations thereof. Impact loads, high acceleration or deceleration outside specified operating limits, or test object failure can also induce damage in the transducers. Examples of impact loads by foreign objects are hail, bird strikes, falling rocks or trees, or tool dropping. Improper set-up or operation, e.g., causing collisions with vehicles or other moving structures, is also feasible. Depending on the location of the SHM system, unintentional contact by people or vandalism may also result in damage, e.g., in the SHM of civil engineering structures accessible to the public.
There are special operating environments, among them nuclear installations (e.g., nuclear power plants or particle accelerators for research or health treatments), structures in space, and processing facilities handling potentially explosive materials or yielding such materials as products or waste. The latter comprise, e.g., oil or gas production facilities and refineries, chemical production plants, and processes yielding large amounts of fine-grained powders or dust particles (e.g., mills with grinding or abrasion processes). Fire is also a potential hazard in many of these facilities. Underground installations or spaces, e.g., tunnels or mines, often hold mechanical loads, temperature, and humidity roughly constant, but may yield damage to SHM transducers and systems by exposure to corrosive media. Earthquake-prone locations require special attention if the SHM system is expected to monitor structures during or after seismic events of a specified magnitude [39,40].

Experience Published from Long-Term Monitoring with Piezoelectric Transducers

A comprehensive review recently compiled published experience from long-term SHM with piezoelectric transducers [32]. With "long-term" defined loosely as continuous monitoring periods of about half a year or more, a search was made for quantitative service life data of the components of the SHM system. The main conclusion of this review was that only scant quantitative data on the performance of SHM systems and piezoelectric transducers are publicly available so far. Some SHM service providers have likely accumulated experience on damage and failure of transducers and other components in various operational environments. However, even if documented, failure listings are usually not published. One exception is a bridge monitoring report on Acoustic Emission [41], which lists all component damage and failures in detail. A questionnaire distributed within the Committee on Acoustic Emission of the German Society for Nondestructive Testing in 2021 (see [32] for a detailed discussion) yielded further information. The limited number of responses received so far (six) may be too small for statistically relevant conclusions, but they nevertheless provide valuable indications of critical issues.

One issue identified in the questionnaire is the significantly reduced sensitivity of PZT transducers operated at an elevated temperature (around +110 °C), but still within the specified operating range for the sensor type. The observation came from a short-term preliminary experiment for defining the maximum allowable sensor distance (Figure 4). Another case of observed transducer failure was torn signal transmission cables or mounting devices, likely due to unintentional contact by third-party people. This indicates that transducer mounting and signal transmission require appropriate design and installation. Controlling and limiting access to the SHM system, if feasible, is likely the best mitigation approach. Other damage and failures reported in the questionnaire consisted of several cases of component failure, either in the data acquisition, data storage, or data analysis equipment (e.g., computer screen failure, power source failure), sometimes leading to a loss of data. It is hence recommended to set up data acquisition, data storage, and analysis equipment for SHM performed outdoors in a cabin or container providing protection against environmental effects and limiting access to authorized personnel, as discussed, e.g., by [42].

After the publication of the review [32], additional examples of long-term SHM were found, see, e.g., [43][44][45]. Examples of commercial bridge monitoring in Germany, running for up to six years now (Thalaubachtal Bridge since 2017), are noted in a summary presentation (in German) at a technical meeting [46]. The listing also includes a project planned from 2022 to 2030 with 248 Acoustic Emission sensors monitoring a highway tunnel in Munich (Germany). The continuous SHM of bridges is likely the fastest growing monitoring application. However, it may be limited by resources, both with respect to the availability of measurement systems and technical personnel for installation and maintenance, but also by a potential lack of civil engineers with sufficient know-how for the interpretation of the various SHM data. No problems with the SHM systems induced by the operating environment were reported for the monitoring projects in [46]. However, this does not necessarily mean that none occurred. Publishing documented company-internal problem reports on damage or failure of all components of SHM systems, as well as systematically collecting such data during the operation of current SHM projects,
would be essential for improving the performance of future SHM systems. Mitigation of Effects from Variable Service Environments This section discusses mitigation of the major ambient conditions affecting the performance of piezoelectric transducers in the long-term, i.e., temperature variations and mechanical loads.Before discussing the different environments, two general mitigation procedures, namely preventive and predictive maintenance that apply to a wide range of operational environments, are worth mentioning.Predictive maintenance is a subset of preventive maintenance, for which data from the SHM systems' performance enable pre-dicting the optimal time for maintenance action.Signal-to-noise ratio and "noise" signals from non-relevant sources are general aspects in all applications of SHM and, hence, are discussed here as well.Selected examples of other environments with specific conditions and combinations are electromagnetic interference, nuclear radiation exposure (in nuclear facilities and in space), potentially explosive environments (oil, gas and chemical industries, and food and wood processing plants), underground facilities (mines and tunnels) and, finally, perspectives for piezoelectric energy harvesting. Preventive and Predictive Maintenance Preventive maintenance without a predictive tool is practiced by one company answering the questionnaire of the German Society for Nondestructive Testing [32].Transducers (piezoelectric and others) and certain components of the measurement chain are replaced regularly after a service duration defined by experience (if possible from the respective operational environment), but are essentially independent of their effective remaining service life.This has the advantage that availability of human resources and of material or components for maintenance is projectable.Further, the schedule is adaptable to the production or operating cycles of the clients' organization.During the down-time, software updates can be installed and a full operational check of the SHM system can be performed.This guarantees a high technical availability of the SHM system and minimizes the probability of unexpected failures at a comparatively lower cost.The definition of the maintenance intervals, of course, requires experience with the specific ambient conditions, in order to minimize the related hardware and personnel cost.Important aspects for this are the accessibility to the monitoring site and modularity of the SHM system.These do have implications for designing the SHM system and planning the client specific set-up, see, e.g., [32].Nuclear installations, discussed in more detail below, are one example where preventive maintenance may make sense. 
Predictive maintenance, see, e.g., reviews by [47][48][49], as a special case of preventive maintenance, also aims at achieving a high technical availability of the objects monitored by SHM.There are several approaches for predicting the time at which equipment maintenance is "best" performed."Best" is typically an optimization criterion based on technical and/or cost considerations.Predictive models and, more recently, digital twins, also in combination with artificial intelligence, play increasingly important roles in this, see, e.g., [49][50][51].The effort for implementing predictive maintenance approaches is higher than that for preventive maintenance, since the SHM data evaluation includes continuous or periodic comparison with defined limit parameters, predictive models, or digital twins.Such limits, models and digital twins have to be developed, validated, implemented, periodically assessed for their performance, and adapted, if necessary.There is copious literature on this topic.Predictive maintenance is a highly promising and possibly the most effective mitigation process for avoiding problems induced by the operational environment during long-term SHM. Signal-to-Noise-Ratio and Signals from Non-Relevant Sources Spurious signals from pyroelectricity induced by temperature variations are one example of signals from non-relevant sources.In any case, it is necessary to identify signals coming from sources not related to degradation and damage in the monitored structures.Once identified, removing them from the data set is desirable, either during acquisition or later in the signal analysis.Preferably, this shall be a first step in the analysis or even implemented as so-called front-end filter in the data acquisition, in order to reduce the computational effort in subsequent signal analysis.However, this only works if the nonrelevant signals are identified unambiguously and clearly separated from the relevant signals, indicating changes in the monitored objects.Increasing the computational power handling of large signal acquisition rates may soon allow for sufficiently fast and reliable analysis, eliminating non-relevant signals on-line during acquisition. Signal noise reduction, i.e., improving the signal-to-noise ratio if there is either stochastic or continuous noise present is different from the identification and elimination of the discrete, so-called burst-type signals coming from non-relevant sources, see, e.g., [52,53].Sufficient signal-to-noise ratio is essential, hence, noise sources have to be identified and eliminated, or at least mitigated to acceptable levels.Signal noise reduction is discussed in many publications, see, e.g., [54][55][56].Such procedures can be implemented into the data analysis.However, whenever feasible, elimination of the noise sources is always the best mitigation approach.The variation of temperature may induce thermal stresses in materials and components as discussed above.Hence, both the transducers and the objects monitored may show effects from that. 
Figure 5 shows an example of a field test with acoustic emission on an agricultural silo made from GFRP.With a height of about 10 m, the daily difference of the thermal expansion in height between the side heated by sunlight and the back side shielded from direct exposure to the sun amounted to about 10 mm (measurements were performed in Spring and early Summer).Depending on the size of the monitored object and the variation in operating temperature, such differences may affect the sensitivity of the transducer array, especially for highly damping materials.Indeed, the respective movements may affect the mounting devices and the coupling of the transducers.For transducers, the mechanical stresses induced by temperature variations within their specified operating temperature range and possibly specified rates of temperature change are likely tolerable, leading to "normal" ageing.Effects of temperature on the piezoelectric material, however, are not limited to mechanical stresses.Electric effects, as noted above, may reduce the performance of the transducers and require mitigation if the performance reduction is significant. Mitigation of Temperature Effects on Piezoelectric Transducers Higher operating temperature or temperature variations of the monitored object or its ambient reducing transducer sensitivity require lower distances between transducers if mounted in an array on structures, such as those noted above. The basic mitigation approaches against temperature-induced effects are (1) removing the PZT-transducers from the elevated or variable temperature environment, or (2) implementing piezoelectric transducers with improved temperature-tolerant design, specifically with piezoelectric materials with a higher Curie-temperature. For monitoring objects operated at elevated temperatures, where surface temperatures exceed the specified transducer operating range or would induce rapid performance degradation, the approach used most often are so-called waveguides.Often, these are rodlike metal components with contact plates at both ends, one fitting the structure, the other the transducer.Depending on their shape, waveguides may act as a kind of filter for certain wave-modes.A modelling study [57] explored different shapes, sizes and materials for waveguides and included temperature effects as well.One conclusion was that temperature effects, at least for the range of waveguides investigated, were not significant.The model is useful for simplifying the selection waveguide shapes and materials. Piezoelectric materials for transducers with higher T C less affected by elevated temperature environments have been reviewed by, e.g., [58,59].A disadvantage of these materials is often lower performance compared with PZT.Availability of commercial transducers made with alternative sensing materials may be limited, however, government regulations banning the use of lead in PZT-transducers [32] may initiate the wide-spread development of alternative piezoelectric materials in the near future. Piezoelectric transducers for low-temperature applications have received scant attention so far.Some commercial transducers, specified for operation at low temperatures, perform even down to liquid nitrogen at 77 K, or liquid helium at 4 K.However, there is no long-term experience published yet.In [60], the results after 20 cooling cycles showed no significant change in sensitivity within the experimental scatter.Until Spring of 2023, the sensor had accumulated roughly 100 cycles [61]. 
Theoretically, approaches for implementing transducers protected in some type of thermally shielded box, or even with active climate control with a suitable contact surface acting as thermal barrier, in principle, seem feasible.However, to the best knowledge of the author, such devices are not reported nor discussed in the literature.Possibly, the development effort and cost for such a device, as well as its operation, are prohibitive compared with alternative solutions. Alternative Approaches without Piezoelectric Transducers Of course, mitigation of temperature effects on piezoelectric transducers is also feasible by implementing transducers based on alternative measurement methods.Optical fiberbased SHM systems are less sensitive to temperature and insensitive to electromagnetic interference, see, e.g., [62].Optical fiber-based transducers for Acoustic Emission at elevated temperatures have been developed, e.g., for measurements of metal corrosion processes by [63,64].Non-contact measurements for SHM are also an option, see, e.g., [65][66][67][68].Laser-based methods, e.g., scanning laser vibrometry, and of "structured light", e.g., DIC, Shearography, or thermography or laser ultrasound excitation and measurement for noncontact methods, are available for many SHM applications.Non-contact optical methods may require specific surface qualities, e.g., limited roughness and sufficiently constant reflectivity, see, e.g., [69], or pose limits on shape, e.g., no "sharp" edges/corners or no complex curved shapes inhibiting full view.Therefore, which method may provide the "best" solution for SHM requires careful consideration. Mechanical Loads Mechanical loads may act on the transducers and their mounting devices directly or on the test objects, which then transmit the loading to the transducers.Such loads may also act on other components of the measurement chain and induce damage or failure, e.g., in signal and power transmission lines, in data acquisition and data storage equipment, or in power production or supply components. In the survey of the German Committee on Acoustic Emission, the transducers seem less affected than the other parts of the measurement chain [32].Mitigating the impact effects of foreign objects on transducers and transducer mounting devices is feasible by choosing transducer locations that are less likely hit or by suitable mounting devices protecting the transducers.Transducer holders with permanent magnets provide constant contact pressure on magnetic test objects, e.g., steel pressure vessels, but may yield lower forces at elevated temperatures [32].Another question is the choice of coupling agent between transducer and text object surface; mounting with an adhesive seems to be the preferred option for long-term SHM depending on the test object, e.g., conical transducers mounted without any coupling agent avoided ageing effects in the coupling medium [32]. 
Special cases of mechanical loads come from natural hazards, e.g., earthquakes, hurricanes, tsunamis, landslides, flooding, or falling rocks. Earthquakes, depending on their magnitude, are either hardly felt or can cause anything between insignificant damage and large-scale collapse of infrastructure. Debris from failing infrastructure or from the surroundings, caused by any of the above hazards, can hit monitored objects and SHM systems, even if the test objects themselves remain essentially intact. Critical infrastructure resilience in general, as discussed in [70], comprises aspects of robustness, redundancy, resourcefulness, and rapidity. The complexity and interdependency of modern infrastructure [40] make mitigating damage from natural hazards difficult. Besides the damage due to mechanical loads, the SHM system operation may be affected by power failures, interruption of signal lines, or telecommunication problems.

For embedded piezoelectric transducers, such as AFC, MFC, or PWAS, different "packaging" of such devices (Figure 6) has yielded improved performance, both with respect to quasi-static and cyclic fatigue loads with strains (e.g., typical engineering strains up to about 0.25%) exceeding the failure strain of the piezoelectric materials (clearly below 0.2%) [71,72]. However, not all approaches worked as expected; e.g., silicon rubber packaging resulted in uneven thickness and in high attenuation, since no sensor response was observed during testing. Long-term SHM applications with such devices, to the best knowledge of the author, have not been reported yet. On the one hand, embedding transducers inside structures provides some protection against effects from the operating environment; on the other hand, it may make replacement or repair more difficult. Mounting transducers on easily accessible surfaces and protecting them by suitable covers may be preferable. However, as noted in [73], the cover or the mounting devices shall not interfere with signal propagation paths or, in general, reduce transducer sensitivity below acceptable levels.

Pressure waves propagating in air may induce structural waves in the monitored objects and hence in the measurement chain. Even if this does not cause damage to the transducers and the measurement chain, it may yield large amounts of non-relevant signals in the data files. Such air-coupled ultrasound [74,75] may come from many different ambient sources. Examples are dropping objects, e.g., tools, traffic noise, and operations or processes nearby creating audible and ultrasonic noise. Even clapping hands close to the setup excited signals in the piezoelectric transducers, as observed by the author. Of course, this only happens if the source mechanisms produce waves of sufficient intensity with frequency content above the high-pass frequency threshold of the preamplifier or of the data acquisition channel. Hence, running the data acquisition and recording typical ambient noise and non-relevant signals before starting the SHM is essential. Such data then allow for defining the threshold and/or the frequency filter range. This may mitigate the problem to some extent, but will not successfully eliminate all such signals in all cases. A disadvantage of such front-end filters is the potential loss of "real" damage signals with characteristics similar to signal noise or to signals from non-relevant sources. Service environments with ambient noise that varies strongly or changes slowly with time during long-term SHM may require periodic analysis for the identification of changes in signal noise or in the sources of non-relevant signals. Analogous to mitigating temperature effects as discussed above, considering non-contact SHM can also mitigate the effects from mechanical loads acting on the test objects.

Electromagnetic Interference

Many fracture phenomena in materials are known to yield short-term electromagnetic emissions [76,77]. Short-term variable electromagnetic fields present in many SHM environments (e.g., from turning on electric power for heavy machinery or lights) usually do not induce damage in the piezoelectric transducers [17,18]. Such fields, however, may yield spurious, non-relevant signals in piezoelectric transducers. Full electromagnetic shielding of the transducers and of the signal transmission lines is difficult, especially for high-frequency fields (MHz to GHz). Identifying such signals prior to testing and eliminating them by front-end filters or in the data analysis is likely the best approach, see, e.g., [54,56].
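As an illustration of such a noise-based front-end filter, the sketch below (an assumption-laden example, not a procedure from the cited works) band-passes each recorded waveform in an assumed transducer frequency range and discards signals whose filtered peak amplitude stays below a threshold derived from pre-recorded ambient noise. The sampling rate, pass band, and factor k are placeholders that would be set from the pre-test noise recordings.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def make_frontend_filter(noise_records, fs, band=(100e3, 400e3), k=5.0, order=4):
    """Build a simple front-end filter from pre-recorded ambient noise: a band-pass in
    the transducer's frequency range plus an amplitude threshold set to k times the
    RMS of the band-pass-filtered noise."""
    b, a = butter(order, band, btype="band", fs=fs)
    noise_rms = np.mean([np.sqrt(np.mean(filtfilt(b, a, r) ** 2)) for r in noise_records])
    threshold = k * noise_rms

    def accept(waveform):
        """Return (keep?, filtered waveform): keep only signals whose filtered peak
        amplitude exceeds the noise-based threshold."""
        filtered = filtfilt(b, a, waveform)
        return np.max(np.abs(filtered)) > threshold, filtered

    return accept

# Hypothetical usage: 1 MHz sampling, a few ambient-noise records, then screening hits.
fs = 1_000_000
rng = np.random.default_rng(0)
noise_records = [1e-3 * rng.standard_normal(4096) for _ in range(5)]
accept = make_frontend_filter(noise_records, fs)
t = np.arange(4096) / fs
burst = 1e-3 * rng.standard_normal(4096) + 0.05 * np.sin(2 * np.pi * 200e3 * t) * np.exp(-t * 2e4)
print(accept(burst)[0])                              # -> True: burst stands out of the noise
print(accept(1e-3 * rng.standard_normal(4096))[0])   # -> False (very likely): noise rejected
```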
Electrostatic discharges with the potential to damage electronic chips are likely more relevant for data acquisition, data storage and computers than for the transducers. A potential exception may be transducers with integrated preamplifiers. Measures for preventing loss of data or of test control by such mechanisms, as well as the reliability of electronic components and equipment in general, are discussed, e.g., in [78]. Lightning striking the monitored object directly or objects nearby is potentially detrimental, but to the best of the author's knowledge, such cases are not found in the published literature.

Electromagnetic interference or discharges may also affect wireless electronic signal transmission devices. Reliability assessment comparing the long-term performance of cable-based versus wireless signal transmission is necessary for deciding which of the two is better suited for a specific SHM application. Depending on the power consumption of the SHM system and the storage capacity, respectively, it may be advisable to combine the batteries of wireless devices with local energy harvesting devices. Energy harvesting modules based on piezoelectric elements for power generation are discussed in more detail below. Of course, other types of energy sources may be necessary depending on the power consumption requirements of the SHM system or the specific application.

Nuclear Facilities

The best-known examples are nuclear power plants; others are nuclear or particle research facilities (e.g., accelerators) or medical radiation therapy equipment. The major limitation for mitigation measures performed by personnel, such as repair or replacement, is due to radiation safety regulations. Depending on the radiation levels, access during operation may be prohibited for humans or at best feasible for short time intervals. The use of robots for maintenance and monitoring is feasible [79][80][81], but may also be limited in high nuclear radiation environments, depending on the design of the electronic robot control.

In [18], high-temperature and nuclear radiation effects on PWAS are reviewed, and the conclusion is that no significant changes in the microstructure of the PZT material were found after either exposure type. However, temperature effects occurred in the frequency characteristics, e.g., in resonance and anti-resonance frequencies. Nevertheless, piezoelectric transducers appeared suitable for SHM in harsh, high-temperature and nuclear radiation environments. Other research, however, reports depolarization effects and sensitivity reductions in piezoelectric materials after various irradiation exposures, see, e.g., [82][83][84]. Likely, the SHM of nuclear facilities will benefit from preventive or, if sufficiently developed, predictive maintenance. Essentially, this is due to the limited access during operation, with scheduled shut-offs as the only option for access to the SHM system for maintenance.
Potentially Explosive Operating Environments

A special and partly regulated class of operating environments comprises facilities or processes that produce gases or small particles (e.g., dust, powders) with the potential to explode under certain ambient conditions. Regulations by the European Union (EU) go by the designation "ATEX" (from the French "Atmosphères Explosives"); on the international level, the corresponding framework is the "IECEx System" ("International Electrotechnical Commission System for Certification to Standards Relating to Equipment for Use in Explosive Atmospheres"). Examples are oil and gas exploration, production, transport and refining processes [85,86], chemical plants [87,88], but also sugar production plants [89,90] and wood-processing facilities [91,92]. ATEX documents provide a classification of potentially explosive environments and specify the requirements for measurement technology deployed and operated in such environments. IECEx certificates confirm conformance with the respective requirements; however, these are not mandatory per se, different from ATEX requirements within the EU. Commercial manufacturers offer transducers, signal transmission and data acquisition equipment conforming to these regulations, see, e.g., [93][94][95]. Implementation of such equipment for SHM in potentially explosive environments will essentially eliminate the risk of explosions induced by the SHM system. Of course, this does not necessarily mitigate all damage to the transducers or the SHM system by explosions or fires caused by other mechanisms.

Space Applications

Space, as an operating environment, comprises different ranges. There are so-called Low Earth Orbits (LEO), defined as altitudes below 1000 km with a lower limit around 160 km [31]. Further, geosynchronous orbits (GEO, around 36,000 km) lie between medium and high earth orbits (MEO and HEO); all are subject to different operating environments [96]. Deep Space, on the other hand, is defined by NASA as extending beyond our moon [97], or, by the International Telecommunication Union, as starting at least two million kilometers from Earth [98].

According to [31], the LEO space environment consists of vacuum with elemental and molecular gases or plasmas, ultraviolet and ionizing radiation, electromagnetic fields, solar flux, and (micro-)meteoroids. In [96], the authors state (cite) "The dominant environmental components and their effects on spacecraft in different orbits, i.e., the geosynchronous orbit (GEO), the low earth orbit (LEO), the medium earth orbit (MEO), and the high earth orbit (HEO), are investigated, respectively. The space environment that should be taken into particular consideration is summed up to facilitate the design of the spacecraft in a specific orbit. It can be seen that various space environmental components have different impacts on the spacecraft operation, which could lead to numerous anomalies. It is noticeable that the specific environment analysis for different orbits is the very demanding basis of spacecraft maintenance".

The complexity and variability of the space environment and the interaction between environmental factors make modelling potential effects on satellites and SHM systems extremely difficult, even if data for specific environmental parameters, such as temperature or irradiation, are available from experiments under controlled laboratory conditions. In [31], the authors emphasize (cite) ". . .
the necessity of a thorough understanding of the space environment for spacecraft designers and engineering performance issues that may arise from space environment exposure to materials. Flight experiments from the European Space Agency (ESA), Japan Aerospace Exploration Agency (JAXA), and the National Aeronautics and Space Administration (NASA) have been presented that focus on space environmental exposure of materials in order to design and operate future space systems successfully. In designing spacecraft, we should reflect on the results from these experiments and emphasize the importance of continuing to accumulate long-term measurement data of the space environment and its effects".

Access to space structures for in-service maintenance or repair is even more limited than in nuclear facilities and hence costly. Spacecraft in LEO were serviced or repaired during several space missions [99], but repair of satellites at higher altitudes is not yet feasible. Repairable spacecraft designs are under development [100], but so far, damage essentially has to be limited at the design stage. This implies a sufficiently robust design of all relevant components in order to achieve the expected service life. A prime example of robust design is the two Voyager spacecraft launched by NASA in 1977, which achieved the longest operation time of any deep space mission so far, reaching the edge of our solar system after 45 years in 2022 [101]. How simpler and less costly repairability, once available, will affect satellite design, and hence SHM in space, remains to be seen.

Mines and Other Underground Facilities

Salt mines are potential long-term storage sites for radioactive waste from nuclear power plants and other sources. They provide examples of long-term monitoring over many years [32,102]. Information on problems with transducers and other components of the SHM systems is scarce. The environment is typically rather dry at roughly constant temperature. The failures observed were caused by corrosive media damaging the preamplifiers. Mitigation includes the use of corrosion-resistant materials as far as possible and sealing of all connections.

Approaches and issues for monitoring car or railway tunnels are discussed in [103], based on experiments investigating the response of acoustic emission and vibration monitoring data during rock block collapse in a tunnel. The conclusions are (cite) "In order to apply the AE-Vibration joint monitoring in practical tunnel engineering, it is necessary to find the key blocks using detection equipment, which can be realized through existing technology and block theory. This study proposes to replace low-frequency microseismic sensors with high-frequency acoustic emission sensors to obtain signals from key blocks in the range of 5-10 m.". With respect to signals from non-relevant sources and alternative measurement methods, the authors note (cite) "The AE sensor must be installed above a certain height over the ground to effectively shield the noise signals from human activities and vehicles, which requires preparation before monitoring. Natural frequency monitoring can be performed by mounting a three-component wireless acceleration sensor on the surface of the key block or using a non-contact laser vibrometer for rapid testing. The latter is more convenient and can be operated at a certain distance". The publication, however, does not mention problems relating to long-term SHM.
Piezoelectric Energy Harvesting in Various Operating Environments

Piezoelectric devices, besides the transducer function for SHM, can also operate as energy harvesters [12,104,105]. For providing electric power to SHM systems or wireless transducer nodes, piezoelectric energy harvesters compete with other energy sources. A review [104] notes various energy sources, including photovoltaic, thermoelectric, piezoelectric, and radio frequency. Another review [106] on energy harvesters for railway applications notes multiple energy sources, including vibration, wind, solar, thermal, magnetic field and acoustic energy. The authors conclude that vibration energy harvesting has potential, but the effective performance depends on the structure and its design. For the long term, PVDF (polyvinylidene fluoride) may have advantages over PZT in spite of a lower harvesting performance [107]. A review of piezoceramic materials for energy harvesting [105] notes lead-free piezoelectric materials, e.g., BaTiO3, as a prospective choice, with recent improvements in piezoelectric properties promising better performance during service compared with PZT.

Most of the published information on such energy harvesters comes from research and development studies. To the best knowledge of the author, no literature on the long-term performance of piezoelectric energy harvesters documenting damage or failures is yet available. Recent developments in energy conversion and storage may change the perspectives for piezoelectric energy harvesting for SHM, but predicting these still remains difficult.

Discussion and Outlook

The range of effects on piezoelectric SHM transducers from the operating environment is quite broad. Identifying the causes of deterioration of transducer performance and choosing the appropriate mitigation approach requires periodic or continuous monitoring of the SHM system. An improved and more detailed analysis of such data, e.g., with data fusion and AI, will yield a better understanding of damage mechanisms and their relevant time-scales, providing a basis for the development of specific and cost-efficient mitigation measures. Improving the signal-to-noise ratio and detecting and eliminating signals from non-relevant sources in the analysis, rather than by preset filters before data acquisition, may be advantageous in strongly varying operational environments. Miniaturization of electronic hardware and simultaneous software developments have significantly increased the computing power of SHM systems and thus provided approaches for faster, more detailed and accurate signal analysis. This allows for efficient detection and elimination of non-relevant signals from other sources and for signal de-noising, thus improving the quality of the SHM data. However, miniaturization has also made the hardware more vulnerable and, thus, may reduce the long-term reliability of the equipment. Even if PZT transducers reliably operate for many years, hardware problems may require more maintenance or even replacement. It is questionable whether today's state-of-the-art hardware will reach a service life comparable to that of the Voyager spacecraft, still operating after 45 years under harsh deep space conditions.
As an outlook, a paper by Cawley and coauthors [108] deserves attention. Successful implementation of SHM requires what the authors call "closing the gap" between research and industrial application (cite): "Reasons for the slow transfer from research to practical application of structural health monitoring include lack of attention to the business case for monitoring, insufficient attention to how the large data flows will be handled and the lack of performance validation on real structures in industrial environments". The business case, i.e., the cost of SHM versus the potential financial loss in case of system failure, and the validation on real applications are key aspects. Besides the points summarized in the conclusion of [108], the following factors will likely play a role in future developments toward this goal. PZT currently is the main piezoceramic material used in transducers for SHM with structural waves or vibrations. However, the lead content is problematic for health and environmental reasons and there are plans to ban its use. Therefore, recent research efforts are aimed at the development of lead-free piezo materials with comparable performance. These new developments, independent of which materials will become the main replacement for PZT, first pose the problem that no experience on their long-term service performance is available. Further, many known lead-free piezoelectric materials are less efficient than PZT, especially in sensitivity. In addition, the Curie temperatures of the lead-free piezoelectric materials may be lower than that of PZT. Therefore, the proposed ban of PZT may make SHM methods with alternative measurement principles more attractive, both with respect to performance and cost.

Conclusions

Variations in temperature or elevated temperatures and mechanical vibrations or impact likely constitute the major effects causing degradation of the performance of piezoelectric transducers in long-term SHM monitoring. Placing the transducers in an environment at sufficiently low and, if possible, stable temperature mitigates the effects to some extent. "Sufficiently low" depends on the Curie temperature (T_C), i.e., the transition temperature, of the PZT type of the transducer. The larger the margin between maximum service temperature and T_C, the higher the probability is for the safe, long-term operation of transducers. Stable temperature environments will further reduce or even eliminate pyroelectric effects causing spurious signals. There are commercially available piezoelectric transducers operating at elevated and lower temperatures (with ranges specified by the manufacturers). For low temperatures, operational experience is still quite limited. Mitigation of mechanical effects requires suitable mounting devices for the transducers, providing adequate protection of transducers and of the signal transmission and power cables connected to them. Non-contact SHM with either laser- or imaging-based methods is on the way to becoming a competitor for piezoelectric-based SHM. The development of ever-increasing computational power at relatively low cost is one of the drivers. New inspection methods, e.g., employing drones or unmanned aerial vehicles (UAV), increasingly perform periodic SHM, and the potential of this technology is not yet fully explored [109]. Drones may be equipped with several complementary NDT methods, also providing coarse first surveys to be followed up by local inspection with higher resolution. Artificial intelligence methods look promising for fast image analysis
with a high probability of detection of (potential) damage. Preventive maintenance, i.e., exchanging SHM system components before the end of their service life, is one mitigation approach that results in high technical availability of the system. Implementing redundancy, e.g., additional transducers, is another approach. Preventive maintenance will profit from improved predictive models, e.g., digital twins, for defining the best time, the least effort, or the lowest cost for such actions. Predictive maintenance is a current research topic and definitely worthwhile to follow. Both approaches, however, imply higher cost. Non-contact inspection methods may provide solutions avoiding several of the problems related to piezoelectric transducers in variable service environments, with research in that area also being worthwhile to follow. In commercial SHM, the performance requirements versus the budget of the client essentially define the "optimal" solution for mitigation of any potential problems, independent of the technology. Designing SHM systems for easy and cost-effective maintenance and repair, as well as implementing self-monitoring, is crucial. Software "life-time" and the respective support are issues that also deserve attention for long-term SHM, especially with respect to support and upgrades for commercial codes. New regulatory interventions, e.g., analogous to that requiring lead-free transducers, may also affect SHM services in the future. Making SHM mandatory for critical structures or infrastructure and specifying probabilities of detection for damage may have a significant impact on technology developments and the related cost. Therefore, the perspectives of the different SHM technologies are difficult to predict in the long term.

Figure 2. Examples of flexible AFC or MFC mounted directly on curved surfaces without waveguides from the authors' laboratory: (top left) AFC mounted on a CFRP strut for compensation of thermal expansion; (top right) AFC mounted on an aluminum pipe for leak monitoring [35]; (bottom) MFC mounted on a glider plane landing wheel gear for electro-mechanical impedance (EMI) monitoring [36].

Figure 3. Thermal ageing effects from 10 min annealing on piezoelectric PZT material, graphical presentation of data in tables 1 (top) and 2 (bottom) in [37], indicating increased ageing rates and decreasing relaxation times with increasing annealing temperatures, respectively.

Figure 4. Thermal effects on PZT transducers mounted on a steel plate from lead-pencil breaks discussed in [32] (data courtesy of M. Löhr).

Figure 5. SHM field-test with acoustic emission near the manholes of a GFRP agricultural silo (photos from an advertisement flyer of the former Polymer Composites Laboratory at Empa); daily variation of exposure to sunlight yields height differences between the sun-exposed and shaded side of about 10 mm.

4.3.2. Mitigation of Temperature Effects on Piezoelectric Transducers: Higher operating temperatures or temperature variations of the monitored object or its ambient, reducing transducer sensitivity, require lower distances between transducers if mounted in an array on structures such as those noted above.
Figure 6. Packaging approaches for piezoelectric wafers and AFC: (left) wafer packaged in plain or pre-stressed thin polymer films (showing the specimen label and cracks labelled by "b" in red) [71]; (middle) AFC packaged in silicone rubber; (right) AFC packaged between pre-stressed carbon fiber laminates [72].
12,339.6
2023-09-01T00:00:00.000
[ "Engineering", "Materials Science" ]
Collaborative Filtering Recommendation Algorithm Based on TF-IDF and User Characteristics: The recommendation algorithm is a very important and challenging issue for a personal recommender system. The collaborative filtering recommendation algorithm is one of the most popular and effective recommendation algorithms. However, the traditional collaborative filtering recommendation algorithm does not fully consider the impact of popular items and user characteristics on the recommendation results. To solve these problems, an improved collaborative filtering algorithm is proposed, which is based on the Term Frequency-Inverse Document Frequency (TF-IDF) method and user characteristics. In the proposed algorithm, an improved TF-IDF method is first used to calculate the user similarity on the basis of rating data. Secondly, the multi-dimensional characteristics information of users is used to calculate the user similarity by a fuzzy membership method. Then, the above two user similarities are fused based on an adaptive weighted algorithm. Finally, some experiments are conducted on a public movie data set, and the experimental results show that the proposed method has better performance than the state of the art.

Introduction

With the advent of the big data era, information on the Internet has grown exponentially. People have entered the era of information explosion from the past when information was scarce. However, most of this massive amount of information is worthless. The information explosion has made it more and more difficult for people to obtain valuable information from the Internet [1]. To improve the efficiency of production and life, people need information filtering technologies to filter out useless information. Recommender systems are software tools and techniques providing suggestions for items which are useful to a user. As one of the effective information filtering tools, the personalized recommendation system can help users efficiently obtain information that meets their needs when their needs are unclear [2]. The core of a personalized recommendation system is the recommendation algorithm, which mainly includes the content-based recommendation algorithm, the collaborative filtering recommendation algorithm, and the hybrid recommendation algorithm [3,4]. Among them, because of its high efficiency, accuracy, and personalization, the collaborative filtering recommendation algorithm has become one of the most effective and widely applied recommendation algorithms [5]. For example, Nakagawa and Ito [6] proposed a recommendation system which can recommend interesting document files to users by collaborative filtering. Yu et al. [7] presented the application of a collaborative filtering algorithm in the field of E-commerce. Park et al. [8] presented a fast collaborative filtering algorithm with a k-nearest neighbor graph. Wu et al. [9] used a collaborative filtering algorithm to improve the prediction accuracy of large-scale recommendation systems. Bartolini et al. [10] implemented a personalized recommendation system. Although the collaborative filtering algorithm has been widely used, there are still some problems such as data sparsity, cold start, and information expiration, etc. [11]. To solve the problems above, a series of improvements based on the traditional collaborative filtering algorithm were made and achieved some success. For example, Piraste et al. [12] alleviated the sparsity and cold start problems of the matrix using the film type label and director genre.
Kumar et al. [13] used matrix decomposition technology to reduce the dimension of the matrix and improve the accuracy of the recommendation results. Sun and Dong [14] proposed a dynamic time drift model considering the influence of user interest changes on similarity in different time periods. Wang et al. [15] proposed a collaborative filtering algorithm combining the KNN model and the XGBoost model. Zarzour et al. [16] presented a new effective model-based trust collaborative filtering to improve the quality of recommendation. In addition, there are some collaborative filtering algorithms based on clustering [17], neural networks [18], and various probability models [19]. The above studies optimized the recommendation model to a certain extent and improved the accuracy of the recommendation results, but there are still some problems to be further studied. For example, most of the existing collaborative filtering algorithms only consider the rating information among users, but ignore the user characteristics and the impact of popular items on user similarity, which leads to poor recommendation results. To further improve the accuracy of recommendation, a collaborative filtering algorithm based on the TF-IDF method and user characteristics is proposed in this paper. In the proposed method, both the rating information and the characteristics of the users are fully considered. The contribution of this paper can be summarized as follows: (1) Based on the rating data, the TF-IDF method is used to calculate the user similarity matrix, to penalize the impact of popular items on user similarity and to improve the ability to mine unpopular items. (2) The user characteristics are fully considered in the proposed method and are used to calculate user similarity based on a fuzzy membership function, to deal with the cold start problem by combining user characteristics information from different dimensions. (3) An adaptive weighted algorithm is presented to fuse the two kinds of user similarities obtained in the above two steps, to form a new comprehensive user similarity for the recommendation algorithm. At last, experiments are carried out on real data sets to evaluate the accuracy of the proposed recommendation model. Experimental results show that the proposed algorithm is better than the state-of-the-art algorithms in accuracy. This paper is organized as follows. Section 2 gives an overview of related work. The proposed algorithm is presented in Section 3. Section 4 provides the experiments and results analysis. Discussions on the parameters and performance of the proposed algorithm are carried out in Section 5. Section 6 gives the conclusions.

Related Work

The basic idea of the collaborative filtering algorithm can be simply summarized as recommending items of interest to target users who have similar interests [20,21]. As shown in Figure 1, the collaborative filtering algorithm is mainly divided into three steps, namely establishing the user-item rating matrix, finding other users with similar interests to the target users, and finally making recommendations by rating prediction based on similar users. Traditional collaborative filtering (CF) algorithms are mainly divided into user-based collaborative filtering (UCF) and item-based collaborative filtering (ICF) (see Figure 2). There are many improvements to collaborative filtering recommendation algorithms that aim to solve the data sparsity and cold start problems.
These existing methods give a good research basis for the recommendation system. In this paper, the user-based collaborative filtering algorithm is focused on, which is more suitable for recommending the favorite items of groups with similar interests, and whose recommendation results are more social. The proposed collaborative recommendation algorithm is similar to existing TF-IDF-based methods. However, there are many differences between the proposed method and those existing methods. In the proposed method, the TF-IDF method is applied to rating data, and the user characteristics are fused to optimize the user similarity and improve the accuracy of rating prediction. It is different from those methods that use a time-dependent similarity measure to compute the user similarity without considering user characteristics [22]. It is also different from those methods that directly calculate the user similarity through the TF-IDF method [23].

The user-based collaborative filtering algorithm first needs to calculate the similarity between the target user and other users. Then, some users with high similarity are selected as the nearest neighbor set. Finally, for the items rated in the neighbor set, the ratings of the target user are predicted. The main process of the traditional user-based collaborative filtering algorithm is described as follows.

Data Preprocessing

Suppose the data set of a recommender system is D = {U, I, R}, where U = {u_1, u_2, ..., u_m} is the user set of the system, I = {i_1, i_2, ..., i_n} is the item set of the system, and R is a user-item rating matrix. For a data set with m users and n items, the data are preprocessed to obtain an m × n user-item rating matrix R = (r_ij), where r_ij represents the rating of user u_i for item i_j.

Similarity Calculation

In the recommender system, there are three main methods used to calculate the similarity between two users: the cosine similarity, the adjusted cosine similarity, and the Pearson similarity [24]. In this study, the Pearson similarity is used, which is calculated over the items rated in common by any two users:

Sim(u, v) = Σ_i (R_u,i − R̄_u)(R_v,i − R̄_v) / ( √Σ_i (R_u,i − R̄_u)² · √Σ_i (R_v,i − R̄_v)² ),

where the sums run over the items rated by both users, R_u,i and R_v,i represent the ratings of user u and user v on the i-th item, respectively, and R̄_u and R̄_v represent the average of all the ratings of user u and user v, respectively.

Generate Recommendation Set

Before rating prediction to generate recommendations, it is necessary to determine the target user's similar neighbor set. A similar neighbor set refers to the set of users who have similar preferences to the target user. In the recommendation system, the K most similar users are usually selected as the nearest neighbor set to form the similar neighbor set of the target user [25]. After the neighbor set of the target user is selected, the neighbors' ratings of the items and the similarities between the users are combined to predict the target user's ratings on the test set. The rating prediction P_u,i, i.e., the predicted rating of user u for an unknown item i, is computed from the ratings of the most similar users weighted by their similarity to u [26], where S(u, K) is the set of K users most similar to user u and N(i) represents the set of users who have rated item i. After rating prediction, the N items with the highest predicted ratings are selected from the predicted rating set as the recommendation results for the target user, and the recommendation process ends [27].
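For concreteness, the following minimal Python sketch illustrates the traditional user-based CF pipeline described above: Pearson similarity over co-rated items, selection of the K nearest neighbors, and similarity-weighted rating prediction. The dense rating matrix with 0 denoting "not rated", the value of K, and the mean-centered form of the prediction are illustrative assumptions rather than the exact formulation of the cited works.

```python
# Minimal sketch of user-based collaborative filtering with Pearson similarity.
# Assumptions: ratings is a dense (num_users x num_items) array with 0 = "not rated";
# the mean-centered prediction is one common variant, used here for illustration.
import numpy as np

def pearson_sim(ratings, u, v):
    """Pearson similarity between users u and v over their co-rated items."""
    both = (ratings[u] > 0) & (ratings[v] > 0)
    if both.sum() < 2:
        return 0.0
    mean_u = ratings[u, ratings[u] > 0].mean()
    mean_v = ratings[v, ratings[v] > 0].mean()
    du, dv = ratings[u, both] - mean_u, ratings[v, both] - mean_v
    denom = np.sqrt((du ** 2).sum()) * np.sqrt((dv ** 2).sum())
    return float((du * dv).sum() / denom) if denom > 0 else 0.0

def predict(ratings, u, i, K=35):
    """Predict user u's rating of item i from the K most similar users who rated i."""
    sims = np.array([pearson_sim(ratings, u, v) for v in range(len(ratings))])
    sims[u] = -np.inf                                   # exclude the target user
    rated_i = ratings[:, i] > 0
    neighbors = [v for v in np.argsort(-sims)[:K] if rated_i[v] and sims[v] > 0]
    mean_u = ratings[u, ratings[u] > 0].mean()
    if not neighbors:
        return mean_u
    num = sum(sims[v] * (ratings[v, i] - ratings[v, ratings[v] > 0].mean())
              for v in neighbors)
    den = sum(abs(sims[v]) for v in neighbors)
    return mean_u + num / den

# Tiny example: 4 users x 5 items, predict user 0's rating of item 4.
R = np.array([[5, 3, 0, 1, 0],
              [4, 0, 0, 1, 2],
              [1, 1, 0, 5, 4],
              [5, 3, 4, 1, 0]], dtype=float)
print(round(predict(R, u=0, i=4, K=2), 2))   # 2.67
```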
Proposed Method

As introduced in Section 2, the traditional user-based collaborative filtering algorithms usually only use the user's rating information, but ignore the impact of other aspects of user information and of popular items on user similarity. To deal with these problems, an improved collaborative recommendation algorithm (denoted ICFTU) is proposed by combining the Term Frequency-Inverse Document Frequency (TF-IDF) method and a user characteristics model. The overall framework of the proposed algorithm is shown in Figure 3, which has three main parts, namely the improved TF-IDF-based method, the improved user characteristics model, and the proposed fusion strategy. The proposed method is presented as follows.

Improved TF-IDF Based Method

The traditional collaborative filtering algorithm calculates the user similarity matrix based on the user's rating data of items, which is easily affected by popular items. For example, "Shawshank Redemption" is a very good movie. If users A and B both gave the movie "Shawshank Redemption" 5 points, the traditional collaborative filtering algorithm will come to the conclusion that users A and B have high similarity. However, this is not necessarily the case. As we know, the same behavior of users on popular items does not mean that they have similar interests. On the contrary, if two users have shown the same behavior on unpopular items, it is more likely that their interests are similar. For example, if users A and B have both watched movies that relatively few people watch, such as musicals, then they can be considered to have similar interests. Therefore, in order to eliminate the impact of popular items on the user similarity, the Term Frequency-Inverse Document Frequency (TF-IDF) method is applied to the traditional collaborative filtering algorithm in this paper, which is used to penalize the popular items in the user behavior list. The main reason to use the TF-IDF method is that it is suitable for the problem of weight extraction. In addition, the TF-IDF method is simple and easy to calculate [28]. TF-IDF is a statistical method which is often used to evaluate the importance of a word to a document. The importance of a word is directly proportional to the number of times it appears in the document, but at the same time, it is inversely proportional to its frequency across the document library [28]. Based on the principle of TF-IDF, an improved user similarity calculation method is proposed to reduce the weight of the impact of popular items on the user similarity. If an item appears in the user's behavior list, but it also appears many times in other users' behavior lists, this item is regarded as a popular item, and its impact on the user similarity should be penalized. The weight of the i-th item is calculated from the following quantities: freq(i, u), the number of times that the i-th item appears in the behavior list of user u; |u|, the length of the behavior list of user u; |U|, the total number of users; and popular(i), the number of times that the i-th item appears in all of the user behavior lists. The weight of the item is then introduced into the Pearson similarity (see Equation (2)), and an improved similarity calculation method is obtained.

Improved User Characteristics Model

In real life, people living in the same area tend to have similar lifestyles and eating habits, while people in different areas may show greater differences.
Similarly, if two people's characteristics are more similar, such as gender, age, and occupation, then their interests are more likely to be similar. For example, there will be more common topics between students, but students and teachers may have different interests due to their different work and social experiences. Therefore, it is reasonable to recommend a user's preferred items to other users with similar characteristics when making recommendations. There are some improved collaborative filtering algorithms which have used the user's characteristics information. However, there are still some problems in the existing methods; for example, the similarity of age and occupation is calculated in a crude way, which makes the recommendation results have some limitations [29]. To deal with these problems, an improved user characteristics similarity model is set up in this paper, which is based on the fuzzy membership method. The proposed user characteristics similarity model can alleviate the cold start problem of the recommendation system caused by the lack of rating data for new users. The user characteristics similarities in this study are defined as follows. (1) Age similarity. If the age difference is less than 5 years, the similarity is regarded as 1, and if the age difference is more than 25 years, the similarity is regarded as 0; the fuzzy membership for the age similarity of users is defined on this basis. (2) Occupation similarity. The traditional method of occupation similarity calculation is that if the occupation is the same, the occupation similarity is set as 1, otherwise it is set as 0. Although it can measure the similarity of two users to a certain extent, the user's characteristics are not fully exploited. In this paper, a tree diagram for the classification of occupations is first set up based on the international standard classification of occupations [30], which is shown in Figure 4. In this occupation classification tree, the distance between two nodes is defined as the number of edges between these two nodes. The distance between a parent node and a child node is 1, and the distance between adjacent sibling nodes is 2. The distance between the two farthest occupations in the occupation classification tree is defined as D_max. The fuzzy membership for the occupation similarity of users is then defined in terms of d_u,v, the occupation distance between users u and v, and τ, a correction coefficient which is adjusted dynamically according to the occupation. (3) Gender similarity. Users of different genders have different preferences for items, so the gender should be taken into account when calculating the similarity of user characteristics [29]. Assuming that the gender of user u is G_u and the gender of user v is G_v, the gender similarity of users is 1 if G_u = G_v and 0 otherwise. (4) User characteristics similarity. Combining the above characteristics similarities of users in different dimensions based on age, gender, and occupation, the final characteristics similarity of users is calculated as the weighted sum Sim_Character(u, v) = α · Sim_age(u, v) + β · Sim_gender(u, v) + δ · Sim_occupation(u, v), where α + β + δ = 1 and α, β, δ ∈ (0, 1) are the similarity weights for the user's age, gender, and occupation. For different recommender systems, these weights can be adjusted dynamically to achieve the optimal recommendation effect.
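The membership functions themselves are not reproduced in the text above, so the following minimal Python sketch only illustrates one plausible reading of the model: a linear fuzzy membership between the 5-year and 25-year age bounds, an occupation similarity decaying with the tree distance relative to D_max, a gender indicator, and the weighted combination with the weights (α = 0.5, β = 0.2, δ = 0.3) later reported as best in the parameter discussion. The exact functional forms used in the paper may differ.

```python
# Minimal sketch of the user-characteristics similarity. The linear age membership,
# the form of the occupation membership, and D_max = 8 are assumptions made for
# illustration; only the 5/25-year bounds and the weights come from the text.
def age_similarity(age_u, age_v):
    diff = abs(age_u - age_v)
    if diff <= 5:
        return 1.0
    if diff >= 25:
        return 0.0
    return (25.0 - diff) / 20.0                    # assumed linear fuzzy membership

def occupation_similarity(d_occ, d_max, tau=1.0):
    # d_occ: number of edges between the two occupations in the classification tree
    return max(0.0, 1.0 - tau * d_occ / d_max)     # assumed decay with tree distance

def gender_similarity(g_u, g_v):
    return 1.0 if g_u == g_v else 0.0

def characteristics_similarity(age_u, age_v, g_u, g_v, d_occ, d_max,
                               alpha=0.5, beta=0.2, delta=0.3):
    """Weighted combination (alpha + beta + delta = 1) of the three similarities."""
    return (alpha * age_similarity(age_u, age_v)
            + beta * gender_similarity(g_u, g_v)
            + delta * occupation_similarity(d_occ, d_max))

# Example: 7-year age gap, different genders, occupation distance 2, D_max = 8.
print(round(characteristics_similarity(24, 31, "F", "M", d_occ=2, d_max=8), 3))  # 0.675
```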
Proposed Fusion Strategy to Generate Recommendations

Based on the improved similarity calculation methods above, the final comprehensive user similarity is obtained by weighted fusion, namely Sim_p(u, v) = ξ · Sim_User(u, v) + µ · Sim_Character(u, v), where ξ + µ = 1 and ξ, µ ∈ (0, 1) represent the weights for the similarity obtained based on the TF-IDF method and on the user characteristics, respectively. For different recommender systems, ξ and µ should be optimized. In this paper, a searching algorithm is proposed to obtain the optimal values of ξ and µ, which is shown in Algorithm 1. After obtaining the user's comprehensive similarity Sim_p(u, v), the K users most similar to the target user are selected as the nearest neighbors to form the similar neighbor set of the target user. Combined with the rating information of all neighbors and their similarity with the target user, the target user's rating on the i-th unknown item is predicted. In this study, the rating prediction in (3) is modified by replacing the original similarity with the comprehensive similarity Sim_p(u, v). The overall workflow of the proposed collaborative recommendation algorithm is summarized as follows:
• Step 1: Preprocess the rating data and construct the user-item rating matrix R(m × n);
• Step 2: Use the TF-IDF method and the rating data to calculate the user similarity matrix Sim_User(u, v);
• Step 3: Use the user characteristics information to calculate the user characteristics similarity matrix Sim_Character(u, v);
• Step 4: Fuse the similarity matrices from Steps 2 and 3 to generate the final comprehensive user similarity matrix Sim_p(u, v);
• Step 5: After the comprehensive similarity matrix is obtained, the nearest neighbor set of the target user is selected to make rating predictions and generate recommendations.

Dataset and Metrics

To verify the effectiveness of the improved algorithm, this paper uses a dataset from the MovieLens recommender system [31]. The MovieLens dataset is a public movie dataset released by the GroupLens Laboratory of the University of Minnesota. At present, there are eight versions with different sizes. The dataset mainly includes the following information: user ID, item ID, the user's rating information of the items, and the time stamp of the rating, etc. The MovieLens-100K (ML-100K) data set and the MovieLens-1M (ML-1M) data set are used in this paper, and the basic information of the two datasets is shown in Table 1. In the experiment, the dataset is randomly divided into a training set and a testing set according to the ratio of 8:2 for comparative analysis. There are many evaluation indexes for recommender systems [32]. Because the ultimate goal of the improved collaborative filtering recommendation algorithm is to improve the accuracy of the recommendation results, this paper mainly considers the accuracy of the algorithm. To evaluate the recommendation accuracy of the improved recommendation algorithm, the root mean square error (RMSE) and the mean absolute error (MAE) are used to measure the effect of the recommender systems [33]. MAE and RMSE measure the deviation of the predicted ratings from the true user-specified ratings; their values are obtained by calculating the rating deviation between the actual ratings and the predicted ratings. The lower the values of RMSE and MAE, the higher the accuracy of the recommendation algorithm.
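Algorithm 1 itself is not reproduced in the text, so the following minimal Python sketch assumes a plain grid search over ξ (with µ = 1 − ξ) that minimizes MAE on a held-out validation set. Here, sim_tfidf and sim_char stand for the similarity matrices from Steps 2 and 3, and predict_fn is a hypothetical rating-prediction helper built on a given similarity matrix; all of these names are assumptions for illustration.

```python
# Minimal sketch of the similarity fusion and the fusion-weight search (an assumed
# grid search standing in for Algorithm 1). predict_fn(sim, u, i) is a hypothetical
# helper that predicts a rating from the fused similarity matrix.
import numpy as np

def fuse(sim_tfidf, sim_char, xi):
    """Comprehensive similarity: Sim_p = xi * Sim_User + (1 - xi) * Sim_Character."""
    return xi * sim_tfidf + (1.0 - xi) * sim_char

def mae_rmse(y_true, y_pred):
    err = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return np.abs(err).mean(), np.sqrt((err ** 2).mean())

def search_xi(sim_tfidf, sim_char, predict_fn, validation, step=0.05):
    """Try xi over (0, 1) and keep the value with the lowest MAE on the validation
    triples (user, item, true_rating); the paper reports xi = 0.8 as optimal."""
    best_xi, best_mae = None, np.inf
    for xi in np.arange(step, 1.0, step):
        sim_p = fuse(sim_tfidf, sim_char, xi)
        preds = [predict_fn(sim_p, u, i) for u, i, _ in validation]
        mae, _ = mae_rmse([r for _, _, r in validation], preds)
        if mae < best_mae:
            best_xi, best_mae = xi, mae
    return best_xi, best_mae
```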
The calculation methods for MAE and RMSE are defined as

MAE = (1/N) Σ |r_u,i − p_u,i| and RMSE = √( (1/N) Σ (r_u,i − p_u,i)² ),

where N is the total number of rating prediction items in the testing set, r_u,i represents the actual rating of user u for the i-th item, and p_u,i is the predicted rating of user u for the i-th item.

Comparison Experiment

To evaluate the performance of the proposed algorithm (ICFTU), the traditional user-based collaborative filtering algorithm (UCF), the collaborative filtering algorithm with user characteristics (CFUC), the collaborative filtering algorithm based on clustering (K-MCF), and the algorithm based on optimizing similarity calculation (ICFOS) [34] are selected for comparison. These four algorithms used for comparison are classical and often used in recommender systems. The above five algorithms are trained on the training sets of the two data sets, respectively, and rating prediction is carried out on the test sets to compare the MAE and RMSE values of the different algorithms. The nearest neighbor number K is set as 35 for all of the algorithms used in this experiment; the comparison of recommendation accuracy is shown in Table 2 and Figure 5. It can be seen from the figure that the ICFTU algorithm proposed in this paper has a better recommendation effect on both datasets. Among them, the traditional user-based collaborative filtering algorithm UCF has the largest error and the lowest prediction accuracy. The CFUC method (a collaborative filtering algorithm with user characteristics) combines the user's characteristic information, which makes up for some defects of the traditional algorithms and improves the accuracy of recommendation. The K-MCF algorithm based on clustering and the ICFOS algorithm based on optimizing similarity calculation both improve the accuracy of recommendation to a certain extent. The ICFTU algorithm combines the TF-IDF method and user characteristics, reduces the impact of popular items on user similarity, improves the calculation of user characteristics similarity, improves the accuracy of recommendation, and still has some advantages on large-scale data sets.

Parameter Discussion

In this section, the influence of the parameters involved in the proposed algorithm is discussed. The experiments are carried out on the ML-100K data set, where about 20,000 rating data are used to test the influence of the parameters in the proposed algorithm. (1) About the nearest neighbor. First, the reasonable number of nearest neighbors K is discussed, which is one of the key factors for the recommendation algorithm to achieve good results. The MAE and RMSE of the proposed method under different K are shown in Figure 6. The results in Figure 6 show that with the increase of the number of nearest neighbors of the target user, the MAE and RMSE of the algorithm show a downward trend and gradually tend to saturate. Therefore, the nearest neighbor number K is set as 35 in this study, to keep a relatively high accuracy and low computation. (2) About the user characteristic parameters. Secondly, the user characteristic parameters α, β, and δ are discussed. In this experiment, the parameter adjustment step is set as 0.1. Because α + β + δ = 1, to keep all three parameters bigger than 0, all three parameters are set within [0.1, 0.8]. The MAE and RMSE of the proposed method under different α, β, and δ are shown in Figure 7. It can be seen that the MAE and RMSE are at their minimum when α = 0.5, β = 0.2, and δ = 0.3. (3) About the model fusion parameters. Thirdly, the similarity weighted fusion parameters ξ and µ are discussed.
In this experiment, the parameter adjustment step is set as 0.05. The MAE and RMSE of the proposed method under different ξ and µ are shown in Figure 8. It can be seen that the MAE and RMSE are at their minimum when ξ = 0.8 and µ = 0.2. This shows that the user similarity calculated by the TF-IDF method has the main influence on the recommendation algorithm, and that the recommendation effect can be improved by properly fusing the user characteristics similarity.

Ablation Experiment

In this paper, an improved collaborative filtering recommendation algorithm based on the TF-IDF model and a user characteristics model is proposed. To discuss the influence of the two main improvements of the proposed method, two ablation experiments are carried out. The experiments are carried out on the ML-100K data set, and the results of the proposed method (ICFTU) in Section 4.2 are used as reference. The method which is only based on the TF-IDF model is called ICFTU-TI, and the method which is only based on the proposed user characteristics model is called ICFTU-UC. The results of this ablation experiment are shown in Tables 3 and 4 and Figure 9. The results show that: (1) The ICFTU algorithm has the best performance and the smallest error, which shows that the method using both the TF-IDF model and the improved user characteristics model is effective; (2) The error of ICFTU-TI is smaller than that of ICFTU-UC, and it is close to that of ICFTU. This shows that the TF-IDF method is the main factor improving the accuracy of the model, and the improved user characteristics are the secondary factor, which is consistent with the discussion of the similarity weighted fusion parameters ξ and µ in Section 5.1.

Comparison with Other Methods

To further show the performance of the proposed collaborative recommendation algorithm (ICFTU), it is compared with two other state-of-the-art recommendation approaches. The first one is a collaborative filtering framework based on a Gauss core and an extension classification method (known as GCEDA) [26]. The second one is an advanced approach based on Deep Feed-Forward Neural Networks (known as DFFN) [35]. This comparison experiment is carried out on the ML-100K data set, and the results are shown in Table 5 and Figure 10. The results show that the proposed ICFTU has a better recommendation effect compared to GCEDA and DFFN. The results in Table 5 show that the performance of the GCEDA and DFFN methods is close to that of the K-MCF method (see Tables 2 and 5); the main reason is that all three methods use the information of the input data through different strategies. However, the comprehensive performance of the proposed model is the best. The MAE value of the proposed model is 3.50% and 1.03% lower than that of GCEDA and DFFN, respectively. Meanwhile, the RMSE value of our model is 3.88% and 1.88% lower than that of GCEDA and DFFN, respectively.

Conclusions

To reduce the impact of popular items on user similarity, the TF-IDF statistical method is used in this paper, and its formula is adapted to the recommendation model. At the same time, an improved user characteristics similarity calculation method is proposed, which makes use of the user characteristics information and alleviates the cold start problem. Finally, this paper conducts off-line experiments on the MovieLens data sets. Experimental results show that the proposed algorithm is more accurate than the comparison algorithms.
There are still some problems that should be further studied in the future, such as new user similarity models that fuse item tags and user characteristics, and deep learning techniques to mine the latent information of users and items.
6,277.6
2021-10-14T00:00:00.000
[ "Computer Science" ]
Carbon Emissions of Quantum Circuit Simulation: More than You Would Think. The rapid advancement of quantum hardware brings a host of research opportunities and the potential for quantum advantages across numerous fields. In this landscape, quantum circuit simulations serve as an indispensable tool by emulating quantum behavior on classical computers. They offer easy access, noise-free environments, and real-time observation of quantum states. However, the sustainability aspect of quantum circuit simulation is yet to be explored. In this paper, we introduce for the first time the concept of environmental impact from quantum circuit simulation. We present a preliminary model to compute the CO2e emissions derived from quantum circuit simulations. Our results indicate that large quantum circuit simulations (43 qubits) could lead to CO2e emissions 48 times greater than training a transformer machine learning model.

I. INTRODUCTION

Quantum computing, recognized as the next technological revolution, holds immense potential to transform a wide range of areas, including cryptography, materials science, and AI. Currently, however, quantum hardware is both limited and expensive to access, making its widespread adoption challenging. Quantum circuit simulation has emerged as a supporting tool; it employs classical computing resources to emulate the behavior of quantum circuits, thereby bypassing the need for physical quantum hardware. Quantum circuits comprise quantum bits (qubits) and quantum gates, which manipulate these qubits. For example, state vector simulators simulate a quantum circuit by computing the wavefunction of the qubits' state vector as gates and instructions are applied. There are also cloud-based simulators from most of the major quantum cloud providers, such as IBM.
The importance of quantum circuit simulation can be attributed to several reasons.

Limited Quantum Hardware Availability: Quantum computing resources are scarce, expensive, and often accessible only through cloud-based platforms with waiting times. This situation could become worse with the increasing demand for cloud resources [1].

Noise Influence: In the "Noisy Intermediate-Scale Quantum" (NISQ) era [2], where quantum computers have limited qubits and high error rates, quantum circuit simulators play a vital role. They enable researchers to develop, test, and optimize quantum algorithms in an ideal, noise-free environment before deploying them on physical quantum hardware.

Algorithm Testing and Development: Quantum circuit simulators, unlike actual quantum systems, allow real-time observation of a computation's state. Measuring states in quantum systems is challenging due to the disruption of computation and the partial view of the quantum state. Simulators, however, mimic quantum states on classical computers, enabling developers to inspect the full quantum state at any moment, facilitating the development of algorithms such as Variational Quantum Circuits (VQC) [3]-[9], and deepening quantum programming understanding.

Quantum Error Mitigation: Quantum systems are inherently noisy, and quantum error correction is still in its infancy. Simulations can help model the noise characteristics of quantum devices and develop error mitigation techniques [10].

Conventionally, the evaluation metrics for quantum circuit simulations include simulation fidelity, computational speed, and resource usage. However, as with any computational process, quantum circuit simulation comes with an energy cost, which translates into substantial CO2e emissions. Considering the global need to reduce greenhouse gas emissions, understanding the carbon footprint of quantum circuit simulations is as crucial as enhancing their performance. A balance needs to be struck between the pursuit of technological advancement and environmental sustainability. This sphere of interest encompasses diverse stakeholders: researchers and developers focused on quantum algorithm design, hardware companies like NVIDIA who are advancing quantum simulation projects such as cuQuantum [11], and organizations advocating for sustainable practices due to the environmental implications. To further illustrate the environmental impact of quantum circuit simulations, we compare their energy consumption and emissions with those of common life activities and classical machine learning training, as shown in Table I.

TABLE I. CO2e emissions (kg) of common activities, classical model training, and quantum circuit simulation.
Household electricity use for a day [12]: 12.44
Driving a car for 100 km [13]: 24.80
Transatlantic flight, 1 passenger [14]: 313.90
Transformer base training [15]: 11.79
Transformer large training [15]: 87.09
ELMo training [15]: 118.84
Quantum circuit simulation (43 qubits): 568.77

The results reveal that the CO2e emissions resulting from quantum circuit simulations can exceed those of other activities and processes. For instance, the simulation can generate up to 1.81 times the CO2e emissions of a one-way flight from New York to London. Notably, when compared with classical computing, quantum circuit simulation demonstrates an even more pronounced environmental impact; it produces approximately 48 times the CO2e emissions of training a standard transformer base model. The main contributions of this paper are: (1) We bring in the notion of environmental impact from quantum circuit simulation. (2) We build an initial model for calculating the CO2e emissions
of simulation.

QUANTUM CIRCUIT SIMULATION

The carbon emissions from quantum circuit simulations derive from a multitude of sources. These include Embodied Emissions, encompassing the carbon footprint from the manufacturing and disposal of hardware; Idle Power Consumption, representing the emissions when the system is powered but not actively processing; and Dynamic Power Consumption, which relates to active processing and data transfer. Dynamic Power Consumption is affected by the properties of a given quantum circuit, such as the number of qubits, the circuit depth, etc. Besides, it also hinges on other factors, such as the computational resources utilized, their efficiency, and the simulation duration. These resources include processors (CPU/GPU), memory modules, cooling systems, and a multitude of peripheral devices. Note that this investigation primarily focuses on Dynamic Power Consumption for quantum circuit simulations, but the proposed model can be easily extended to support other factors such as the load on the processors, the utilization of memory, and the efficiency of the cooling systems.

To formulate the simulation-emission model, we first define the system that runs the simulation as follows. For a granular estimate of the energy consumed, we factor in the number of processors ('n'), the average power per processor ('Pp' in kW), and the simulation duration ('T' in hours). The environmental impact, measured as Carbon Dioxide Equivalent (CO2e) emissions, is then determined by the Carbon Intensity ('CI'), a measure of CO2e emissions per unit of electricity consumed (in kg/kWh), and the Power Usage Effectiveness ('PUE'). The PUE, a measure of data center efficiency, describes the proportion of total power consumption utilized directly by the computing equipment. Incorporating the above definitions, the CO2e emissions are calculated as:

CO2e = n × Pp × T × CI × PUE.

This approach allows us to estimate the environmental implications of quantum circuit simulations. For example, consider a quantum circuit simulation on a personal computer with an average power draw of Pp = 0.04276 kW over an execution time of T = 0.01861 hours. The energy consumption for this simulation would be E = 0.04276 kW × 0.01861 hours = 0.00080 kWh. Furthermore, using a Carbon Intensity value of CI = 0.429 kg/kWh, as per the average datacenter carbon emissions [16], and a Power Usage Effectiveness of PUE = 1.58, reflecting the average industry datacenter PUE [17], the CO2e emissions can be computed as CO2e = 0.00080 kWh × 0.429 kg/kWh × 1.58 = 0.00054 kg.
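As a minimal illustration of the model above, the following Python sketch implements CO2e = n × Pp × T × CI × PUE, with the average CI and PUE values cited in the text used as defaults. The processor count and power draw in the example are taken from the worked example; everything else is an assumption for illustration.

```python
# Minimal sketch of the emission model given above: CO2e = n * Pp * T * CI * PUE.
# Default CI and PUE are the average values cited in the text [16,17].
def co2e_kg(n_processors, power_kw_per_processor, hours,
            carbon_intensity=0.429, pue=1.58):
    """Carbon footprint (kg CO2e) of a simulation run."""
    energy_kwh = n_processors * power_kw_per_processor * hours
    return energy_kwh * carbon_intensity * pue

# Worked example from the text: one processor drawing 0.04276 kW for 0.01861 h.
print(f"{co2e_kg(1, 0.04276, 0.01861):.5f} kg CO2e")   # ~0.00054 kg
```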
A. Small Quantum Circuit Simulations

We first conducted our own simulations on small quantum circuits to examine the impact of certain parameters on energy consumption and emissions. Because a single run of a quantum circuit simulation completes too quickly to yield a meaningful analysis of energy cost and emissions, we instead framed the experiments as a quantum machine learning task, a promising field that brings the power of quantum computing to machine learning problems. In our experiment, we used the MNIST dataset for training, comprising 20,000 training samples. Training was run for 20 epochs with a batch size of 256. We tracked the execution time of these runs, alongside the average power consumption of each processor core, to calculate the final CO2e emissions. The results of these measurements are depicted in Figure 1. In addition, we conducted parallel experiments with classical neural network models, ensuring that the number of parameters matched those of the quantum circuits. The outcomes of these classical models are represented by the red bars in Figure 1.

B. Large Quantum Circuit Simulations

In this section, we analyzed existing data on large quantum circuit simulations. Owing to hardware limitations, we cannot currently scale up the quantum circuit simulations ourselves; however, we can collect experimental results from large companies such as NVIDIA and AWS. Table II reports the emissions based on classical simulations of quantum circuits from AWS [18]. Note that the results in Table II correspond to a single simulation run rather than the training of a VQC. It is evident that CO2e emissions escalate as the number of qubits increases, meriting attention and concern during the development of quantum circuit simulation.

IV. CONCLUSION

By analyzing energy consumption and carbon emissions, we can better comprehend the environmental footprint of quantum circuit simulations and thereby devise strategies to reduce their impact.
2,052.4
2023-07-04T00:00:00.000
[ "Physics" ]
Recent Advances in using Lipomyces starkeyi for the Production of Single-Cell Oil The clean energy demand and limited fossil fuel reserves require an alternate source that is sustainable and eco-friendly. This demand for clean energy steered the introduction of biofuels such as bioethanol and biodiesel. The third-generation biodiesel is promising as it surpasses the difficulties associated with food security and land usage. The third-generation biodiesel comprises biodiesel derived from oil produced by oleaginous microbes. The term oleaginous refers to microbes with the ability to accumulate lipids to about 20% of the biomass and is found in the form of triacylglycerols. Yeasts can be grown easily on a commercial scale and are amenable to modifications to increase single-cell oil (SCO) productivity. The oleaginous yeast L. starkeyi is a potential lipid producer that can accumulate up to 70% of SCO of its cell dry weight under optimum conditions. Compared to other oleaginous organisms, it can be grown on a wide range of feedstock and a good part of the lipid produced can be converted to biodiesel. This review presents the recent advances in single-cell oil production from L starkeyi and strategies to increase lipid production are analyzed. INTRODUCTION In the pursuit to achieve net zero emissions of greenhouse gases by 2050, the energy sector needs to switch to clean energy options to meet their demand.Currently, fossil fuels meet about 80% of the world's demand and other energy sources are nuclear power, renewable sources and biofuels. 1On the other hand, the limited oil reserves also drive the implementation of alternative energy sources. Biofuel is derived from biomass and the most established biofuels are bioethanol and biodiesel.It is estimated that the global biofuel market size will exceed nearly 201.21 billion US$ by 2030. 2 Bioethanol (C2) is usually blended with gasoline (C4-C9) 3 while biodiesel is a substitute in diesel engines without any engine modification.Bioethanol is synthesized by the alcoholic fermentation of sugars that are derived from the hydrolysis of biomass.Biodiesel is an excellent choice because of its renewability, safety to use in any diesel engine, high efficiency and engine durability.Moreover, it is nontoxic, nonflammable and has a greater biodegradability. 4The firstgeneration biodiesel has been commercialized and its current production is about 50 billion litres on a global scale. 5Though it has good combustion quality in IC engines, it is a threat to food security as produced from vegetable oil and animal fat. 6Usually, vegetable oil such as peanut oil, soybean oil, sunflower oil, corn oil, rice bran oil, palm oil, coconut oil, olive oil, and rapeseed oil are used as feedstock. 7Nevertheless, currently, about 95% of the biodiesel demand is met by first-generation biodiesel. 8As an alternative to first-generation biodiesel, second-generation biodiesel that is derived from nonedible oil sources emerged. 9It is mainly derived from cheap, inedible and unconventional sources such as crops (e.g.jatropha, mahua), inedible oil (e.g.jojoba oil), inedible sources (e.g.wood, husk, tobacco seed). 10,11econd-generation biodiesel is recognized to be an efficient and eco-friendly alternative to first-generation biodiesel.However, the crops' cultivation requirements of fertile land and other resources led to its limited implementation. 
8 Biodiesel derived from oil produced by oleaginous microbes led to the discovery of third-generation biodiesel. It offers advantages as it is a renewable, eco-friendly source, poses no threat to food security or land usage, and is amenable to manipulation for high cellular lipid accumulation. However, it faces the challenges of inadequate biomass production on a commercial scale and the high investment required for facilities and setup at large scale. 12 In oleaginous microorganisms (OM), over 20% of the dry weight constitutes lipid under stress conditions of high carbon and low nitrogen nutritional sources. 13 The lipid accumulation can reach 70% or more under appropriate stress conditions applied to the microbes. 14 OMs are found among the microbial families, viz. bacteria, fungi, yeasts, and microalgae. 15 The oil obtained from OMs has composition, thermal properties and low viscosity equivalent to those of oil obtained from plant and animal sources. 16 Oleaginous yeasts are frequently a superior choice for lipid production at a commercial scale because of their higher growth rate, lipid accumulation and productivity. 17 About 160 native yeasts have been reported as oleaginous, and the most studied oleaginous yeasts are Cutaneotrichosporon oleaginosus, Rhodotorula toruloides, Yarrowia lipolytica, Rhodotorula glutinis, L. starkeyi, Trichosporon oleaginosus, and Candida tropicalis. 18,19 This paper aims to present the recent advances in single-cell oil production from L. starkeyi, its potential as a biodiesel feedstock, and the benefits of L. starkeyi for sustainable development. Moreover, opportunities in the strategies to increase lipid production are discussed.

Oleaginous Organisms

Single-cell oil refers to microbial oil, and its first commercial production dates back to 1985, using the filamentous fungus Mucor circinelloides. 20 Microbial lipids include triacylglycerols (TAG) and glycolipids. The energy reserves, sterol esters and phospholipids constitute the former, while membrane constituents make up the latter. The TAG fraction has been identified as the major portion of SCO, and it is chemically identical to vegetable oils. 21 However, the amount and type of lipid (Table 1) depend on the genotype of the microbe, its culture conditions and the substrate employed. 22 Oleaginous microbes are capable of exploiting low-cost feedstocks, such as agro-industrial residues, for higher lipid synthesis. 23 The accumulated lipid can be converted to biodiesel by transesterification reactions, catalyzed by acids, alkalis or enzymes, that yield fatty acid methyl esters. 24 In oleaginous yeast, lipid accumulation occurs by de novo (from a carbon substrate under nitrogen limitation) or ex novo (from a lipid source or hydrophobic substrates) pathways. 25 Higher TAG accumulation is observed among yeasts and fungi than among bacteria, as the latter store polyhydroxyalkanoates as storage molecules. 26 Among the roughly 100 genera comprising about 1500 species of yeasts, about 30 have been reported to accumulate lipids in excess of 25% of their dry biomass. 27,28 Microalgae as a source of microbial oil have been investigated recently, holding advantages in ease of cultivation and product manufacturing. 29 Above all, the lipid obtained from algae is equivalent to vegetable oils in its content of saturated and low-unsaturation long-chain fatty acids. 30 The commercial production of algal lipids is challenging, as outdoor cultivation systems for photoautotrophic algae have given lipid contents lower than 20% (dry basis).
31,32In addition, the high moisture content of algae cultivated in open ponds or photobioreactors requires dewatering and drying equipment during the downstream processing.These additional investments make the overall cost of algal biofuel much higher. 33,34 Lipomyces starkeyi The genus Lipomyces belongs to the Lipomycetaceae family and about 16 species have been recognized in the genus so far. 35Among the species, L. starkeyi, unicellular eukaryotic yeast, is the most widely studied because of its potential for lipid synthesis.L. starkeyi was isolated by R. L Starkey (strain number 74) from the soil in the USA. 36L. starkeyi grows in glucose mineral medium at pH=5 and biotin supplementation is required at pH 5.5 to 6.5.Biotin enhances cell growth and its synthesis is inhibited at pH more than 5. 37 In the fermentation media, ions such as Mg 2+ , Mn 2+ , and Zn 2+ are required for cell growth and metabolism, Cu 2+ and Fe 2+ are required as cofactors and phosphate and sulphate are vital for structural components and cell physiology respectively. 38 biomass yield of 1.6 fold was achieved with an appropriate amount of Mn2+ whereas a high lipid yield was obtained at a lower concentration of Zn2+. 39 Fermentation Strategies For Sco Production Fermentation feedstock Agro-industrial residues are rich in lignocellulosic biomass and are an abundant natural polymer to serve as a fermentation substrate.Pretreatment of the cellulosic biomass is required to make it accessible to the hydrolytic enzymes of the microbes.Pretreatment leads to the solubilization or separation of cellulose, hemicellulose and lignin to facilitate the digestion of lignocellulosic material. 40Approaches for the pretreatment include chemical, mechanical, and biological methods and their different combinations. 41The pretreatment methods influence the long-term storage of the biomass, the concentration of the pretreated biomass and creation of inhibitors in the medium. 42An alternative method employing ionic liquid for the pretreatment of lignocellulosic biomass has proven successful. 43A study on employing seawater-based ionic liquid in the pretreatment of lignocellulosic biomass concluded that the use of seawater renders no negative effect on the pretreatment as well as the enzymatic hydrolysis of biomass obtained.It yielded 54−72% of reducing sugar and lipid yield of 4.5 gL -1 after cultivation of Trichosporon fermentans on wheat straw hydrolysate. 44The metal ions present in the feedstock also influences the lipid formation as observed by Zhang et al. 45 in the utilization of municipal wastewater sludge as the feedstock in fermentation.The presence of Cd 2+ in the fermentation reduced the lipid content from 51% to 41%.However, on the removal of metals from the sludge, lipid content was about half of the one with metals.This is due to the removal of metal ions such as Zn 2+ that enhanced lipid accumulation.L. starkeyi can assimilate a wide range of feedstock as the substrate for lipid production and various feedstock have been reported (Table 2).In the production cost of biodiesel, more than 70% is contributed by the raw materials used and the use of cheap feedstock can greatly reduce this cost. 46 large amount of raw glycerol produced during the process can also be recycled sustainably by using it as a substrate for fermentation. 47 Enhancement of SCO production by L. starkeyi Fermentation strategies Biphasic fed-batch fermentation strategy For L. 
starkeyi ATCC 56304, the biphasic system with a supply of glucose during the growth phase and xylose (at 120h) during the lipid accumulation phase resulted in 0.13 g L -1 h -1 oil productivity.It is higher than the fermentation with carbon source as glucose (0.06 g L -1 h -1 ), xylose (0.12 g L -1 h -1 ) and mixed sugars (glucose: xylose at 1:1)(0.09g L -1 h -1 ) in a single phase.However, both mixed (carbon) and biphasic cultures resulted in similar productivity (0.14 g L -1 h -1 ) at a longer fermentation time. 48The mixed carbon sources (glucose and xylose) in a nitrogen-limited mineral medium resulted in the highest lipid content (84.9%) compared to the fermentation using a single carbon source using L. starkeyi NBRC10381. 49 Mode of fermentation The carbon to nitrogen ratio (C: N ratio, mol mol -1 ) in the media is vital in determining the cellular metabolism state of oleaginous yeast.Fed-batch fermentation is often followed to maintain the cells in the growth phase initially and subsequently in an oil accumulation phase. 50ed-batch cultivation of L. starkeyi yielded an oil content of 27% while batch cultivation resulted in 23.7%. 51In a fed-batch study with two feedings using L. starkeyi, oil accumulated increased from 0.05 g g -1 to 0.11 g g -1 compared to the batch mode cultivation. 52A similar dependence as of low C: N ratio was observed for L. starkeyi for growth and lipid accumulation at a higher agitation rate.As in the case of nitrogen limitation in the culture media, oxygen limitation enhances lipid accumulation although it decreases growth in L. starkeyi. 53he repeated batch cultivation of L. starkeyi DSM 70296 resulted in high cell (85.4gL -1 ) and lipid concentration (41.8 gL -1 ) compared to the other cultivation modes such as batch, fed-batch, and continuous cultures with glucose and xylose as substrate.In the same study, the continuous cultivation (dilution rate = 0.03 h1) with hemi cellulose hydrolysate resulted in high biomass and lipid yield compared to media with glucose and xylose. 52he substrate-feeding strategy affects cell growth and lipid production.As observed in study by Amza et al. 54 that L. starkeyi D35 (Ls-D35 strain) achieved high-density culture on feeding with mixed glucose and xylose as substrates (0.15 w/w substrate) after 96 h while high lipid accumulation was observed in single xylose feeding (0.13 w/w substrate) after 120 h. Mixed culture of the microbes Co-culturing oleaginous yeasts and microalgae The mutualistic interaction between oleaginous yeast and microalga has been greatly explored for the production of metabolites.The microalga synthesizes oxygen for the yeast, whereas yeast generates carbon dioxide for the microalga.Additionally, microalga converts the dissolved carbon dioxide in the medium to bicarbonate that on consumption, releases OH− ions and converts media to alkaline.On contrary, yeast growth makes the medium acidic. The co-cultivation of L. starkeyi and Chloroidium saccharophilum resulted in lipid accumulation of 0.064, 0.064 and 0.081 g lipid•g biomass -1 when grown on YEG (Yeast extract, glucose, (NH 4 ) 2 SO 4 , MgSO 4 .7H 2 O, KH 2 PO 4 ), BBM + G (Bold Basal Medium+ glucose) and medium with Arundo donax hydrolysate respectively. 55he symbiotic relationship between microalga Chlamydomona reinhardtii and L. starkeyi was demonstrated as algae growth was observed in media in which, the organic carbon in the feedstock was utilized in the absence of air showing the dependence on the CO 2-O 2 exchange between the two organisms. 
56 Co-culturing of L. starkeyi and microalgae (native to the wastewater, mainly Scenedesmus sp. and Chlorella sp.) at a 2:1 inoculum ratio was utilized for lipid production from urban wastewater. The easily assimilated organic substrates in the wastewater were absorbed during the first 3 days of fermentation, which limited yeast growth. Nevertheless, the process resulted in 15% lipid accumulation at the end of the cultivation time. 57

Co-culturing oleaginous yeasts and bacteria

The mutual relationship between yeast and bacterium improves metabolic activities, leading to enhanced biomass production and lipid accumulation. Karim et al. 58 co-cultured the yeast L. starkeyi and the bacterium Bacillus cereus for simultaneous lipid production and palm oil mill effluent treatment. After optimization of the process parameters using statistical tools, the lipid accumulation and COD removal efficiency were observed to be 2.95 g L-1 and 86.54%, respectively. In a separate study, the synergistic relationship between the yeast (L. starkeyi) and a bacterium (Bacillus cereus) resulted in high lipid accumulation; the co-culture on palm oil mill effluents gave a 25.53% lipid yield, greater than that of the monoculture. 59

Consolidated bioprocessing

In the utilization of agro-industrial residues for lipid synthesis, a consolidated bioprocessing strategy proved technically and economically feasible. In this approach, L. starkeyi fermentation exhibited a starch hydrolysis mechanism as well as lipogenesis, with a lipid yield of 18.7% (w/w) when cassava starch was the substrate. 60 With rice straw as the substrate, a consolidated bioprocess combining L. starkeyi and Aspergillus oryzae resulted in a lipid accumulation of 8.5 g/100 g oven-dry weight of rice straw. The rice straw was pretreated with lime at a Ca(OH)2 concentration of 12 g L-1 and a hydrolysis temperature of 110 °C within 60 min. 61 For the conversion of lignocellulose to lipid by L. starkeyi, the deficiency of β-glucosidase contributed to the utilization of cellobiose and hence facilitated simultaneous saccharification and enhanced lipid production. 62 A two-step process utilizing cellulosic paper mill waste as the substrate resulted in a lipid accumulation of 37 wt%. In the first step, enzymatic hydrolysis (Cellic® CTec2, 25 FPU/g glucan, 48 h, biomass loading 20 g L-1) yielded hydrolysates containing glucose and xylose. Subsequently, L. starkeyi was cultivated on the undetoxified hydrolysate in the second step. 63 Lipid yield in the conversion of wastewater sludge is low owing to the limited availability of easily consumed nutrients in the sludge. The increase in soluble chemical oxygen demand after pretreatment increased lipid accumulation. Lipid accumulation on the pretreated sludge upon cultivation with L. starkeyi was 36.67% (g/g), 18.42%, 21.08% and 26.31% for ultrasonication pretreatment, acid pretreatment, alkaline pretreatment and microwave pretreatment, respectively. 64

Two-stage cultivation

In two-stage cultivation, cell growth and lipid production are spatially separated so that both stages can be optimized independently. It is a cost-effective approach, with a relatively low C/N ratio during the first stage followed by a second stage with a relatively high C/N ratio for the fermentation of L. starkeyi. Nitrogen limitation during the second stage induces lipid accumulation. A two-stage fermentation using L.
starkeyi AS 2.1560 with non-sterile glucose solution without additional nutrients resulted in 64.9% of lipid yield. 65In another study, the fermentation media composed of 50% YPD + 50% Orange peel exhibited larger internal droplets in the yeast cells and two-stage operations increased the lipid yield by 18.5-27.1%. 66he co-fermentation of glucose and xylose using L. starkeyi AS 2.1560 in a two-stage fermenter under unsterile conditions was studied by Liu. 67he fed-batch operation in the second stage with co-utilization of non-sterile lignocellulose-derived sugars accumulated high lipid (63.8%) after 46h of incubation.High cell density fermentation by fed-batch mode using L. starkeyi AS 2.1560 on unsterile xylose demonstrated 65.5% lipid content after 48h incubation. 68In the two-stage cultivation of L. starkeyi InaCC Y604 in a nitrogen-limited mineral medium, a mixed carbon source (glucose and xylose) resulted in the highest cell biomass compared to the single carbon source.However, the highest lipid accumulation (65.05% (w/w)) was observed when cellobiose was the carbon source. 69 Immobilization L. starkeyi DSM 70296, immobilized on de-lignified porous cellulose with 30°C and pH 5.0 as the optimum conditions resulted in enhanced SCO production.In glucose media, the lipid accumulation was increased by 44% while 85% enhanced lipid accumulation was achieved in agro-industrial waste suspensions (orange juice and molasses) based media compared to free cell cultures. 70 Integrated cascade bioprocesses Giant reed (Arundo donax L) as the substrate for SCO production has been studied by Fidio. 71The lignocellulosic feedstock from perennial grasses was treated by microwaveassisted hydrolysis and enzymatic hydrolysis.L. starkeyi DSM 70,296 cultivation on both the detoxified and partially-detoxified hydrolysates in the integrated cascade process achieved about 8 g SCO from 100 g biomass. 71Sugar cane bagasse on acid hydrolysis in the Parr reactor resulted in hemicellulose fraction (about 82% conversion) that on cultivation with L. starkeyi resulted in a 27.8% (w/w) lipid content. 72 Genetic engineering approach Genetic engineering and metabolic approaches are widely used for improved lipid production by yeasts.The technique is to increase the metabolic flux rate by overexpression of enzymes associated with acyl-CoA synthesis and Kennedy pathways from glycerol-3-phosphate to TAGs. 73An alternative to enhance lipid accumulation is to inhibit the b-oxidation. 74Transformation methods based on metabolic engineering reported to enhance lipogenesis as well as in the synthesis of high-value metabolites. 75The electroporation procedure in the transformation of L. starkeyi was shown to be a better procedure compared to LiAc-mediated transformation and PEGmediated spheroplast transformation methods in terms of transformation efficiency and time consumption, respectively. 76Agrobacteriummediated transformation in L. starkeyi is an effective method for homologous recombination as well as expression of heterologous genes in L. starkeyi. 77 Mutation Mutating the oleaginous yeast using techniques such as ethyl methanesulfonate (EMS) treatment and UV irradiation has been reported to increase lipid production.The selection of mutants after mutagenesis is achieved by methods such as cerulenin selection, Sudan Black B staining, and Percoll density gradient centrifugation. 78UV irradiation mutation of L. 
starkeyi E15 resulted in mutants with enhanced lipid accumulation compared to that of EMS-induced mutation.The lipid accumulation was observed as 32.0%, 44.2% and 68.1% for the wild-type, E15 strain and highest lipid-producing mutant (E15-15), respectively.In the UV irradiation mutation of L. starkeyi E15, three mutants, namely E15-11, E15-15, and E15-25, accumulated particularly higher TAG levels than their counterparts.The amounts of TAG per dry mass of the wild-type and E15 strains were 32.0% and 44.2%, respectively, on day 3, whereas those of E15-11, E15-15, and E15-25 on day 3 were 57.4%, 68.1%, and 60.5%, respectively.A higher TAG accumulation was observed on UV irradiation than that of EMS treatment mutation. 79 Challenges and future research scope Presently, the high operating expense is a challenge in the industrialization of single-cell oil production by L. starkeyi. 66For an economically advantageous process, the lipid yield and productivity need to be further enhanced.A metabolic engineering approach that targets potential genes can be an approach to enhance lipid production.Extensive research focusing on metabolic alterations should be undertaken.Using competent genetic tools, the possible gene targets in lipid synthesis can be studied.Recent techniques such as CRISPR/Cas9-based genome editing technology can be utilized in the improvement of lipid production. CONCLUSION Single-cell oil produced by oleaginous yeast is a non-plant type and renewable source that is used for biodiesel production.L. starkeyi is an excellent lipid producer and can assimilate a wide range of feedstock.The enhancement of lipid production is achieved by different strategies such as maintenance of a high C/N ratio, optimization of nutritional and process parameters, two-stage cultivation, mixed culture of microbes and genetic engineering.Metabolic engineering-based genetic modifications of the yeast L. starkeyi can result in even greater lipid production for the biodiesel industry.
4,788
2023-04-13T00:00:00.000
[ "Engineering", "Biology" ]
Arsenic ( III ) adsorption from aqueous solutions on novel carbon cryogel / ceria nanocomposite Carbon cryogel/ceria composite, with 10 wt.% of ceria, was synthesized by mixing of ceria and carbon cryogel (CC). The sample was characterized by field emission scanning electron microscopy, nitrogen adsorption and X-ray diffraction. The adsorption of arsenic(III) ions from aqueous solutions on carbon cryogel/ceria nanocomposite was studied as a function of time, solution pH and As(III) ion concentration. The results are correlated with previous investigations of adsorption mechanism of arsenic(III) on carbon cryogel. Adsorption dose experiments showed that the mass of the adsorbent was reduced for 20 times, in comparison with pure CC, for the same amount of adsorbed arsenic(III) ions. BET isotherm was used to interpret the experimental data for modelling liquid phase adsorption. I. Introduction Arsenic water pollution is widespread world problem.High arsenic concentrations in drinking and irrigation water have been measured in large areas of Bangladesh, India, China, and in some parts of United States of America, Argentina, Australia, Chile, Mexico, Taiwan, Vietnam and Thailand [1].More than 100 million people are at risk for consuming water with arsenic level above 0.01 ppm [2]. Arsenic is highly toxic element.There are numerous studies focused on health effects of chronic arsenic exposure [3][4][5][6].It has been found that consuming water with elevated level of arsenic leads to pigmentation and keratosis of the skin, chronic pulmonary disease, diabetes, miscarriage, abortion, infant mortality, vascular disease, cancers of the skin, lung, liver and urinary tract.Due to the sufficient evidence, arsenic and its inorganic compounds have been classified as Group I carcinogens to humans [7].Hence, it has to be removed from drinking water.Various techniques are being used for reducing arsenic concentration: reverse osmosis, activated alumina, coagulation/filtration, ion exchange, electro-dialysis, and oxidation/filtration [8].Singh et al. [2] gave a critical review of remediation techniques for arsenic. Among all methods proposed for arsenic removal adsorption stands out as a simple and efficient method.A wide range of sorbent materials can be used to decrease arsenic concentration in water solutions, such as thiolfunctionalized chitin nanofibres, goethite-based adsorbent, zero valent iron, synthetic siderite, titanium dioxide and many others [9][10][11][12][13].Ungureanu et al. [14] gave a review of latest advances in adsorption of arsenic. In our previous study we showed that carbon cryogel (CC) can be used as As(III) adsorbent over a wide pH range [15].Based on the experimental data, conclusions were brought that the surface of adsorbent should be high and neutral, i.e. the amount of surface functional groups should be reduced to increase the arsenic adsorption capacity.For that purpose we synthesized carbon cryogel/ceria nanocomposite with 10 wt.% of ceria, assuming that the novel material would have better arsenic adsorption capacity comparing to the pure carbon cryogel. The aim of this study is to investigate As(III) adsorption process on carbon cryogel/ceria composite.Adsorption kinetic, the effect of solution pH and arsenic concentration on removal rate were examined in batch system.Adsorption kinetics, as well as adsorption isotherms, were fitted to several theoretical models. 
Sodium arsenite (NaAsO 2 , analytical reagent, Mallinckrodt) was used to prepare arsenic(III) stock solution.As(III) solutions used in batch experiments were obtained by diluting the As(III) stock solution to desired concentrations with distilled water. Composite synthesis Carbon cryogel was synthesized by the method previously described by Babic et al. [15,16].Briefly, it is a polycondensation reaction of resorcinol with formaldehyde in water solution with sodium carbonate as a basic catalyst, followed by freeze-drying and carbonization in inert atmosphere at 800 °C.Very important step prior to drying is rinsing of gel in t-butanol (C 4 H 10 O, 99.5% for analysis, Acros Organics, USA) so that water solvent could be replaced with organic one which does not exhibit significant changes in the volume of molecules during the freezing process. Ceria was synthesized by a self-propagating room temperature method previously in detail described by Matović et al. [17].Calculated masses of reactants were vigorously hand mixed in alumina mortar with alumina pestle for about 5 minutes and left in air for 2 hours.Then, the reaction product mixture was rinsed in centrifuge Centurion 1020D at 350 rpm to remove NaNO 3 . Carbon cryogel/ceria composite was synthesized by mixing of ceria and CC in mortar for about 15 minutes.The nominal CeO 2 loading was 10 wt.%.Since there is no literature data about carbon cryogel/ceria composite, we have assumed that 10 wt.% of ceria would be enough to reduce the amount of the functional groups on surface of CC and, at the same time, not to significantly decrease the CC's surface area.Our assumption was based on several facts.CC is carbon material with high specific surface area and turbostratic structure, i.e. a large number of unpaired electrons exist on the surface [18].On the other side, in our previous investigations we confirmed the presence of the Ce 3+ and O 2-vacancies in the structure of the ceria obtained by the SPRT method [17].Due to that, we have concluded that this non-stoicihiometric ceria can be the source of the electrons.From the economical point of view, the prices of CC's precursors are significantly lower in comparison with ceria precursor and, consequently, the amount of ceria should be as low as possible. Composite characterization The surface morphology of the carbon cryogel/ceria composite was observed using a field emission scanning electron microscope (FESEM) TESCAN Mira3 XMU at 20 kV. The specific surface area and median pore size of the carbon cryogel/ceria composite were analysed using the Surfer (Thermo Fisher Scientific, USA). The carbon cryogel/ceria composite sample was characterized by recording their powder X-ray diffraction (XRD) pattern on a Rigaku diffractometer model Ultima IV using Cu Kα radiation with a Ni filter.Angular 2θregion between 10 and 80°was explored at a scan rate of 1°/s with the angular resolution of 0.02°. Batch adsorption experiments All adsorption experiments were carried out at room temperature (20 ± 2 °C) in a set of closed 50 ml PVC bottles using a mechanical shaker at a rate of 60 cycles/min. 
In the adsorption kinetic study the initial As(III) concentration was C 0 = 10 mg/l and no pH adjustment was taken.We added 0.1 g of composite material into 25 ml of As(III) aqueous solution.Time intervals were varied from 10 min to 24 h.After continuous shaking for predetermined period, the solid was separated by filtration, and the remaining As(III) concentration was measured using atomic absorption spectroscopy -hydride generation technique. To study the effect of pH on adsorption, 25 ml of As(III) aqueous solution of initial concentration C 0 = 10 mg/l was continuously shaken with 0.1 g of the carbon cryogel/ceria composite for 24 h, at different pH values.To adjust the pH to 2-11, 0.1 M HNO 3 and 0.1 M KOH were used.After continuous stirring for predetermined time interval, the solid was separated by filtration and arsenic concentration in the remaining solution was determined. Adsorbent dose study was conducted in order to determine the optimal mass of adsorbent in regard to arsenic removal percentage.We added different masses of composite material (m = 10-100 mg) into 25 ml of As(III) aqueous solution, C 0 = 10 mg/l, at native pH.After continuous stirring for 24 h, the solid was separated by filtration and arsenic concentration in the remaining solution was measured. Adsorption isotherms were studied by varying initial concentration of As(III) from 0.25-14 mg/l.Solutions with specific concentrations of As(III) were prepared by dissolving of arsenic stock solution into distilled water.Then, 5 mg of the synthesized carbon cryogel/ceria composite was added to 25 ml of the solution under stirring for 24 h.The pH value was adjusted to 5, 7 and 9 by (0.1 M) HNO 3 and KOH.At the end of pre-selected equilibrium time, the solid was separated by filtration and arsenic concentration in the remaining solution was determined. Structural characterization FESEM image, presenting the morphology and texture of the carbon cryogel/ceria composite, is shown at Fig. 1.It is evident that presence of 10 wt.% of ceria significantly changed the surface morphology in comparison with the pure CC whose structure is shown in our previous paper [15].The morphology of the carbon cryogel/ceria composite, i.e. the distribution of ceria in CC is very homogeneous.Nanoparticles of ceria have penetrated into the larger carbon cryogel's pores and, consequently, the pore radius decreased.This conclusion was confirmed by nitrogen adsorption-desorption measurements (Table 1).The results indicate that overall specific surface area, S BET , equals 614 m 2 /g which means that overall surface of the carbon cryogel/ceria composite, in comparison with the starting CC (S BET = 620 m 2 /g), is almost the same, i.e. the presence of 10 wt.% of ceria did not change the S BET .But, the median pore radius decreased from 14 nm to 7 nm (Table 1) due to the incorporation of the ceria particle into the porous structure of the carbon cryogel.By preparation of the composite sample in this way, the overall specific structure and mesoporosity of the material was preserved. Adsorption kinetic study Figure 3 (inset) shows adsorption kinetic of As(III) on the carbon cryogel/ceria composite.The arsenic removal rate was very fast and the adsorbed amount of As(III) increased gradually with time interval increment.Within the first 10 minutes over 93% of As(III) was removed.The equilibrium was reached after 2 h.Similar results have been reported in literature [20,21]. 
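The removal percentages and adsorbed amounts quoted for these batch experiments follow from the usual mass balance over the closed bottle. The relations below are not written out in the text and are included here for clarity; the worked numbers use the stated conditions (C_0 = 10 mg/l, V = 25 ml, m = 0.1 g) and the roughly 93% removal observed within the first 10 minutes.

```latex
% Conventional batch mass balance (not displayed in the original text).
\begin{equation}
  q \;=\; \frac{(C_0 - C_e)\,V}{m}, \qquad
  \mathrm{Removal}\,(\%) \;=\; 100\,\frac{C_0 - C_e}{C_0}.
\end{equation}
% Example: 93% removal of a 10 mg/l solution leaves C_e \approx 0.7 mg/l, so
% q \approx (10 - 0.7)\times 0.025 / 0.1 \approx 2.3 mg/g of composite.
```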
In order to evaluate the kinetic mechanism that controls an adsorption process, adsorption reaction models as well as adsorption diffusion models were applied to fit kinetic data [22].The best-fit model was selected based on the values of the linear regression correlation coefficient, r 2 .Pseudo-second order kinetic model is represented by the equation ( 1) and its solution, equation (2), for q = 0 and t = 0: where k 2 is rate constant, t is time and q and q e are transient and equilibrium amount of adsorbate, respectively.Relationship between t/q and t shows that experimental data, for the whole range of adsorption process, can be successfully correlated (r 2 = 1) by the pseudo-second order model (Fig. 3). The effect of pH Generally, the adsorption process is strongly affected by the solution pH. Figure 4 represents the percentage of As(III) adsorption on the carbon cryogel/ceria composite as a function of solution pH.All results are presented as a function of final (equilibrium) solution pH since the amount of adsorbed ions depends on final pH.As it can be seen in Fig. 4, the amount of adsorbed As(III) ions is not strongly affected by the pH values.Namely, the maximum adsorption percentage is achieved at pH values below 5 and continually, slightly, decreases at higher pH values.This is in agreement with already reported literature data [20,21,23].For the comparison, the adsorption percentage of As(III) ions on the pure CC did not show a significant difference at whole tested pH range, too.But, an increase of around 15% was recorded at pH 7-8 [15]. As in the case of the pure CC, the variation of solution pH value has an important effect on the interactions between arsenic and the adsorbent surface, because it affects the distribution of various hydrolysed arsenic species as well as the surface charge of adsorbent and should be discussed in view of different concentrations and forms of hydrolysed As(III) species and PZC (point of zero charge) of the carbon cryogel/ceria composite.Figure 5 shows the distribution of various hydrolysed As(III) species as a function of pH.The percentages of hydrolysis products were calculated from the equations and stability constants already presented in our previous paper [15]. By the comparison of the pH dependence of adsorbed As(III) ions percentage on the carbon cryogel/ceria composite with the percentage of As(III) hydrolysis products (Fig. 5) it can be concluded that hydrolysed As(III) ions were adsorbed as neutral molecule of arsenic acid (H 3 AsO 3 ) and H 2 AsO 3 -ions over the whole examined pH range (similarity between experimental and H 3 AsO 3 (1) and H 2 AsO 3 -(2) curves).Additionally, we have already showed that hydrolysis of metal ions starts at lower pH values in presence of inorganic or organic species than in aqueous solutions [24].In this case it means that, in the presence of the carbon cryogel/ceria composite, negatively charged H 2 AsO 3 -ions exist at pH values lower than 7. On the other side, the surface charge of the adsorbent will influence the adsorption processes.Point of zero charge of the carbon cryogel/ceria composite was determined to be at pH around 7 (not shown here), i.e. positive charge develops on the composite surface at pH below 7, and the composite surface is negatively charged at pH above 7.Consequently, the adsorption percentage of negatively charged, hydrolysed, arsenic(III) ions will be higher below PZC. 
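The displayed forms of equations (1) and (2) are not shown above; the pseudo-second order model they refer to is conventionally dq/dt = k2 (qe - q)^2, whose integrated, linearised form is t/q = 1/(k2 qe^2) + t/qe. A minimal sketch of the corresponding linear fit is given below; the data points are illustrative stand-ins for the measurements shown only graphically in Fig. 3.

```python
import numpy as np

# Hypothetical (t, q) kinetic data in (hours, mg/g); purely illustrative.
t = np.array([0.17, 0.5, 1.0, 2.0, 4.0, 8.0, 24.0])       # contact time, h
q = np.array([2.18, 2.25, 2.30, 2.32, 2.33, 2.33, 2.33])   # adsorbed amount, mg/g

# Pseudo-second order model in linear form: t/q = 1/(k2*qe**2) + t/qe.
slope, intercept = np.polyfit(t, t / q, 1)
qe = 1.0 / slope                        # equilibrium capacity, mg/g
k2 = 1.0 / (intercept * qe**2)          # rate constant, g/(mg*h)
r2 = np.corrcoef(t, t / q)[0, 1]**2     # linear correlation of the fit

print(f"qe = {qe:.3f} mg/g, k2 = {k2:.2f} g/(mg*h), r^2 = {r2:.4f}")
```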
Taking into account the fact that the main mechanism is adsorption of neutral molecules and that the adsorption process is best fitted by the pseudo-second order kinetic model, we can assume that the rate-determining step is diffusion of the adsorbate within the pores of the adsorbent. The role of ceria on the surface of the carbon cryogel/ceria composite could be twofold: non-stoichiometric ceria reduces the amount of functional groups and affects the correlated movements of electrons during the adsorbate-adsorbent interaction.

Adsorbent dose study

Prior to the determination of adsorption isotherms, the optimal ratio between the volume of the As(III) solution and the mass of the adsorbent was investigated. A constant volume of As(III) solution was contacted with different masses of the composite material. The obtained results are presented in Fig. 6, displayed as the percentage of As(III) adsorbed on the carbon cryogel/ceria composite vs. the mass of the adsorbent. It is clear that the percentage of arsenic(III) adsorbed increases exponentially with increasing adsorbent mass. From these results, the optimal solution volume to adsorbent mass ratio was calculated (5 mg of adsorbent in 25 ml of the As(III) solution). By comparison with the results obtained on the pure CC [15] (100 mg of adsorbent in 25 ml of the As(III) solution), the adsorbent dose is reduced 20 times, i.e. the sorption capacity of the carbon cryogel/ceria composite is increased 20 times in comparison with CC.

Adsorption isotherms

Figure 7 shows adsorption isotherms for As(III) ions on the carbon cryogel/ceria composite at different pH values. The adsorption isotherms presented in Fig. 7 confirmed our assumption that the adsorption is only slightly pH dependent. The maximum adsorption capacity is achieved at pH = 5, which is in agreement with the previous conclusions. Namely, according to the distribution diagram of the various hydrolysed As(III) species as a function of pH (Fig. 5), at pH = 5 As(III) is present only as a neutral molecule. The adsorption isotherms can be classified as A1 type [25,26]. The shape of the isotherms shows that dispersion interactions are dominant in the adsorption of As(III) ions, which is characteristic of physical adsorption. Also, the shape of the isotherms at higher equilibrium concentrations indicates the appearance of multilayer adsorption. The slight decrease of the adsorbed amount in the pH region from 3 to 6 can be explained by a change of orientation and/or lateral interactions of the adsorbed molecules.

Several isotherm models were used to interpret the equilibrium data. Among the linear and non-linear models for fitting adsorption isotherms [27], the BET isotherm for modelling liquid-phase adsorption shows the best agreement with the experimental data [28]. The BET isotherm equation for liquid-phase adsorption is

q = \frac{q_m K_S C_{eq}\,\bigl[1-(n+1)(K_L C_{eq})^{n}+n\,(K_L C_{eq})^{n+1}\bigr]}{\bigl(1-K_L C_{eq}\bigr)\Bigl[1+\bigl(\tfrac{K_S}{K_L}-1\bigr)K_L C_{eq}-\tfrac{K_S}{K_L}\,(K_L C_{eq})^{n+1}\Bigr]}   (3)

where q is the amount of the adsorbate adsorbed on the solid surface (mg/g), q_m is the amount of the adsorbate corresponding to a complete monolayer adsorption (mg/g), C_eq is the equilibrium liquid-phase concentration (mg/l), n is the maximum number of adsorbed layers on the solid surface in the BET isotherm, K_S is the equilibrium constant of adsorption for the 1st layer in the BET isotherm ((mg/l)^-1), and K_L is the equilibrium constant of adsorption for the upper layers in the BET isotherm ((mg/l)^-1).
For n = ∞ the equation reduces to

q = \frac{q_m K_S C_{eq}}{\bigl(1-K_L C_{eq}\bigr)\bigl(1-K_L C_{eq}+K_S C_{eq}\bigr)}.

In the case of liquid-phase adsorption the BET isotherm equation has three degrees of freedom (q_m, K_S, K_L); it is impossible to convert the equation to a linear form, and it has to be solved using nonlinear regression calculations. As shown in Fig. 8, a good fit of the experimental data has been obtained (values of the coefficient of determination, r^2, are presented in Table 2). The values of q_m, K_S and K_L at different pH are also presented in Table 2. According to the calculation, the amount of the adsorbate corresponding to complete monolayer adsorption (q_m) decreases with increasing pH of the solution, while the equilibrium constant of adsorption for the 1st layer (K_S) increases with increasing pH. The equilibrium constant of adsorption for the upper layers (K_L) also changes with pH. In this model, the actual saturation concentration of the liquid phase (C_S, mg/l) is an adjustable parameter, equal to the inverse of K_L. The large difference between the calculated C_S values and the saturation concentration of NaAsO_2 in aqueous solutions (156 g/100 ml) confirms that the saturation pressure in the original BET equation for the gas phase cannot simply be replaced by the saturation concentration of the adsorbate in the liquid phase.

IV. Conclusions

In order to improve the adsorption capacity of carbon cryogel, a carbon cryogel/ceria composite material with 10 wt.% of ceria was synthesized. Characterization by FESEM showed that a homogeneous distribution of ceria on the surface of the carbon cryogel was achieved. Nitrogen adsorption confirmed that the high specific surface area and porous structure of the material were preserved. XRD analysis confirmed the presence of ceria. The adsorption of As(III) ions was investigated as a function of time, solution pH and adsorbate concentration. The adsorption kinetics followed the pseudo-second order model. Owing to hydrolysis, As(III) ions in water solutions are adsorbed as neutral molecules of H_3AsO_3 and, consequently, the pH of the solution does not significantly affect the adsorption process. Based on these facts, it is assumed that the rate-determining step in the adsorption process is diffusion of the adsorbate within the pores of the adsorbent. Adsorbent dose experiments, i.e. calculation of the optimal ratio between the volume of the solution and the mass of the adsorbent, showed that the mass of the adsorbent was reduced 20 times in comparison with CC. The assumption is that the presence of 10 wt.% of non-stoichiometric ceria reduced the amount of functional groups and influenced the correlated movements of electrons during the adsorbate-adsorbent interaction. The adsorption isotherms confirmed that the amount of As(III) removed is only slightly pH-dependent, and the shape of the isotherms is characteristic of physical, multilayer adsorption. The experimentally obtained isotherms are best fitted by the BET isotherm for modelling liquid-phase adsorption.

Figure 8. Correlation of experimental data for adsorption of As(III) on the carbon cryogel/ceria composite with the BET isotherm for liquid-phase adsorption (symbols - experimental data, line - BET equation)

Table 2. Values of q_m, K_S and K_L at different pH
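Because Eq. (3) cannot be linearised, the parameters in Table 2 must come from a nonlinear regression. The sketch below shows one way such a fit could be set up; the data points, the starting guesses, and the fixed number of layers n are illustrative assumptions rather than the published values, and the coded expression is the form of Eq. (3) given above.

```python
import numpy as np
from scipy.optimize import curve_fit

# Liquid-phase BET isotherm (Eq. (3)); n is held fixed and (qm, Ks, Kl) fitted.
# All numerical values below are illustrative, not the published data.
def bet_liquid(ceq, qm, ks, kl, n=3):
    x = kl * ceq
    num = qm * ks * ceq * (1 - (n + 1) * x**n + n * x**(n + 1))
    den = (1 - x) * (1 + (ks / kl - 1) * x - (ks / kl) * x**(n + 1))
    return num / den

ceq = np.array([0.1, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0])     # mg/l (hypothetical)
q = np.array([3.0, 9.5, 14.0, 19.0, 24.0, 31.0, 38.0])   # mg/g (hypothetical)

popt, _ = curve_fit(bet_liquid, ceq, q, p0=[20.0, 1.0, 0.05], maxfev=10000)
qm_fit, ks_fit, kl_fit = popt
print(f"qm = {qm_fit:.1f} mg/g, Ks = {ks_fit:.3f} (mg/l)^-1, Kl = {kl_fit:.4f} (mg/l)^-1")
```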
4,376.4
2016-01-01T00:00:00.000
[ "Materials Science", "Chemistry" ]
Black hole entropy and viscosity bound in Horndeski gravity Horndeski gravities are theories of gravity coupled to a scalar field, in which the action contains an additional non-minimal quadratic coupling of the scalar, through its first derivative, to the Einstein tensor or the analogous higher-derivative tensors coming from the variation of Gauss-Bonnet or Lovelock terms. In this paper we study the thermodynamics of the static black hole solutions in n dimensions, in the simplest case of a Horndeski coupling to the Einstein tensor. We apply the Wald formalism to calculate the entropy of the black holes, and show that there is an additional contribution over and above those that come from the standard Wald entropy formula. The extra contribution can be attributed to unusual features in the behaviour of the scalar field. We also show that a conventional regularisation to calculate the Euclidean action leads to an expression for the entropy that disagrees with the Wald results. This seems likely to be due to ambiguities in the subtraction procedure. We also calculate the viscosity in the dual CFT, and show that the viscosity/entropy ratio can violate the η/S ≥ 1/(4π) bound for appropriate choices of the parameters. Introduction In the dictionary of gravity/gauge duality mappings in the AdS/CFT correspondence [1][2][3], perturbations of the metric are related to the energy-momentum tensor of the field theory in the boundary of the AdS spacetime [2][3][4]. In this picture, an AdS planar black hole is the gravitational dual of a certain ideal fluid. A widely valid relation between the shear viscosity and the entropy density was established, namely [5][6][7][8] (1.1) One way to understand this ratio is that it can be shown that the viscosity is proportional to the cross-section of the black hole for low-frequency massless scalar fields [8]. Alternatively, the shear viscosity is determined by the effective coupling constant of the transverse graviton on the horizon, by employing the membrane paradigm [9]. (This was confirmed by using the Kubo formula in [10,11].) In [12], it was shown that the black hole entropy is determined by the effective Newtonian coupling at the horizon, and that it is thus not surprising that the ratio of the shear viscosity to the entropy density is universal, in the sense that the dependence of the quantities on the horizon is canceled. Recently, it was established that the relation (1.1) of the boundary theory is dual to a generalised Smarr relation obeyed by the bulk AdS planar black holes, thereby providing a new understanding of its universality, and its connection to the black hole thermodynamics [13]. There have (See [18] for a review.) The viscosity/entropy ratio (1.1) can, however, be violated when the bulk gravity theory is extended by the addition of higher-order curvature terms [19,20]. 1 (See also, for further examples, [25][26][27].) This leads us to one of the motivations for this paper, which is to investigate whether one can violate the ratio (1.1) without introducing higher-order curvature terms in the bulk theory. In a typical theory of Einstein gravity, matter fields couple to gravity minimally through the metric. A scalar field can also couple to gravity non-minimally, such as in Brans-Dicke theory [28], where the effective Newton constant varies in spacetime. However, it was established in [13] that the ratio (1.1) holds in general in such a theory. Scalar fields can, however, also couple non-minimally to gravity in other ways. 
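For reference, the relation denoted (1.1) above is presumably the standard Kovtun-Son-Starinets (KSS) value described by the surrounding discussion, quoted here in units with ħ = k_B = 1; the display below is a reconstruction, not a verbatim quotation.

```latex
% Presumed form of Eq. (1.1): the KSS viscosity/entropy ratio.
\begin{equation}
  \frac{\eta}{S} \;=\; \frac{1}{4\pi}\,.
\end{equation}
```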
In particular, their derivatives can couple to the curvature tensor. Horndeski considered a wide class of such gravity/scalar theories in the early seventies [29], focusing his attention on cases where the field equations, both for gravity and the scalar field, involve no higher than second derivatives. The Horndeski theories were rediscovered recently in studies of the covariantisation of Galileon theories [30]. The Horndeski terms take the form where the E (k) tensors are "energy-momentum tensors" associated with the Euler integrands of various order, namely E (k) ν µ ≡ δ νρ 1 ···ρ 2k µσ 1 ···σ 2k R σ 1 σ 2 ρ 1 ρ 2 · · · R σ 2k−1 σ 2k ρ 2k−1 ρ 2k . (1. 3) The H (k) terms are analogous to Euler integrands, in that they have the property that each field carries no more than a single derivative and hence the linearized equations of motion involve at most second derivatives. Thus although the theory involves higher-order derivatives, it contains no linear ghost excitations. In this paper, we shall consider Einstein gravity with a cosmological constant, together with just the two lowest-order Horndeski terms, namely H (0) = g µν ∂ µ χ∂ µ χ , H (1) = −4G µν ∂ µ χ∂ ν χ , (1.4) where G µν is the Einstein tensor. We find that although the theory contains the curvature tensor only linearly, the viscosity/entropy ratio (1.1) no longer holds. It is worth commenting that the viscosity can be computed by standard procedures using the AdS/CFT correspondence, involving the straightforward technique of studying linearised perturbations around the background bulk solution. The calculation of the viscosity/entropy ratio then hinges upon the proper definition of the entropy of the black hole. Since Hawking established the thermal radiation of a black hole [31,32], there has been no ambiguity in establishing the black hole entropy in a generally-covariant theory. JHEP11(2015)176 In particular, in Einstein gravity minimally coupled to matter, the entropy is given by one quarter of the area of the horizon. This area law has been generalized to the Wald entropy formula when more complicated couplings or higher-order curvature terms are involved, namely [33,34] where L is defined by the action I = d n x √ −gL. Applying this formula to static black holes with spherical, toric or hyperbolic isometries, the Horndeski terms (1.4) do not contribute to the Wald entropy S W , and hence one might expect that the entropy would still be just one quarter of the horizon area. However, we find that this is in fact not the case. By examining the Wald procedure [33,34] in detail, we find that in a theory such as Horndeski gravity there is an additional contribution to the entropy that is not encompassed by the usual Wald formula (1.5). It arises because the derivative of the scalar field diverges on the horizon in the black-hole solutions (although there is no physical divergence, since all invariants, such as g µν ∂ µ χ ∂ ν χ, remain finite). The paper is organised as follows. In section 2 we introduce the Horndeski theory that we shall be considering, and we review the static black hole solutions. These are known for all the cases of spherical, toroidal and hyperbolic horizon geometries. Our focus will be on the spherical and the toroidal horizons. We also include a demonstration of the uniqueness of the known static solutions. In section 3 we address the problem of calculating the entropy, and also the mass, of the static black holes. 
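For reference, the Wald entropy formula referred to as (1.5) above presumably takes the standard form, with h the determinant of the induced metric on the horizon cross-section and ε_{μν} its binormal; the display below is a reconstruction, not a verbatim quotation.

```latex
% Presumed form of Eq. (1.5): the standard Wald entropy formula.
\begin{equation}
  S_W \;=\; -2\pi \oint d^{\,n-2}x \,\sqrt{h}\;
  \frac{\partial L}{\partial R_{\mu\nu\rho\sigma}}\,
  \varepsilon_{\mu\nu}\,\varepsilon_{\rho\sigma}\,.
\end{equation}
```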
We begin by calculating the entropy using the standard Wald formula (1.5), and then we consider the application of the Wald formalism in more detail, showing that there is another contribution to the entropy that is not captured by (1.5). We show that in the case of the planar black holes (with toroidal horizons), the entropy expression we obtain is consistent with the computation of the Noether charge associated with a scaling symmetry of the black holes. We also consider the calculation of the Euclidean action, showing that, at least when following a naive regularisation procedure, this yields yet another result for the entropy, and the mass, that disagrees with those from the Wald formalism. In section 4 we calculate the shear viscosity in the dual boundary theory using the AdS/CFT correspondence, and hence we obtain an expression for the viscosity/entropy ratio. This is different from 1/(4π) on account of the Horndeski term, and we show that for an appropriate choice of the parameters it can violate the η/S ≥ 1/(4π) bound. The paper ends with conclusions in section 5. The theory As we have discussed in the introduction, Horndeski gravity represents a class of higherderivative theories involving gravity with a non-minimally coupled scalar. The couplings differ from those in the Brans-Dicke theory, since in the Horndeski theories the scalar couples through its derivative to the curvature tensors. We shall focus on the Horndeski theory whose Lagrangian involves at most only linear curvature terms. As we shall show, JHEP11(2015)176 the viscosity/entropy ratio (1.1) can be violated even in such a theory. The action is given by where κ, α and γ are coupling constants, and G µν ≡ R µν − 1 2 R g µν is the Einstein tensor. Note that the theory is invariant under a constant shift of χ. In a typical gravity theory with a scalar field, such as Brans-Dicke theory, one can define different metric frames by means of conformal scalings using the scalar field. However, for the Horndeski theory (2.1), this would lead to the breaking of the manifest constant shift symmetry of the scalar, and hence it would not be a natural field redefinition to make here. The variation of the action (2.1) gives rise to The total derivative term in (2.2) plays no role in the equations of motion However, it does play an important role in the Wald formalism, which we shall present in section 3.2. Static black hole solutions We now consider static black holes, with the ansatz where dΩ 2 n−2, with = 1, 0, −1 is the metric for the unit S n−2 , the (n − 2)-torus or the unit hyperbolic (n − 2)-space. It is convenient to take dΩ 2 n−2, =ḡ ij dy i dy j for general values of to be the metric of constant curvature such that its Ricci tensor is given bȳ R ij = (n − 3) ḡ ij . We may, for example, take dΩ 2 n−2, to be given by where dΩ 2 n−3 is the metric of the unit (n − 3)-sphere. JHEP11(2015)176 It is clear from the equations of motion that χ = χ 0 (constant) is a solution, in which case, the Horndeski gravity reduces to Einstein gravity with a cosmological constant Λ 0 . It follows that the Schwarzschild-AdS black hole is a solution of the theory. We shall regard this solution as being "trivial," in the sense of not yielding anything new. In addition, a one-parameter family of black hole solutions for which the scalar field is not a constant was constructed in [35]. (See also, [36,37].) In this section, we would like to prove that these are the only black hole solutions from the ansatz (2.5) in which the scalar is r-dependent. 
First, we review the construction in [35]. The scalar equation of motion E = 0 yields There are two more equations that follow from E µν = 0: In [35], a class of black hole solution was obtained by solving (2.7) by taking (In other words, the integration constant in the first integral of (2.7) was taken to be zero, and χ was allowed to be non-zero, thus implying that its co-factor, given in (2.9), must be equal to zero.) This leads to the solution h = − µ r n−3 + 8κ[g 2 r 2 (2κ + βγ) + 2 κ] (4κ + βγ) 2 + (n − 1) 2 β 2 γ 2 g 4 r 4 (n + 1)(n − 3)(4κ + βγ) 2 2 F 1 1, which is valid for all values of . In presenting the solution, we have introduced two parameters (g, β) in place of the original parameters (α, Λ) in the Lagrangian, with Note that the solution contains only one integration constant, µ. All other parameters are those of the theory itself. Note also that since the dimension n is an integer, the JHEP11(2015)176 hypergeometric function reduces to polynomials with an arctan function in even dimensions, and with a log function in odd dimensions. To be explicit, we have n = even : (2.12) , where we use the notation [F (x)] m to denote the truncated power series expansion of F (x) around x = 0, in which only the terms up to and including x m are retained. Thus for n even and n odd, respectively. For static solutions of this kind, it is in fact always sufficient to construct the solution with = 1. The solutions for all other values of , which we presented above, can then be obtained from the = 1 solution by means of the rescalings From now on, we shall present results for the two specific cases = 0 and = 1. = 0 solution: when = 0, the solution reduces to the very simple form Note that in this = 0 case, χ can be solved for explicitly, giving Thus the = 0 solution describes an AdS planar black hole, with the requirements that µ > 0 and β ≥ 0. The horizon radius r = r 0 is given by µ = g 2 r n−1 0 . The Hawking temperature is given by (2.18) = 1 solution: for = 1, the solution describes a spherically-symmetric and static black hole. In a large-r expansion, if n is even the functions h and f have the asymptotic forms JHEP11(2015)176 where (c k , d k ) are constants, which are functions of the parameters (κ, g, β) but independent of µ. If n is odd, then for k = (n−3)/2, the quantity c k has an additional term proportional to log r. This amounts to a logarithmically diverging addition to the mass coefficient µ at order 1/r n−3 . This in turn implies that d k has additional log r terms for all k ≥ (n − 3)/2. Note that all the (c k , d k ) vanish for = 0. The metric is asymptotic locally to AdS spacetime, and it cannot become pure AdS spacetime, regardless of the choice of the parameter µ. To see that the solution describes a black hole, we note that h is positive as r goes to infinity, but becomes of order −µ/r n−3 as r → 0, where there is a spacetime curvature singularity. Thus when µ > 0, there must exist some intermediate value of r, be an event horizon r = r 0 , for which This implies that the parameter µ can be expressed in terms of the horizon radius r 0 in this = 1 case as Note that this relation between µ and r 0 is far more complicated than the simple expression µ = g 2 r n−1 0 that holds in the = 0 case. 
The temperature of the ε = 1 black hole is given by Note that if we set µ = 0, then the solution has no event horizon, and near r = 0 the functions h, f and χ have the forms Thus the µ = 0 solution is a smooth spherically-symmetric soliton, without any free parameters, that is asymptotically locally AdS. There also exists a solution for ε = 1 in the limit 4κ + βγ = 0, but it does not describe a black hole. Uniqueness of the Horndeski black hole solutions We shall leave the discussion of the mass and entropy of the black holes to the next section. To close this section, we shall show that the solutions discussed above are in fact the only black holes with non-constant χ that are contained within the ansatz (2.5) in the theory. To show this, we return to the equation of motion (2.7) for the scalar field. One can immediately write down the first integral where q is an integration constant. The solutions we discussed above were obtained by taking q = 0. It was possible to find such solutions with χ′ ≠ 0 by imposing the relation (2.9), which in fact rendered the scalar equation of motion (2.7) trivial. If instead we take the integration constant q to be non-zero, then χ′ is determined by (2.24). If a solution with q ≠ 0 is to describe a black hole, there must be an event horizon at some radius r = r₀. The functions h and f near the horizon will have Taylor expansions of the form It follows from (2.24) that χ near the horizon has the expansion Substituting these expansions into the other equations of motion, we find that no such solutions can exist. In other words, the assumption that there exists a horizon, near which the expansions (2.25) would hold, is inconsistent with the equations of motion when q ≠ 0. In order to have a solution with a horizon, we must therefore set q = 0, which then reduces to the previous case discussed above. However, as mentioned already, in order for this solution not to be trivial, i.e. for χ′ to be non-vanishing, we must then also impose the condition (2.9). This leads to the black hole solution (2.10). In the near-horizon region, the function χ in the black-hole solutions (2.10) has an expansion of the form (2.27) Substituting back into the equations of motion, we find that all the coefficients in the expansions can be expressed in terms of two parameters, h₁ and r₀. For example, Thus the solution has three integration constants (χ₀, h₁, r₀). However, the parameters (χ₀, h₁) are trivial. It follows that the only non-trivial parameter is r₀, which is determined by µ in the final solution. Finally, we would like to emphasize again that β is not an integration constant, but a parameter of the theory. For β ≠ 0, there are two black holes, each associated with a different vacuum. When β = 0, there is only the Schwarzschild-AdS black hole solution in the theory. 3 Black hole entropy and thermodynamics In the previous section, we reviewed the Horndeski gravity theory and its static black hole solutions. We identified the horizon and computed the temperature of these black holes. In this section, we consider various possible methods for calculating their entropy. It turns out that different well-established methods yield different answers. A correct result for the entropy is important for studying the black hole thermodynamics, and it is paramount for determining the η/S ratio, as we discussed in the introduction. Wald entropy formula First let us consider the well-known Wald entropy formula (1.5).
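For reference, the standard Wald entropy formula referred to here as (1.5) takes the following form, with ε_{µν} the binormal to the horizon cross-section H and h the induced metric on it (the usual normalisation is assumed):

S_W = -2\pi \oint_{\mathcal H} d^{\,n-2}x\,\sqrt{h}\;\frac{\partial \mathcal L}{\partial R_{\mu\nu\rho\sigma}}\,\epsilon_{\mu\nu}\,\epsilon_{\rho\sigma}\,.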
It is straightforward to see that for the Horndeski Lagrangian L given in (2.1), one has where we have defined χ µ = ∂ µ χ. For the static black holes in the Horndeski theory, described in section 2, we find from (3.1) that the Wald entropy formula (1.5) for the entropy gives the same result as in standard Einstein gravity, namely one quarter of the area of the event horizon, where ω n−2 is the volume of a unit S n−2 in the = 1 case. For = 0, corresponding to a toroidal horizon, the periods of the circles forming the torus can be chosen arbitrarily, and we shall, for convenience, then take ω n−2 = 1 in this paper, and so correspondingly S should then be viewed as the entropy density. Since the static black hole solutions are characterised by only one parameter (i.e. one integration constant), it is guaranteed that one can obtain an expression for a "thermodynamic mass" by integrating the first law of black hole thermodynamics 2 If we use the expression (3.2) for the entropy, then from the result for the Hawking temperature obtained in the previous section we therefore find In a more general situation where there are further intensive/extensive pairs of thermodynamic variables contributing on the right-hand side of the first law for multi-parameter solutions, the integrability of the right-hand side can provide a non-trivial check on the correctness of the thermodynamic quantities. No such consistency check arises in the case of a one-parameter family of solutions, since all 1-forms are exact in one dimension. JHEP11(2015)176 Note that in the = 0 case it was straightforward to express the mass in terms of the "mass parameter" µ, because of the simple relation µ = g 2 r n−1 0 for these planar black holes. On the other hand, the relation between µ and r 0 is much more complicated in the = 1 case, and is given in (2.21). Thus when = 1 the expression (3.5) for M would become a complicated transcendental function of the mass parameter µ. On the face of it, the mass formula (3.4) for the = 0 case looks not unreasonable. In fact the thermodynamical quantities satisfy also the expected generalised Smarr relation However, for the = 1 case, the mass formula (3.5) looks less reasonable. As mentioned above, it would be a complicated transcendental function of the "mass parameter" µ. Whilst this fact, of itself, does not conclusively show that it must be incorrect, it does perhaps raise doubts about its likely validity, since it would be a very unusual kind of relation that is not normally seen in other black hole solutions. Furthermore, if the = 1 mass formula is called into question then this also raises questions about the validity of the = 0 mass formula. In order to explore these issues in greater depth, we shall make a more detailed investigation of the Wald procedure, in order to see whether there are new subtleties that can arise in a theory such as that of Horndeski. Wald formalism Wald has developed a procedure for deriving the first law of thermodynamics by calculating the variation of a Hamiltonian derived from a conserved Noether current. The general procedure was presented in [33,34]. The Wald entropy formula (1.5) is a consequence of applying this procedure in rather generic higher-derivative theories. 
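As a guide to the derivation that follows (several of the displayed formulas below, including (3.11) and (3.12), were lost in extraction), the identity on which the procedure rests has the standard schematic form, with the normalisation hedged:

\delta H = \int_{\Sigma_{(n-2)}}\big(\delta Q_{(n-2)} - i_\xi\,\Theta_{(n-1)}\big)\,, \qquad \delta H_\infty = \delta H_{+}\,,

where the two contributions to δH come from the boundary components of the Cauchy surface at infinity and on the horizon, and their equality is what yields the first law.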
The Wald formalism has been used to study the first law of thermodynamics for asymptotically-AdS black holes in variety of theories, including Einstein-scalar [39,40], Einstein-Proca [41], Einstein-Yang-Mills [42], in gravities extended with quadratic-curvature invariants [43], and also for Lifshitz black holes [44]. However, the rather unusual-looking results that it led to for the mass of the = 1 black holes in section 3.1 raised the possibility that the formula (1.5) might not be valid for Horndeski gravity. For this reason, we shall now study in detail the application of the Wald formalism for the action (2.1). A general variation of the fields in the action (2.1) was given in (2.2). The surface term J µ is given by with JHEP11(2015)176 Following the Wald procedure, we can now define a 1-form J (1) = J µ dx µ and its Hodge dual We now specialise to a variation that is induced by an infinitesimal diffeomorphism δx µ = ξ µ . One can show that after making use of the equations of motion. Here i ξ denotes a contraction of ξ µ on the first index of the n-form * L 0 . One can thus define an (n − 2)-form Q (n−2) ≡ * J (2) , such that J (n−1) = dQ (n−2) . Note that we use the subscript notation "(p)" to denote a p-form. To make contact with the first law of black hole thermodynamics, we take ξ µ to be the time-like Killing vector that is null on the horizon. Wald shows that the variation of the Hamiltonian with respect to the integration constants of a specific solution is given by where c denotes a Cauchy surface and Σ (n−2) is its boundary, which has two components, one at infinity and one on the horizon. Thus according to the Wald formalism, the first law of black hole thermodynamics is a consequence of For the Horndeski gravity considered in this paper, we find JHEP11(2015)176 To specialise to our static black hole ansatz (2.5), the result for the Lagrangian with γ = 0 is well established (see, for example, [39,40]), and is given by We find that the contributions associated with the γ term in the action are given by We now apply the Wald formalism to the black hole solutions. First, we note that as a consequence of equation (2.9), when we add the contributions in (3.14) and (3.15) the χ δχ in the total expression cancel, giving the result In fact, as can be seen from the expression for χ 2 in for the black hole solutions in (2.10), we have δ(f χ 2 ) = 0, and so (3.16) can be further simplified, to give We first consider the simpler case of the = 0 AdS planar black holes, for which f χ 2 = β. We find Thus we see indeed that δH ∞ = δH + , since µ = g 2 r n−1 0 . This implies that we can define the mass and entropy as JHEP11(2015)176 such that δH ∞ = δM , δH + = T δS . (3.20) The first law of black hole thermodynamics (3.3) then follows straightforwardly from the Wald identity (3.12). However the factor 1 + βγ/(4κ) in both the entropy and the mass disagrees with the results in (3.2) and (3.4) that we obtained in section 3.1 from a direct application of the Wald entropy formula (1.5) and the integration of the first law dM = T dS. The case of the spherically-symmetric black holes with ( = 1) is more complicated. We find that δH evaluated on the horizon takes the general form where f 1 andχ 1 are coefficients in the near-horizon expansions defined in (2.25) and (2.27). 
For our specific = 1 solution, we have Thus if we define δH + = T dS, with T given in (2.22), we find that the entropy is given by −n(n − 1)g 2 r 2 0 2 F 1 1, Note that the first term inside the square brackets gives precisely the result we saw earlier (3.2) for Wald entropy S W , derived using the formula (1.5). The remaining contribution in the square brackets is proportional to γ, the coefficient of the Horndeski term in the action (2.1). To derive the first law, we evaluate the δH at asymptotic infinity, and we find This implies that the mass is given by This turns out to be exactly the same form as that in the = 0 AdS planar black hole. It is now straightforward to verify that the first law (3.3) is indeed satisfied. Note that χ 0 , being a constant shift integration constant of χ, plays no role in the first law. It is worth commenting that for the = 0 solutions, the masses we obtained in (3.4) and in (3.19) by the two different methods are both proportional to µ. The only difference is in the constant prefactor coefficient. This on its own makes it difficult to judge which JHEP11(2015)176 is the more reasonable result. However, when = 1, the difference becomes more striking. The result (3.25) from the detailed Wald procedure that we presented in this paper is seemingly more plausible, for two reasons. Firstly, the mass is simply proportional to the parameter µ, instead of being a convoluted transcendental function of µ. Secondly, the mass dependence on µ is the same for both the = 0 and = 1 solution. In solutions with no additional scalar hair, and since the = 0 solution can be obtained as a scaling limit of the = 1 solution, this conclusion would seem to be reasonable. Further comments on the entropy from Wald formalism Having derived the first law of thermodynamics and also the entropy in section 3.2, using the general Wald formalism, we now examine the somewhat unusual features of the black holes in Horndeski gravity that lead to the breakdown of the standard Wald entropy formula (1.5). It follows from (3.13) that for the static ansatz (2.5) that where the hatted indices are tangent-space indices, the semicolon denotes a covariant derivative and T µνρσ ≡ ∂L ∂R µνρσ , S0101 (n−2) = T0101r n−2 Ω (n−2) . (3.27) Note that 0 is the time direction and 1 is the r direction. The expression for T µνρσ for the Horndeski gravity is given by (3.1). Typically, one evaluates Q (n−2) on the horizon at r = r 0 , with h = h 1 (r − r 0 ) + · · · and f = f 1 (r − r 0 ) + · · · , and so the second term on the right-hand side of (3.26) vanishes and hence, as was observed in [33,34], we find where S W is the standard Wald entropy, given by (1.5). Establishing the variational identity (3.12) is more subtle, even for the standard case of Einstein gravity. It requires that we evaluate δQ on the horizon. Naively, one would simply obtain δT S W + T δS W from (3.28), and then one would expect that the δT S W term would be cancelled by the i ξ Θ contribution in (3.11), leading to However, in order to evaluate the variation properly, we need to expand (3.28) up to order (r − r 0 ), since δ(r − r 0 ) = −δr 0 and so it is non-zero even in the limit when one sets r = r 0 on the horizon. The net effect is that all the terms in δQ (n−2) are cancelled out by terms in i ξ Θ, and in fact the T δS term arises from the remaining terms in i ξ Θ alone. To be specific, let us examine δQ − i ξ Θ for a spherically-symmetric black hole in pure Einstein gravity coupled to a massless scalar, as given by (3.14). 
If we first perform Taylor expansions of Q and i ξ Θ, as given in the first two equations in (3.14), around the horizon at r = r 0 , then indeed the above statement can be verified. The final equation in (3.14) gives an alternative but equivalent evaluation with the variation δQ, which makes the JHEP11(2015)176 observation more apparent. We may evaluate δQ first, and then set r = r 0 . In this case, the r n−2 factor in Q κ,α just depends on the coordinate r, and hence is not varied. With this procedure, we find that all the terms in δQ κ,α are cancelled out by terms in i ξ Θ κ,α , leading to the third equation of (3.14). Thus using this procedure, we find that the δH + = T δS term for the usual Einstein gravity arises from the (n − 2)δf /r term in i ξ Θ in (3.14). This term corresponds to It is rather intriguing how this term is ultimately related to S W which involves only T 0101 . Indeed, we see from (3.1) that in vielbein components, T0101 = − 1 2 κ and T1î1ĵ = 1 2 κ δ ij for the Horndeski black hole solutions. In particular, the γ term does not contribute in either case. In the black holes of Horndeski gravity there are further subtleties. Firstly, the α term in i ξ Θ κ,α in (3.14) does not vanish for these solutions, and can contribute a term to the entropy that is not contained in S W . Furthermore, although the second term in (3.26) vanishes on the horizon, its variation does not. This extra term can be seen in the form of Q γ in (3.15). Thus (δQ − i ξ Θ) γ in (3.15) will give an additional contribution to the entropy that is over and above that of the standard Wald contribution S W . Thus we now have However, the Wald identity (3.12), as we have seen, continues to hold. The non-vanishing contributions from both the α and the γ terms have the same essential origin, namely that the scalar field χ is not regular on the horizon, but rather, it has a branch cut singularity, as shown in (2.27). One might question whether this is compatible with the interpretation of the solutions as black holes. However, as we have remarked in section 2.1, the scalar χ in Horndeski gravity is like an axion, in the sense that it enters the theory only through its derivative. In particular, therefore, it would not be natural to define different conformally-scaled metric frames (in the manner that one does with the dilaton in string theory), since that would break the manifest axionic shift symmetry of χ. Furthermore, all invariant polynomials constructed from ∂ µ χ with the metric and the Riemann tensor are regular on the horizon. For example, g µν ∂ µ χ ∂ ν χ is finite and non-zero on the horizon. (These properties can be seen from the fact that the vielbein components of the gradient of χ are finite everywhere, including on the horizon, since one just has E μ , with all other components vanishing, where E μ a is the inverse vielbein.) This supports the idea that these solutions admit a valid black hole interpretation, but at the price that the Wald entropy formula (1.5) no longer provides the complete expression for the entropy. However, the identity (3.12), and hence the first law of black hole thermodynamics, continues to hold, with the entropy being derived from the strict application of the Wald formalism. 
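A short worked check of the regularity statement above: for the ε = 0 planar solution one has f χ′² = β (as noted in section 3.2), so the scalar invariant is

g^{\mu\nu}\,\partial_\mu\chi\,\partial_\nu\chi = g^{rr}\,\chi'^{\,2} = f(r)\,\chi'^{\,2} = \beta\,,

which is finite (indeed constant) everywhere, including at the horizon, where f → 0 while χ′ itself diverges.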
Noether charge and mass of AdS planar black holes In the previous subsections, we described two different methods for calculating the entropy and mass of the Horndeski black holes, one based on the use of the Wald formula (1.5) for the entropy, and the other based on a more detailed consideration of the Wald formalism. In both these approaches, we did not use independent procedures to calculate the mass and entropy, but rather, we relied on the use of the first law of thermodynamics to obtain one from the other. Since the black-hole solutions are characterised by only one parameter, there is no non-trivial integrability check, in the sense that the right-hand side of the first law dM = T dS would be integrable regardless of whether the expression for the entropy was correct or not. The fact that the two approaches led to different results calls for an independent check on the calculation of the mass, or the entropy. Even though the mass and entropy obtained from the Wald formalism in section 3.2 seems to be more reasonable, the mass is determined through an integration of the first law, rather than directly, in this case. A question one can ask is whether the mass is indeed a conserved quantity. For the AdS planar black holes (i.e. the = 0 solutions), this question can be answered by means of a simple Noether calculation. For = 0, we rewrite the ansatz as The effective one-dimensional Lagrangian becomes where a prime denotes a derivative with respect to ρ. The Lagrangian is invariant under the global scaling a → λ 2−n a , b → λ b . (3.34) This global symmetry yields a conserved Noether charge In terms of the coordinates of the original ansatz (2.5), we have Substituting the AdS planar black hole solution into this Noether charge formula, we find Thus we see that Q N is the same as the mass obtained from the Wald formalism in section 3.2, up to some purely numerical constants. This supports the conclusion that the mass and entropy obtained in section 3.2 are valid, whilst the results in section 3.1 are not. Euclidean action An alternative method that has been used for calculating thermodynamic quantities for black hole solutions is by means of the quantum statistical relation first proposed for quantum gravity in [38]. Here Φ thermo denotes the thermodynamic potential, or the free energy, and I is the Euclidean action. The regularised Euclidean action was calculated for the = 1 Horndeski black hole in four dimensions in [35]. We have repeated that calculation, and obtained the same result (save for an overall factor of 2 discrepancy). However, the resulting expressions for mass and entropy are quite different from those in sections 3.1 or 3.2, and are given by Note that when β = 0, for which the black hole reduces to the standard Schwarzschild-AdS one, we get M = 1 2 µ and S = κπr 2 0 , as one would expect. It is clear that the mass suffers from the same shortcoming as the one we obtained from the Wald entropy formula in (3.5), in that it becomes a convoluted transcendental function of µ for non-vanishing β. (It is a different transcendental function from the one following from (3.5), however.) The calculation for the = 0 AdS planar black holes (2.16) is much easier, and can be straightforwardly carried out for a general spacetime dimension n. 
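Before turning to the explicit subtraction, it is worth recording the generic form of the Noether charge used above, since the displayed expression (3.35) was lost. Writing the infinitesimal scaling as δa = (2 − n)a, δb = b, and assuming the effective one-dimensional Lagrangian depends only on a, b and their ρ-derivatives, the standard Noether construction gives, up to overall normalisation,

Q_N = (2-n)\,a\,\frac{\partial L}{\partial a'} + b\,\frac{\partial L}{\partial b'}\,.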
The regularised Euclidean action can be defined by subtracting the action of the background µ = 0 vacuum from the action for the black hole itself, namely where g (0) µν and χ (0) are the background field obtained by setting µ = 0 in the black hole solution (2.16). We find Note that in this calculation, we have set ω n−2 = 1, so that the resulting extensive quantities are densities. Using the quantum statistical relation (3.38) and the thermodynamic first law (3.3), we then find that the free energy, mass, temperature and entropy for the = 0 black holes are given by These expressions also disagree, in this case by constant overall factors, with the = 0 results obtained in sections 3.1 and 3.2. Taken in isolation, it would be hard to make any judgment as to whether these expressions were trustworthy or not. Interestingly the generalized Smarr relation (3.6) is also satisfied. However, the = 1 results (3.39) for the mass and the entropy certainly raise questions about the validity of this calculation using the Euclidean action. There is another method that has been used in order to obtain a finite Euclidean action, by adding a surface term and a counterterm. Taking n = 4 dimensions as an example, the whole action is then given by where I GH is the standard Gibbons-Hawking surface term, and for = 0, the counterterm is given by The γ in the square root is the determinant of induced metric γ µν . With these combinations, the total action is the same as the result of regularization. For = 1, the counterterm is (3.45) and the value of the action has an additional term linear the imaginary-time period (i.e. inversely proportional to the temperature), in comparison to that of the regularized calculation above: The effect on the thermodynamics is that the entropy is unchanged, but the mass acquires an additive contribution in the spherically-symmetric = 1 solutions, independent of the parameter in the solutions. This is not surprising, since when = 1, the µ = 0 solution is not vacuum AdS spacetime, but instead a smooth soliton, which has a constant mass. In the earlier regularisation by subtracting the background, this constant energy was subtracted out. The question remains as to how one might reconcile the results for the entropy and the mass, as calculated from the regularised Euclidean action, with our previous, and different, results obtained using the Wald formalism. We do not have a definitive resolution to this puzzle, other than to suggest that because of the rather unusual features of the blackhole solutions in Horndeski gravity, it may be that the naive application of a subtraction procedure to obtain a regularised Euclidean action may be inherently ambiguous. In a somewhat related context, it was found in [45] that attempts to employ the Abbott-Deser method [46] to calculate the mass of asymptotically-AdS black holes foundered on ambiguities in the subtraction procedure in some cases, for solutions in gauged supergravities where scalar fields were involved. In the absence of a rigorous derivation of a valid subtraction scheme for the calculation of the Euclidean action, it seems that one could engineer different schemes that gave different results, with no guide as to which result should be regarded as the correct one. JHEP11(2015)176 4 Viscosity/entropy ratio One of the motivations for this paper was to study the viscosity/entropy ratio in Horndeski gravity. Having obtained a formula for the entropy of the black holes, we are now in a position to proceed. 
To calculate the shear viscosity of the boundary field theory, we consider a transverse and traceless perturbation of the AdS planar black hole, namely where the background solution is given by (2.11), (2.16) and (2.17). We find that the mode Ψ(r, t) satisfies the linearised equation For an infalling wave which is purely ingoing at the horizon, the solution for a wave with low frequency ω is given by Note that the constant parameter K is determined by the horizon boundary condition. The overall integration constant is fixed so that Ψ is unimodular asymptotically, as r → ∞. In order to study the boundary field theory using the AdS/CFT correspondence, we substitute the ansatz with the linearised perturbation into the action. The quadratic terms in the Lagrangian, after removing the second-derivative contributions using the Gibbons-Hawking term, can be written as with Note that P 3 = 1 2 P 2 . We then find that the terms quadratic in Ψ in the Lagrangian are given by The last term , enclosed in square brackets, vanishes by virtue of the linearised perturbation equation (4.2), and so the quadratic Lagrangian is a total derivative. The viscosity is JHEP11(2015)176 determined from the P 1 ΨΨ term, following the procedure described in [6,20]. Using this, we find that the viscosity is given by We have, for the planar black holes, 8) and the entropy that we derived in section 3.2 using the Wald formalism is given by We therefore find that the viscosity/entropy ratio is given by for the Horndeski black holes. 3 Note that κ and β are both positive. For reality, we must have − 4κ β < γ < 4κ β . (4.11) When β = 0, which turns off the scalar field, the ratio goes back to the universal value of 1/(4π). When γ > 0, the ratio is less than 1/(4π) and hence the bound is violated. For γ < 0, the ratio is greater than 1/(4π). Finally, we note that in terms of the original parameters of the theory (2.1), the viscosity/entropy ratio is given by Interestingly, the ratio is independent of the parameter κ. Conclusion Motivated by applications for the AdS/CFT correspondence, we studied the black holes in a theory of Einstein gravity coupled to a scalar field, including a non-minimal Horndeski term where the gradient of the scalar couples to the Einstein tensor. There are two types of static black holes in this Horndeski gravity. One of these is the usual Schwarzschild-AdS black hole, for which the scalar field is constant. Our focus is on the other non-trivial one-parameter family of static black holes, for which the scalar depends non-trivially on the radial coordinate. Although the scalar has a branch-cut singularity on the horizon, it is JHEP11(2015)176 axion-like and enters the theory only through a derivative. Furthermore, in an orthonormal frame, ∂ a χ is regular everywhere, both on and outside the horizon, and all invariants involving the scalar field are finite everywhere. We also demonstrated the uniqueness of these static black hole solutions in the theory. We studied the thermodynamics of the black holes and found three surprises. The first is that the standard Wald entropy formula (1.5) does not give the complete expression for the entropy of these black holes. This can be attributed to the fact that the derivation of the Wald entropy (1.5) requires that the scalar be regular on the horizon. In fact, the branch cut singularity of the scalar on the horizon implies that there is an extra contribution to the entropy. We studied the Wald formalism in detail, and exhibited the new contribution explicitly. 
It turns out that the Wald identity (3.12) continues to hold for these black holes, and so does the first law of black hole thermodynamics. The entropy, however, is no longer given by (1.5), but can be determined from the implementation of the Wald procedure. We further established, using a simple construction of the Noether charge derivable from the scaling symmetry of the planar black holes, that the mass of the AdS planar black hole, as we derived from the Wald procedure, is indeed a conserved quantity. The second surprise concerns the use of the quantum statistical relation E − T S = T I to calculate the thermodynamic parameters of the black hole solutions. In order to apply this method, it is necessary to calculate the Euclidean action I of the black hole solution. The problem is that a direct integration of the Euclideanised action yields a result that diverges at the upper end of the radial integration, and so it is necessary to adopt some regularisation procedure. We tried to apply two different such procedures. The first involved subtracting the diverging contribution of a background where the mass is set to zero from the diverging contribution from the black hole with non-zero mass. The other procedure involved adding a boundary counterterm. The two methods gave the same results for the mass and the entropy, but these results differed from those that we obtained by using the Wald formalism. The origin of this mismatch is not clear to us; it may be related to intrinsic ambiguities in the subtraction schemes that we used in order to regularise the divergences. Such ambiguities are possibly more likely in a theory such as Horndeski gravity, with its somewhat unusual features, and so regularisation schemes for calculating the Euclidean action that usually work in less exacting situations may need to be scrutinised more carefully here. The third surprise concerns the results in section 4 for the viscosity/entropy ratio. In wide classes of conventional theories with no higher-derivative terms in the Lagrangian, one finds a rather universal result that η/S = 1/(4π). Counter-examples to the universality of the ratio have been found, but for isotropic situations such as we have considered they are always associated with higher-derivative gravities, such as Gauss-Bonnet or more general Lovelock gravities. As far as we are aware, our findings for the black holes in the Horndeski theory we studied in this paper provide the first example of the violation of the η/S = 1/(4π) result in a theory whose Lagrangian is at most linear in curvature tensor. A word of caution about the use of the Wald formalism to calculate the entropy is perhaps appropriate here. If we consider Einstein-Maxwell theory as an example, the first law dM = T dS + ΦdQ for Reissner-Nordström black holes can be derived from the Wald JHEP11(2015)176 formalism by calculating δH ∞ and δH + , and using the fact that δH ∞ = δH + . The ΦdQ contribution can either enter in δH + alone, if one uses the gauge where the potential vanishes at infinity, or in δH ∞ alone, if one uses the gauge where the potential vanishes on the horizon, or else in both δH ∞ and δH + , if one uses some intermediate gauge where the potential vanishes neither at infinity nor on the horizon. In the first law, only the potential difference Φ ≡ Φ + − Φ ∞ contributes. 
If the gauge where the potential vanishes on the horizon is chosen, then δH + = T δS and so δH + /T is an exact differential, which can be integrated to give the entropy, while δH ∞ = dM + Φ ∞ dQ, and is not exact. In the gauge where the potential instead vanishes at infinity, δH ∞ = dM , which is an exact differential, while δH + = T dS + Φ + dQ, and so δH + /T is not exact. More complicated situations were encountered recently where asymptotically-AdS dyonically charged black holes were constructed in a four-dimensional gauged supergravity involving a scalar and a Maxwell field [47,48]. It was found that δH ∞ was non-exact, and hence non-integrable, even when a gauge where the electric and magnetic potentials vanished at infinity was chosen, because of a varying contribution from the asymptotic coefficients in the large-distance expansion of the scalar field. The first law of black hole (thermo)dynamics, involving the scalar contribution, could nevertheless be derived using the strict Wald formalism [47]. The results were later generalised to black holes in general Einstein-scalar theories [39,40], Einstein-Proca theories [41], and gravity extended with quadratic curvature invariants [43]. Analogous issues could in principle arise when considering δH + : it is commonly the case that δH + on the horizon can be expressed as T δS. In a theory such as Einstein-Maxwell, this is a gauge-dependent property as we discussed above, and in order to have δH + /T be an exact differential in this case one would need to work in the gauge where the electric potential vanished on the horizon. In most theories that have been studied, the entropy is simply given by S W defined by the Wald entropy formula (1.5). The widespread validity of the Wald entropy formula is related to the fact that typically, matter fields vanish on the horizon of a black hole (and Maxwell potentials can be set to zero by means of appropriate gauge choices). In the Horndeski gravity considered in this paper, however, the axion-like scalar χ has an unusual behaviour near the horizon and near infinity, and indeed we have already seen that δH + = T δS W . We nevertheless assumed that it was still the case that δH + = T δS, i.e. that δH + /T could be integrated to define an entropy function. That δH + /T is integrable is guaranteed in the one-parameter family of solutions considered in this paper, since all 1-forms in one dimension are exact. In a multiple-parameter black hole solution, however, there does not appear to be any guarantee, a priori, that δH + /T must be a total differential in a theory such as Horndeski gravity. The non-integrability of the sort that occurs in δH ∞ in the dyonic asymptotically-AdS black holes we discussed above might also, in principle, occur for δH + /T on the horizon, if not all the fields are strictly vanishing on the horizon. It would be interesting to study this further in more general solutions in theories such as Horndeski gravities. The findings in this paper indicate that Horndeski gravity, and its black hole solutions in particular, deserve further investigation both in their own right, and also in the context of the AdS/CFT correspondence.
Multicycle terahertz pulse generation by optical rectification in LiNbO$_3$, LiTaO$_3$, and BBO crystals We report multicycle, narrowband, terahertz radiation at 14.8 THz produced by phase-matched optical rectification of femtosecond laser pulses in bulk lithium niobate (LiNbO$_3$) crystals. Our experiment and simulation show that the output terahertz energy is greatly enhanced when the input laser pulse is highly chirped, contrary to a common optical rectification process. We find that this anomalous behavior is due to a linear electro-optic (or Pockels) effect, in which the laser pulse propagating in LiNbO$_3$ is modulated by the terahertz field it produces, and this in turn drives optical rectification more effectively to produce the terahertz field. This resonant cascading effect can greatly increase terahertz conversion efficiencies when the input laser pulse is properly pre-chirped with additional third order dispersion. We also observe similar multicycle terahertz emission from lithium tantalate (LiTaO$_3$) at 14 THz and barium borate (BBO) at 7 THz, 10.6 THz, and 14.6 THz, all produced by narrowband phase-matched optical rectification. Introduction Intense, singlecycle, broadband terahertz (THz) sources are essential for many applications including THz-driven acceleration of electrons and protons [1,2], molecular alignment [3], high harmonic generation [4], and material sciences [5]. In particular, femtosecond laser-based optical rectification (OR) in χ(2) nonlinear materials is considered to be one of the most efficient methods for energy-scalable THz generation [6]. OR can be highly effective when the group velocity of the laser pulse is matched to the phase velocity of the THz wave in the nonlinear medium, a condition called phase matching. As an OR-based THz source, lithium niobate (LN) is widely used due to its excellent material properties such as high nonlinearities (d33 = 168 pm/V at 1 THz) [7], high transparency at 0.4∼5 µm [8], and well-developed poling techniques [6]. For efficient phase matching in LN, tilted-pulse-front (TPF) schemes can be used to generate intense singlecycle THz pulses [7,9,10,11]. Multicycle narrowband THz sources are also of great interest owing to many emerging applications including waveguide-based electron acceleration [12], coherent X-ray generation [13], resonant pumping of materials [3], and narrowband spectroscopy [6]. Multicycle narrowband THz radiation is often produced by OR in periodically-poled lithium niobate (PPLN) crystals [14,15,16,17,18]. Cryogenically cooled PPLN crystals are also used to suppress strong THz absorption in LN, recently providing a laser-to-THz conversion efficiency of up to 0.1% [17]. Another approach is to drive OR with intensity-modulated laser pulses such that the produced THz waveform can follow the intensity envelope of the modulated laser pulse [19]. Other methods include transient polarization gratings [20], TPF planar waveguides [21], and cascaded second-order processes [22]. Recently, we have observed a new type of multicycle radiation at ∼15 THz emitted from a bulk LN crystal when irradiated by femtosecond laser pulses [23]. High-energy THz radiation up to 0.7 mJ has also been produced from a large diameter (75 mm) LN wafer with 80 TW laser pumping [24]. This type of radiation originates from a narrow phase matching condition naturally satisfied in between two phonon resonance frequencies in LN [23,24].
Previously, similar narrowband radiation around 15 THz was produced by difference frequency generation (DFG) in LN, in which two separate laser pulses with different frequencies are mixed to generate THz radiation at the difference frequency [25]. By contrast, our THz generation method is based upon OR of a single laser pulse. This OR process is expected to produce higher THz energy with reducing laser driver's pulse duration. However, certain LN crystals exhibit enhanced THz radiation when driven by highly chirped laser pulses [24], contrary to our understanding of OR. Moreover, in the previous experiments, the radiation spectrum was poorly characterized with THz bandpass filter sets [24] or incompletely studied [23]. In this paper, we present a comprehensive study of multicycle narrowband THz generation around 15 THz from LN crystals. Experimentally, we measure THz field autocorrelation and spectral power under various laser conditions, especially when the laser driver is chirped with third order dispersion. To explain our experimental observation, we carry out numerical calculations on THz generation and propagation in LN. We also describe experimental measurements of chirp-dependent narrowband THz generation from lithium tantalate and barium borate crystals. Experimental setup The schematic of our experimental setup is shown in Fig. 1 (a). Femtosecond laser pulses from a Ti:sapphire amplifier operating at 1 kHz are loosely focused onto a LN crystal by a lens with a focal length of 1.5 m. The laser (pump) beam size (3.4∼6.8 mm in 1/e 2 diameter) and fluence (3.2∼28.7 mJ/cm 2 ) are varied by translating the LN crystal along the beam propagation direction and/or controlling the laser energy. The pump pulse provides energies up to 2.6 mJ at a central wavelength of 800 nm with a 30 nm full-width half-maximum (FWHM) bandwidth as shown in Fig. 1(b). In our measurements, x-cut congruent LN crystals of 10 mm × 10 mm × 0.5 mm or 1 mm (thickness) are used for THz generation. The LN crystal is oriented such that its extraordinary axis is parallel to the laser polarization for maximal THz generation. To decouple output THz pulses from the copropagating pump beam, three optical windows coated with 180-nm-thick indium thin oxide (ITO) are placed after the LN crystal. The ITO window allows high optical transmission (>85% each) in the visible and near-infrared regions with strong reflection (∼80% each) at <15 THz [26]. Any pump leakage after the three ITO windows is completely blocked by a 280-µm-thick high-resistivity (>10 kΩ·cm) silicon (Si) window in the downstream beamline. The resulting THz pulses are characterized by a lab-built Michelson-type Fourier-transform infrared (FTIR) interferometer combined with a pyroelectric detector (PD) (Spectrum Detector Inc., API-A-62-THz). The incoming THz beam is split and recombined with a variable time delay by a 280-µm thick Si wafer in the interferometer and then focused by a 90 • off-axis parabolic (OAP) mirror onto the PD detector. The THz signal from the PD detector is fed into a lock-in amplifier that is phase-locked to an optical chopper modulating the input laser beam at 10 Hz. A delay scan in the interferometer provides a THz field autocorrelation from which the spectral power can be obtained by the Fourier transform. The pump pulse duration is varied by tuning the distance between the grating pair in the pulse compressor (see Appendix). This effectively changes the group delay dispersion (GDD) of the pump pulse. 
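To make the connection between GDD and pulse duration concrete, the sketch below evaluates the transform-limited duration quoted in the next paragraph and the usual Gaussian-pulse stretching formula. It assumes an ideal Gaussian spectrum, so the numbers are indicative rather than a fit to Fig. 1(c); in particular, the measured 67 fs at GDD = 0 reflects uncompensated higher-order dispersion that this sketch ignores, and the helper function name is ours.

```python
# Gaussian-pulse estimates used in the text (assumes an ideal Gaussian spectrum).
import numpy as np

c    = 2.998e8          # speed of light, m/s
lam0 = 800e-9           # central wavelength, m
dlam = 30e-9            # FWHM bandwidth, m

# Transform-limited FWHM duration: tau0 = 0.44 * lam0^2 / (c * dlam)  ->  ~31 fs
tau0 = 0.44 * lam0**2 / (c * dlam)
print("transform-limited duration: %.1f fs" % (tau0 * 1e15))

# FWHM duration of a Gaussian pulse after adding group-delay dispersion (GDD)
def stretched_fwhm(tau0_s, gdd_s2):
    return tau0_s * np.sqrt(1.0 + (4.0 * np.log(2) * gdd_s2 / tau0_s**2) ** 2)

for gdd_fs2 in (0, -800, 1500):
    tau = stretched_fwhm(tau0, gdd_fs2 * 1e-30)   # fs^2 -> s^2
    print("GDD = %6d fs^2  ->  %.0f fs" % (gdd_fs2, tau * 1e15))
```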
Figure 1(c) shows the pump pulse duration as a function of the grating distance. With a Gaussian spectral assumption, the Fourier transform-limited pulse duration is calculated to be τ = 0.44λ 2 0 /(c∆λ) ≈ 31 fs, where λ 0 = 800 nm and ∆λ = 30 nm are the central wavelength and bandwidth in FWHM, respectively. With GDD = 0 fs 2 , the pump pulse duration measured by a single-shot second-harmonic autocorrelator is about 67 fs in FWHM [dotted line in Fig. 1(c)]. This is longer than the transform-limited pulse duration of 31 fs. The difference is explained by uncompensated third order dispersion (TOD) and higher-order dispersion (HOD) of the pump pulse. Those result in an asymmetrical plot of the pulse duration as shown in Fig. 1(c). Figure 1(d) shows a typical THz intensity profile captured at the focus by a room-temperature microbolometer focal plane array (FLIR, Tau 2-336) [27,28]. It shows that the focused THz radiation is confined within a spot size of 160 µm in FWHM diameter. For a series of GDD values, a two-dimensional (2-D) spectral power plot (color scale) is obtained and shown in Fig. 2(c). Figure 2 clearly shows two types of THz radiation emitted from LN-broadband singlecycle emission at 0∼8 THz and narrowband multicycle emission around 15 THz. Interestingly, Fig. 2(c) shows that the narrowband emission is greatly enhanced with a properly stretched laser pulse duration (GDD = −800 fs 2 , 1,500 fs 2 ) whereas the broadband radiation is maximally produced with the shortest pulse duration (GDD ≈ 0 fs 2 ). The broadband THz emission has been previously observed and explained by non-phase-matched OR in LN through a second-order nonlinear χ (2) process [15,16,29]. This yields singlecycle THz pulses emitted from both the front and rear layers of LN with a thickness of one coherence length l c = λ THz /(2 |n g − n THz |) for each layer. Here n g = 2.3 is the optical group index of LN at 800 nm, n THz is the refractive index of LN at THz frequencies, and λ THz is the THz wavelength. At 1 THz, n THz = 5.1 [30,8] and λ THz = 300 µm gives l c = 53.5 µm. This dual THz pulse generation explains the fast modulations observed in the broadband THz spectrum in Figs. 2(b) and 2(c). They arise due to interference between two temporally separated THz pulses generated from the front and rear surfaces of the LN crystal. Characteristics of narrowband radiation at 15 THz The narrowband multicycle radiation in Fig. 2 peaks at 14.8 THz with a 0.94 THz FWHM bandwidth at GDD = 1,500 fs 2 . Its output energy dependence on the pump GDD is also measured and plotted in Fig. 3. Here the THz energy is measured by the pyroelectric detector (PD) along with a THz bandpass filter (BPF) providing a central frequency of 15 THz placed in front. This allows to measure only narrowband THz radiation around 14.8 THz. As shown in Fig. 3, the output THz energy peaks at two GDD ranges of 900∼1,600 fs 2 (positive chirp) and −1,200∼ −800 fs 2 (negative chirp). More interestingly, the positive GDD range yields more output THz energy with increasing pump energy and/or LN thickness. Furthermore, the output THz energy is abnormally suppressed at GDD ≈ 0 fs 2 , where the highest nonlinearity is expected due to the shortest pump pulse duration. Previously, similar results were observed and explained by THz screening and absorption by free charge carriers produced by multi-photon laser absorption in LN [11]. 
In our experiment, the nonlinear absorption coefficient due to free carrier absorption (FCA), αfc, is estimated to be 31.63 cm−1 at 14.8 THz under laser conditions of 470 GW/cm2 peak intensity and 800 nm wavelength [31,32]. This value, however, is much smaller than the intrinsic absorption coefficient α = 1,440 cm−1 at 14.8 THz in LN [23,24]. Therefore, three-photon absorption followed by FCA is not believed to cause the suppressed THz emission at GDD ≈ 0 fs2. Instead, the unusual GDD dependence can be explained by a THz-induced cascaded effect on the pump pulse that has nonzero TOD, as will be described in Section 5. Another interesting feature observed with the narrowband THz radiation is its energy scaling. Figure 4(a) shows the measured output THz energy emitted from the 1-mm-thick LN crystal as a function of the pump energy with a beam diameter of D = 3.4 mm, corresponding to Fig. 3(f). In phase-matched OR, the output THz energy is expected to increase quadratically with the pump intensity, i.e., eTHz ∝ |Ipump|2. This is indicated by the red-dotted line in Fig. 4(a). The observed THz energy, however, increases much faster than the theoretical prediction at pump energies exceeding 1.2 mJ. This unexpected behavior can also be explained by a THz-induced cascaded effect, as will be explained in Section 5. The resulting laser-to-THz conversion efficiency is shown in Fig. 4(b). The maximum THz output energy of ∼92.6 nJ and efficiency of 3.7 × 10−5 are achieved. This corresponds to a maximum field strength of 0.4 MV/cm at the focus, estimated from the measured energy, pulse duration (∼470 fs), and beam spot size (∼160 µm). Narrowband THz radiation from LT and β-BBO crystals Multicycle narrowband THz emission is also observed from lithium tantalate (LT) and beta-barium borate (β-BBO). These two nonlinear materials, like LN, are commonly used inorganic χ(2) nonlinear crystals with a trigonal structure of point group 3m. LT and β-BBO are also tested for GDD-dependent THz generation, and the result is shown in Fig. 5. Figure 5(a) shows THz autocorrelation signals obtained from a 0.5-mm-thick LT crystal at two different pump GDD values of 0 and −1,100 fs2. The corresponding spectra are shown in Fig. 5(b). From a GDD scan from −4,200 fs2 to 4,200 fs2, a 2-D plot of the THz spectrum (color scale) is obtained and displayed in Fig. 5(c). Figures 5(d)-5(f) show experimental data obtained with a 0.1-mm-thick β-BBO crystal. For both measurements, the pump energy fluence is fixed at 28.7 mJ/cm2. Clearly, both crystals exhibit multicycle THz waveforms and, consequently, narrowband THz emission. In the case of LT, its narrowband emission is centered at 14 THz, consistent with the refractive index [33] and phase matching condition for LT. Similar to the LN crystals tested before, LT shows both singlecycle (broadband) and multicycle (narrowband) radiation depending on the pump chirp condition. The spectral power 2-D plot (color scale) of β-BBO shown in Fig. 5(f) exhibits narrowband emission at 7 THz, 10.6 THz, and 14.6 THz. Our result is consistent with a previous study reporting narrowband emission at 4.3 THz, 7 THz, and 10.6 THz [34]. Interestingly, 4.3-THz emission is not seen in our experiment, possibly due to its weak spectral power. Instead, our experiment reveals new narrowband emission at 14.6 THz. We note that this observation was possible due to our FTIR-based detector's capability of measuring high-frequency THz emission beyond 10 THz.
Contrary to commonly used electro-optic sampling (EOS) methods, in our scheme the detection bandwidth is not limited by the laser pulse duration or by THz absorption/dispersion in the electro-optic (EO) material. Also, our detector is independent of the source and not affected by any pump chirp. This allows us to characterize the radiation spectrum without it being distorted or restricted by the pump GDD. Theoretical background The narrowband THz radiation observed in Figs. 2 and 5 is fundamentally characterized by phase-matched (ng = nTHz) OR. Here ng and nTHz are the group and refractive indices at the pump and THz frequencies, respectively. For example, Fig. 6(a) shows the refractive index nTHz of congruent LN as a function of frequency [8,35,36,37] (see Appendix). The optical group index ng = 2.3 at 800 nm is also plotted in Fig. 6(a) with a gray dotted line. It shows that the phase-matching condition, ng = nTHz, is satisfied at 14.8 THz in between two strong transverse-optical (TO) phonon resonance frequencies in LN (7.4 THz and 18.8 THz). These resonance frequencies are clearly shown by the absorption coefficient α plotted in Fig. 6(a) with a red solid line. Note that there are two additional phase-matched frequencies occurring at 8.3 THz and 19.3 THz. However, little or no emission is expected at those frequencies because of their strong absorption in LN. Figure 6(b) shows the effective nonlinear coefficient deff in the extraordinary direction of congruent LN [35,36]. It shows that deff reaches its local minimum value of 10 pm/V at 11 THz while peaking at 1,424 pm/V and 543 pm/V at the two phonon resonance frequencies, 7.4 and 18.8 THz, respectively. At frequencies <7.4 THz, deff asymptotically approaches 168 pm/V, which is consistent with the reported value at 1 THz in Ref. [7]. At 14.8 THz, deff = 82 pm/V, which is still sufficiently large to generate strong THz radiation [7,32]. We emphasize that this type of narrow phase matching can occur in many nonlinear crystals including LT and BBO. In general, the refractive index nTHz varies greatly between two phonon resonances, and there often exist one or more frequencies at which nTHz becomes equal to the optical group index ng. Absorption is also expected to be relatively low in between phonon resonance frequencies. In addition, the phase-matched frequency can be tuned by varying the pump laser wavelength, although the tuning range is narrow. For example, optical pumping at 0.4∼1.9 µm in LN can yield phase-matched emission at 14.4∼15.7 THz. At the phase-matched frequency of 14.8 THz in LN, the absorption coefficient reaches its minimum value of α = 1,440 cm−1, as shown in Fig. 6(a). This value, however, is still large enough to attenuate the emission significantly. In phase-matched OR, the effective length for maximal THz generation is generally given by Leff = (α/2 − αL)−1 ln [α/(2αL)] ≈ 160 µm, where αL = 0.0078 cm−1 is the laser absorption coefficient near 800 nm [24,38]. This means that only a layer of 160 µm thickness from the front surface of the LN crystal can maximally generate 14.8 THz radiation when the incident pump pulse is unchirped. In this case, a thin LN crystal is best suited for efficient THz generation, as demonstrated in our previous experiment [24].
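A quick numerical check of the two length scales used in the discussion (the non-phase-matched coherence length at 1 THz from section 3, and the effective generation length at 14.8 THz quoted above); both follow directly from the formulas and constants given in the text:

```python
# Length scales for THz generation in LN, using the constants quoted in the text.
import numpy as np

# Coherence length for non-phase-matched OR at 1 THz: l_c = lambda_THz / (2 |n_g - n_THz|)
n_g, n_THz, lam_THz_um = 2.3, 5.1, 300.0
l_c = lam_THz_um / (2.0 * abs(n_g - n_THz))
print("l_c(1 THz)  ~ %.1f um" % l_c)                     # ~53.5 um

# Effective generation length at the phase-matched frequency of 14.8 THz:
# L_eff = ln[alpha/(2 alpha_L)] / (alpha/2 - alpha_L)
alpha   = 1440.0      # THz absorption coefficient, 1/cm
alpha_L = 0.0078      # laser absorption coefficient near 800 nm, 1/cm
L_eff_cm = np.log(alpha / (2.0 * alpha_L)) / (alpha / 2.0 - alpha_L)
print("L_eff(14.8 THz) ~ %.0f um" % (L_eff_cm * 1e4))    # ~160 um
```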
For thicker ( L eff ) crystals as in the current experiment, negatively-chirped pump pulses are generally preferable as they can be compressed with propagation and effectively produce THz radiation from near the rear surface of the LN crystal. Interestingly, our experiment shows that both positive and negative chirps yield enhanced THz emission as shown in Figs. 2 and 3. So it is the stretched pulse duration, not chirp, that matters in our narrowband THz generation. In general, too long pulses are not good as they do not provide enough bandwidths to generate 14.8 THz radiation by OR. For example, at GDD = 1,600 fs 2 that yields enhanced 14.8 THz radiation, the corresponding pump pulse duration is estimated to ∼130 fs in FWHM from Fig. 1(c). This is about twice longer than the period of 14.8 THz radiation, 68 fs. This implies that the pump pulse must have certain intensity modulations (or pulse splitting) within the pulse envelope to provide a sufficient bandwidth to generate 14.8 THz. Such modulations can be initially made by applying a proper combination of GDD and TOD onto the pump pulse. Pump intensity modulations can also arise and be amplified from a nonlinear process. For instance, a pump pulse propagating through LN can be distorted by the THz field it produces via a linear electro-optic (or Pockels) effect [39,40]. This is a second-order χ (2) nonlinear process and can lead to spectral shifts, broadening, and modulations of the pump pulse [10,39,38]. Thus, in order to explain the peculiar GDD dependence of the narrowband radiation, one needs to include the Pockels effect, as well as laser-THz dispersion and absorption in conducting numerical simulations. Theoretical model In order to simulate THz generation by OR, we first present the electric field of a laser (pump) pulse by using a Gaussian envelope shape given by where ω 0 is the center frequency, ∆ω = 4 ln 2/∆t is the FWHM spectral bandwidth, and ∆t is the pulse duration in FWHM. The amplitude E 0 is determined by the pump fluence F as To account for chirped pump pulses, the spectral phase of the pump pulse is expanded in a Taylor series about ω 0 as where the fourth and higher-order terms are neglected for the sake of simplicity. The input pump pulse is obtained in the frequency domain as where F T denotes the Fourier transform. Then we solve one-dimensional (1-D) coupled forward Maxwell equations (FME) [41,29,24] self-consistently for both THz and optical pump pulses in the frequency domain as where E T and E P are the electric fields of THz and optical pump, respectively, which propagate in the coordinate ξ = z − ct/n g moving at the group velocity of the pump. The first terms on the right hand side of both equations (α terms) correspond to absorption of both fields. The second terms correspond to material dispersion D (ω T , ω) = ω (ω T , ω) [n (ω T , ω) − n g ] /c. The third term in Eq. (5) represents the second-order nonlinear polarization due to OR, a source term for THz radiation. The third term in Eq. (6) describes the Pockels effect on the pump pulse induced by the produced THz field. The last term in Eq. (6) corresponds to self-phase modulation (SPM) of the pump pulse via the Kerr effect, where the third-order nonlinear susceptibility χ (3) is derived from the nonlinear refractive index n 2 = 3χ (3) / 4c 0 n 2 0 . Here n 2 = 10 −6 cm 2 /GW is used in our calculation [42]. In the simulation, we ignore THz-induced pump modulation via the χ (3) nonlinear process. 
This is because the χ (3) -based pump phase shift, ∆ϕ (3) , is much smaller than ∆ϕ (2) induced by the Pockels effect, i.e., ∆ϕ (3) /∆ϕ (2) 1 [39]. The numerical integrals in Eqs. (5) and (6) are solved by a 4th-order Runge-Kutta method with spatial resolution of 500 nm in order to achieve required numerical convergences. Note that the model used here considers only 1-D space along the propagating direction z. This is justified for a large pump beam size, where any transverse beam effects such as self-focusing and diffraction can be ignored. In the simulation, the input pump pulse is assumed to be Gaussian with a 30-nm FWHM spectral bandwidth at 800 nm to be consistent with our experimental condition. Experimentally, nonzero third order dispersion (TOD) arises from two sources; one is from the compressor's tuning for GDD control (see Appendix). The other one comes from the amplifier itself but not properly compensated by the compressor. This residual TOD is implicitly shown in Fig. 1(c) by the asymmetric slope. Here the total TOD can be expressed as TOD = TOD g + TOD i , where the subscripts g and i denote "grating" and "initial (residual)". In the simulation, we used TOD g ≈ −2(fs) · GDD and TOD i = 3,800 fs 3 for our compressor system (see Appendix). GDD-dependent THz spectrum We compare our simulation results with the experimental ones shown in Fig. 2. Figure 7(a) shows a simulated THz spectral power plot (color scale) obtained from a 1-mm-thick LN at laser fluence of 13.6 mJ/cm 2 . Clearly, it reproduces multicycle narrowband THz radiation around 15 THz. Also, it is most efficiently produced at GDD = 1,600 fs 2 and −800 fs 2 . At GDD ≈ 0 fs 2 , it is greatly suppressed while the broadband radiation at 0∼8 THz is maximally enhanced. This is in good agreement with our experimental results. For comparison, we repeated the simulation with a chirped pump with TOD i = 0 fs 3 to better understand the role of TOD in THz generation. The result is shown in Fig. 7(b). Interestingly, it shows the narrowband emission singly peaks at GDD = −800 fs 2 , and the maximal THz energy increases by 18.9% compared to Fig. 7(a). This does not agree with our experiment but is consistent with a general OR process, in which the radiated THz energy increases with decreasing pump pulse duration at fixed laser energy. Here a negatively chirped pump pulse is more favorable for THz generation because it can be compressed as it propagates through LN that possesses positive material dispersion. In addition, we repeated the simulation without including the Pockels effect but keeping the original TOD-the result is not shown in Fig. 7. In this case, maximal THz radiation occurs at GDD = −400 fs 2 , but the generated THz energy decreases to 62% compared to the first simulation result shown in Fig. 7(a). All these suggest that the strange GDD dependence of 15 THz observed in our experiment can be attributed to a combined action of both the Pockels and TOD effects. Evolution of THz and laser fields with propagation For a detailed understanding of multicycle THz generation, we simulated the evolution of THz electric fields and laser intensity envelopes as they propagate through LN. Here all simulation parameters are the same as in Fig. 7(a). First, Figure 8(a) shows the cumulative THz energy (color scale) plotted as a function of the initial GDD (vertical axis) and the propagation distance z (horizontal axis). Here the pump TOD varies as TOD = −2(fs) · GDD + 3,800 fs 3 . 
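For reference, the total TOD implied by this relation at the two GDD values examined next is easily evaluated (the helper name is ours):

```python
# Total third-order dispersion used in the simulations: TOD = -2 fs * GDD + 3,800 fs^3
def tod_total(gdd_fs2, tod_i_fs3=3800.0):
    return -2.0 * gdd_fs2 + tod_i_fs3

for gdd in (-800.0, 1600.0):
    print("GDD = %6.0f fs^2  ->  TOD = %6.0f fs^3" % (gdd, tod_total(gdd)))
# GDD = -800 fs^2 gives TOD = 5400 fs^3; GDD = 1600 fs^2 gives TOD = 600 fs^3
```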
Two line-outs at GDD = −800 fs 2 and 1,600 fs 2 are plotted in Fig. 8(b). In the case of GDD = −800 fs 2 , the generated THz energy slowly increases and then decreases beyond z = 0.6 mm. However, with GDD = 1,600 fs 2 , the energy noticeably increases after z = 0.3 mm. Figure 8(c) shows the THz electric fields (black lines) calculated at z = 0, 0.3, 0.6, and 0.9 mm with an initial GDD value of 1,600 fs 2 . Also co-plotted (red lines) are the temporal derivatives of the pump intensity profiles, −dI (t, z) /dt. For reference, the input pump intensity profile I(t) is shown (blue line) at z = 0 mm. Clearly, it exhibits damped intensity modulations on its tail due to nonzero GDD and TOD. Initially, this type of intensity modulations is not best suited for multicycle THz generation because its oscillation frequency is chirped and not fully matched to 14.8 THz. However, with a propagation, the pump envelope becomes synchronously modulated by the co-propagating 14.8 THz field via the Pockels effect. This results in a series of pump pulses (pulse splitting) separated by the THz period, which in turn drives OR resonantly to generate multicycle 14.8 THz radiation. This is evidently shown by the time derivative of the pump intensity envelope in Fig. 8(c) (red lines). Its Fourier spectral power at various z is plotted in Fig. 8(d). At z = 0.9 mm, it peaks at ∼15 THz. It also produces two side bands. The left one is responsible for singlecycle broadband THz radiation at <10 THz, whereas the right (weak) one is believed to be the source of 20∼23 THz shown in Fig. 7(a) although it was not observed in our experiment. Also, the input I 0 (ω) and transmitted I t (ω) pump spectra are computed and plotted in Fig. 8(e) along with experimentally measured ones. The transmitted one corresponds to a 1-mm-thick LN crystal pumped at 22 mJ/cm 2 with GDD = 1,500 fs 2 . As shown in Fig. 8(e), both simulated and measured pump spectra show spectral modulations and small frequency shifts. For negative GDD values, blue-shifted spectra are observed (not shown here). Figure 9 shows the simulated output energy of multicycle THz radiation as a function of the pump GDD for 0.5-mm and 1-mm thick LN crystals. It shows that more output THz energy is produced at negative pump GDD values with a thinner (0.5 mm) LN. With increasing laser fluence and LN thickness (1.0 mm), however, the peak moves to the positive GDD side, consistent with our measurement in Fig. 3. Also, our simulation provides a laser-to-THz conversion efficiency of 2.1 × 10 −5 from a 1-mm-thick LN pumped at 13.6 mJ/cm 2 . This is in reasonable agreement with our experimental value ∼10 −5 obtained under similar laser fluence conditions in Fig. 4(b). Conclusion In conclusion, we have demonstrated efficient multicycle narrowband THz generation at 14.8 THz from bulk LN crystals by using chirped optical pump pulses. The generation mechanism is explained by phase-matched OR naturally occurring in between two phonon resonance frequencies in LN. In our experiment, we have observed enhanced multicycle THz emission when the pump pulse is highly chirped. This anomalous behavior is also observed in our numerical simulations and explained by resonant intensity modulations of the pump pulse by self-produced THz fields through the Pockels effect. The modulated pump pulse can in turn produce multicycle THz radiation efficiently with propagation. This cascaded effect becomes highly efficient when the pump pulse is pre-modulated with proper second and third order dispersion. 
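The role of the intensity modulation can be illustrated with a toy calculation (assumed parameters, not the full simulation): in the standard OR picture the radiated field follows −dI/dt, so a pump envelope modulated at the 14.8 THz period produces a −dI/dt spectrum peaked at that frequency, as in Fig. 8(c)-(d).

```python
import numpy as np

# Toy illustration: an intensity-modulated pump envelope drives OR through -dI/dt, so a
# modulation at the THz period appears as a spectral peak of -dI/dt at that frequency.
# The 400 fs envelope and the 50% modulation depth are assumed example values.

dt = 0.5                                    # time step [fs]
t = np.arange(-4096, 4096) * dt             # [fs]
f_thz = 14.8e-3                             # 14.8 THz expressed in 1/fs
envelope = np.exp(-4 * np.log(2) * t**2 / 400.0**2)            # ~400 fs FWHM stretched pump
I = envelope * (1.0 + 0.5 * np.cos(2 * np.pi * f_thz * t))     # modulated intensity profile

drive = -np.gradient(I, dt)                 # OR source term ~ -dI/dt
spec = np.abs(np.fft.rfft(drive))**2
freqs_thz = np.fft.rfftfreq(t.size, d=dt) * 1e3                # 1/fs -> THz

mask = freqs_thz > 5.0
peak = freqs_thz[mask][np.argmax(spec[mask])]
print(f"-dI/dt spectrum peaks near {peak:.1f} THz")  # close to 14.8 THz within the grid resolution
```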
We also report the first demonstration of multicycle THz pulse generation at 14 THz from LT and 14.6 THz from β-BBO crystals. This new type of narrowband phase matching scheme is universal and can be applied to many nonlinear materials with potential to provide robust, efficient, and tabletop multicycle THz sources. Dispersion control in a dual grating compressor In this experiment, the laser pulse duration is controlled by adjusting the separation between a pair of diffraction gratings in the pulse compressor. Right after the compressor, the second-order and third-order spectral phases of the laser pulse are given by [43] where L is the perpendicular distance between the two parallel gratings, θ in is the incident laser angle, and d is the grating groove spacing. In our compressor system, we use 1,500 grooves/mm gratings at θ in = 58.4 • around a central wavelength of λ = 800 nm. The second-order φ (2) and third-order φ (3) are generally referred to GDD and TOD, respectively. Figure 10 plots how GDD and TOD vary as a function of the grating separation L in our pulse compressor. For simplicity, Eq. (8) can be rewritten as TOD = A · GDD, where A is a constant that depends on both the laser wavelength and incident angle, i.e., A = A(λ, θ in ). In our case, it is estimated to A ≈ −2 fs, and the shortest pulse duration is achieved with GDD = −1.229 ps 2 . Thus, a "negative" or "positive" chirp is defined when GDD becomes less or greater than −1.229 ps 2 , respectively. 7.2 Calculation of (ω) and d eff (ω) of lithium niobate The complex dielectric function (ω) and nonlinear coefficient d eff (ω) of congruent LN are given by [37,36] ( where S j , ω j , and Γ j are the oscillator strength, the resonance frequency, and the width of the jth transverse-phonon mode; ∞ = 4.6 is the frequency-independent bound electronic dielectric function; d e and d Qj are the electronic and ionic nonlinear coefficients, respectively. The real (n) and imaginary (k) parts of the square root of the complex dielectric function, (ω) = n + ik, are related to the refractive index n (ω) = n and absorption coefficient α (ω) = 2kω/c. All parameter values necessary to calculate Eqs. (9) and (10) are listed in table 1. TOD effects on multicycle THz pulse generation To investigate the TOD effects, we repeated the simulation with various pump TOD values. In our experiment and simulation, the pump TOD varies with GDD as TOD = TOD g + TOD i = −2(fs) · GDD + TOD i . Figure 11 shows 2-D plots of narrowband THz energy (color scale) obtained with TOD i of −4000, −2000, 0, 2000, and 4000 fs 3 . The pump fluence is set to 13.6 mJ/cm 2 . For comparison, Fig. 8(a) is obtained with TOD i = 3,800 fs 3 . As shown in Fig. 11, the output THz energy strongly depends on both GDD and TOD. With TOD i = 0 fs 3 , efficient THz generation occurs near GDD = 0 fs 2 where the pump pulse duration remains relatively short. In this case, the energy rapidly increases but also quickly drops due to large THz absorption. Due to normal material dispersion in LN (368 fs 2 /mm at 800 nm), more negative GDD is necessary to keep the pump pulse duration short with increasing propagation distance (or LN thickness). This is why the color plot in Fig. 11(c) has a single stripe with a negative slope. With large TOD i values (either positive or negative), optimal THz generation occurs with relatively large GDD values (positive or negative) depending on the TOD i sign and the propagation distance z. 
In this regime, the pump intensity envelope can be pre-modulated by a proper combination of GDD and TOD, and the modulation can be resonantly amplified through the cascaded Pockels effect.
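For reference, the grating-pair dispersion quoted in the Appendix can be reproduced with the standard double-pass (Treacy) expressions, which we take to be the form of Eqs. (7) and (8). With the stated 1,500 grooves/mm gratings at θ_in = 58.4° and λ = 800 nm, the ratio TOD/GDD evaluates to about −1.9 fs, consistent with the A ≈ −2 fs used in the simulations; the grating separation below is an arbitrary example value.

```python
import numpy as np

# Standard double-pass grating-pair (Treacy) dispersion, assumed to correspond to
# Eqs. (7)-(8); grating parameters follow the text (1,500 gr/mm, 58.4 deg, 800 nm).

c = 2.99792458e8             # m/s
lam = 800e-9                 # m
d = 1e-3 / 1500              # groove spacing [m]
theta_in = np.deg2rad(58.4)
L = 0.30                     # perpendicular grating separation [m] (arbitrary example)

s = lam / d - np.sin(theta_in)
fac = 1.0 - s**2
gdd = -(lam**3 * L) / (np.pi * c**2 * d**2) * fac**(-1.5)             # [s^2]
tod = -gdd * (3 * lam / (2 * np.pi * c)) * (1 + (lam / d) * s / fac)  # [s^3]

print(f"GDD = {gdd*1e30:.3e} fs^2, TOD = {tod*1e45:.3e} fs^3, "
      f"A = TOD/GDD = {tod/gdd*1e15:.2f} fs")
# For these grating parameters A ~ -1.9 fs, close to the A ~ -2 fs quoted in the text.
```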
7,420
2020-05-23T00:00:00.000
[ "Physics" ]
A Multinomial Theorem for Hermite Polynomials and Financial Applications Different aspects of mathematical finance benefit from the use Hermite polynomials, and this is particularly the case where risk drivers have a Gaussian distribution. They support quick analytical methods which are computationally less cumbersome than a full-fledged Monte Carlo framework, both for pricing and risk management purposes. In this paper, we review key properties of Hermite polynomials before moving on to a multinomial expansion formula for Hermite polynomials, which is proved using basic methods and corrects a formulation that appeared before in the financial literature. We then use it to give a trivial proof of the Mehler formula. Finally, we apply it to no arbitrage pricing in a multi-factor model and determine the empirical futures price law of any linear combination of the underlying factors. Introduction Hermite polynomials are widely used in finance, for various purposes including option pricing and risk management.Madan and Milne [1] have built a framework applying functional analysis results to the particular case of Hermite polynomials and inferred pricing formulas for general payoffs expressed as linear combinations of Hermite polynomials.They applied their framework to the simple case of calls to determine the implicit basis prices in the market data and imply an empirical futures price law.More recently, a series of papers have developed closed-form series expansions for various models: Tanaka, Yamada and Watanabe [2] developed approximations of the prices of some interest derivatives; Schloegl [3] adapted this type of expansions to multiperiod models.On the other hand, Buet-Golfouse and Owen [4], Voropaev [5], and Owen et al. [6] applied the Mehler formula and multivariate Hermite expansions to the allocation of risk measures in a portfolio of financial instruments. The aim of this paper is to derive a theoretical framework that underlies many usages of Hermite polynomials in finance.In particular, the first main result of this paper is to have established a link between the probability distribution of the underlying factor and the empirical prices of Hermite functions.The second main result is a multinomial expansion theorem for Hermite polynomials (and its extensions).Both provide a solid foundation to derive the no-arbitrage price of a contingent claim stemming from a linear combination of factors. The article is organised as follows: in the first section we state some basic facts about univariate and multivariate Hermite polynomials; the second section is devoted to the justification of expansions on the basis of Hermite functions and demonstrates the link between implicit prices of Hermite polynomials and the probability distribution of the underyling assets under the forward probability measure; the third section states and proves a multinomial theorem for Hermite polynomials with extensions and examples provided in the fourth and fifth sections; the sixth and final sections are dedicated to the application of the multinomial theorem for Hermite polynomials to pricing under no-arbitrage.Finally, empirical applications of the described methodology can be found in [4] and [6]. 
A Few Facts about Hermite Polynomials Our objective is not to give a full account of the literature on Hermite polynomials but simply to recall some definitions and properties (see Abramovitz and Stegun [7] for more information).Let ( ) Explicit and inverse explicit expressions are available for Hermite polynomials: for all n ∈  and x ∈  , the following identities hold: N-multivariate Hermite polynomials are usually defined as the product of N univariate Hermite polynomials.Let us first clarify some notations used in the rest of the paper:  and N  , .,. N is the Euclidian scalar product in N  , while !n refers to the generalised factorial, i.e. ( ) He n η is defined as The orthogonality property can readily be adapted to the multivariate case component by component: An orthonormal basis  for H is given by the polynomial functions (also sometimes called "Hermite functions") Using standard arguments in functional analysis, an arbitrary claim Ψ in H may be expressed in the basis  as where the coefficients ( ) a n are obtained by the Hilbertian inner product To summarise, we have built an orthonormal basis in which to decompose functions that are square-integrable against the standard N-dimensional Gaussian distribution but have actually made no assumption on the distribution of the vector of factors η .Indeed, this is the subject tackled by the following section. Implied Prices and Probability Distributions In this section, we demonstrate the link between two notions that are used separately in the literature: the implicit prices of Hermite polynomials (as in Madan and Milne (1994), where the payoff is expanded in Hermite polynomials) and the risk-neutral distribution of the vector of factors η (see Yamada and Watanabe [2] and Schloegl [3] where it is the factors' density that is expanded in Hermite polynomials and not the payoff as such). We consider a financial market on a period [ ] 0,T with T < +∞ : ( ) where N Ω = is the uni- verse,  the chosen σ -algebra (assumed here to be the Borelian tribe) and P  the market's probability mea- sure (a priori, it is not necessarily a risk neutral measure, as we can choose it to be the physical measure).We denote by ( ) ( ) , 0 r t t ≥ the risk-free rate and ( ) T B 0, the related zero-coupon with time horizon T. Note that we consider this simple framework to lay out the assumptions and theorems, but it could be adapted to a multiperiod setting.In the definition below, we summarise the key aspects of a complete market (see Portait and Poncet [8]). Proposition 1.A self-financing strategy is admissible if its terminal value is a random variable whose second moment is well-defined (i.e., it is square-integrable), a contingent claim is attainable if there exists a selffinancing strategy whose terminal value is equal to the contingent claim almost surely (in particular, it has to be square integrable), and the market is complete if all contingent claims are attainable.A system of prices V is an application from the set of contingent claims to  and it is said to be viable if it is compatible with the noarbitrage condition: in particular, it is a linear form. 
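A quick numerical check of the machinery above can be done with the probabilists' Hermite polynomials He_n (numpy's hermite_e family): the orthogonality relation E[He_m(X)He_n(X)] = n! δ_mn for X ~ N(0,1), the orthonormal Hermite functions he_n = He_n/√(n!), and the expansion of a square-integrable claim Ψ with coefficients a_n = E[Ψ(X) he_n(X)]. The call-type payoff below is only an assumed example.

```python
import numpy as np
from numpy.polynomial import hermite_e as H
from math import factorial, sqrt

# Gauss-Hermite quadrature for the weight exp(-x^2/2); normalised to E[.] under N(0,1).
x, w = H.hermegauss(120)
w = w / np.sqrt(2 * np.pi)

def He(n, t):                          # probabilists' Hermite polynomial He_n at t
    return H.hermeval(t, [0.0] * n + [1.0])

# Orthogonality: E[He_m(X) He_n(X)] = n! * delta_{mn}
for m, n in [(2, 2), (3, 3), (2, 3)]:
    print(m, n, np.sum(w * He(m, x) * He(n, x)), "vs", factorial(n) * (m == n))

# Expansion of an (assumed) claim in the orthonormal Hermite functions he_n = He_n/sqrt(n!).
payoff = lambda t: np.maximum(t - 0.5, 0.0)
N = 20
a = [np.sum(w * payoff(x) * He(n, x)) / sqrt(factorial(n)) for n in range(N)]
recon = sum(a[n] * He(n, 1.2) / sqrt(factorial(n)) for n in range(N))
print("Psi(1.2) =", payoff(1.2), " truncated expansion ~", recon)   # approximately recovered
```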
From now on, we assume the market to be complete and to satisfy the no arbitrage condition and consider P  to be the risk-neutral measure (and to be absolutely continuous with respect to the Lebesgue measure).If he n η , which can be seen as simpler contingent claims.Using the linearity of the price functional V and Cauchy-Schwarz inequality (see Theorem 2.2 in Madan and Milne [1]), this finally yields the market value of he n η .Theorem 1.Under the assumption that V is continuous there exists a unique ( ) Proof.We already know that V is a linear form on ( ) (which is a Hilbert space) and under the theorem's assumption, it is also continuous.Hence, using the Riesz representation theorem (see Brezis [9]), we can infer the existence of a unique ( ) □ We now turn to the probability density function P  and its (unique) Radon-Nikodym derivative µ with respect to the reference measure P defined as the N-variate standard Gaussian distribution in the previous section, that is: d d : d so that, finally, we have Definition 1.Under the same assumptions, ( ) ( ) ( ) η is called the futures price law of η with respect to the probability measure P. The meaning of the futures price law can be derived as follows: rewriting [ ] of the payoff and the empirical prices law (which makes sense if the latter is square integrable).We now give a theorem linking the futures price law, prices of the basis elements and Hermite expansions to translate this observation in rigorous terms. Theorem 2. The following statements are equivalent: i) There exists a sequence Remark 1.This result is a slightly different view of Madan and Milne's Theorem 4.1 [1] because our set of assumptions is minimal and it was derived following the path of the Riesz representation formula.In particular Madan and Milne's assumption that ( ) d µ η is uniformly bounded above and below implies that ( ) Clearly, ii) implies iii).To prove that iii) implies ii), we apply Lemma 5.1.in Ch. 5 of Brezis [9]: the sequence , its components are orthogonal to each other and Then, noting It now remains to prove that i) is equivalent to iii).iii) implies i) is a simple consequence of the inner product in a Hilbert space: for any contingent claim Ψ that is also in . Hence, λ must equal λ  .□ This theorem thus shows that under mild conditions (i.e. that the probability P  is not too "different" from P) the futures price law can be expanded in the basis of Hermite functions and is such that its coefficients are the prices of the Hermite functions under the risk neutral measure.But so far, we have simply considered V from a theoretical perspective and since we want to prove the link between our results and Yamada and Watanabe's expansions in terms of the density of the factors η , we can express it directly as where r is the risk-free rate. Introducing the risk-neutral T-forward measure T P  , the following holds: where ( ) 0, B T is the price of the zero-coupon of horizon T (see [8] for details).A first but important remark is that if r is assumed to be bounded, then , = ,  so that we can consider contingent claims under either probability measure.As in Tanaka et al. 
[2], the assumption is made that the probability distribution g of η under the T-forward measure can be expressed as A sufficient and necessary condition for g to be a valid density function is to have Schloegl [3] discusses ways to ensure that the second condition is met in practice when the summation is taken over a finite number of Hermite functions.Now, using the pricing formula under the T-forward measure When the various assumptions of the theorems above are verified, it comes whence the following theorem linking the futures price law and the T-forward probability density function of the factors holds.A direct application of this theorem is the determination of price elements ( ) π n from the moments of the distribution g and vice-versa.For the sake of clarity we consider the case 1 in the rest of the section, but the results can easily be extended to the multivariate case. Lemma 1. Let n µ denote the th n moment of the distribution g: ( ) ( ) ( ) ( ) Proof.It suffices to use the explicit and inverse explicit formulas and perform some simple algebra to obtain both results.□ Since in the financial framework considered so far ( ) ( ) ( ) , all moments of the distribution g can be implied from the prices of the orthonormal basis  .Now that the foundations of the framework have been laid, we can move to another result, namely the Hermite multinomial theorem. Factor Models and the Hermite Multinomial Theorem Considering several factors at the same time and linear combinations of those is at the core of many financial models: Fama and French's three-factor model for asset returns, Brennan and Schwarz' two-factor model, Langestieg's multi-factor model for interest rates or the multi-factor Merton-Vasicek model for example.Supposing that we have a financial instrument depending on a linear combination , N β η of the original factors, we would like to expand this instrument in the basis of Hermite functions ( ) he n η : this has implications in terms of pricing and risk management as the factors can represent some macroeconomic variables that one might wish to stress.This section therefore states and proves a multinomial theorem for Hermite polynomials and corrects a previous expansion given by Voropaev in [5]. Let us start by considering the example of a two-factor model, i.e. He β η β η β η β η β η β η Let us now compute separately: The condition 2 We can now proceed to a general version of the theorem.This result did not have a general statement and proof widely available, but given its simplicity, it might have been derived in a different context. Remark 2. The theorem can be restated in a more condensed form and in terms of Hermite functions as ( ) We offer two proofs of the result, one based on the repeated application of the recursion property (see the Appendix) and the other on the generating function of Hermite polynomials to show how powerful and different these two tools are for analysing relationships involving Hermite polynomials.They offer different insights in the manipulation of Hermite polynomials and are a good exercise for the reader. Let us now move to a demonstration based on the exponential generating function. Proof.We have that ( ) ( ) Hence, comparing the first and the last lines of this equation we must have ( ) which yields the result.□ To show how powerful this simple tool is, we provide some direct extensions and examples in the next two sections. 
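The multinomial theorem, as we read it, states that under the factor loading condition Σ_i β_i² = 1, He_n(⟨β, η⟩) = Σ_{k_1+···+k_N = n} n!/(k_1!···k_N!) Π_i β_i^{k_i} He_{k_i}(η_i); this follows directly from the exponential generating function, exactly as in the second proof. A short numerical check (the weights and evaluation point are assumed example values):

```python
import numpy as np
from numpy.polynomial import hermite_e as H
from math import factorial
from itertools import product

def He(n, t):
    return H.hermeval(t, [0.0] * n + [1.0])

beta = np.array([0.6, 0.3, np.sqrt(1 - 0.6**2 - 0.3**2)])   # satisfies sum(beta^2) = 1
eta = np.array([0.7, -1.1, 0.4])                             # arbitrary evaluation point
n = 5

lhs = He(n, beta @ eta)
rhs = 0.0
for k in product(range(n + 1), repeat=len(beta)):
    if sum(k) != n:
        continue
    coef = factorial(n) / np.prod([factorial(ki) for ki in k])
    rhs += coef * np.prod([beta[i]**k[i] * He(k[i], eta[i]) for i in range(len(beta))])

print(lhs, rhs)    # the two values agree up to floating-point error
```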
Two Extensions of the Multinomial Theorem It is possible to easily extend this result to multivariate Hermite polynomials and to weights which do not respect the factor loading condition. Looking first at the case of multivariate Hermite polynomials, the idea is to consider a linear combination of multivariate factors ( ) ( ) Then the following equality holds: η the vector of the th i coordinates of the factors. Proof.It suffices to apply the multinomial theorem for Hermite polynomials to each of the univariate Hermite polynomials ( ) ( ) Another extension, perhaps more important for practitioners, is to consider the case where the factor loading condition is not verified.For the sake of simplicity, let us go back to the univariate case and suppose that , ).We can state the general multinomial theorem for Hermite polynomials as follows: Theorem 6.Under the assumption that , 0 N > β β (i.e.0 ≠ β ), the following identity is checked: In the proof, we make use of the following lemma from Schloegl (2013) [3]: Proof.(Of the theorem) We can rewrite ( ) , , We derive the following equality from the intermediary lemma: □ Remark 3. In the same vein, it is also possible to infer a similar but even more general result for ( ) where c ∈  and , 1 N ≠ β β and even extend it to the multivariate case; we leave the computational details to the interested reader. Revisiting the Orthogonality Property and the Mehler Formula Let us revisit the orthogonality property, but this time in presence of correlation.Our aim is to compute , where X and Y are two standard Gaussian random variables with correlation coefficient ρ .To do so we prove the following theorem. Theorem 7.For any , the following identity is true: He Y He X He y He x x y y x n with , n m δ the Kronecker symbol and 2 φ the bivariate Gaussian probability distribution function. Proof.Another way of expressing this double integral is to write it as Using the binomial formula, which is a particular case of the multinomial formula that we have proved, we have The "correlated orthogonality" property has been proved.□ Based on this simple observation, a simple and elegant proof of the Mehler formula can be given: Corollary 1. (Mehler formula) The bivariate normal probability density function 2 φ satisfies the following equality (for and use the correlated orthogonality property.□ The Mehler formula is of special importance since it can be used as a foundation in credit-risk modelling, as in Voropaev [5], to compute the expected value of a portfolio V of K financial instruments k V , 1, , k K =  , conditional on the value of a factor, say Y. Suppose that each k V is a function (verifying all necessary inte- grability conditions) of a random variable k ξ which depends linearly on a systemic factor Y and an idio- syncratic (i.e.instrument-specific) factor k  ( k  's are assumed to be mutually independent and to follow standard Gaussian distributions): where we have defined ( ) ( ) ( ) ( ) , which does not depend on the decomposition in terms of Y and k  .This is extremely useful as it can be used for assessing the impact of Y on the whole port- folio (for instance by computing the value-at-risk of the conditional expected loss, and so on). 
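The "correlated orthogonality" property of Theorem 7, as we read it, is E[He_m(X)He_n(Y)] = δ_mn n! ρ^n for a bivariate standard Gaussian pair with correlation ρ; it can be verified numerically by writing Y = ρX + √(1−ρ²)Z with Z independent of X. The value of ρ and the index pairs below are assumed example choices.

```python
import numpy as np
from numpy.polynomial import hermite_e as H
from math import factorial

def He(n, t):
    return H.hermeval(t, [0.0] * n + [1.0])

# Tensorised Gauss-Hermite rule for two independent standard Gaussians X and Z.
x, w = H.hermegauss(60)
w = w / np.sqrt(2 * np.pi)
X, Z = np.meshgrid(x, x, indexing="ij")
W = np.outer(w, w)

rho = 0.35
Y = rho * X + np.sqrt(1 - rho**2) * Z      # correlated Gaussian built from X and Z

for m, n in [(4, 4), (2, 5), (3, 3)]:
    val = np.sum(W * He(m, X) * He(n, Y))
    print(m, n, val, "vs", (m == n) * factorial(n) * rho**n)
```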
The Multinomial Factorisation Theorem and Arbitrage Going back to our framework where the market has no arbitrage and is complete, we wish to determine the relationship between the implied prices of the basis and those of a linear combination of the underlying factors.To make things clearer suppose that we are looking at a payoff 0 Ψ whose underlying factor η is a linear combination of N factors (as previously, we note where k π is the potential implied price of ( ) k he η for all k and π n is the known implied price of ( ) (52) Turning to the prices, we see that On the other hand, we have that ( ) ( ) ( ) π , and shorting it at price n π . Since the payoffs ( ) ( ) Ψ η are equal, we have built a portfolio that displays an arbitrage.□ To summarise, we have shown that it was possible to express explicitly the coefficients ( ) and the prices n π as functions of the coefficients ( ) a Ψ n η and the prices π n .The strength of this theorem is to make explicit the no arbitrage relationship between the empirical prices of the Hermite polynomials and the empirical price of a linear combination of the factors, which leads to the formulation of the following result: Theorem 9. We have determined the futures price law density of , N η = β η denoted by λ β as ( ) ( ) Concluding Remarks This paper proposes a simple way of expanding the Hermite polynomial of a linear combination of factors into simpler elements.This method allows us to prove the celebrated Mehler formula in a very simple way, but also enables us to derive the empirical prices of functions of linear combination of factors in a market with no arbitrage and facilitates credit risk modelling.Practical illustrations of the theoretical framework developed in this paper can be found in [2]- [6].We have built on the theory developed by Madan and Milne and highlighted the relationship that existed between their results and other recent results obtained in the field of pricing.Using a multinomial theorem for Hermite polynomials, we have shown how to tackle expressions including more than one factor.The main assumption made throughout the paper is the existence of a payoff's or probability density function's expansion in the basis of Hermite polynomials.Although this is quite restrictive (it implies the existence of all moments in the latter case for instance), it does allow for significant deviations from the benchmark case of standard Gaussian distributions.The computational approach at hand indeed permits to only consider a series of simple computations rather than a difficult and time consuming one.It offers a practical analytical alternative to full-fledged Monte Carlo simulations.Finally, the result has been proved.□ important properties are the recurrence relationship and the orthogonality property.The recurrence relationship states that * are square integrable with respect to the measure P defined by the density is attainable and has a unique price [ ] V Ψ (V is also unique) and further assuming that be expanded in the basis of Hermite polynomials.This results in the possibility to express the contingent claim Ψ as a linear combination of the basis elements, namely the Hermite functions ( ) Theorem 3 . 
The implied prices ( ) π n of Hermite functions and the coefficients ( ) b n of the the T-forward probability density function satisfy the fundamental equality ( ) ( ) ( ) key in the equality and is called the "factor loading condition" in credit risk modelling.It can actually be seen as a normalisation constraint: if 1 η and 2 η are independent and normalised (i.e. have mean 0 and variance 1), then 1 Theorem 4 . Let * N ∈  and N ∈  .Then for all j can be restated in terms of Hermite functions as , using the valuation formula, by no arbitrage, we would necessarily have η.Theorem 8 .Ψ Bringing both equations together, we can write the following equality: Since the function 0 is a generic payoff, by analysing the series coefficient by coefficient, we finally obtain that choice for n π .Let us show that it is the only one by applying no-arbitrage pricing arguments.Indeed, suppose that there exists n ∈  such that n n 2 , 1 , the above expressions into the three pieces: I n N , it now boils down to making the change of variables 1 I n N , using the change of variables 1
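As a closing illustration of the link between implied Hermite prices and the T-forward density discussed above, consider the (assumed) case of a single factor whose T-forward law is N(m, 1) with a zero short rate. The prices of the Hermite functions are then π_n = E_g[he_n(η)] = m^n/√(n!), and the futures price law λ = exp(mη − m²/2) is recovered from Σ_n π_n he_n(η), which is just the exponential generating function of the He_n. A numerical check:

```python
import numpy as np
from numpy.polynomial import hermite_e as H
from math import factorial, sqrt, exp

# Assumed setup: one factor with T-forward law N(m, 1) and zero short rate, so prices are
# plain expectations under the T-forward density g.

def He(n, t):
    return H.hermeval(t, [0.0] * n + [1.0])

m = 0.4
x, w = H.hermegauss(80)
w = w / np.sqrt(2 * np.pi)                         # expectation under the reference N(0,1)

# Prices pi_n = E_{N(m,1)}[he_n(eta)], computed by shifting the quadrature nodes by m.
prices = [np.sum(w * He(n, x + m)) / sqrt(factorial(n)) for n in range(12)]
print("pi_n vs m^n/sqrt(n!):",
      [(round(p, 6), round(m**n / sqrt(factorial(n)), 6)) for n, p in enumerate(prices[:5])])

# Reconstruct the futures price law at a point and compare with exp(m*eta - m^2/2).
eta0 = 0.9
lam = sum(prices[n] * He(n, eta0) / sqrt(factorial(n)) for n in range(12))
print(lam, "vs", exp(m * eta0 - m**2 / 2))
```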
4,932.2
2015-05-29T00:00:00.000
[ "Mathematics", "Business" ]
Symbolic regression of generative network models Networks are a powerful abstraction with applicability to a variety of scientific fields. Models explaining their morphology and growth processes permit a wide range of phenomena to be more systematically analysed and understood. At the same time, creating such models is often challenging and requires insights that may be counter-intuitive. Yet there currently exists no general method to arrive at better models. We have developed an approach to automatically detect realistic decentralised network growth models from empirical data, employing a machine learning technique inspired by natural selection and defining a unified formalism to describe such models as computer programs. As the proposed method is completely general and does not assume any pre-existing models, it can be applied “out of the box” to any given network. To validate our approach empirically, we systematically rediscover pre-defined growth laws underlying several canonical network generation models and credible laws for diverse real-world networks. We were able to find programs that are simple enough to lead to an actual understanding of the mechanisms proposed, namely for a simple brain and a social network. I ncreasingly many scientific domains rely on the concept of networks to represent an observable state of a system, where networks are usually seen as the outcome of a generative process. For systems without centralised control, these generative processes consist of local interactions between entities, be they proteins, neurons, organisms, people or organisations. While current technological advances have been making it increasingly easy to collect datasets for large networks, it is difficult to extract models from this data. This difficulty can be attributed both to the sheer size of the datasets and to the non-linear dynamics of many of these decentralised systems, which resist reductionist methodologies. Another difficulty is posed by the mapping between generative models and observable networks since there is a many-to-many correspondence between generative models and observable networks. A network may be explained by different models and a model -provided it is stochastic in nature -may be capable of generating different classes of networks due to the amplification of initial random fluctuations. Following conventional scientific methodology, researchers devise models that can account for a network and then test the quality of the model against a number of metrics. Much-cited examples include preferential attachment 1 , competition between nodes 2,3 , team assembly mechanisms 4 , random networks with constraints 5-7 , inter alia. Models are typically based on intuition or prior evidence that such and such process appears to be particularly important in the formation of interactions. A problem here is that of human bias in looking for good models. There is always the possibility that high-quality models are counter-intuitive, and thus unlikely to be proposed by researchers. The work we report in this paper work is aligned with the idea of creating artificial scientists. Parts of the scientific method are automated, namely the generation and refinement of hypothesis, as well as their testing against observables. For example, in a work with some parallels to the ideas presented in this paper, scientific laws are extracted from experimental data using genetic programming 8 . 
There have been some preliminary attempts at using genetic programming to search for network models [21][22][23] , and to structural analysis and community detection 24 . However, to the best of our knowledge, we provide the first proof-of-concept application of symbolic regression to discover and select plausible morphogenetic processes for real-world networks. The method we propose can be applied to both synthetic networks and on real-world networks. In the case of synthetic networks, it makes it possible to discover the exact generative rule used to construct the particular type of network in question, while in the case of real-world networks, it proposes a generative rule that robustly reproduces the original topological features. Furthermore, in contrast with previous works, our approach relies only on local information and uses a parameter-free fitness function without any ad-hoc assumptions. It eventually provides a straightforward mapping to mathematical expressions. A more detailed comparison to 22,23 is provided in the supplemental materials. Results Generator search. Machine learning techniques can be used to help researchers generate alternative models that are capable of reproducing networks with certain topological features. The approach we propose employes genetic programming 13,14 , a form of evolutionary computation. Genetic programming is a type of search inspired by natural selection where evolutionary pressure is created to guide a population of solutions to increasingly higher quality. In this case the individuals in the population are network generative models, and the quality measure is how much a synthetic network generated by a model approximates the real observable network. Two fundamental issues have to be addressed in implementing this technique. Firstly, the models need to have a representation that is uniform and permits recombination. Secondly, an appropriate measure of similarity needs to be defined so that synthetic and real-world networks can be compared. The first issue touches on a shortcoming in the current literature on ''network science'': there is no unified and elegant way of formally representing network generative processes. To address this we introduce the concept of network generator as a computer program which, for the purposes of this article, we refer to simply as generators. We define a network generative process as a sequence of discrete steps where a new arc is created at each step. The process can be straightforwardly applied to both directed and undirected networks. At any given moment, there is a set of possible arcs that could be created. A generator becomes fully defined if it provides a way to prefer some arc over the others. Instead of attempting to define a deterministic selection process we create a stochastic one -recognising that many of the generative processes that produce networks have some degree of intrinsic randomness. The generator is thus a function w(i, j) that assigns a weight w i j to all arcs (i, j) from a random sample S (see Methods). At each network construction step, a new arc is then stochastically selected with a probability P i j proportional to w i j such that: where w' i j~wi j if w i j . 0, 0 otherwise. If all the weights for a sample are zero, they are all set to 1 to avoid division by zero in the above probability expression. The core of our approach then consists in designing a process able to automatically discover weight computation functions w which produce realistic networks. 
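A minimal sketch of one arc-creation step described above (function names and the toy weight function are ours, not the authors'): draw a sample of candidate arcs, score them with the generator's weight function w(i, j), clamp negative weights to zero, fall back to uniform weights when all weights vanish, and select one arc with probability proportional to its weight.

```python
import random

def select_arc(nodes, existing, weight_fn, sample_size, rng=random):
    """One arc-creation step: sample candidates, score with w(i, j), pick one stochastically."""
    candidates = []
    while len(candidates) < sample_size:
        i, j = rng.sample(nodes, 2)              # no self-loops
        if (i, j) not in existing:               # no duplicate arcs (toy check)
            candidates.append((i, j))
    weights = [max(weight_fn(i, j), 0.0) for (i, j) in candidates]
    if sum(weights) == 0.0:
        weights = [1.0] * len(candidates)        # all-zero fallback described in the text
    return rng.choices(candidates, weights=weights, k=1)[0]

# Toy usage with a degree-based weight (preferential-attachment flavour).
nodes = list(range(50))
arcs, indeg = set(), {v: 0 for v in nodes}
for _ in range(200):
    i, j = select_arc(nodes, arcs, lambda i, j: indeg[j] + 1.0, sample_size=25)
    arcs.add((i, j))
    indeg[j] += 1
print(len(arcs), "arcs created; max in-degree:", max(indeg.values()))
```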
Generators are represented as tree-based computer programs, which are equivalent to mathematical expressions. Tree leaves are variables and constants, and its other nodes are operators. These are the building blocks of our generators (see figure 1). The set of available operators includes simple arithmetic operators: {1, 2, *, /}, general-purpose mathematical functions: {x y , e x , log, abs, min, max}, conditional expressions: {., ,, 5, 50} and an affinity function (y). Variables contain information specific to the two vertices participating in the arc: in-and out-degrees (k and k9), undirected, directed and reverse distances between the two vertices (d, d D and d R ) and their sequential identifiers (i and j). In the case of undirected networks only k, d, i and j are used. Sequential identifiers and the related affinity function will be discussed later on. We rely on a random walk-based heuristic distance: not only would the explicit computation of all exact pairwise distances during the generative process be too computationally expensive, but perhaps more importantly, new connections are also likely to be accurately construed as a hop-by-hop navigation mechanism instead of a selection process based on an omniscient distance value (see Supp. Info.). This simple arrangement configures a uniform language to describe generators capable of expressing entity-level behaviours that produce non-linear, non-centralised network growth processes. The second issue of measuring network similarity is addressed by comparing a set of conventional features of both networks. We combine distributions that describe simple aspects of the network, such as in-and out-degree, direct and reverse PageRank 9 centralities (considering actual and, respectively, inverted arcs), with distributions describing finer and more mesolevel aspects of the structure, such as directed/undirected distances and triadic profiles 10 . These features are reduced to metrics by computing dissimilarities between the respective distributions. We rely on two notions of distribution dissimilarities. For degree and PageRank centralities we apply the Earth mover's distance (EMD) 11 , for the more sophisticated distance distributions and triadic profiles we rely on a simpler ratio-based dissimilarity metrics (see Supp. Material for a longer discussion). These dissimilarity metrics allow us to determine whether we are converging towards the original distributions at a small computational cost (other dissimilarity metrics may well be used, but we found these to work well in our case). We are interested in minimising all of the dissimilarity measures to get as close as possible to the target (real) network. This configures a multi-objective optimisation problem with possible trade-offs since some dissimilarities might need to be minimised at the expense of others. Our objective is to find a balanced solution and employ the following simple strategy: we decide to place all metrics on the same scale and configure their meaning as the improvement with respect to a random network. In practice, each dissimilarity between the target network and a candidate network is divided by the mean dissimilarity between the target network and 30 Erdős-Rényi (ER) random networks with the same number of vertices and arcs as the target. For a given metric, this means that if the dissimilarity between the target network and the ER average is, say, 5 and the distance from the target network to the candidate network is 3, the ratio is 3/5. 
A ratio of 1 thus corresponds to no improvement. The evolutionary algorithm then tries to improve models by minimising the highest of these ratios, which thereby defines a fitness function. While ER is assuredly a basic null model, opting for a more sophisticated model may induce undesired bias: for instance, using the configuration model would precisely incorporate the degree distributions of the target network, making it impossible to directly approximate it using the fitness function. A further feature of our framework is to not assume homogeneity between nodes, irrespective of their structural position. A heterogeneous model is one that starts with the assumption that not all entities in the system behave the same. For example, in a social network, some agents might be intrinsically more likely to form ties. Or they might be more likely to interact within a specific class of agents. We introduce heterogeneity by way of the sequential identifier input variable i g {1, … n}. These indices, considered as identifiers, can then be passed by variable to the generator programs, and used to introduce a priori distinctions in behaviours. Let us consider a simple example: This equation describes a generator where the probability of an arc is completely determined by the identifier of the origin vertex. It describes a situation where nodes have different a priori propensities to originate connections. Furthermore, it tells us that these propensities are distributed following a hyperbolic curve. Even though integer identifiers may appear to be a highly simplistic means of introducing heterogeneity, we need to remember that they can be combined with the other building blocks in an infinity of ways. In the below results from real-world networks we can see that some of the generators that were found make use of the indices in various ways. Indeed, the simplicity of building blocks can be leveraged and used to facilitate the definition of generators where certain vertices have natural affinity for each other. This is the affinity function y, which uses the modulo operation (remainder of the division of one number by another) to divide the sequence identifier space into a number of g groups, returns a if target and origin nodes i and j belong to the same group (i.e. in case of ''affinity''), and b otherwise: From now on, we will consider i and j to be implicit parameters and write the function simply as: y(g, a, b). We now have a methodological framework that we can use to generate plausible models for network generators. Several runs on the same target network may generate different models -although we will show experimental evidence that they tend to converge on the same behaviors. It is now up to the researcher to select amongst them, possibly using his domain knowledge. A more objective consideration is the trade-off between simplicity and precision. Our repres-entation of generators allows for a very straight-forward measure of model complexity: the program length. Trivially, the program length is an upper bound on the Kolmogorov complexity 15 of the model. This allows us to apply a quantified version of Occam's Razor: all other things being equal, choose the model with the lowest program length. In practice, depending on the variations in precision, the researcher might wish to sacrifice some parsimony for some precision, or viceversa. 
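The ER-normalised dissimilarity described above can be sketched for a single metric as follows (library choices and the toy networks are ours): compute the Earth mover's distance between the degree distributions of the candidate and target networks, and divide by the mean distance between the target and 30 Erdős–Rényi networks with the same number of vertices and edges. Ratios below 1 indicate an improvement over the random baseline; the fitness actually minimised in the paper is the largest such ratio over all metrics.

```python
import numpy as np
import networkx as nx
from scipy.stats import wasserstein_distance

def degree_emd(g1, g2):
    """Earth mover's distance between two degree distributions."""
    return wasserstein_distance([d for _, d in g1.degree()], [d for _, d in g2.degree()])

def er_normalised_ratio(target, candidate, n_baseline=30, seed=0):
    n, m = target.number_of_nodes(), target.number_of_edges()
    baseline = np.mean([degree_emd(target, nx.gnm_random_graph(n, m, seed=seed + k))
                        for k in range(n_baseline)])
    return degree_emd(target, candidate) / baseline

target = nx.barabasi_albert_graph(300, 3, seed=1)       # stand-in "real" network
candidate = nx.barabasi_albert_graph(300, 3, seed=2)    # network produced by some generator
print("ratio vs ER baseline:", er_normalised_ratio(target, candidate))   # well below 1 expected
```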
Application to real and synthetic networks To assess our method we start by testing if we can discover generators for networks that were produced by generators we defined ourselves. According to our generator semantics, two classical network types can be defined in a very succinct fashion. For an ER random network, where c is any constant value; for a generator based on Preferential Attachment (PA) as in the Barabási-Albert model, We used these two generators to produce networks of five different sizes, from 100 vertices and 1000 arcs to 500 vertices and 5000 arcs. We generated 30 networks for each size/generator combination and performed an evolutionary search runs on each one of them. We found a correct results rate of 97.3% for preferential attachment and 94% for random. In the preferential attachment case, the precise solution with no bloat (w 5 w PA ) was found 92.7% of the time. In the random network case, the precise solution with no bloat (w 5 w ER ) was found 76.7% of the time. Interestingly, this result on a series of stochastic realisations of the ER and PA models is a strong indication that a real network which does not lead to the discovery of w ER or w PA obeys a more sophisticated morphogenesis process (see Supp. Info. for detailed results). We then proceeded to experiment with seven datasets from a diverse selection of real-world contexts: the neural network of a C. Elegans roundworm 16,17 , a network of political blogs 18 , a software collaboration network (http://cpan-explorer.org/category/authors/, date of access: 10/03/2014), the power grid of the Western States of the USA 17 , a social network extracted from the neighbourhood of a single Facebook user 19 , a network of protein interactions in Homo Sapiens 20 and a word adjacencies network 27 . The first three are directed while the latter four are undirected. Figure 2 shows an overview of the results we obtained, featuring the expression of the best program found after the 30 evolutionary runs, as well as a comparison between the corresponding synthetic network and original (target) network. Figure 3 focuses on C. Elegans and shows a comparison of the various distributions we use in our fitness function for the real network, a sample of 30 random networks with the same number of nodes and arcs, and a sample of 30 synthetic networks produced by the best generator we found for that network. Given the stochastic nature of the generative process, multiple runs of the same generator can produce different results. The figure shows that, in practice, variance is very small. Similar approximations were obtained for the other networks. We provide an interpretation of each one of these generators in the Supplemental Materials. While these are high quality solutions according to the set of metrics we defined, another question is whether high-quality solutions generated by our method are similar to each other or represent completely different models. To investigate this issue we defined a process to quantify the similarity between two generators -let us call them generators w and w9. We produce a network using generator w and, at each arc creation step, for each sample of candidate arcs, we also compute the probability of each candidate using generator w9. We then compute the mean distance between the probabilities assigned by generators w and w9 to all the candidate arcs during the entire generative process. We thus get a dissimilarity measure between generators which we denote d ww9 . 
Conversely, we produce a network with generator w9 and compare the probabilities with the ones assigned by generator w, obtaining d w9w . Finally, we consider the (generator) dissimilarity between w and w9 to be d 5 (d ww9 1 d w9w )/2. In the left panel of figure 4 we compare the (generator) dissimilarity between the optimal generator we found (p27) and all other generators obtained for C. Elegans with the fitness of these generators, i.e. max (network) dissimilarity on all metrics. The Pearson correlation indicates a strong relationship between fitness and similarity to the optimal generator. Furthermore, there is a significant probability that such a correlation exists (p , 0.005). On the right panel we also compare the distance with the mean dissimilarity in order to observe generators over all metrics, obtaining the same conclusions. The results we obtain provide compelling evidence that the closer the generators are to the best program in terms of fitness (at the network level), the closer they are in terms of the qualitative behaviour defined by their programs (at the link level), implying that this correlation further strengthens the plausibility of this generator. Another point to note is that as program distance to the best solution increases, there is an increase in fitness variance. This is not surprising given that an increase in program distance corresponds to a decrease on the constraints on the space of possible programs. All the runs are subject to the same evolutionary pressure to decrease fitness, so it is likely that some become stuck in local minima -a common phenomena in heuristic search strategies. In fact, it is not possible to ever be sure that some result is not a local minima, but this is also a limitation of the scientific method in general. However, we show that independent runs of our algorithm form a cluster of high quality results with respect to generator similarity. There is a degree of convergence on a consensus that facilitates the task of choosing between competing theories. Discussion We proposed a methodology to describe network generators and automatically manipulate them in order to assist in the discovery of plausible morphogenetic processes. We presented a number of reasons to be optimistic about this approach. The generator semantics proved to be expressive enough to represent growth processes that lead to structurally diverse networks and the evolutionary algorithm was able to find plausible generators for these different cases. The plausibility of the solutions is based on a comprehensive set of conventional metrics that reflect different aspects of a network's structure. The generators found are sufficiently succinct to have high explanatory power. Multiple runs of evolutionary search on the same network were shown to converge on similar solutions. Similarly, runs on stochastic realisations of canonical ER-and PA-based generators essentially led to the discovery of the correct original laws. More broadly, we believe our approach has a wide range of application domains where it could fruitfully guide scientists towards credible processes underlying the formation of the empirical networks they are trying to model. There are many possible avenues to improve upon the method we propose. The vast array of techniques from the evolutionary computation and genetic programming bibliography could be employed. Larger populations and recombination operators may lead to higher quality results at the expense of computational tractability. 
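The generator dissimilarity just defined can be sketched as follows (all names and the toy weight functions are ours): grow a toy network with one generator, record at every arc-creation step the candidate-arc probabilities assigned by both generators, average the absolute differences, repeat with the roles of the two generators swapped, and average the two directions.

```python
import random
import numpy as np

def probs(weights):
    w = np.clip(np.asarray(weights, dtype=float), 0.0, None)
    return w / w.sum() if w.sum() > 0 else np.full(len(w), 1.0 / len(w))

def directed_dissimilarity(w_build, w_other, n_nodes=60, n_arcs=240, sample=20, seed=0):
    """d_{ww'}: build with w_build, compare its candidate probabilities with w_other's."""
    rng = random.Random(seed)
    indeg = {v: 0 for v in range(n_nodes)}
    diffs = []
    for _ in range(n_arcs):
        cand = [tuple(rng.sample(range(n_nodes), 2)) for _ in range(sample)]
        p = probs([w_build(i, j, indeg) for i, j in cand])
        q = probs([w_other(i, j, indeg) for i, j in cand])
        diffs.append(np.abs(p - q).mean())
        i, j = cand[rng.choices(range(sample), weights=p, k=1)[0]]
        indeg[j] += 1
    return float(np.mean(diffs))

w_pa = lambda i, j, indeg: indeg[j] + 1.0          # preferential-attachment-like generator
w_er = lambda i, j, indeg: 1.0                     # uniform (ER-like) generator
d = 0.5 * (directed_dissimilarity(w_pa, w_er) + directed_dissimilarity(w_er, w_pa))
print("generator dissimilarity d =", d)
```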
Pareto optimisation may be used to explicitly select trade-offs between precision and solution complexity. Variables and operators for specific domains (e.g. spacial restrictions) can be introduced. In this work we strived for simplicity and generality, and to provide the scientific community with a tool that can be immediately useful but also serve as a baseline for further refinements. Methods The evolutionary algorithm maintains one or two generators at each time: w o is the generator that produced the networks with the lowest dissimilarity to the target network so far. w s is the generator with the shortest program that produced a network with a dissimilarity not more than 10% worse than w o . We refer to this dissimilarity ratio as anti-bloat tolerance. At any moment, it is possible that w o 5 w s . This procedure is meant to fight bloat -the accumulation of needless complexity in generator programs 12 . The algorithm is initialised with a randomly created generator w r (see Supp. Info. for details). In the initial state w r 5 w o 5 w s . For every evolutionary search generation, a parent generator is randomly selected from {w o , w s }. This parent generator is then cloned and mutated to produce the child generator w c . Mutation consists of randomly selecting a sub-tree, removing it and replacing it with another randomly selected sub-tree extracted from another randomly created tree. w c is used to produce a synthetic network and the dissimilarity of this network to the target is computed. The dissimilarity and program length of w c is compared against w o and w s , and w c will replace one or both if appropriate. The search will terminate once w o and w s remain unchanged for a certain number of generations, we choose this number to be 1000. w s will be taken as the final result. Given the significant computational effort needed to generate a network, we propose a strategy that limits the amount of such generation steps. While it is common in evolutionary algorithms to use large populations to prevent local minima, this is not the only possible strategy 25 , nor is it guaranteed to work 26 There are two parameters that introduce trade-offs in the search process: sample ratio and anti-bloat tolerance. Sample ratio is a trade-off between generator accuracy (lower samples leading to more randomness against the linking preference defined by the generator) and computational effort (higher samples require more generator evaluations per link generation step). Being V the set of vertices, s r a predefined sampling ratio, A the set of all possible arcs (jAj 5 jVj 2 ) and A9 the set of all arcs that do not currently exist in the network (A9 5 {a g Ajw a 5 0}, w a being the weight of arc a), we define a sample S with jSj 5 n 5 s r ? jAj such that S 5 {s 1 , …, s n } with s i g A9. In the experiments presented in this article, we do not allow duplicate or self-links. These restrictions could trivially be lifted if appropriate. The value we propose was set sufficiently high to work with the smaller networks in our data set -at some point, the sample becomes too small and the generators operate too randomly to lead to evolutionary improvement. Conversely, the sample size could be made smaller to reduce the computational effort for very large networks. Anti-bloat tolerance is a trade-off between result quality and conciseness. Here we adjusted once and for all the value against our initial experiment, C. 
Elegans, and found 15% to stall evolution and 5% to lead to hard to interpret, bloated solutions. Without any further parameters adjustment, we then tested the algorithm against real and synthetic datasets, having found that this leads to perfect solution on the synthetic cases and robust results on the other 6 real-world networks. It is possible that these parameters can be further optimised for specific cases or if more computational effort can be tolerated. However, in this work we strived to demonstrate the general applicability of the method. The stop condition (1000 stable generations) and random tree generation parameters (detailed in Supp. Info.) are conventional genetic programming parameters and were set within ranges that are very common in the literature. Given the heuristic nature of genetic programming, it is impossible to avoid such parameters. Quoting ''A Field Guide to Genetic Programming'' 14 : ''It is impossible to make general recommendations for setting optimal parameter values, as these depend too much on the details of the application. However, genetic programming is in practice robust, and it is likely that many different parameter values will work.'' The quality and meaning of the results presented are not contingent on these parameters, as these only affect the search process itself. Further efforts on parametrisation may lead to higher quality results being found. We avoided such efforts to prevent a bias for our dataset. We propose that this increases credence on the general applicability of the method. Ultimately, while we believe to have demonstrated the effectiveness of a heuristic search algorithm, this, of course, does not preclude refinements by further research.
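The search loop from the Methods can be sketched as follows. To keep the example self-contained we search over toy expression trees for a weight function w(k, d) and score candidates directly against a target law w*(k, d) = k, rather than through full network generation and the multi-metric fitness; this is an illustration of the w_o/w_s bookkeeping and anti-bloat rule only, and all names and parameter values are ours.

```python
import random

OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
       '*': lambda a, b: a * b, 'min': min, 'max': max}
VARS = ['k', 'd']

def random_tree(depth=3, rng=random):
    if depth == 0 or rng.random() < 0.3:
        return rng.choice(VARS + [round(rng.uniform(0, 2), 2)])
    return (rng.choice(list(OPS)), random_tree(depth - 1, rng), random_tree(depth - 1, rng))

def evaluate(tree, env):
    if isinstance(tree, tuple):
        return OPS[tree[0]](evaluate(tree[1], env), evaluate(tree[2], env))
    return env[tree] if tree in env else tree          # variable leaf or numeric constant

def size(tree):
    return 1 if not isinstance(tree, tuple) else 1 + size(tree[1]) + size(tree[2])

def mutate(tree, rng=random):
    if not isinstance(tree, tuple) or rng.random() < 0.3:
        return random_tree(2, rng)                     # replace this subtree with a random one
    out = list(tree)
    idx = rng.choice([1, 2])
    out[idx] = mutate(tree[idx], rng)
    return tuple(out)

# Toy fitness: distance of the tree from the "true" weight law w*(k, d) = k on sample points.
POINTS = [{'k': k, 'd': d} for k in range(8) for d in (1, 2, 3)]
def fitness(tree):
    return sum(abs(evaluate(tree, p) - p['k']) for p in POINTS) / len(POINTS)

rng = random.Random(42)
w_o = w_s = random_tree(rng=rng)                       # best-fitness and shortest champions
f_o = f_s = fitness(w_o)
stable, TOL, STOP = 0, 1.10, 1000                      # 10% anti-bloat tolerance, stop rule
while stable < STOP:
    child = mutate(rng.choice([w_o, w_s]), rng)        # parent picked at random from {w_o, w_s}
    f_c, improved = fitness(child), False
    if f_c < f_o:
        w_o, f_o, improved = child, f_c, True
    if f_c <= TOL * f_o and size(child) < size(w_s):
        w_s, f_s, improved = child, f_c, True
    if f_s > TOL * f_o:                                # short champion fell outside the band
        w_s, f_s = w_o, f_o
    stable = 0 if improved else stable + 1
print("best short generator:", w_s, " fitness:", f_s)
```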
5,840.8
2014-09-05T00:00:00.000
[ "Computer Science" ]
An Energy Consumption Model for Designing an AGV Energy Storage System with a PEMFC Stack : This article presents a methodology for building an AGV (automated guided vehicle) power supply system simulation model with a polymer electrolyte membrane fuel cell stack (PEMFC). The model focuses on selecting the correct parameters for the hybrid energy bu ff ering system to ensure proper operating parameters of the vehicle, i.e., minimizing vehicle downtime. The AGV uses 2 × 1.18 kW electric motors and is a development version of a battery-powered vehicle in which the battery has been replaced with a hybrid power system using a 300 W PEMFC. The research and development of the new power system were initiated by the AGV manufacturer. The model-based design (MBD) methodology is used in the design and construction of a complete simulation model for the system, which consists of the fuel cell system, energy processing, a storage system, and an energy demand models. The energy demand model has been developed based on measurements from the existing AGV, and the remaining parts of the model are based on simulation models tuned to the characteristics obtained for the individual subsystems or from commonly available data. parametric model by simulation of either the final system or from the parameters of the individual models’ elements (components of the designed system). The presented methodology can be used to develop alternative versions of the system, in particular the selection of the correct size of supercapacitors and batteries which depend on the energy demand profile and the development of the DC / DC converter and controllers. Additionally, the varying topology of the whole system was also analyzed. Minimization of downtime has been presented as one of many possible uses of the presented model. Introduction The use of electric drives in various types of vehicles is becoming increasingly popular. The growing use of such drives is due to the many advantages of electric motors compared to internal combustion engines. This is particularly evident in closed areas in internal transport where automated guided vehicles (AGVs) are heavily utilized. High torque, quiet operation, and zero-emissions are just some of the advantages over other primitive solutions. However, these vehicles have operational problems such as insufficient work duration and limited ranges. This is caused by the relatively low energy density of the energy sources used in these vehicles. The development dynamics of the basic energy sources used in AGVs, such as lithium-ion batteries, does not indicate that this problem will be solved in the short-term (within the next decade). For this reason, designers are searching for other energy sources that provide significantly higher levels of energy density while having the same advantages by the following types of irreversibles: Activation losses, fuel crossover and internal currents, ohmic losses, mass transport, and concentration losses. The origins and descriptions of these irreversibles, as well as the modeling method, have been previously discussed [4,5]. When creating a simplified model, two points from the activation region and two points from the ohmic region are utilized from the polarization curve. 
However, for the construction of a detailed model, further data are required, such as the number of cells, nominal Low Heating Value (LHV) stack efficiency, nominal operating temperature, nominal air flow rate, absolute supply pressure, and the nominal composition of fuel and air; these are typically provided in the fuel cell manufacturer's documentation. When it comes to modeling fuel cell dynamics, current step and interrupt tests must be completed for a given cell. The necessary parameters to construct this part of the model are then determined from these tests or can be obtained directly from the manufacturer's data, because they depend on the fuel cell itself. If such tests cannot be performed, the data can be assumed from a recommended range [4]. Occasionally, the fuel cell manufacturer does not provide basic technical data for the FC, and in this case more tests on the system are required to determine a full range of parameters. Other parameters obtained in tests depend on the whole system in which the FC works and its load, and they are specific for a particular configuration of the system. Other fuel cell equivalent circuit models for passive mode testing and dynamic mode design have been compared in [6]. This comparison includes the following dynamic models: Larmie [7], Dicks-Larminie [5], Yu-Yuvarajan [8], Choi [9], and shows that complex models are not always effective for practical applications. These four dynamic models are used to simulate the power-generating cell, whilst the passive equivalent circuit model represents the fuel cell which is not producing electric power. These models represent the response to an external electric stimulus to determine the condition of the fuel cell. Additionally, in [6], Page [10], and Garnier [11], passive models are compared. The work [6] does not present any relationship between passive mode test responses and dynamic mode performance. Not all of the fuel cell's irreversibles are relevant under normal operating conditions. While commissioning and rated conditions are the most common conditions, overloading is not a common condition. Some systems do not function under FC operation with such overloading conditions at all. Therefore, irreversibles that affect work under such unusual conditions are not considered or modeled at all. However, sometimes this is needed, and irreversible mass transport and concentration losses must be modeled. A model for mass transport losses in the form of a theoretical model is presented in [12] and in the form of an empirical model in [4]. This model was developed to simulate transport phenomena in a proton exchange membrane fuel cell (PEMFC). The hydrogen fuel cell is complex and expensive, and in systems with high dynamics of power demand where it is required to supplement such a cell with additional elements such as startup batteries, buffers, inverters/converters, then the whole system needs to be modified to handle a specific load. Testing such a system can be completed using a computer simulation model presented later in the article, but it is also possible to create a physical simulator which is a cheap alternative to testing. Such a solution built based on a programmable DC power supply, control interface, and software written in LabVIEW has been proposed for testing the entire system and acts as a guide in the development of power conditioning equipment [13] with the ability to work in steady-state and transient modes. 
Modeling of the System Using a Fuel Cell Stack To generate energy in FCs, it is necessary to use a hydrogen tank together with a hydrogen pressure reduction system and a control valve mediated by a controller to regulate the amount of hydrogen supplied on an ongoing basis. Oxygen is usually supplied from the air through a fan system to the fuel cell. It is also possible to supply oxygen from a high-pressure tank, similarly to the hydrogen supply system. For large FCs (larger than 10 kW), the installation of the FC itself becomes very complicated and maintaining balanced operational parameters becomes a problem. These issues are the subject of separate research and balance of plant (BOP) considerations [14]; incorrect configuration and selection of inappropriate operating parameters of individual elements of the FC system can lead to insufficient cell performance and rapid degradation of the cell. A simple solution to the complexity of the installation on the FC preparation side can be made by using an FC configuration based on a dead-end anode (DEA) structure. This type of installation, unlike the flow-through anode (FTA) configuration, significantly simplifies the need to prepare hydrogen and guarantees the appropriate humidity of the cell, ensuring close to 100% hydrogen use by controlling the (normally closed) purge valve [15,16]. This configuration is popular for low power FCs, but is also developed for higher power applications. DEA installations, operating differently to FTA, are not managed by a regulated control valve and have to be purged periodically by the purge valve [17]. Since the fuel cell itself is an energy generator operating under specific parameters, this produced energy must usually be adapted to the energy demand characteristics. If the power take-off is not variable, this system may be simpler, but with high variability of energy demand, it is necessary to consider the electric converter/inverter and an energy buffer supplied via batteries or supercapacitors. Supervising the work of these devices can take place at various levels, most often at the basic level through ongoing control of the parameters of individual devices, and frequently at the strategic level by adapting the operating parameters not only to the ongoing demand but also to future demand. Modeling the power supply load is a separate problem. A power supply load model takes the form of a specific load profile based on the behavior of the powered system and optimized with measurements taken during the experiment or by considering physical phenomena, e.g., a model outlining the dynamics of a moving object. The choice of how to develop this model depends on the design phase. If one has a prototype or physical copy of the system required to be powered, one can choose the first solution, but if one only has the concept or accurate documentation, the second solution is needed. Interesting solutions can be found in various works for modeling system fragments or the entire system oriented at determining specific parameters. An example model for a complete power supply system for hybrid vehicles is described in [18]. The modeled system consists of a hydrogen power supply, DC/DC converter, battery, inverter, electric motor, and the vehicle body. A complete model of the system was developed based on the experimental data. The model was then used to develop a power management control algorithm for fuel cell hybrid vehicles using a stochastic programming technique.
This approach requires a complete system which can be subjected to a series of experiments. Another approach is to use model-based design (MBD), where a model is created at the design and concept stage, and numerical simulation experiments outline various potential solutions and determine the impact of various parameters on the system's performance (sensitivity analysis), or to formally optimize the system or its components [1,2]. Improved modeling of the Proton Exchange Membrane (PEM) fuel cell power stack for electric vehicles in which a separate oxygen tank supply system was used to improve performance is presented in [19]. Simulation calculations were oriented towards finding an optimal control strategy for the pressure that facilitates the output power according to the power demand of the load. In addition to the holistic approach to modeling the hydrogen fuel cell system, researchers are interested in individual elements of the system. Furthermore, the hydrogen cell itself, with a series of tanks, controllers, control valves, and fans supplying air, may include power electronics which process and adapt energy to meet demand from energy buffering units, including various types of batteries and supercapacitors. Regulators and controllers are indispensable to these units and operate at various levels, and often operate with a complex strategy for a given application. Selecting the power electronics for the FC's energy conversion system is quite a difficult task. The situation is additionally complicated by the fact that the energy-receiving system requires the conversion of energy to different voltages, types of currents, and their power simultaneously. We chose to only focus on work completed on general modeling of energy flow and power losses, not energy-electronic phenomena or their modeling. Therefore, only models for average value converters were assessed, and those cooperating with basic energy buffers and thus implementing alternative strategies of constant current and constant output voltage were analyzed. There are no universal solutions to control of energy flow because the characteristics of the energy demand received from FC are application dependent. The selection of elements of the entire FC system is of interest to many scientists. An essential element in the system is the boost converter. A simple model of the cell as an electrical circuit has been previously described [20]. The model, taking into account a portion of the irreversibles appropriate to the nature of the application, is used to select the suitable type of DC/DC boost converter and to select the parameters of the energy storage constituting the energy buffer which compensates for rapid changes in energy demand. Various connection options (behind or before the converter) of the supercapacitors are also discussed. For energy buffers, there has been significant progress in the development of the latest types of batteries. The multitude of solutions is not conducive when making optimal decisions, especially at the development stage of the system. Therefore, simple battery models using the most popular battery types are used. The basic battery models are lithium-ion, lead-acid, nickel-cadmium, and nickel-metal hydride [21], and their various parameters are also defined, including charge and discharge, temperature effects, and ageing. This enables the modeling of various connections cells in series and/or in parallel [22][23][24][25]. 
A "Theoretical Modeling Methods for Thermal Management of Batteries" review has been previously completed [26]. In addition to typical models, various new approaches are presented, e.g., in [27,28]. In [27] a novel lumped electrothermal circuit of a single battery cell was presented, including the extraction procedure of the parameters of the single-cell from experimental evidence and a simulation environment, given in SystemC-WMS for the simulation of a battery pack. In [28] a new open-circuit voltage (OCV) model is proposed. The new model can simulate the OCV curves of a lithium iron magnesium phosphate (LiFeMgPO 4 ) battery at different temperatures. It also considers both charging and discharging. The most remarkable feature from the different models, in addition to the proposed OCV model, is their integration into a single hybrid electrical model. A lumped thermal model is implemented to simulate the temperature development in the battery cell. The synthesized electro-thermal battery cell model is extended to model a battery pack of an actual electric vehicle. Typically, the problem of choosing a buffer system includes what type of energy buffers will be selected, the size of the buffer, and the features of individual parts (batteries, supercapacitors). Buffer hybridization is a common solution which involves a combination of a supercapacitor with a battery and is outlined in [29]. Various configurations and sets of supercapacitors and batteries together with DC/DC converters are discussed in several papers [30][31][32]. The correct selection of the buffer parameters and the topology of this system allows one to overcome most of the FC's weaknesses. Selecting the optimal parameters and topology for these subsystems in the FC is important, as the FC is strongly dependent on the energy demand characteristics in the system [30,32]. Modeling supercapacitors (SC) requires consideration of the electrical, self-discharge, and thermal behavior. A comprehensive review of the modeling techniques is described [33][34][35] The equivalent mathematical model derived from the electrical model, which was used to simulate the voltage response of the supercapacitor, is presented in [33]. The review presented in [33] discusses SC modeling, state estimation, and their industrial applications, intending to summarize recent research progress and stimulate innovative thoughts for SC control/management. For the SC modeling, state-of-the-art models for electrical, self-discharge, and thermal behavior are systematically reviewed, where the electrochemical, equivalent circuit, intelligent, and fractional-order models describing the electrical behavior simulation are highlighted. For SC state estimation, methods for state-of-charge (SOC) estimation and state-of-health (SOH) monitoring are covered, together with an underlying analysis of the ageing mechanism and its influencing factors. The models which are described in the literature have various advantages and disadvantages, ranging from the ease of use down to the complexity of characterization and parameter identification. Work presented in [35] presents a comprehensive review and compares these models, specifically focusing on the models that predict the electrical characteristics of double-layer capacitors (DLC), showing the strengths and weaknesses of different available models and their various areas for improvement. Experience in implementing the various applications of the hydrogen fuel cell system is very helpful when designing a complete system. 
One can find many interesting descriptions of applications with different degrees of maturity and covering both stationary and mobile applications in ground, water, and aerial vehicles. Research has described the various aspects of the whole system and its hybridization [36][37][38][39], current energy management and energy management strategy [40][41][42][43][44], energy control and processing [45,46], optimization of power systems based on fuel cells for matching operational parameters [47,48], power transmission in hydrogen cell-powered propulsion systems [49], and general aspects of the development of hydrogen cell-based systems [50,51]. Model of Energy Transfer in the System A general methodology for building an energy transfer model enabling simulation experiments when designing a hybrid power supply (HPS) system based on a hydrogen cell stack for an AGV is shown in Figure 1. Conducting simulation experiments requires the definition of the HPS system in the AGV. Since these vehicles are designed for close repetitive transport operations over long periods and have known operating conditions, i.e., speed and load, one can adapt the HPS system to individual needs, such as the demand for instantaneous power during a specific operating condition. At this stage, the criterion for assessing the designed HPS system should also be determined. For the next step, it is necessary to measure and identify the instantaneous power demand by the AGV at different operating conditions. These measurements should include the power demand for expected operating conditions over the planned route.
From this, work can be completed on the data preprocessing, modeling, and validation of the models representing the power demand. These models are identified based on data from instantaneous power measurements at various operating conditions. Based on a set of such models, it is possible to simulate the power demand for the new AGV route and other operating conditions. A detailed discussion on this subject is presented in Section 3.1. Simultaneously with defining the power demand models, it is possible to create component models of the HPS system. It should be noted that these models of the HPS can be identified based on additional measurements or characteristics provided by the manufacturers. More information about creating and identifying component models of an HPS, including a hydrogen cell, models of storage components, and other auxiliary components, is described in Section 3.2. To validate the hybrid power supply system model, it is possible to provide the load in the form of played-back real values of instantaneous power demand and to create comparisons, e.g., concerning the current power supply system installed in the AGV. The proposed methodology described herein and in further sections of this manuscript allowed us to design a customized HPS system for operating conditions over a preplanned route. This can be achieved by conducting simulation experiments to find the optimal solution or a set of possible solutions which satisfy defined criteria. The optimization criteria can refer to finding the optimal battery or supercapacitor capacity for the HPS or other objectives. More information on this subject is discussed in Sections 4.2 and 4.3. A Generic Model for Instantaneous Power Demand This section describes a generic procedure for building a model to compute instantaneous power demand. This generic model is used to estimate the instantaneous power demand under the AGV's different operating conditions during working duty cycles. The model results are used as a load for the hybrid hydrogen power supply system model discussed in Section 3.2. The use of both models makes it possible to perform different simulation experiments, which allows one to examine different configurations of a power supply system with varying parameters. The generic model for instantaneous power demand is the first part of this model. The presented methodology for building an instantaneous power demand model, ultimately to develop a new hybrid vehicle power supply system, depends on the available data sources. Two possible options defining the data source availability can be distinguished: • Variant I: Data which describe the full dynamic model of the AGV are available. In this case, the developed model allows one to implement any scenario of AGV operation and estimate the instantaneous power demand. The data include all the dynamic parameters of the vehicle, including the mechanical system of the vehicle transmission system, the model of the control system, as well as the electric power supply system. It should be noted that this is rarely the case and is a time-consuming modeling activity that requires a lot of information about the considered object, i.e., access to information about the dynamic parameters of the vehicle and information about how the vehicle is controlled, including the operation of the supervisory control system, etc. Unfortunately, parts of this information are often unavailable due to companies protecting their intellectual property.
• Variant II: Only data with selected operating conditions are available, such as the speed of individual main drives that accompany the measurements of the instantaneous power demand of the vehicle. It should be noted that the use of this variant is purposeful, especially for an AGV which has a limited number of possible settings of selected operating conditions, e.g., rotation speed of drives as well as acceleration and braking ramps. In such a case, it is not necessary to identify the entire domain defined by the space of possible values of the parameters of the operating conditions but only selected characteristic parameters. For the aforementioned values, under a combination of operating conditions, a bank of autoregressive models has been applied. These models are representations of signals which, for selected operating conditions, represent the instantaneous power demand for the selected type of vehicle. The main task of the models, in detail, is to: • Represent the expected value and variance of the instantaneous power demand under selected operating conditions; • Reflect the dynamics of changes in the instantaneous power demand and their frequency-amplitude characteristics. Models for Stationary Conditions The autoregressive model of the signal [53][54][55] of instantaneous power demand is given by the formula: where y is the instantaneous power demand, the noise term follows a Gaussian distribution, A_n(q^-1) is a polynomial of order n represented by A_n(q^-1) = 1 + a_1·q^-1 + a_2·q^-2 + … + a_n·q^-n, and E_y, Var_y are the expected value and variance of the instantaneous power demand. The expected value and variance can additionally be represented by other linear or quadratic functions f(V, L) of the operating conditions (speed V and load L). To account for dynamic changes in the instantaneous power demand, the model can be represented in the frequency domain [55] using the following formula: where P_y(e^jω) represents the power spectral density of the modeled signal. The above model can be applied under stationary operating conditions. Models for Nonstationary Conditions Similarly to stationary conditions, a signal model can be built for the nonstationary conditions [55,56]. This applies to parameters such as acceleration, braking, and emergency braking, etc. The model to apply for this case has the following formula: where Tr_y is the linear model of the acceleration or deceleration ramp, etc. This part of the model can be determined using a least-squares criterion. En_yd is the envelope model established for the signal after removing the trend from the nonstationary signal y_d. The model of the envelope can be evaluated for the following signal: where z[k] is the modulus of the analytic signal obtained using a Hilbert transform [57] and N is the length of the moving average filter. The envelope signal can be represented by an autoregressive model given by Equation (1). If the envelope is flat and monotonic, then a linear model can be used. After removing the trend and eliminating the second-order nonstationarity resulting from the variable variance, the frequency assessment of the model presented in Equation (2) can then also be used.
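The formulas above are only partially reproduced in the extracted text, but the two building blocks they describe, fitting the polynomial A_n(q^-1) of an autoregressive signal model and extracting a Hilbert-transform envelope smoothed by an N-sample moving average, can be sketched as follows. The model order, the synthetic test signal, and the filter length are assumed example values rather than those used in the article.

```python
import numpy as np
from scipy.signal import hilbert, lfilter

def fit_ar(y, order=2):
    """Least-squares estimate of A_n(q^-1) = 1 + a_1*q^-1 + ... + a_n*q^-n."""
    y = np.asarray(y, float) - np.mean(y)                  # work on the centered signal
    lags = [y[order - 1 - j: len(y) - 1 - j] for j in range(order)]
    X = np.column_stack(lags)                              # columns: y[k-1], ..., y[k-n]
    theta, *_ = np.linalg.lstsq(X, y[order:], rcond=None)
    return np.concatenate(([1.0], -theta))                 # [1, a_1, ..., a_n]

def envelope(y_detrended, n_ma=25):
    """Envelope: modulus of the analytic (Hilbert) signal, moving-average smoothed."""
    z = np.abs(hilbert(np.asarray(y_detrended, float)))
    return lfilter(np.ones(n_ma) / n_ma, [1.0], z)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    e = rng.normal(size=5000)
    # Synthetic "instantaneous power demand" generated with A(q^-1) = 1 - 0.7q^-1 + 0.2q^-2:
    y = lfilter([1.0], [1.0, -0.7, 0.2], e) + 500.0
    print(np.round(fit_ar(y, order=2), 2))   # ~[1.0, -0.7, 0.2]
    print(envelope(y - y.mean())[:5])
```

In this sketch E_y and Var_y would simply be the sample mean and variance of the measured segment, while the fitted polynomial carries the dynamics used for the frequency-domain representation of Equation (2).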
Model Validation The validation of the model describing the route section under selected operating conditions can be carried out using the following measures: • Using a training data set to develop the model and validation data y_val(k), the following measures of model compliance can be determined: where ∫_0^t M(k)dt is the energy computed for the signal model, and Er_FreqStruct = P_y(e^jω) − P_yval(e^jω) (6) where P_y(e^jω), P_yval(e^jω) are the power spectral density of the signal obtained as an output of the model and the power spectral density of the validation data, respectively. The measure determined here is a functional assessment in the frequency domain and it determines the difference in signal power for the frequency components. The model order is selected based on the similarity of the power spectral density characteristics, so that the signal model reflects the dynamics of the signal changes. Combining Models After validating the individual models representing the instantaneous power demand signal, a selected scenario can be built which represents the AGV route. Usually, this route is planned and the AGV moves along the route under established operating condition parameters such as speed, load, etc. Before creating a power demand model for a selected vehicle scenario, it was necessary to divide the scenario into appropriate route sections for which appropriate models would be assigned to generate the instantaneous power demand signals. An important element when building full waveforms for the entire scenario were the points where the signals of the partial models would be combined. To combine waveforms of the individual models, it is possible to use the following window (Equation (7)), which is a modified version of the window previously shown in [58], the length of which can correspond to the length of the modeled waveforms: Hybrid Power Supply System Model for the AGV The model of the hybrid power supply system for the AGV was developed in the MATLAB/Simulink environment, partly using the Simscape Electrical library components. This model is a numerical tool supporting the selection of elements for the hybrid power supply system. The block diagram of a hybrid power supply system is shown in Figure 2. The numerical model was built based on this block diagram (Supplementary Materials). This model could be used to optimize the parameters of the power supply system after a specific operation scenario for the AGV is chosen (length and diversity of the route, load, driving dynamics) and after assuming the optimization criteria (for example, minimizing the capacity of the main energy store). The electrical energy source in the hybrid power supply system was the fuel cell stack fueled by hydrogen. It was assumed that hydrogen was stored in a metal hydride tank equipped with a pressure regulator [59]. The flow of hydrogen through the fuel cell stack was regulated by a proportional control valve. The control signal for this valve was generated using a hydrogen flow regulator.
This regulator was a component of the fuel cell stack controller. The controller additionally protected the stack against operation outside the safe operating range of the electrical and thermal parameters. In addition, the fuel cell stack controller contained the SCU (short circuit unit), which periodically short-circuited the stack and improved its performance [60]. Due to the operation of the SCU, it was necessary to install an auxiliary supercapacitor in the system, which maintained the supply voltage for the duration of the stack short-circuit, and additionally provided an energy buffer for rapid changes in the load current of the stack when the stack was not able to impulsively provide adequate power due to limitations imposed by its own dynamics and the hydrogen fueling system dynamics. Electrical energy from the fuel cell stack was supplied to the main AGV power busbars through a DC/DC Constant Current-Constant Voltage (CCCV) converter working at a Constant Current (CC) or Constant Voltage (CV) output, where the output current setpoint for CC mode could be invariable or could be set by the stack load power regulator, which was part of the converter control system. The method for determining the output current setpoint depended on the configuration of the hybrid power supply system used and the method of its optimization. A lithium-ion battery or supercapacitor could act as the main electrical energy storage.
The reasons for using electrical energy storage (as energy buffer) together with the fuel cell stack, in traction applications, were the large fluctuations in the power demand and the need to accumulate energy from regenerative braking. The energy storage, in this case, complemented the deficiencies of the fuel cell stack, meaning the stack was not able to increase the output power impulsively, had limited peak power, and was not able to absorb braking energy. The nature of the fuel cell stack was rather dedicated to independent work in stationary applications. In traction applications, an additional energy storage device was necessary [30][31][32]. There was a management system between the main power busbars and the main energy storage, the primary role of which was to protect the energy storage against operation outside the safe range of electrical and thermal parameters. The management system also allowed for pre-charging of the main energy storage with energy from the fuel cell stack after starting the hybrid power supply system, which was needed when the supercapacitor acted as the energy storage. It was assumed that the energy storage could also be charged from an external energy source, depending on the adopted configuration of the hybrid power supply system and the scenario of the AGV operation. While developing the numerical model for the hybrid power supply system, assumptions were considered from the practical conditions or were the result of previous preliminary analyses. The initial selection of the fuel cell stack was guided by the average power demand of the AGV and from economic criteria. The cheapest fuel cell stack was selected that would meet the AGV requirements according to preliminary estimates. It was assumed that a horizon fuel cell stack, type H-300, with 300 W power, a rated voltage 36 V, and rated current of 8.3 A would be used [61]. This stack consisted of 60 PEM fuel cells connected in series, low-temperature operation, powered by hydrogen from the pressure tank and oxygen obtained from atmospheric air. The nominal efficiency of the H-300 stack was 40%. This was a low power fuel cell stack that had a very simple "balance of plant" structure. The stack was equipped with three fans that provided cooling to the stack with a suitable amount of the air. The fuel cell stack was equipped with a factory controller that regulated the rotation speed of the fans by supplying them using the Pulse Width Modulation (PWM) method, and which controlled the hydrogen two-state valves. This fuel cell stack with factory controller functioned as a dead-end anode stack [15,16] without external humidification and hydrogen recirculation. It was assumed that the functionality of this controller could be extended to meet the needs of the power supply system under development by controlling the proportional hydrogen valve for flow-through anode operation [17], which was included in the numerical model. The presented numerical model was primarily used to determine the flow of electrical energy in a hybrid power supply system, so several simplifications were assumed when developing this model. It was assumed that the fuel cell operated at a constant temperature and the airflow from which oxygen was extracted was always sufficient, regardless of the power load of the cell. 
The assumption regarding airflow was also fulfilled for the modeled fuel cell in the absence of external restrictions, which has been previously determined [62], where it was stated that even with the smallest used fan efficiency the cell worked with an air excess coefficient of ~20. Neither the thermal phenomena occurring in the hydrogen tank nor the hydrogen release dynamics from the metal hydride storage were taken into consideration. It was assumed that the hydrogen in the fueling system always had sufficient pressure to achieve the required hydrogen flow. Additionally, thermal phenomena in other elements of the power supply system were deemed to be negligible, assuming that they worked in optimal and constant thermal conditions. The phenomena related to the pulse operation of power electronic devices in the DC/DC converter were also not taken into account, nor was any ageing of the lithium-ion battery. It was assumed that an external energy source was required to start the hybrid power supply system, ensuring the power needed to start the fuel cell stack and the stack controller, especially when the main energy storage was discharged. A low-capacity start-up battery could be used as an auxiliary energy source, which, if necessary, could be charged from an external source and, after starting the power supply system, could be recharged from the main power busbars. The energy needed to start the power supply system was small; however, the starter battery model was omitted for simplicity. Modeling of the AGV drive system had been simplified to just model the instantaneous power demand, while the demand for the power of the components of the drive system (inverters, motors) during vehicle movement and related to the operation of the vehicle's control, safety, and signaling systems also had to be taken into account. The instantaneous power demand model for a selected AGV operation scenario was created by combining multiple data samples obtained during measurements made by a real AGV with different load states and with different operating states, both during steady driving and in dynamic states (acceleration, braking). The data samples were recorded for an AGV powered by a standard (factory) lithium-ion battery that was charged from an external source at the end of the operation. Then the data were subjected to filtering and processing as described in Section 3.1. It was assumed that the instantaneous power demand for a vehicle powered by a standard battery and for a vehicle powered by a hybrid power supply system with a fuel cell stack under the same operating conditions and load conditions was the same. In connection with the adopted method of modeling the AGV drive system, the phenomena associated with switching power electronic devices in the inverters of the vehicle's drive nodes were excluded from the research. Optimization of the structure and parameters of the hybrid power supply system could be carried out considering various criteria by setting selected parameters for the numerical model and analyzing the obtained waveforms, both utilizing experiments performed by trial and error and by automatic optimization algorithms. Usually, the parameters of the fuel cell stack were assumed at the beginning of the optimization process because the choice of the stack was not very flexible and the rated powers of the available stacks were highly graduated.
The choice of energy storage was more flexible, so the parameters of this storage device could be optimized. During the simulation, an ongoing analysis of the selected waveforms of electrical quantities was carried out in terms of exceeding the defined criteria (critical values). This analysis was conducted regardless of the applied optimization method in the numerical model. If such an exceedance occurred during the simulation, then the simulation would be stopped and the model would return an error code that determined which criterion had been violated. A total of fifteen different criteria were defined in the numerical model for the various components of the hybrid power supply system. These criteria are: • For the fuel cell stack: A minimum voltage, maximum load current, maximum load power, and the conditions of long-term power overload; • For the auxiliary supercapacitor: The maximum charging or discharging current; • For the DC/DC converter: A minimum supply voltage, maximum load power, and the conditions of long-term power overload; • For the main energy storage: The maximum charging and discharging current, and the conditions of long-term overload during charging and discharging; • For the main power busbars load model (i.e., the AGV power demand model): A minimum voltage, maximum voltage, and the maximum difference between the achieved power and the required power. These criteria resulted from the catalogue of real element parameters of the hybrid power supply system and the conditions imposed by the elements of the AGV drive system (e.g., for inverters: The minimum and maximum supply voltage). Not all the criteria needed to be active at the same time. The selection of active criteria depended on which power supply parameters were unknown in the design aid process and which were imposed as project assumptions. For example, if the required minimum DC/DC converter power rating was unknown, then the criteria related to the power overload of the converter were turned off. If a specific DC/DC converter type needed to be used in the design, then the parameters of this converter should have been treated as project assumptions and the appropriate criteria values in the model were to be set, following the datasheet of the converter. In addition, the model for the hydrogen fueling system analyzed the hydrogen consumption during the simulation and returned the appropriate error code if the hydrogen tank was emptied. In this situation, the simulation was also stopped. The numerical model of the hybrid power supply system defined the allowable voltage range and allowable state-of-charge (SOC) range of the main energy storage. Exceeding the voltage or state of charge for energy storage was not treated as a critical error and did not stop the simulation. However, it affected the way the energy storage worked, which was signaled in the model by the appropriate status signals. If the minimum voltage or the minimum state of charge was exceeded during discharge, the energy storage could only be charged. If, with such limited use of energy storage, there was an increased power demand from the AGV model, the voltage of the main power busbars would fall below the criterion value. Similarly, if the maximum voltage or maximum charge was exceeded during charging, the energy storage could only be discharged.
If, under this condition, the AGV model attempted to achieve a return of braking energy to the energy storage, then the voltage of the main power busbars would rise above the criterion value. Exceeding the criterion values of the main power busbars voltage was treated as a critical error and stopped the simulation by returning an appropriate error code. In this situation, the error code had to be analyzed together with the main energy storage status to detect the reason for stopping the simulation. The simulation model developed in the MATLAB/Simulink environment was built according to the block diagram shown in Figure 2. In addition to the blocks outlined in Figure 2, it also contained elements that allowed one to record the simulation results in MATLAB for automatic optimization, and it also contained elements that allowed an ongoing view of waveforms, important parameters, and error and status signals for the trial-and-error experiments. To model the fuel cell stack, a block from the Simscape Electrical library was used, which is described in detail in [4]; the addition of concentration or mass transport losses in accordance with the method presented in [5] was applied. The losses of concentration or mass transport ΔV_trans are described by the equation: where the coefficients m and n are selected experimentally and I_FC is the stack load current. To tune the model for the fuel cell stack's activation area and load losses (ohmic losses), the results of measurements completed on the real H-300 stack and a genetic algorithm were used. During the measurements this stack operated as a dead-end anode with the factory controller. In addition, the concentration losses model was experimentally tuned to obtain the appropriate stack voltage drop when overloaded. The thresholds for stack voltage and current were taken into account; when these thresholds were reached, the stack was disconnected from the load by the factory stack controller. The power of the fuel cell stack's own needs ("balance of plant") was modeled as being linearly dependent on the stack load power. The H-300 stack balance of plant was very simple (containing only fans, a controller, and hydrogen valves). However, it would be possible to model the balance of plant for a more sophisticated system, if the power demand characteristics of the components were available. The fuel cell stack controller model included a hydrogen flow regulator that generated the FFR (ref) control signal for the hydrogen proportional control valve, which determined the flow through the anode of the stack. The principle of proportional control for this regulator was derived from the equations of the fuel cell stack model used in MATLAB presented in publications [4] and [5]. This regulator calculated the hydrogen flow needed to meet the hydrogen needs of the fuel cell stack at a given load current and a given hydrogen utilization. With a set number of cells in the stack, stack temperature, pressure, and purity of hydrogen, the control principle is described as follows: where the value of the coefficient C_FFR can be determined using the relationships given in [4] or [5]. The regulator operated on the averaged load current I_FC(avg) of the fuel cell stack. This regulator ensured the hydrogen flow when the load current in the stack increased dynamically, which in turn ensured the rapid opening of the hydrogen control valve and prevented a voltage drop in the stack.
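The proportional control principle itself is not reproduced in the extracted text, but a flow setpoint of the form FFR(ref) = C_FFR × I_FC(avg) can be sketched from Faraday's law, given the number of cells and a hydrogen utilization setpoint. The value of C_FFR below is derived under these assumptions only; the article determines it from the relationships in [4] or [5], and the 0.80 utilization setpoint is simply an example slightly below the nominal 83% mentioned in the next paragraph.

```python
F = 96485.0    # Faraday constant [C/mol]
V_M = 22.414   # molar volume of hydrogen at normal conditions [L/mol]

def c_ffr(n_cells=60, utilization=0.80):
    """Flow coefficient in standard litres per minute per ampere (slpm/A)."""
    mol_per_amp = n_cells / (2.0 * F)            # mol H2 consumed per second per ampere
    return mol_per_amp * V_M * 60.0 / utilization

def ffr_ref(i_fc_avg, n_cells=60, utilization=0.80):
    """Hydrogen flow setpoint [slpm] for an averaged stack load current [A]."""
    return c_ffr(n_cells, utilization) * i_fc_avg

if __name__ == "__main__":
    # 60-cell stack at its rated 8.3 A, utilization setpoint slightly below nominal:
    print(round(ffr_ref(8.3), 2), "slpm")   # on the order of a few slpm
```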
Due to the strong averaging of the stack load current at the regulator input and the dynamics of the control valve (which was modeled by first-order inertia), the setpoint of hydrogen utilization by the stack should have been slightly less than the nominal hydrogen utilization to ensure proper fueling of hydrogen in fast transient states. The nominal hydrogen utilization could be calculated using the stack's rated parameters and relationships, as given in [4]. For an H-300 stack, it was 83%. When starting the hybrid power supply system and in its associated transient states, the flow regulator ensured a sufficiently high initial hydrogen flow. The stack controller model contained a stack power demand model (the power of its own needs), implemented as an array of values with interpolation that models the "balance of plant". This power demand was included in the load model of the main power busbars. The characteristics of an H-300 stack for nominal hydrogen utilization, obtained by the numerical model and tuned based on the results of the measurements, are presented in Figure 3. The "auxiliary supercapacitor" block in Figure 2 also contains the controller that charges the auxiliary supercapacitor in a precise manner during the power supply system start-up to the required minimum voltage and then connects it to the output busbars of the fuel cell stack. The DC/DC converter model was an average value model that considered the efficiency characteristics, implemented as an array of values with interpolation, and a no-load current. Additionally, it included the characteristics of the output power limitation as a function of the converter supply voltage. The output power limitation could be used interchangeably with the power threshold detection (and error code) depending on the purpose of the simulation test. The setpoint of the output current in CC mode could be constant or it could come from the regulation of the fuel cell stack load power. It was a Proportional Integral (PI) type, anti-windup regulator. The model of the main energy storage management system, depending on the state of charge and voltage of the energy storage, allowed for its normal operation (both charging and discharging) or operation with restrictions (only discharging or only charging). This allowed a pre-charge of the energy storage after starting the power supply system if this function was needed. The main energy storage model contained models of a supercapacitor or a lithium-ion battery, alternatively selected. The model of the main power busbar loading system is included in the "AGV" block shown in Figure 2, which loads the power supply system with the power required by the AGV.
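As an illustration of the average value converter model described above, the sketch below computes the input power drawn from the stack for a requested output power, using an interpolated efficiency table plus a no-load current term. The efficiency points and the no-load current are illustrative placeholders, not the values used in the article's Simulink model.

```python
import numpy as np

EFF_POINTS_W = np.array([  0.0,  50.0, 150.0, 300.0, 500.0])   # output power [W]
EFF_VALUES   = np.array([ 0.70,  0.88,  0.93,  0.94,  0.92])   # converter efficiency [-]

def converter_input_power(p_out_w, v_in, i_no_load=0.05):
    """Input power [W] drawn from the stack side for a requested output power."""
    eta = np.interp(p_out_w, EFF_POINTS_W, EFF_VALUES)   # interpolated efficiency table
    return p_out_w / eta + v_in * i_no_load              # plus the no-load current draw

if __name__ == "__main__":
    # Power drawn from a 36 V stack-side bus for a ~236 W average AGV load:
    print(round(converter_input_power(236.0, 36.0), 1))
```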
Additionally, the power for the fuel cell's own needs is represented by the "auxiliary DC/DC converter" block in the same diagram. The power required by the AGV is given in the value tables, containing samples of the power demand while driving and samples of the vehicle's own needs. The hybrid power supply system model included control signals that enforced the appropriate order of switching on its elements during start-up, thus mapping the operation of the real system. Automated Guided Vehicle (AGV) An automated guided vehicle is designed for the transport of goods, materials, and semi-finished products as part of internal transport carried out in closed production or warehouse halls. The vehicle is designed to travel at ground level and can transport goods directly by itself by placing a loaded pallet on the upper loading surface of the vehicle or by pulling an attached transport trolley. The vehicle moves independently throughout the hall, performing tasks without human assistance in accordance with its pre-planned action and along a planned route. Usually, the vehicle travels along fixed routes according to a fixed schedule adapted in conjunction with the production cycle. The reproducible nature of the travel route and loads is important for matching the planned hydrogen fuel cell stack-based power supply system to the application. The vehicle monitors the surroundings via a sensor system to avoid collisions. The vehicle is powered by a lithium-ion battery placed in an easily accessible and replaceable cassette, and the drive consists of two electric motors. A low-power AGV (Formica-1, AIUT Ltd., Gliwice, Poland) was used in this research. Identification Experiment Identification of the instantaneous power demand model, whose output is the input of the hybrid power supply system model, requires proper planning of the identification experiment. The first step of these activities was to develop a common test plan for different operating conditions that takes into account various stationary and nonstationary operations carried out on the real AGV. The experiment was completed for different operating conditions at different route sections. The experiments are listed in Table 1. Due to the autonomous operation of the AGV control and the stochastic nature of the interaction between the vehicle surface and the AGV, the selected experiments were repeated several times and the average results obtained in this way were used for testing the signal models. During the conducted experiments, the following values were recorded: The voltage and the current returned by the batteries, the current values recorded on the main drive, and the current value on the stabilizing converter. Additionally, measurements of the resistance of the drive that was not directly measured were made. Due to these measurements, it was possible to record the instantaneous power demand. A schematic of the measuring system is shown in Figure 4. The data were recorded using an oscilloscope with a sampling frequency of 100 or 50 kHz, depending on the duration of the selected route section. Restrictions on the safety and control of AGVs are specified in the standard [52], including various responsibilities imposed on manufacturers and users. Due to the above reasons, the AGV was equipped with a logger system to record or monitor selected parameters during operating conditions around the route.
Selected logger data were used to observe the operating conditions. The data gathered concerned the rotational speeds of the integrated Tekno TO-62 drives (left and right drive nodes according to Figure 4), each equipped with an induction motor (nominal power 1.18 kW), a mechanical transmission with gear ratio 8.12 and a maximum continuous wheel torque of 25 Nm, and powered with a nominal voltage of 33 V. This element was also equipped with a 48 VDC nominal brake and a 5000-pulse speed encoder. The data were recorded using the AGV's inbuilt logger with a sampling frequency of ~2.5 Hz and were not synchronized with the instantaneous power demand signals recorded with the use of an oscilloscope. The recorded speed data were used to identify the operating conditions associated with the route section covered. This is a necessary part of the proposed approach, in particular because the measurements were conducted in situ, where synchronization of the measurements with the logger data (operating conditions) was not possible. Figure 5 shows selected waveforms, speed signals from the logger, and the auxiliary computed signals. To synchronize the measurements and thereby identify individual sections of the route for which signal models can be developed, the data were preprocessed by determining the auxiliary signals which were used to enhance the recognition of different operating conditions (some examples are shown in Figure 5), using resampling methods and identifying common starting points for both sources of data. For the obtained segments of the labeled measurement data related to route sections and selected operating conditions, models for stationary and nonstationary conditions were identified, defining the banks of models. Figure 6 shows the selected labeled measurement data based on previously determined data labels from the logger and auxiliary data. Based on the data labels, it was possible to segment the data and create a bank of signal models representing the instantaneous power demand for selected operating conditions.
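The resampling and labeling step described above can be sketched as follows: the slow (~2.5 Hz) logger speed signal is interpolated onto the power-measurement timebase and each sample is labeled by simple speed and acceleration rules, after which contiguous samples with the same label form the route segments assigned to the model bank. The thresholds and the synthetic speed profile are assumed example values, not those used for the Formica-1 data.

```python
import numpy as np

def label_segments(t_power, t_logger, v_logger, v_eps=0.05, a_eps=0.05):
    """Return one label per power sample: 'stop', 'accel', 'decel' or 'const'."""
    v = np.interp(t_power, t_logger, v_logger)     # resample the ~2.5 Hz speed signal
    a = np.gradient(v, t_power)                    # crude acceleration estimate
    labels = np.full(v.shape, "const", dtype=object)
    labels[np.abs(v) < v_eps] = "stop"
    labels[(a > a_eps) & (np.abs(v) >= v_eps)] = "accel"
    labels[(a < -a_eps) & (np.abs(v) >= v_eps)] = "decel"
    return labels

def segments(labels):
    """Group contiguous samples with the same label into (label, start, stop) tuples."""
    out, start = [], 0
    for k in range(1, len(labels) + 1):
        if k == len(labels) or labels[k] != labels[start]:
            out.append((labels[start], start, k))
            start = k
    return out

if __name__ == "__main__":
    t = np.linspace(0, 10, 1001)
    v = np.clip(0.1 * t, 0, 0.5)                   # accelerate, then hold 0.5 m/s
    print(segments(label_segments(t, t, v))[:3])
```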
Instantaneous Power Demand Model for a Selected Scenario The route scenario presented in Figure 7, developed for the AGV, consists of a section of the slalom route with the load (marked in red) and the rest of the route unloaded. For the presented scenario, a signal of instantaneous power demand for a section of the route without load, shown in Figure 7, was modeled with the use of a set of models. For this section of the route the following models were prepared: The needed lengths (number of samples) of the individual waveforms computed by the models were determined based on information about the time necessary to achieve the required speed (in the case of braking and accelerating). The model output did not compute any velocity, as this value could be read from the inverse of the average power demand versus the average velocity, which had been identified based on the collected data sets presented in Table 1. This was determined using the linear approximation P_inst-ave = C_1 × v_ave + C_2, where C_1 is 351.4 Ws/m and C_2 is 279.3 W (valid for average velocities v_ave between 0.3 and 0.8 m/s). The required number of samples for a constant speed period could be determined from the required length of the route and the sampling frequency. The waveforms were generated for the considered route section shown in Figure 7 by using the previously listed models. The errors of the individual models are presented in Table 2. An example assessment of the selected model (with constant speed) for the instantaneous power signal distribution in the frequency domain is presented in Figure 8. The calculated errors were obtained from the test measurement data. Next, the individual waveforms generated with the signal models were combined using the window indicated in Equation (7). An example of the joined data from two models is shown in Figure 9.
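The two relations quoted above can be evaluated directly; the sketch below uses the stated coefficients C_1 and C_2, while the sampling frequency in the example is an assumed value, not the one used in the article.

```python
C1 = 351.4   # Ws/m
C2 = 279.3   # W

def avg_power(v_ave):
    """Average power demand [W] from P_inst-ave = C1*v_ave + C2, valid for 0.3-0.8 m/s."""
    if not 0.3 <= v_ave <= 0.8:
        raise ValueError("approximation valid only for average velocities of 0.3-0.8 m/s")
    return C1 * v_ave + C2

def n_samples(route_length_m, v_ave, fs_hz):
    """Number of samples needed to cover a constant-speed route section."""
    duration_s = route_length_m / v_ave
    return int(round(duration_s * fs_hz))

if __name__ == "__main__":
    print(avg_power(0.5))              # ~455 W at 0.5 m/s
    print(n_samples(20.0, 0.5, 1000))  # a 40 s section sampled at an assumed 1 kHz
```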
Figure 9. The combination of the two waveforms generated from the signal models for nonstationary and stationary signals (a) without using a prepared window; (b) with the use of window; and (c) with a used window to combine the two signals from the models.
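The joining step behind Figure 9 can be sketched as follows; since the exact window of Equation (7) is not reproduced above, the snippet uses a generic raised-cosine crossfade over an assumed overlap length purely for illustration, and the two example waveforms are hypothetical.

```python
import numpy as np

def join_with_window(w1, w2, overlap):
    """Crossfade the tail of w1 into the head of w2 over `overlap` samples.
    A raised-cosine fade stands in here for the window of Equation (7)."""
    fade = 0.5 * (1.0 - np.cos(np.pi * np.arange(overlap) / (overlap - 1)))  # ramps 0 -> 1
    blended = w1[-overlap:] * (1.0 - fade) + w2[:overlap] * fade
    return np.concatenate([w1[:-overlap], blended, w2[overlap:]])

# Hypothetical example: an acceleration (nonstationary) waveform joined to a
# constant-speed (stationary) waveform of instantaneous power demand, in W.
accel = np.linspace(300.0, 500.0, 200)
const = 500.0 + 20.0 * np.random.default_rng(0).normal(size=800)
combined = join_with_window(accel, const, overlap=50)
```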
The relative energy error for the modeled route was mainly (excluding the influence of windowing) a weighted average of the errors of the individual models used to determine the instantaneous power demand, where the weights of this average resulted from the fractions of the individual signals in the whole combined waveform. The example presented in this section shows the possibility of modeling the instantaneous power demand using a well-known class of autoregressive signal models. The proposed approach requires an experiment in which the instantaneous power demand is recorded for various operating conditions. The advantage of the presented method is the lack of interference with the AGV software, including its control system, where this information is often unavailable due to company intellectual property issues, and there is no need to create a dynamic vehicle model.

An Example of Using the Model to Optimize the Hybrid Power Supply System

To demonstrate the practical use of the numerical model for an AGV hybrid power supply system, a short model route was designed, as outlined in Figure 7. The AGV moved with a load of 1.2 tons along a model route (marked in red) and then moved along a route without a load (marked in blue). The loading and unloading points and control points are marked in green, where the vehicle stopped for a maximum of a few seconds. When driving without a load, the vehicle accelerated and braked more rapidly than when driving with a load. The demand for power (P_AGV) for the AGV during the model route was determined from the measurement results obtained for a real AGV, using the processing methods described in Section 3. An example of the waveform of the power demand while driving is shown in Figure 10. The results of measurements of the power demand when the vehicle was stopped were used to model the AGV stoppage. It was assumed that the model cycle of AGV operation (along the model route in Figure 7) included: waiting time for the first drive after starting the power supply system of 30 s, five drives along the model route, a standstill after each drive, and waiting time for switching off after the driving cycles of 10 s. An example of the AGV's power demand waveform during the operation cycle is shown in Figure 11. The standstill time after driving was one of the parameters that changed during the optimization process and, in this case, was 255 s. The total electric energy consumption during the entire operation cycle was 131.2 Wh, with an average power demand of 236 W. The optimization aimed to minimize the energy storage capacity and the duration of stops between drives, assuming that all the energy needed to power the AGV came from the fuel cell stack, i.e., the state of charge of the energy storage should have been the same after an entire operation cycle as at its beginning. In the model hybrid power supply system, none of the fifteen electrical parameter criteria could be violated. It was essential that, during the simulated vehicle operation, a safety margin of ~3 V was maintained between the main energy storage voltage and the threshold (criterion) values of the supply voltage of the load system. These threshold values were 30 and 60 V, respectively.
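As a quick consistency check on the cycle figures reported above (131.2 Wh total energy and 236 W average power), the implied operation-cycle duration is roughly 2000 s, which is plausible for five drives plus 255 s standstills; a short sketch using only the quoted values:

```python
# Consistency check of the reported operation-cycle figures (values from the text).
energy_wh = 131.2    # total electric energy consumption per operation cycle
p_ave_w = 236.0      # average power demand over the cycle
print(f"implied cycle duration: {energy_wh * 3600.0 / p_ave_w:.0f} s")   # ~2001 s
```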
Additional parameters that were tuned in the optimization process, and which had an essential impact on the results obtained, were the allowable range of the energy storage voltage (in particular the energy storage pre-charge voltage), the output voltage of the DC/DC converter in CV mode, and the output current of the DC/DC converter in CC mode. An important result obtained from the model was the hydrogen consumption for the assumed operation cycle of the AGV, which allowed one to choose the required capacity of the hydrogen tank.
The preliminary simulation tests were carried out for the assumed operation cycle, assuming that the main energy storage was a LiFePO4 battery with a capacity of 10 Ah and a rated voltage of 48 V, taken from a real test bench. This is a low-cost battery that can provide a large enough impulse discharge current, with an expected value of ~3C without degradation. This battery is built of sixteen 10 Ah prismatic cells connected in series. The energy storage model was tuned using optimization methods and the datasheet of the battery cells. It was assumed that the battery was pre-charged before starting the AGV power supply system (the initial state of charge was 50%). The selected results of the preliminary simulation tests are shown in Figure 12. The results for the turned-off SCU are presented so that transients associated with the operation of the SCU do not impair their readability. Figure 12a shows the total demand for power P_req, including the P_AGV power for the AGV and the P_Aux power for the hybrid power supply system's own needs. Figure 12b shows the load power of the fuel cell stack P_FC. It can be seen that the stack was utilized optimally and correctly (without overloading) throughout the entire operating cycle of the AGV and was loaded with power close to the rated power. Figure 12c shows the hydrogen flow rate FFR obtained at the control valve output.
Integrating this waveform, after converting it to standard liters, it can be determined that ~141 L of hydrogen were consumed during the entire AGV cycle of operation, which corresponds to 423 Wh of hydrogen energy, assuming that the change in enthalpy of formation was equal to the lower heating value [5]. When considering the energy consumption of the AGV from Figure 12a (~144 Wh), the efficiency of the hybrid power supply system was 34%. Figure 12d shows the output current I_DC/DC waveform of the DC/DC converter. It can be seen that this converter works permanently in CC mode and the setpoint of the output current is constant and equal to 5 A. Figure 12e shows the U_Batt voltage waveform of the 10 Ah battery. This voltage was practically constant, which resulted from the relatively rigid discharge characteristics and the slight changes in the state of charge SOC_Batt% of this battery, shown in Figure 12f. A practically constant voltage of the battery at a constant output current of the DC/DC converter caused the fuel cell stack to be loaded with constant power throughout the entire operation cycle. The standstill time needed to restore the battery state of charge to its condition before driving was 257 s. The battery used in the preliminary simulation tests had too large a capacity for the energy demand of the selected scenario of AGV operation, which was uneconomical. In subsequent tests, the battery capacity was reduced to a value of 0.2 Ah; this still ensured the correct operation of the AGV. The capacity of the battery was chosen so that its SOC varied from 20% to 80% during the operation of the AGV. It was assumed that the battery was pre-charged to an SOC of 20%, with an SOC of at least 70% required to start the vehicle. Therefore, when the power supply system was turned on, the battery was pre-charged by the fuel cell stack. The results did not change significantly. A slightly poorer utilization of the fuel cell stack was obtained while the AGV was driving. This was due to greater voltage drops in the smaller capacity battery, which, with a constant output current of the DC/DC converter, resulted in a decrease in the stack load power. The consumption of hydrogen increased to ~148 standard liters due to the initial charging of the battery, which absorbed 7.3 standard liters of hydrogen and lasted ~90 s. After omitting the supercapacitor pre-charge energy, the system efficiency was similar to previously, at ~34%.
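The hydrogen-to-energy conversion and the quoted ~34% efficiency of the preliminary test can be reproduced with simple arithmetic; the conversion factor of roughly 3.0 Wh per standard liter used below is simply the 423 Wh / 141 L ratio implied by the text (consistent with the lower heating value of hydrogen), and the remaining numbers are the values quoted above.

```python
# Rough check of hydrogen energy and system efficiency (values from the text).
wh_per_std_liter = 423.0 / 141.0      # ~3.0 Wh per standard liter (LHV-based)
h2_liters = 141.0                     # hydrogen consumed over the operation cycle
e_h2_wh = h2_liters * wh_per_std_liter
e_agv_wh = 144.0                      # electric energy consumed by the AGV (Figure 12a)
print(f"hydrogen energy: {e_h2_wh:.0f} Wh, efficiency: {e_agv_wh / e_h2_wh:.0%}")  # ~423 Wh, ~34%
```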
The battery now used had a capacity of only 0.2 Ah, which is practically unobtainable because the allowable current values of batteries with such a small capacity are too low. Simulation tests using the selected scenario of vehicle operation and a 0.2 Ah battery were not of practical importance but were used to present the issue and to show how to use the model as a design aid. Therefore, during these tests, no criterion values for the battery current were determined. A similar battery operation regime with a real, higher capacity could be obtained for a real-scenario AGV operation with a higher energy demand. The use of the battery capacity in such a work regime seems optimal, but it should also be noted that a battery working continuously in such a regime can quickly degrade. Due to the possibility of quick battery degradation, further simulation tests were completed using a supercapacitor as the main energy storage. The most important advantage of supercapacitors, outlined in [30,32], is their use as an energy buffer for a fuel cell stack in traction applications due to their higher power density compared to batteries. Supercapacitors have higher efficiency and a higher number of charge and discharge cycles without degradation compared to batteries. By optimization, the criterion for the minimum capacity of the supercapacitor was chosen. The capacity of the selected SC maintained the voltage of the main power busbars over their required operating range whilst preserving a safety margin from the criterion values. The results of these simulation tests are given in Figure 13. It was assumed that the supercapacitor was not pre-charged, so its pre-charging was implemented by the fuel cell stack on start-up of the power supply system.
The main difference between the operation of the power supply system with a LiFePO4 battery and with a supercapacitor is due to the different discharge characteristics of these energy storage devices. The voltage U_SC of the supercapacitor changes significantly more when discharged than the voltage U_Batt of the LiFePO4 battery (at least in the SOC range between 20% and 80%). With a constant output current of the DC/DC converter in CC mode, this results in a much worse utilization of the fuel cell stack due to the significantly lower stack load power at a low supercapacitor voltage (Figure 13b). The presented results were obtained using a real supercapacitor model which consisted of 44 component supercapacitors with a capacity of 385 F each in a 2P22S connection system (two connected in parallel, 22 in series), which gave a resultant capacity of 35 F and a rated voltage of 61.6 V. Using a supercapacitor, the hydrogen consumption was now 251 standard liters, of which 19.4 standard liters were required for pre-charging of the supercapacitor. The pre-charge time was approximately 340 s. The energy consumption of the AGV was 243.7 Wh (Figure 13a) and the hydrogen energy used for the AGV operation was ~695 Wh. The obtained power supply system efficiency was similar to that previously shown in the battery case, but the vehicle's standstill time after driving, required to charge the supercapacitor to its pre-driving condition, was 670 s. Such a long standstill time was due to the low power utilization of the fuel cell stack at low supercapacitor voltage, when the stack provided only slightly more power than the AGV's own needs. Further simulation tests were completed with the fuel cell load power regulator turned on, which affected the setpoint of the DC/DC converter output current in CC mode. As a result of the re-optimization, the capacity of the supercapacitor in the main energy storage was reduced to 24.5 F, obtained by connecting in the 2P22S system component supercapacitors with a capacity of 270 F each. The results of the simulation tests are shown in Figure 14. It can be seen that the utilization of the fuel cell stack was again optimal and correct (Figure 14b). At the same time, a much shorter standstill time (240 s) after driving is needed to charge the supercapacitor to its pre-driving condition. The hydrogen consumption was 150 standard liters, of which ~14.5 standard liters are required for the initial charge of the supercapacitor, which lasts ~200 s. The energy consumption of the AGV is 139.6 Wh (Figure 14a) and the hydrogen energy used for the AGV operation was ~406.5 Wh. Again, a system efficiency of ~34% was obtained, but the required standstill was much shorter than for the 35 F supercapacitor.
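The resultant capacitance and rated voltage of the 2P22S banks quoted above follow from the usual series-parallel rules; in the short check below the cell capacitances come from the text, while the 2.8 V per-cell rating is inferred from 61.6 V divided by the 22 series positions.

```python
# Series-parallel supercapacitor bank arithmetic (2P22S: 22 series groups of 2 parallel cells).
def bank(c_cell_f, v_cell=2.8, n_parallel=2, n_series=22):
    c_bank = n_parallel * c_cell_f / n_series   # parallel adds capacitance, series divides it
    v_bank = n_series * v_cell                  # series adds voltage
    return c_bank, v_bank

print(bank(385.0))   # -> (35.0, 61.6)   : the initial 35 F / 61.6 V bank
print(bank(270.0))   # -> (24.5..., 61.6): the re-optimized ~24.5 F bank
```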
Improvement in the use of the fuel cell stack compared to the previous simulation was obtained as the stack load power regulator increased the setpoint of the DC/DC converter output current in CC mode at low supercapacitor voltage, just enough not to overload the stack. The fast transients shown in Figure 14b-d result from when the supercapacitor was charged to a certain maximum voltage and the DC/DC converter then went into CV mode. In this situation, the load power of the fuel cell stack dropped sharply, and after the converter returned to CC mode, it increased again rapidly. It can be seen that the presented simulation studies achieved the optimization goals and aid the design of the hybrid power supply system. The given results were valid for the assumed scenario of the AGV operation. In subsequent simulation tests, how the SCU operation affects the obtained optimization results was checked. The SCU short-circuited the fuel cell stack for 100 ms every 10 s. However, the transients lasted longer than 100 ms and were associated, among other things, with the need to recharge the auxiliary supercapacitor, which was partially discharged when supporting the DC/DC converter supply voltage during the stack short-circuit. The obtained results were accurate and similar to those presented in Figure 14; the main difference was that the standstill time after driving was extended to 250 s to charge the supercapacitor back to its condition before driving. Hydrogen consumption increased slightly to 155 standard liters and the power supply system efficiency decreased by 0.7%.

Discussion

The energy transfer numerical model for the hybrid power supply system was the main objective of the article. The model consisted of the elements of the supply system: the power converter, the energy buffer, the fuel cell (FC), and the control devices. For the load of the supply system, the instantaneous power demand model was used. This model represents a generic instantaneous power demand for a given route under stationary and nonstationary conditions for an AGV, for experimentally determined selected operating conditions. The main reason for developing the generic instantaneous power demand model is its simplicity and the fact that it does not require any additional information about the subsystems of AGVs or any supplementary information from the AGV manufacturers.
It should be emphasized that the method proposed here could be improved by using further models that consider nonstationary conditions, i.e., that include nonstationarity of frequency components. Section 4.3 describes the optimization process of the hybrid power supply system parameters for a model route and an AGV operation scenario. A similar process of optimization and design aid can be completed using the real vehicle operation cycle by obtaining the route parameters and driving scenario from the vehicle manufacturer or vehicle user. In a situation where the AGV cannot make stops after driving that allow for recharging of the main energy storage, as described in Section 4.3, it is possible to minimize the capacity of the energy storage using the numerical model with the appropriate utilization of the fuel cell stack, under the assumption that the energy storage will be recharged using an external source after the entire operation cycle of the vehicle. In this situation, a compromise can be made between the capacity of the main energy storage (lithium-ion battery) and the hydrogen consumption, to consequently determine the capacity of the hydrogen tank. It is possible to use a stack load power regulator to intentionally reduce stack utilization and not exceed the assumed hydrogen consumption. Further development of the presented numerical model, in the part related to the production of electrical energy, may include issues such as the dynamics of hydrogen release from the metal hydride storage under various operating conditions, and the dynamics of the cell response to a change in the hydrogen flow at the control valve output, by considering the dynamics of the hydrogen distribution inside the cell. In the part of the model in which the power demand of the AGV is modeled, a dynamic model of the vehicle can be used, which requires the drive torque for the specific route conditions, vehicle load, and the traction parameters (speed and acceleration). The hybrid power supply system power demand can then be calculated from the required drive torque in the dynamic models of the vehicle's drive nodes. Further development of the numerical model requires conducting additional tests on the fuel cell stack together with the hydrogen tank and examination of the real AGV to collect additional data and verify the extended numerical model. The overall assessment of the proposed solution was carried out quantitatively for selected model elements (the models of instantaneous power demand and the tuned hydrogen cell model). For the other elements of the model, the assessment is qualitative as it is dependent on the specific instance of the AGV.

Conclusions

The research aim was to develop a model of a hybrid power supply system with a fuel cell stack for designing an energy storage system. The power supply system model is a numerical tool supporting the design and optimization of the power supply system following the MBD methodology. The article presents an example of the process of energy storage optimization for the AGV hybrid power supply system, which implements an example cycle of operation. Data for the AGV power demand model were obtained from measurements carried out on a real factory battery-powered AGV. These data were processed, which allowed the extraction of models for standard route fragments that can be interpolated under various load conditions. Based on these fragmentary models, it is possible to develop a power demand model for any route and optimize the hybrid power supply system for this route.
The conclusions from the generic instantaneous power demand model are: • The proposed generic model allows for the determination of the instantaneous electric power for any route without the need to identify the dynamic drive system parameters; • The model enables the determination of both stationary and nonstationary operating conditions using a simple approach with autoregressive models from signals with additional elements used for modeling the first-order and second-order nonstationarity with the application of additional linear, quadratic, or autoregressive models; • Building a generic model for instantaneous power demand is possible since the AGV object is a system with constant control settings and operating conditions, and the AGV usually moves along an unchanged route for a long period. For more complex objects, the proposed approach may not be cost-effective as it would require more identification experiments. Conclusions related to the model of the hybrid power supply system: • The model seems to correctly imitate the energy transfer in the hybrid power system. The waveforms calculated by the model are reliable and all the phenomena visible are correct and explainable. The effectiveness of the model, however, must be confirmed by measurements of real cases with the design and optimization of the hybrid power supply system, which will be the subject of future research; • The methodology used to model the components of the hybrid power supply system, using a few original ideas, means that the results of computer simulations are calculated relatively quickly, even for long routes taken by the AGV.
21,032.2
2020-07-03T00:00:00.000
[ "Engineering", "Environmental Science" ]
OSCILLATION OF THREE DIMENSIONAL NEUTRAL DELAY DIFFERENCE SYSTEMS

K. THANGAVELU, Associate Professor, Department of Mathematics, Pachiyappa's College, Chennai-600 030,<EMAIL_ADDRESS>G. SARASWATHI, Research Scholar, Department of Mathematics, Pachiyappa's College, Chennai-600 030,<EMAIL_ADDRESS>

Abstract. This paper deals with some oscillation criteria for the three dimensional neutral delay difference system of the form (1.1).

1. Introduction

Consider a three dimensional neutral delay difference system of the form

Δ(x_n + p_n x_{n-k}) = b_n y_n^α,
Δ y_n = c_n z_n^β,            (1.1)
Δ z_n = -a_n x_{n-l+1}^γ,  n = 1, 2, ...,

subject to the following conditions: b_n Δ(x_n + p_n x_{n-k}) + a_n x_{n-l+1}^γ = 0, whose oscillatory behaviour has been studied in, for example, [1][2][3][4] and the references cited therein. Also the oscillation theory has been considered for two-dimensional and three-dimensional difference systems (see, for example, [5][6][7][8][9][10] and the references cited therein). This observation motivated us to consider three-dimensional neutral delay difference systems and to investigate their oscillatory behaviour. In Section 2, we present some basic lemmas which will be used to prove the main theorems, and in Section 3, we obtain the sufficient conditions for the oscillation of system (1.2). Examples are provided in Section 4 to illustrate the main results.

2. SOME BASIC LEMMAS

In this section, we state and prove some basic lemmas, which will be used in establishing our main results. Summing the last inequality from N_1 to n-1 and then taking n → ∞, we find that y_n → -∞ as n → ∞. Then there is an integer N_2 ≥ N_1 and a constant η such that y_n < η < 0 for n ≥ N_2, and Δw_n = η^α b_n for n ≥ N_2, where w_n = x_n + p_n x_{n-k} for n ≥ N_2. Now taking the summation from N_2 to n-1 and then making n → ∞, we see that w_n → -∞ as n → ∞. This contradicts the fact that w_n > 0 for all n ≥ N. Hence z_n > 0 for all n ≥ N. The proof for the case where w_n < 0 eventually is similar. This completes the proof of the lemma. Proof. Proceeding as in Lemma 2.2, we have > 0 and > 0 for n ≥ ≥ 1. From the first equation of the system Therefore > 0 and nondecreasing for all n ≥ . From the definition of , we obtain This completes the proof of the lemma.

3. OSCILLATION RESULTS

In this section, we establish sufficient conditions for the oscillatory and asymptotic behaviour of the solutions of system (1.2). and < 1. We shall prove that →∞ = 0. Let →∞ = 1 > 0. Then there exists an integer 1 ≥ , such that +1 > 1 > 0 for n ≥ 1. Now summing the third equation of (1.2) from n to ∞ and then using ≥ − +1 and ≥ 1 for n ≥ 1, we obtain Since is a ratio of odd positive integers, we have from the last inequality Summing the second equation of (1.2) from 1 to n-1 and then using (3.10) we obtain In view of (3.1) the last inequality implies for → ∞ that →∞ = ∞, which is a contradiction. Therefore where 0 < < 1, then the conclusion of Theorem 3.1 holds. Proof. Let {( , , )} be a nonoscillatory solution of system (1.2). We see that Theorem 3.1 satisfies one of the two cases of Lemma 2.2 for n ≥ . First we consider case (I). In this case, we have inequality (3.7). Using (3.11) Raising (3.13) to the (1- )th power we obtain Since {x_n} is monotonically nondecreasing, there exists an integer 2 ≥ 1 and a constant 1 > 0 such that − +1 ≥ 1 , ≥ 2.
Multiplying (3.17) by −1 , using the third equation of (1.2), summing from 2 to n-1 and then using the fact that {zn} is positive and nondecreasing we have which contradicts (3.13). Therefore, case(I) cannot occur and for case(II), we proceed in the same way as in the proof of Theorem 3.1. This completes the proof. → ∞ as n→ ∞. which is a contradiction to the fact that > 0 for m≥ 1 . Therefore, case(I) cannot occur and hence the solution of (1.2) satisfies case(II). The proof for case(II) is similar to that of Theorem 3.1 and this completes the proof. } is one such solution of the system (4.1). } is one such solution of the system (4.2).
1,079.4
2016-11-30T00:00:00.000
[ "Mathematics" ]
LMSM: A modular approach for identifying lncRNA related miRNA sponge modules in breast cancer Until now, existing methods for identifying lncRNA related miRNA sponge modules mainly rely on lncRNA related miRNA sponge interaction networks, which may not provide a full picture of miRNA sponging activities in biological conditions. Hence there is a strong need of new computational methods to identify lncRNA related miRNA sponge modules. In this work, we propose a framework, LMSM, to identify LncRNA related MiRNA Sponge Modules from heterogeneous data. To understand the miRNA sponging activities in biological conditions, LMSM uses gene expression data to evaluate the influence of the shared miRNAs on the clustered sponge lncRNAs and mRNAs. We have applied LMSM to the human breast cancer (BRCA) dataset from The Cancer Genome Atlas (TCGA). As a result, we have found that the majority of LMSM modules are significantly implicated in BRCA and most of them are BRCA subtype-specific. Most of the mediating miRNAs act as crosslinks across different LMSM modules, and all of LMSM modules are statistically significant. Multi-label classification analysis shows that the performance of LMSM modules is significantly higher than baseline’s performance, indicating the biological meanings of LMSM modules in classifying BRCA subtypes. The consistent results suggest that LMSM is robust in identifying lncRNA related miRNA sponge modules. Moreover, LMSM can be used to predict miRNA targets. Finally, LMSM outperforms a graph clustering-based strategy in identifying BRCA-related modules. Altogether, our study shows that LMSM is a promising method to investigate modular regulatory mechanism of sponge lncRNAs from heterogeneous data. Introduction Long non-coding RNAs (lncRNAs) are RNA transcripts with more than 200 nucleotides (nts) in length [1]. More and more evidence has shown that lncRNAs play important functional roles in many biological processes, including human cancers [2][3][4]. As a major class of noncoding RNAs (ncRNAs), lncRNAs have attracted increasing interest from researchers in their exploration of non-coding knowledge from the 'junk'. Among the wide range of biological functions of lncRNAs, their role as competing endogenous RNAs (ceRNAs) or miRNA sponges is in the limelight. As a family of small ncRNAs (~18nts in length), miRNAs are important post-transcriptional regulators of gene expression [5,6]. According to the ceRNA hypothesis [7], lncRNAs contain abundant miRNA response elements (MREs) for competitively sequestering target mRNAs from miRNAs' control. This regulation mechanism of lncRNAs when acting as miRNA sponges is highly implicated in various human diseases [8], including breast cancer [9]. For example, lncRNA H19, an imprinted gene is associated with breast cancer cell clonogenicity, migration and mammosphere-forming ability. By sponging miRNA let-7, H19 forms a H19/let-7/LIN28 reciprocal negative regulatory circuit to play a critical role in the breast cancer stem cell maintenance [10]. To systematically investigate the functions of lncRNAs as miRNA sponges in human cancer, a series of computational methods have been developed to infer lncRNA related miRNA sponge interaction networks. The methods can be divided into three categories according to the statistical or computational techniques employed: pair-wise correlation based approach, partial association based approach, and mathematical modelling approach [11]. 
It is commonly known that to implement a specific biological function, genes tend to cluster or connect in the form of modules or communities. Consequently, based on the identified lncRNA related miRNA sponge interaction networks, several methods [12][13][14][15][16][17] using graph clustering algorithms were developed to identify lncRNA related miRNA sponge modules. For the identification of sponge lncRNA-mRNA pairs, most of existing methods only consider pair-wise correlation of them. Since the lncRNA related miRNA sponge interaction networks are created by simply putting together sponge lncRNA-mRNA pairs, when the expression levels of each sponge lncRNA-mRNA pair are highly correlated, the collective correlation between the set of sponge lncRNAs and the set of mRNAs in the same identified module is not necessarily high. As we know, the pair-wise positive correlation between the expression levels of a lncRNA and a mRNA pair is commonly used to identify the sponge interactions between them. For the identification of lncRNA related miRNA sponge modules, it is also necessary to investigate whether the clustered sponge lncRNAs and mRNAs in a module have high collective positive correlation or not. Moreover, these methods do not consider the influence of the shared miRNAs on the expression of the clustered sponge lncRNAs and mRNAs. It is known that the "tug-of-war" between sponge lncRNAs and mRNAs is mediated by miRNAs. Therefore, it is extremely important to consider the influence of the shared miRNAs in identifying lncRNA related miRNA sponge modules. Recently, to study lncRNA, miRNA and mRNA-associated regulatory modules, Deng et al. [18] and Xiao et al. [19] have proposed two types of joint matrix factorization methods to identify mRNA-miRNA-lncRNA co-modules by integrating gene expression data and putative miRNA-target interactions. However, it is still not clear how the shared miRNAs influence the expression level of the sponge lncRNAs and mRNAs in a module. To address the above issues, we firstly hypothesize that sponge lncRNAs form a group to competitively release a group of target mRNAs from the control of the miRNAs shared by the lncRNAs and mRNAs (details see Section Materials and methods). We name this hypothesis the miRNA sponge modular competition hypothesis in this paper. Then based on the hypothesis, we propose a novel framework to identify LncRNA related MiRNA Sponge Modules (LMSM). The framework firstly uses the WeiGhted Correlation Network Analysis (WGCNA) [20] method to generate lncRNA-mRNA co-expression modules. Next, by incorporating matched miRNA expression and putative miRNA-target interactions, LMSM applies three constraints (see Section Materials and methods) to obtain lncRNA related miRNA sponge modules (also called LMSM modules in this paper). One of the constraints, high canonical correlation, is used to assess whether the group of sponge lncRNAs and the group of mRNAs in the same module have a high collective positive correlation or not. The other constraint, adequate sensitivity canonical correlation conditioning on a group of miRNAs, is used to evaluate the influence of the shared miRNAs on the clustered sponge lncRNAs and mRNAs. To evaluate the LMSM approach, we apply it to matched miRNA, lncRNA and mRNA expression data, and clinical information of breast cancer (BRCA) dataset from The Cancer Genome Atlas (TCGA). The modular analysis results demonstrate that LMSM can help to uncover modular regulatory mechanism of sponge lncRNAs in BRCA. 
LMSM is released under the GPL-3.0 License, and is freely available through the GitHub repository (https://github.com/zhangjunpeng411/LMSM).

A hypothesis on miRNA sponge modular competition

The ceRNA hypothesis [7] indicates that a pool of RNA transcripts (known as ceRNAs) regulate each other's transcripts by competing for the shared miRNAs through MREs. Based on this unifying hypothesis, a large-scale gene regulatory network including coding and non-coding RNAs across the transcriptome can be formed, and it plays critical roles in human physiological and pathological processes. However, by using MREs as letters of language, the hypothesis only depicts the crosstalk between an individual RNA transcript (e.g. coding RNAs, lncRNAs, circRNAs or pseudogenes) and a mRNA at the pair-wise interaction level, and the crosstalk between RNA transcripts and mRNAs at the module level is still an open question. There has been evidence showing that for the same transcriptional regulatory program, biological process or signaling pathway, genes tend to form modules or communities to coordinate biological functions [21]. These modules correspond to functional units in complex biological systems, and they play an important role in gene regulation. Based on these findings, in this paper, we hypothesize that regarding miRNA sponging, the crosstalk between different RNA transcripts is in the form of modular competition. We call the hypothesis the miRNA sponge modular competition hypothesis. As shown in Fig 1, based on our hypothesis, instead of having pair-wise competitions, miRNA sponges form groups to compete at the module level for common miRNAs. Here, a miRNA sponge module consists of a competing group (other coding RNA group, pseudogene group, circRNA group or lncRNA group) and a mRNA group. Here, other coding RNAs also include mRNAs. From the perspective of modularity, the hypothesis at the module level extends the ceRNA hypothesis and provides a new channel to look into the functions and regulatory mechanism of miRNA sponges or ceRNAs. Since the available resources of lncRNAs are more abundant than those of other non-coding RNAs (e.g. circRNAs and pseudogenes), in this paper, we focus on the competition between lncRNAs and mRNAs to validate and demonstrate the proposed miRNA sponge modular competition hypothesis. Our goal is to discover lncRNA related sponge modules, or LMSM modules. Here each LMSM module contains a group of lncRNAs which compete collectively with a group of mRNAs for sponging the same set of miRNAs.

Fig 1. The four types of miRNA sponges (other coding RNAs, lncRNAs, circRNAs or pseudogenes), miRNAs and their target mRNAs are shown. Each miRNA sponge module consists of a group of the same type of miRNA sponges, e.g. a group of lncRNAs and a group of target mRNAs. In the same module, the group of miRNA sponges competes with the group of target mRNAs for binding with a set of miRNAs. If the miRNA sponges win the competition, the group of target mRNAs will be released from repression and they will be translated into proteins. If the miRNA sponges lose the competition, the group of target mRNAs will be post-transcriptionally repressed and degraded.

The LMSM framework

Overview of LMSM. As shown in Fig 2, the proposed LMSM framework comprises two stages. In stage 1, the WGCNA method [20] is used for finding lncRNA-mRNA co-expression modules from matched lncRNA and mRNA expression data. Then in stage 2, LMSM identifies LMSM modules from the lncRNA-mRNA co-expression modules using three criteria.
That is, a co-expression module is considered a LMSM module if the group of lncRNAs and the group of mRNAs in the co-expression module: (1) have significant sharing of miRNAs, (2) have high canonical correlation between their expression levels, and (3) have adequate sensitivity canonical correlation conditioning on their shared miRNAs. LMSM checks the criteria one by one, and once a co-expression module does not meet a criterion, it is discarded and will not be checked against the next criterion. In the following, we will describe the two stages in detail.

Fig 2. Firstly, we use the WGCNA method to infer lncRNA-mRNA co-expression modules from the matched lncRNA and mRNA expression. Then by using miRNA expression data and putative miRNA-target interactions, we infer lncRNA related miRNA sponge modules (LMSM) by applying three criteria: significant sharing of miRNAs by the group of lncRNAs and the group of target mRNAs in the same co-expression module, high canonical correlation between the lncRNA group and the target mRNA group, and adequate sensitivity canonical correlation between the lncRNA group and the target mRNA group conditioning on shared miRNAs. Each LMSM module must contain at least two sponge lncRNAs and two target mRNAs.

Identifying lncRNA-mRNA co-expression modules. For identifying lncRNA-mRNA co-expression modules, we use the WGCNA method. WGCNA is a popular method for identifying co-expressed genes across samples and it can be used to identify clusters of highly co-expressed lncRNAs and mRNAs. In our task, we use the matched lncRNA and mRNA expression data as input to the WGCNA R package [20] to identify lncRNA-mRNA co-expression modules. We use the scale-free topology criterion for soft thresholding. The coefficient of determination R^2 (the range is from 0 to 1) is used to quantify the goodness of scale-free topology, and larger R^2 values mean better scale-free topology. Normally, an R^2 value larger than 0.8 in the power law curve fit is ranked as good-level in the WGCNA method. Therefore, the desired minimum scale-free topology fitting index R^2 is set as 0.8 in this work.

Inferring lncRNA related miRNA sponge modules. To identify lncRNA related miRNA sponge modules from the co-expression modules obtained in stage 1, we propose three criteria (detailed below) by following the key tenet of our miRNA sponge modular competition hypothesis. That is, a group of lncRNAs (acting as miRNA sponges) competes with a group of mRNAs with respect to a set of miRNAs shared by the two groups. The first criterion requires that the group of lncRNAs and the group of mRNAs in a miRNA sponge module have a significant sharing of a set of miRNAs. LMSM uses a hypergeometric test to assess the significance of the sharing of miRNAs between the group of lncRNAs and the group of mRNAs in a co-expression module, based on putative miRNA-target interactions. The p-value for the test is computed from the hypergeometric distribution. In the equation, N_1 is the number of all miRNAs in the dataset, M_1 and K_1 denote the total numbers of miRNAs interacting with the group of lncRNAs and the group of mRNAs in the co-expression module respectively, and L_1 (e.g. 3) is the number of miRNAs shared by the group of lncRNAs and the group of mRNAs in the co-expression module. The second criterion is to assure that the sponge modular competition between the group of lncRNAs and the group of mRNAs in a miRNA sponge module is strong enough.
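Before criterion 2 is elaborated below, here is a minimal sketch of the miRNA-sharing test of criterion 1. The exact form of the p-value equation is not reproduced in the text above, so the standard upper-tail hypergeometric test over the quantities N_1, M_1, K_1 and L_1 defined there is assumed; apart from the 674 miRNAs in the dataset, the example counts are hypothetical.

```python
from scipy.stats import hypergeom

def mirna_sharing_pvalue(N1, M1, K1, L1):
    """P(X >= L1) for X ~ Hypergeom(N1, M1, K1): probability of sharing at least
    L1 miRNAs, where N1 = all miRNAs in the dataset, M1 = miRNAs interacting with
    the lncRNA group, K1 = miRNAs interacting with the mRNA group."""
    return hypergeom.sf(L1 - 1, N1, M1, K1)

# Hypothetical module: 674 miRNAs in total, 120 interact with the lncRNA group,
# 150 with the mRNA group, and 60 are shared by both groups.
print(mirna_sharing_pvalue(674, 120, 150, 60))   # a very small p-value => significant sharing
```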
In existing work, to identify lncRNA related mRNA sponge interactions, a principle followed is that the expression level of a lncRNA and the expression level of a mRNA need to be strongly and positively correlated. Following the same principle on strong positive correlation in expression levels while considering our modular competition hypothesis, LMSM requires the collective correlation between the expression levels of the group of lncRNAs and the group of target mRNAs in the same module to be strong and positive. To assess the collective correlation, we perform canonical correlation analysis [22] to obtain the canonical correlation between the group of lncRNAs and the group of mRNAs in a co-expression module. Let the two column vectors X = (x_1, x_2, ..., x_m)^T and Y = (y_1, y_2, ..., y_n)^T represent the group of lncRNAs and the group of mRNAs in a co-expression module respectively. S_XX, S_YY and S_XY are the variance or cross-covariance matrices calculated from the expression data of X and Y. The canonical correlation analysis seeks the canonical vectors a (a in R^m) and b (b in R^n) which maximize the correlation corr(a^T X, b^T Y). The canonical correlation between the group of lncRNAs and the group of mRNAs, denoted as CC_lncR-mR, is then calculated with the found canonical vectors. In this work, we use the PMA R package [23] to compute canonical correlation. Finally, the third criterion, adapted from the sensitivity correlation [24], is employed to assess if the miRNAs shared by the group of lncRNAs and the group of mRNAs in a module have large enough influence on the modular competition between the two groups of RNAs. To check according to this criterion, we incorporate miRNA expression data, and compute SCC_lncR-mR, the sensitivity canonical correlation between the group of lncRNAs and the group of mRNAs in a co-expression module, as follows: SCC_lncR-mR = CC_lncR-mR - PCC_lncR-mR (3), where PCC_lncR-mR is the partial canonical correlation between the group of lncRNAs and the group of mRNAs, i.e. the canonical correlation conditioning on the expression of their shared miRNAs in the co-expression module, or the canonical correlation between the two groups of RNAs when the influence of the shared miRNAs is eliminated. Therefore, from Eq (3), we see that SCC_lncR-mR implies the correlation between the two groups of RNAs under the influence of their shared miRNAs. PCC_lncR-mR in Eq (3) is calculated as in Eq (4), where CC_miR-mR (CC_miR-lncR) is the canonical correlation between the set of miRNAs in the co-expression module and the group of mRNAs (lncRNAs) in the co-expression module. In this study, empirically, a lncRNA-mRNA co-expressed module with p-value < 0.05 for the hypergeometric test of miRNA sharing (criterion 1), CC_lncR-mR > 0.8 for modular competition strength assessment (criterion 2) and SCC_lncR-mR > 0.1 for miRNA influence (criterion 3) is regarded as a lncRNA related miRNA sponge module (a LMSM module).

Evaluating statistical significance of LMSM modules

To evaluate the statistical significance of LMSM modules, we adapt the null model method proposed in [25]. The null model method hypothesizes that the shared miRNAs do not affect the correlation between two genes, i.e. the sensitivity correlation (the difference between correlation and partial correlation) between two genes is 0, and it has been successfully applied to evaluate the statistical significance of ceRNA interactions. Similar to [25], LMSM is also adapted from the Sensitivity Correlation (SC) method [24].
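Before continuing with the null model below, here is a small illustrative sketch of the quantities behind criteria 2 and 3 above. The canonical correlation is computed with scikit-learn's CCA rather than the PMA R package used in the study, and, since Eq (4) is not reproduced above, the partial canonical correlation uses the standard first-order partial-correlation formula as an assumed stand-in; the example data are random and purely hypothetical.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def canonical_corr(A, B):
    """First canonical correlation between two (samples x variables) matrices."""
    u, v = CCA(n_components=1).fit_transform(A, B)
    return np.corrcoef(u.ravel(), v.ravel())[0, 1]

def sensitivity_canonical_corr(lncR, mR, miR):
    """SCC = CC - PCC, with PCC from the standard first-order partial-correlation formula."""
    cc_lm = canonical_corr(lncR, mR)   # CC_lncR-mR
    cc_xm = canonical_corr(miR, mR)    # CC_miR-mR
    cc_xl = canonical_corr(miR, lncR)  # CC_miR-lncR
    pcc = (cc_lm - cc_xl * cc_xm) / np.sqrt((1 - cc_xl ** 2) * (1 - cc_xm ** 2))
    return cc_lm - pcc

# Hypothetical module: 500 samples, 5 lncRNAs, 20 mRNAs and 3 shared miRNAs.
rng = np.random.default_rng(0)
lncR = rng.normal(size=(500, 5))
mR = rng.normal(size=(500, 20))
miR = rng.normal(size=(500, 3))
print(sensitivity_canonical_corr(lncR, mR, miR))
```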
Therefore, the null model method can be applied to evaluate the statistical significance of LMSM modules. In our null model, the null hypothesis is that the group of the shared miRNAs does not influence the canonical correlation between the group of lncRNAs and the group of mRNAs, i.e. SCC_lncR-mR = 0. For each LMSM module, a group of lncRNAs or a group of mRNAs corresponds to a gene, and a group of the shared miRNAs corresponds to a miRNA in the null model. For obtaining more precise p-values, the number of datasets sampled is set to 1E+06 for the null model. Since the sampling procedure is computationally intensive, we use the pre-computed sets of covariance matrices in the SPONGE R package [25] to build our null model. Based on the constructed null model, we can infer adjusted p-values (adjusted by the Benjamini and Hochberg method [26]) for each LMSM module. A LMSM module with adjusted p-value less than 0.05 is regarded as a statistically significant module.

Application of LMSM in BRCA

BRCA enrichment analysis. Instead of performing Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes Pathway (KEGG) enrichment analysis, to investigate whether an identified LMSM module is functionally associated with BRCA, we focus on conducting BRCA enrichment analysis by using a hypergeometric test. For a LMSM module, the p-value for the test is calculated from the hypergeometric distribution, where N_2 is the number of genes (lncRNAs and mRNAs) in the dataset, M_2 denotes the number of BRCA genes in the dataset, K_2 represents the number of genes in the LMSM module, and L_2 is the number of BRCA genes in the LMSM module. A LMSM module with p-value < 0.05 is regarded as a BRCA-related module. Module biomarker identification in BRCA. The module survival analysis can imply whether the identified LMSM modules are good biomarkers of the metastasis risks of cancer patients or not, and it can give us a hint whether the LMSM modules may be related to and potentially affect the metastasis or survival of cancer patients. For each BRCA sample, we fit the multivariate Cox model (proportional hazards regression model) [27] using the genes (lncRNAs and mRNAs) in LMSM modules to compute its risk score. All the BRCA samples are equally divided into the high risk and the low risk groups according to their risk scores. The Log-rank test is used to evaluate the difference of each LMSM module between the high and the low risk BRCA groups. Moreover, we also calculate the proportional hazard ratio (HR) between the high and the low risk BRCA groups. In this work, the survival R package [28] is utilized, and a LMSM module with Log-rank p-value < 0.05 and HR > 2 is regarded as a module biomarker in BRCA. Identification of BRCA subtype-specific modules. As is known, BRCA is a heterogeneous disease with several molecular subtypes, and the choice of chemotherapy for each BRCA subtype is different. This diversity indicates that the genetic regulation of each BRCA subtype is specific. To identify BRCA subtype-specific modules, we firstly identify BRCA molecular subtypes using the PAM50 classifier [29]. By using a 50-gene subtype predictor, the PAM50 classifier classifies a BRCA sample into one of the five "intrinsic" subtypes: Luminal A (LumA), Luminal B (LumB), HER2-enriched (Her2), Basal-like (Basal) or Normal-like (Normal). In this work, we use the genefu R package [30] to predict molecular subtypes of each BRCA sample in the dataset used in our study.
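Returning to the module-biomarker step described above, a minimal sketch of the survival analysis is shown below, using the Python lifelines package in place of the survival R package; the data frame layout, the column names and the median-based split into equal risk groups are assumptions made only for the illustration.

```python
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

def module_biomarker_pvalue(df, module_genes):
    """Fit a multivariate Cox model on a module's genes, split samples into equal
    high/low risk groups by risk score, and return the log-rank test p-value.
    `df` is assumed to hold one column per gene plus 'time' and 'event' columns."""
    cph = CoxPHFitter()
    cph.fit(df[module_genes + ["time", "event"]], duration_col="time", event_col="event")
    risk = cph.predict_partial_hazard(df[module_genes])     # per-sample risk score
    high = risk >= risk.median()                            # equal split into two groups
    result = logrank_test(df.loc[high, "time"], df.loc[~high, "time"],
                          event_observed_A=df.loc[high, "event"],
                          event_observed_B=df.loc[~high, "event"])
    return result.p_value
```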
To identify BRCA subtype-specific LMSM modules, we first need to estimate the enrichment scores of LMSM modules in BRCA samples. To calculate the enrichment score of each LMSM module in BRCA samples, the gene set variation analysis (GSVA) method [31] is used. To calculate the enrichment score, the GSVA method uses a Kolmogorov-Smirnov (KS) like random walk statistic, ν_jk(ℓ) = Σ_{i=1..ℓ} |r_ij|^τ I(g(i) ∈ g_k) / Σ_{i=1..p} |r_ij|^τ I(g(i) ∈ g_k) − Σ_{i=1..ℓ} I(g(i) ∉ g_k) / (p − |g_k|), where τ (τ = 1 by default) is the weight of the tail in the random walk, r_ij is the normalized expression-level statistic of the i-th gene in the j-th sample as defined in [31], g_k is the k-th LMSM module, I(g(i) ∈ g_k) is the indicator function of whether the i-th gene belongs to the LMSM module g_k, |g_k| is the number of genes in the k-th LMSM module, and p is the number of genes in the dataset. To transform the KS like random walk statistic into an enrichment score (ES, also called GSVA score), we calculate the maximum deviation from zero of the random walk of the j-th sample with respect to the k-th LMSM module, i.e. ES_jk = ν_jk(ℓ*), where ℓ* = arg max_ℓ |ν_jk(ℓ)|. For each LMSM module g_k, this formula generates a distribution of enrichment scores that is bimodal (see the reference [31] for a more detailed description). Based on the enrichment scores of LMSM modules in each BRCA sample, we further identify two types of BRCA subtype-specific LMSM modules, up-regulated modules and down-regulated modules. An LMSM module is regarded as specific to a BRCA subtype with respect to one type of regulation pattern (up-regulation or down-regulation). For an up-regulated BRCA subtype-specific LMSM module, the enrichment score of the LMSM module in the specific BRCA subtype samples is significantly larger than the score in the other BRCA subtype samples. For a down-regulated BRCA subtype-specific LMSM module, the enrichment score of the LMSM module in the specific BRCA subtype samples is significantly smaller than the score in the other BRCA subtype samples. For example, if an LMSM module g_k is up-regulated Basal-like-specific, the enrichment scores of the LMSM module in Basal-like samples should be significantly larger than those in Luminal A, Luminal B, HER2-enriched and Normal-like samples. In this work, for each LMSM module, we use Welch's t-test [32] to calculate the significance p-value for the difference of the average enrichment scores between any two BRCA subtype samples. Given a BRCA subtype, an LMSM module is considered as an up-regulated (or down-regulated) module specific to this BRCA subtype if the module's average enrichment score in samples of the given subtype is higher (or lower) than the average enrichment score in samples of any other subtype and the significance p-value of the Welch's t-test between the samples of this subtype and any other subtype is less than 0.05. Performance of LMSM modules in classifying BRCA subtypes. In this section, to check the biological relevance of the discovered LMSM modules, we conduct module classification of BRCA subtypes. Here, classifying BRCA subtypes (LumA, LumB, Her2, Basal and Normal) is a multi-class classification problem (which can be treated as a special case of multi-label classification). To understand the classification performance of the feature genes in each LMSM module, we apply a state-of-the-art multi-label learning strategy called Binary Relevance (BR) [33] implemented in the utiml R package [34] to conduct multi-label classification analysis.
For the BR strategy, we use the Support Vector Machine (SVM) classifier [35] with default parameters implemented in e1071 R package [36] as the base algorithm to build the multi-label model. We select two commonly used multi-label classification measures: Subset accuracy and Hamming loss, and conduct 10-fold cross-validation to evaluate the performance of each LMSM module. In this work, Subset accuracy denotes the percentage of correct predictions and Hamming loss is the fraction of wrong predictions to the total number of predictions. Higher values of Subset accuracy and smaller values of Hamming loss indicate better classification performance. In addition, for the evaluation, we use the baseline method in [37], a commonly used multi-label classification method as the baseline for comparison. The base algorithm of the baseline method is also the SVM classifier with default parameters implemented in e1071 R package [36]. Heterogeneous data sources We collect matched miRNA, lncRNA and mRNA expression data, and clinical data of BRCA dataset from The Cancer Genome Atlas (TCGA, https://cancergenome.nih.gov/). A lncRNA or mRNA without a corresponding gene symbol in the expression data of BRCA dataset is removed. To obtain a unique expression value for replicates of miRNAs, lncRNAs or mRNAs, we compute the average expression value of the replicates. As a result, we obtain the matched expression data of 674 miRNAs, 12711 lncRNAs and 18344 mRNAs in 500 BRCA samples. Most of the mediating miRNAs act as crosslinks across LMSM modules Following the steps shown in Fig 2, we have identified 17 LMSM modules (details can be seen in S1 Data). The average size of the identified modules is 672.53 and the average number of the shared miRNAs in a module is 232.82. In total, there are 549 unique miRNAs mediating the 17 LMSM modules, and 90.16% (495 out of 549) miRNAs mediate at least two LMSM modules (details can be seen in S2 Data). This result indicates that most of the mediating miRNAs act as crosslinks across different LMSM modules. LMSM modules are all statistically significant In this section, by computing null-model-based p-values, we evaluate whether the identified LMSM modules are statistically significant or not. As a result, the adjusted p-values for the identified 17 LMSM modules (details can be seen in S3 Data) are all statistically significant (adjusted p-value = 1.00E-06). This result demonstrates that LMSM modules are all statistically significant. Most of LMSM modules are implicated in BRCA To investigate whether the identified LMSM modules are related to BRCA or not, we conduct BRCA enrichment analysis and identify BRCA module biomarkers using the methods described in Section Materials and methods. For the BRCA enrichment analysis, we have collected a list of 4819 BRCA genes (734 BRCA lncRNAs and 4085 BRCA mRNAs) associated with the matched lncRNA and mRNA expression data (details in S4 Data). As shown in Table 1, 10 out of 17 LMSM modules are functionally enriched in BRCA at a significant level (p-value < 0.05). In Table 2, 15 out of 17 LMSM modules are regarded as module biomarkers in BRCA at a significant level (Log-rank p-value < 0.05 and HR > 2). Particularly, 90% (9 out of 10, excepting LMSM 14) of the BRCA-related LMSM modules can act as module biomarker in BRCA. These results show that most of LMSM modules are functionally implicated in BRCA. 
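To make the Binary Relevance evaluation protocol described in the classification subsection above concrete, a hand-rolled sketch (one binary SVM per subtype, rather than the utiml::br call actually used) is shown below; feat, labels and train_idx are assumed objects, with labels holding one binary indicator column per subtype.

```r
# feat: samples x features (expression of one module's genes); labels: samples x 5
# binary indicator matrix (LumA, LumB, Her2, Basal, Normal); train_idx: training rows.
library(e1071)

br_fit_predict <- function(feat_train, labels_train, feat_test) {
  sapply(colnames(labels_train), function(lab) {
    # One binary SVM per label (assumes both classes occur in the training fold).
    fit <- svm(x = feat_train, y = factor(labels_train[, lab]))
    as.integer(as.character(predict(fit, feat_test)))
  })
}

pred  <- br_fit_predict(feat[train_idx, ], labels[train_idx, ], feat[-train_idx, ])
truth <- as.matrix(labels[-train_idx, ])

subset_accuracy <- mean(apply(pred == truth, 1, all))  # exact-match ratio over test samples
hamming_loss    <- mean(pred != truth)                 # fraction of wrong label predictions
```

In the actual analysis these two measures are averaged over the 10 cross-validation folds described above.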
LMSM modules are mostly BRCA subtype-specific In this section, we first divide the 500 BRCA samples into five "intrinsic" subtypes (Luminal A, Luminal B, HER2-enriched, Basal-like and Normal-like). The numbers of LumA, LumB, Her2, Basal and Normal samples are 190, 155, 52, 85 and 18, respectively. Then we calculate the enrichment scores of the identified 17 LMSM modules in the BRCA subtype samples (details in S5 Data). As illustrated in Fig 3, out of the 17 LMSM modules, 4 and 6 modules are identified as up-regulated and down-regulated BRCA subtype-specific LMSM modules, respectively. For the up-regulated BRCA subtype-specific LMSM modules, the numbers of Basal-specific, LumB-specific and Normal-specific modules are 1, 1 and 2, respectively. The numbers of Basal-specific, LumB-specific and Normal-specific modules are 3, 1 and 2, respectively, among the down-regulated BRCA subtype-specific LMSM modules. In particular, only 1 module (LMSM 2) can act as both an up-regulated and a down-regulated BRCA subtype-specific LMSM module. In total, the unique number of BRCA subtype-specific LMSM modules is 9, indicating that most of the LMSM modules are BRCA subtype-specific.
Table 1. BRCA-related LMSM modules. L2 is the number of BRCA genes in each LMSM module, K2 represents the number of genes in each LMSM module, the number of BRCA genes in the dataset (M2) is 4819, and the number of genes in the dataset (N2) is 31055.
The performance of LMSM modules is significantly higher than the baseline's performance in classifying BRCA subtypes For the identified 17 LMSM modules, the average Subset accuracy and Hamming loss in classifying BRCA subtypes are 0.7547 and 0.0892, respectively (details can be seen in S6 Data). The Subset accuracy and Hamming loss of the baseline are 0.3800 and 0.2480, respectively. By using Welch's t-test, the Subset accuracy achieved using the 17 LMSM modules is significantly larger (better) than the Subset accuracy of the baseline (p-value < 2.20E-16), and the Hamming loss of the 17 LMSM modules is significantly smaller (better) than the Hamming loss of the baseline (p-value < 2.20E-16). The better performance than the baseline method indicates that LMSM modules are biologically meaningful in classifying BRCA subtypes. Several lncRNA-related miRNA sponge interactions are experimentally confirmed For the ground truth used in the validation, we have collected 581 experimentally validated lncRNA-related miRNA sponge interactions associated with the matched lncRNA and mRNA expression data (details in S4 Data). After we merge the sponge lncRNA-mRNA pairs in the identified 17 LMSM modules, we have predicted 1471664 unique lncRNA-related miRNA sponge interactions (details at https://github.com/zhangjunpeng411/LMSM). For each LMSM module, the numbers of shared miRNAs, lncRNAs, mRNAs and predicted lncRNA-related miRNA sponge interactions can be seen in S7 Data. As shown in Table 3, there are 4 LMSM modules (LMSM 2, LMSM 3, LMSM 5 and LMSM 8) containing 14 experimentally validated lncRNA-related miRNA sponge interactions in total. It is noted that all the lncRNAs and mRNAs in these confirmed lncRNA-related miRNA sponge interactions are BRCA-related genes, indicating that they may be involved in BRCA. LMSM is capable of predicting miRNA targets LMSM uses high-confidence miRNA-target interactions as seeds to predict miRNA-target interactions. A miRNA-mRNA or miRNA-lncRNA pair in an LMSM module has the potential to be a miRNA-target pair for the following reasons.
Firstly, at the sequence level, the sponge lncRNAs and mRNAs in each LMSM module have a significant sharing of miRNAs. Secondly, at the expression level, the sponge lncRNAs and mRNAs in each LMSM module are highly correlated. As a result, the sponge lncRNAs and mRNAs of each LMSM module have a high chance of being target genes of the shared miRNAs. Thus, based on the identified LMSM modules, we have predicted 2820524 unique miRNA-target interactions (including 2023304 miRNA-lncRNA and 797220 miRNA-mRNA interactions) (details at https://github.com/zhangjunpeng411/LMSM). For each LMSM module, the numbers of predicted miRNA-lncRNA interactions and miRNA-mRNA interactions can be seen in S7 Data. In addition, we investigate the intersection of the miRNA-target interactions predicted by LMSM with those of other well-cited miRNA-target prediction methods. In terms of miRNA-mRNA interactions, we select TargetScan v7.2 [51], DIANA-microT-CDS v5.0 [52], starBase v3.0 [53] and miRWalk v3.0 [54] for investigation. We choose starBase v3.0 [53] and DIANA-LncBase v2.0 [39] for investigation in terms of miRNA-lncRNA interactions. As shown in the UpSet plot [55] of Fig 4A, the number of miRNA-mRNA interactions identified by all the five methods is only 21842. However, the percentage of overlap between LMSM and each of the other four methods reaches ~63.74% (1289620 out of 2023304). As shown in Fig 4B, the number of miRNA-lncRNA interactions identified by all the three methods is only 1160. Since the known miRNA-lncRNA interactions are still limited, most of the miRNA-lncRNA interactions (~93.90%, 748609 out of 797220) are individually predicted by LMSM. Comparison with graph clustering-based strategy The graph clustering-based strategy [12][13][14][15][16][17] is an alternative approach to identifying lncRNA related miRNA sponge modules. As there is no graph clustering-based strategy specifically designed for finding lncRNA related miRNA sponge modules, we create a baseline Graph Clustering-based method (called GC in this paper) which uses well-known network construction and graph clustering methods, as described in the following. The GC method includes two steps: i) identifying the lncRNA related miRNA sponge interaction network, and ii) identifying lncRNA related miRNA sponge modules from the identified network. In step 1, we adapt the well-cited Sensitivity Correlation (SC) method [24] implemented in the miRspongeR R package [56] to infer the lncRNA related miRNA sponge interaction network. A lncRNA-mRNA pair is considered as an interacting pair in the network if they have significant sharing of miRNAs, significant correlation and adequate sensitivity correlation. We require that the pairs must share at least 3 miRNAs and their sensitivity correlation (the difference between correlation and partial correlation) must be larger than 0.1. The statistical significance of the miRNA sharing and of the positive correlation is tested using the hypergeometric test and Welch's t-test, respectively, with a significance level of 0.05. In step 2, we use the well-cited Markov cluster (MCL) algorithm [57] to infer lncRNA related miRNA sponge modules. Here, each obtained cluster corresponds to a module. Each module should contain at least 2 sponge lncRNAs and 2 target mRNAs. In total, by using the GC method, we have obtained 108 lncRNA related miRNA sponge modules.
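As a rough illustration of the pairwise filter in step 1 of the GC baseline, the sensitivity correlation of one candidate lncRNA-mRNA pair could be computed as below. This is a hedged sketch (the actual analysis uses miRspongeR), the inputs lnc, mrna, mir_expr and n_shared are assumed names, and cor.test's t-based significance is used here as a simple stand-in for the significance test of the positive correlation.

```r
# lnc, mrna: expression vectors of one lncRNA and one mRNA across samples;
# mir_expr: samples x (shared miRNAs) expression matrix; n_shared: number of shared miRNAs.
library(ppcor)

sc_filter <- function(lnc, mrna, mir_expr, n_shared) {
  ct <- cor.test(lnc, mrna, alternative = "greater")  # significance of the positive correlation
  pc <- pcor.test(lnc, mrna, mir_expr)                # partial correlation given the shared miRNAs
  sc <- unname(ct$estimate) - pc$estimate             # sensitivity correlation
  keep <- (n_shared >= 3) && (ct$p.value < 0.05) && (sc > 0.1)
  list(sensitivity_correlation = sc, keep = keep)
}
```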
We compare LMSM and GC in terms of the percentage of BRCA-related modules, the percentage of module biomarkers in BRCA, the classification performance (mean Subset accuracy and mean Hamming loss) in classifying BRCA subtypes, and the number of validated lncRNA-related miRNA sponge interactions. As shown in Table 4, the comparison result indicates that LMSM always performs better than the GC method. The detailed results of the GC method can be seen in S8 Data. LMSM is robust To demonstrate the robustness of the LMSM workflow, we use the sparse group factor analysis (SGFA) method [58], instead of the WGCNA method, to identify lncRNA-mRNA co-expression modules. The SGFA method is extended from the group factor analysis (GFA) method [59][60][61]; it can reliably infer biclusters (modules) from multiple data sources, and provides a predictive and interpretable structure existing in any subset of the data sources. Given B biclusters to be identified, the SGFA method assigns each column (lncRNA or mRNA) or row (sample) a grade of membership (association) for each of these biclusters. The range of the values of the associations is [-1, 1]. We use the absolute value of association (AVA) to evaluate the strength of lncRNAs and mRNAs belonging to a bicluster, and the cutoff of AVA is also set to 0.8. Specifically, we use the GFA R package [58] to identify lncRNA-mRNA co-expression modules. The parameter settings for inferring lncRNA-related miRNA sponge modules are the same as before. By using the SGFA method, we have identified 51 LMSM modules (details can be seen in S1 Data). The average size of these LMSM modules is 277.63 and the average number of the shared miRNAs is 135.65. There are 490 unique miRNAs mediating the 51 LMSM modules, and 84.90% (416 out of 490) of the miRNAs mediate at least two LMSM modules (details can be seen in S2 Data). As with the result obtained using the WGCNA method, the result with the SGFA method also implies that the mediating miRNAs mostly act as crosslinks across different LMSM modules. In addition, by using a null-model-based p-value computation method, the identified 51 LMSM modules are also all statistically significant, with adjusted p-value ≤ 5.00E-06 (details can be seen in S3 Data). As shown in Table A of S1 File, 3 out of the 51 LMSM modules are functionally enriched in BRCA at a significant level (p-value < 0.05). Moreover, 49 out of the 51 LMSM modules are regarded as module biomarkers in BRCA (see Table B of S1 File). The results indicate that most of the LMSM modules are related to BRCA. We also compute the enrichment scores of the identified 51 LMSM modules in the BRCA subtype samples (details in S5 Data). As illustrated in Fig A of S1 File, out of the 51 LMSM modules, 33 and 24 modules are regarded as up-regulated and down-regulated BRCA subtype-specific LMSM modules, respectively. For the up-regulated BRCA subtype-specific LMSM modules, the numbers of Basal-specific, Her2-specific, LumB-specific and Normal-specific modules are 27, 2, 2 and 2, respectively. The numbers of Basal-specific, Her2-specific, LumA-specific, LumB-specific and Normal-specific modules are 2, 3, 15, 3 and 1, respectively, for the down-regulated BRCA subtype-specific LMSM modules. In particular, 16 modules can act as both up-regulated and down-regulated BRCA subtype-specific LMSM modules. Overall, the unique number of BRCA subtype-specific LMSM modules is 41. This result also indicates that the identified LMSM modules are mostly BRCA subtype-specific.
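For the AVA-based membership step described at the beginning of this robustness analysis, a simple sketch is given below; W is an assumed feature-by-factor association matrix returned by the factor analysis, with rows named by lncRNA/mRNA IDs, and the function name is illustrative rather than part of the GFA package.

```r
# Select the lncRNAs/mRNAs belonging to each bicluster (module) by the absolute value
# of association (AVA), using the same 0.8 cutoff as in the text.
module_members <- function(W, cutoff = 0.8) {
  lapply(seq_len(ncol(W)), function(k) rownames(W)[abs(W[, k]) >= cutoff])
}

# modules <- module_members(W)  # a list with one character vector of gene IDs per bicluster
```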
The average value of Subset accuracy and Hamming loss of the identified 51 LMSM modules in classifying BRCA subtypes is 0.6921 and 0.1135, respectively (details can be seen in S6 Data). In classifying BRCA subtypes, the baseline value of Subset accuracy and Hamming loss is 0.3800 and 0.2480, respectively. By using Welch's t-test method, the value of Subset accuracy for 51 LMSM modules is significantly larger (better) than the baseline value of Subset accuracy (p-value < 2.20E-16), and the value of Hamming loss for 51 LMSM modules is significantly smaller (better) than the baseline value of Hamming loss (p-value < 2.20E-16). The better performance than the baseline method also indicates that LMSM modules are biological meaningful in classifying BRCA subtypes. Moreover, we have predicted 605456 unique lncRNA-related miRNA sponge interactions in the identified 51 LMSM modules (details at https://github.com/zhangjunpeng411/LMSM). The number of the shared miRNAs, lncRNAs, mRNAs, predicted lncRNA-related miRNA sponge interactions of each LMSM module can be seen in S7 Data. Since the experimentally validated lncRNA-related miRNA sponge interactions are still limited, only 4 LMSM modules containing 4 lncRNA-related miRNA sponge interactions (see Table C of S1 File) are experimentally validated. All lncRNAs and mRNAs in the confirmed lncRNA-related miRNA sponge interactions are also BRCA-related genes. LMSM also has identified a large number of potential miRNA-target interactions (1646449 in total, including 435345 miRNA-mRNA and 1211104 miRNA-lncRNA interactions, details at https://github.com/zhangjunpeng411/LMSM). The number of predicted miRNA-lncRNA interactions, predicted miRNA-mRNA interactions, putative miRNA-lncRNA interactions and putative miRNA-mRNA interactions can be seen in S7 Data. As illustrated in Fig B of S1 File, the number of the miRNA-mRNA interactions identified by all the five methods is 4897 and the number of the miRNA-lncRNA interactions identified by all the three methods is 1149. Most of the identified miRNA-mRNA interactions by LMSM (~58.55%, 254910 out of 435345) are also predicted by one of the other four methods. In terms of the predicted miRNA-lncRNA interactions,~94.23% (1141232 out of 1211104) miRNA-lncRNA interactions are also individually predicted by LMSM. Finally, in terms of the percentage of BRCA-related modules, the percentage of module biomarkers in BRCA, the classification performance (mean Subset accuracy and mean Hamming loss) in classifying BRCA subtypes, and the number of validated lncRNA-related miRNA sponge interactions, LMSM also generally performs better than the GC method (see Table D of S1 File). Altogether, the above results are consistent with those obtained using the WGCNA method, indicating that our LMSM workflow is robust for studying lncRNA-related miRNA sponge modules. Discussion The crosstalk between different RNA transcripts in a miRNA-dependent manner forms a complex miRNA sponge interaction network and depicts a novel layer of gene expression regulation. Until now, several types of RNA transcripts, e.g. lncRNAs, pseudogenes, circRNAs and mRNAs, have been confirmed to act as miRNA sponges. Since lncRNAs are a large class of ncRNAs and function in many aspects of cell biology, including human cancers, we focus on identifying lncRNA related miRNA sponge modules in this work. By integrating multiple data sources, previous studies mainly investigate the identification of lncRNA related miRNA sponge interaction network. 
Based on the identified lncRNA related miRNA sponge interaction network, they use graph clustering algorithms to further infer lncRNA related miRNA sponge modules. Different from existing computational methods for lncRNA related miRNA sponge modules, in this work we propose a novel method named LMSM to directly identify lncRNA related miRNA sponge modules from heterogeneous data. It is noted that the LMSM method depends on the hypothesis of miRNA sponge modular competition presented here. Under this hypothesis, miRNA sponges tend to form a group to compete with a group of target mRNAs for binding to miRNAs. We have applied the LMSM method to the BRCA dataset from TCGA. For the putative miRNA-target interactions, we integrate high-confidence miRNA-target interactions from several databases. The analysis results demonstrate that our LMSM method is useful in identifying lncRNA related miRNA sponge modules, and it can help with understanding the regulatory mechanisms of lncRNAs. LMSM is a flexible method to investigate miRNA sponge modules in human cancer. Firstly, any biclustering or clustering algorithm (e.g. the joint non-negative matrix factorization methods presented by Deng et al. [18] and Xiao et al. [19]) can be plugged into stage 1 of LMSM to identify lncRNA-mRNA co-expression modules. The only condition for using these algorithms is that they can identify biclusters or clusters from high-dimensional expression data. Secondly, LMSM is a parametric model, and the parameter settings of LMSM can be changed according to the practical requirements of researchers. For example, the thresholds of the three metrics in stage 2 for identifying lncRNA related miRNA sponge modules can be made looser or stricter. Thirdly, LMSM can also be extended to study other ncRNA (e.g. circRNA and pseudogene) related miRNA sponge modules. For instance, if we change the matched lncRNA expression data and the miRNA-lncRNA interactions to matched circRNA expression data and miRNA-circRNA interactions respectively, the LMSM pipeline will identify circRNA related miRNA sponge modules. It is noted that each LMSM module contains many sponge lncRNAs and mRNAs, so it is hard to experimentally validate such a module by follow-up wet-lab experiments. This is a common issue of existing computational methods, including LMSM. We suggest that biologists select some sponge lncRNAs and mRNAs of interest in each LMSM module, and then validate the modular competition between the selected sponge lncRNAs and target mRNAs. We believe that LMSM is still useful in shortlisting high-confidence sponge lncRNAs and mRNAs for experimental validation. For example, a previous study [62] has shown that lncRNA MIR22HG is functionally complementary to lncRNA H19. In the identified LMSM module no. 2 (LMSM 2), lncRNA H19 is experimentally validated to compete with 10 target mRNAs (HMGA2, IGF2, ITGB1, TGFB1, VIM, RUNX1, CDH13, KLF4, TGFBI and VDR). Taken together, based on the hypothesis of miRNA sponge modular competition, we propose a new approach to identifying lncRNA related miRNA sponge modules by integrating expression data and miRNA-target binding information. Our method not only extends the ceRNA hypothesis, but also provides a novel way to investigate the biological functions and modular mechanisms of lncRNAs in BRCA. We believe that our method can also be applied to other human cancer datasets and assist in human cancer research.
The Blood of Healthy Individuals Exhibits CD8 T Cells with a Highly Altered TCR Vb Repertoire but with an Unmodified Phenotype CD8 T cell clonal expansions (TCE) have been observed in elderly, healthy individuals as well in old mice, and have been associated with the ageing process. Both chronic latent and non-persistent viral infections have been proposed to drive the development of distinct non-functional and functional TCE respectively. Biases in TCR Vβ repertoire diversity are also recurrently observed in patients that have undergone strong immune challenge, and are preferentially observed in the CD8 compartment. Healthy adults can also exhibit CD8 T cells with strong alterations of their CDR3 length distribution. Surprisingly, no specific investigations have been conducted to analyze the CD8 T cell repertoire in normal adults, to determine if such alterations in TCR Vβ repertoire share the features of TCE. In this study, we characterized the phenotype and function of the CD8 population in healthy individuals of 25–52 years of age. All but one of the EBV-positive HLA-B8 healthy volunteers that were studied were CMV-negative. Using a specific unsupervised statistical method, we identified Vβ families with altered CDR3 length distribution and increased TCR Vβ/HPRT transcript ratios in all individuals tested. The increase in TCR Vβ/HPRT transcript ratio was more frequently associated with an increase in the percentage of the corresponding Vβ+ T cells than with an absence of modification of their percentage. However, in contrast with the previously described TCE, these CD8+ T cells were not preferentially found in the memory CD8 subset, they exhibited normal effector functions (cytokine secretion and cytotoxic molecule expression) and they were not reactive to a pool of EBV/CMV/Flu virus peptides. Taken together, the combined analysis of transcripts and proteins of the TCR Vβ repertoire led to the identification of different types of CD8+ T cell clone expansion or contraction in healthy individuals, a situation that appears more complex than previously described in aged individuals. Introduction Clonal CD8 + T cell expansion in healthy individuals has been reported as being associated with the ageing process [1,2,3]. Such expansions are frequently identified in the elderly (one third of adults over the age of 65 years develop CD8 clonal expansions) and in aged animals (e.g. mice .2 years of age) (reviewed in [4]). Among the typical features of clonal CD8 + T cell expansions (reviewed in [5]), these cells exhibit mainly a CD8 + memory T-cell phenotype based on the expression of cell surface markers such as CD45RA/CCR7, and respond poorly to TCR stimulation with low proliferation and cytokine production [6,7,8,9]. Whereas these clones are not associated with overt diseases or malignancies [2], they may negatively impact the immune system by generating gaps in the TCR repertoire. Chronic antigen stimulation is thought to drive the expansion of clonal CD8 + T cells, as individuals with such clonal CD8 + T cell expansions frequently test positive for persistent virus such as CMV [8]. Conversely, clonal CD8 + T cell expansions can arise in the absence of persistent virus after the successful resolution of an acute infection [10,11]. 
Interestingly, abundant CD8 + T cell clones are also observed in patients undergoing immune system stimulation, such as with allogeneic transplantation or chronic viral infection, and a less diverse TCR Vb repertoire with strong alterations of the CDR3 length distribution is observed [12,13]. In our previous reports, we have accumulated observations that peripheral CD8 repertoire alterations are not restricted to the elderly but can also be observed in apparently normal adults [12,13,14]. So far, such alterations have only been characterized for their altered CDR3 length distribution, and a detailed description and characterization of these clonal expansions is lacking. In this report, we performed a detailed study of the CD8 TCR Vb repertoire alterations as well as phenotype and function in healthy adult volunteers. The blood donors were specifically chosen as EBV-positive HLA-B8, as the necessary tools such as tetramers and known virus-derived peptides were available. Of note, only one out of 8 studied individuals was positive for CMV infection. Our data show that large changes in the CD8 TCR Vb repertoire can be identified by spectratyping in healthy adult individuals. These CD8 + clonal expansions, which correspond to an increase in the percentage of Vb x + CD8 + T cells with a dominant single CDR3 length, are more frequent than CD8 + clonal restrictions (i.e. no modification in the percentage of Vb x + CD8 + T cells with a dominant single CDR3 length). However, despite the use of strongly selected CDR3 length distributions, the CD8 + T cells do not exhibit a unique phenotype but instead display a normal effector function (in terms of cytokine secretion and cytotoxic molecule expression). Finally, these CD8 + clonal expansions do not contain EBNA-3 specific T cells and are not reactive to a pool of peptides derived from EBV, CMV and Flu viruses. Taken together, CD8 + clonal expansions are commonly identified in healthy adults and may exist independently of chronic virus infection. Results ''Unusual'' usage of Vb families within the CD8 + T cell compartment of healthy individuals The CD8 T cell TCR Vb repertoire in 8 healthy adult individuals (median age 39 years, range 25-52 years) was analyzed using the TcLandscape technology ( Figure 1) [15]. TcLandscape technology has proven to be a useful technic to globally assess TCR Vb repertoire. For each of the 26 Vb families (x-axis), the CDR3 length-distribution is measured (y-axis), and compared to a reference gaussian CDR3 length-distribution. The percentage of alteration of this Vb n CDR3 length-distribution is indicated by the red color code ranging from green color (i-e gaussian distribution) to red color (i-e highly altered CDR3 length distribution). Transcript amount of each Vb families is shown using the z-axis. For each individual, some Vb families exhibited an alteration in their CDR3 length distribution, highlighting clonal selections. One hurdle to the global assessment of the TCR Vb repertoire is the identification of an ''unusual'' usage of specific Vb families based on the % of alteration of the CDR3 length-distribution and the Vb/HPRT transcript ratio. In order to overcome this problem, we developed an unsupervised statistical method combining a normalization of the values for each Vb family with a global appraisal of the ''usual'' usage of Vb families. 
This method takes into account the different range for the qualitative (percentage of alteration of the CDR3 length distribution) and the quantitative (Vb/HPRT transcript ratio determined by qRT-PCR) parameters of the various Vb families and the size of the analyzed population. The third quartile (i.e. percentile 75%) was calculated for the Vb/ HPRT transcript ratio (Figure 1.B) and for the % alteration of CDR3 length distribution (Figure 1.C). This analysis revealed a strong heterogeneity in the association between CDR3 length distribution alteration and the Vb/HPRT transcript ratio. Indeed, only a limited number of Vb families exhibited both an altered CDR3 length distribution and a high Vb/HPRT transcript ratio (11% of Vb families were above the third quartile; Table 1). Hereafter, these Vb families will be referred to as ''P-Vb families'' as these Vb families can be visualized as red peaks in the TcLandscape view. P-Vb families are preferentially associated with an increase in percentage of Vb + CD8 + T cells Based on the literature, it is assumed that selection of a Vb x family identified by an alteration of the CDR3 length distribution is also associated with an increase in the percentage of Vb x + T cells characterized by flow cytometry [16]. To investigate whether the Vb families identified on the basis of their transcription were indeed also associated with an increase in the corresponding Vb + T cells, we characterized the frequency of each Vb family within the CD3 + CD8 + compartment. Using the same unsupervised statistical method, P-Vb x families with a percentage of Vb x + T cells above the third quartile were identified ( Figure 2). We restricted our analysis to the 14 Vb families for which anti-Vb mAb and Vb specific primers were available (by flow cytometry and TcLandscape respectively; Table S2). Among the P-Vb families, 71% were indeed found to be over-represented in terms of number of cells corresponding to the P-Vb families ( Table 2). Based on the variation of the number of the corresponding Vb + T cells, we thus identified two types of P-Vb families. Type 1 (referred to as clonal expansion), characterized by an increase in the frequency of P-Vb family T cells, was the most commonly observed and represented 10 out of 14 P-Vb families. Type 2 (referred to as clonal restriction), characterized by a normal frequency of P-Vb family T cells, was less frequent (4 out of the 14 P-Vb families studied). Intra and extracellular density of TCR Vb proteins on P-Vb + family T cells The presence of a high level of Vb transcripts is usually associated with an increase in the frequency of the corresponding Vb + CD8 + T cells. However, an increase in Vb protein expression per cell could be associated with a high level of Vb transcripts without any increase in the percentage of Vb + CD8 + T cells. To test this hypothesis, the number of TCR Vb proteins was evaluated at the surface and in the intracellular compartment of CD8 + T cells by combining anti-TCR Vb mAb staining and the use of Quantibright Beads (BD Biosciences). We observed that only 25% of P-Vb families with a type 1 selection (i.e. clonal expansion) exhibited an increase in TCR Vb protein at the cell surface (Vb11 ind.#04 and Vb22 ind.#05; Figure 3). No modification of the intracellular TCR Vb protein expression was observed for the P-Vb families with a type 1 selection. In contrast, P-Vb families with a type 2 selection (i.e. 
clonal restriction) did not express more TCR Vb protein at the cell surface but displayed increased intracellular TCR Vb protein expression (Vb18 and Vb23 ind.#01; Figure 3). CD8+ T cells with and without P-Vb families exhibit a similar phenotype Because clonal CD8+ T cell expansions exhibit mainly a memory T-cell phenotype [17,18], we characterized the phenotype of the "Peak Vb families". On the basis of CD45RA and CCR7 expression, four populations of CD8 T cells were classified as naïve cells (CD45RA+ CCR7+), central memory (CM) cells (CD45RA− CCR7+), effector memory (EM) cells (CD45RA− CCR7−) and effector (EMRA) cells (CD45RA+ CCR7−) [19]. No difference in the frequency of the 4 sub-populations was found between CD8+ T cells with P-Vb families and the whole CD8+ T cell population (Figure 4.A). Down-regulation of CD28 is generally associated with CD8+ T cell differentiation, whereas up-regulation of CD25 is associated with an activated state. The inhibitory molecule CD279 is upregulated by exhausted CD8+ T cells. We further characterized the phenotype of these CD8+ T cells and found that CD8+ T cells with P-Vb families exhibited weak expression of CD25 (median 9.7 vs. 19.2 between CD8+ T cells with P-Vb families and whole CD8+ T cells, respectively) and a similarly high level of CD28 (median 88.1 vs. 76.5 between CD8+ T cells with P-Vb families and whole CD8+ T cells, respectively) (Figure 4.B). Taken together, CD8+ T cells with P-Vb families did not preferentially exhibit a unique phenotype.
[Figure 1 legend, continued: The X-axis displays the 26 Vb families analyzed, the Y-axis the CDR3 lengths, the Z-axis the Vb/HPRT transcript ratio, and the colors represent the percentage of alteration. (B) Identification of Vb families with a corrected qRT-PCR value above the 75% percentile; each dot represents the individual corrected qRT-PCR value for a given Vb family, and the gray section represents the 75% percentile defining the boundary of "unusual" usage of a Vb family. (C) Identification of Vb families with a corrected % of alteration value above the 75% percentile, evaluated as previously described [14,15]. See the statistical method within the Materials & Methods section for a detailed description of the statistical methodology. doi:10.1371/journal.pone.0021240.g001]
CD8+ T cells with P-Vb families secrete cytokines upon polyclonal stimulation A weak response of clonal CD8+ T cell expansions to polyclonal stimulation has been described [6,7,8,9]. To look at the function of CD8+ T cells with P-Vb families, cytokine production and cytotoxic molecule expression were analyzed. CD8+ T cells were first stimulated with PMA and ionomycin for 6 hours, and production of IFN-γ, TNF-α and IL-2 was analyzed (Figure 5.A and B). The frequency of cells secreting at least one cytokine was similar for CD8+ T cells with P-Vb families and for the whole CD8+ T cell population (40.6% ± 5.8 vs. 38.6% ± 6.3, respectively). The quality of the CD8 response (i.e. the ability to secrete more than one cytokine) was then analyzed.
The frequencies of single, double or triple cytokine producers, as well as the nature of the cytokines secreted, were also comparable for CD8+ T cells with P-Vb families and for the whole CD8+ T cell population (Figure 5.A and B). No difference in the expression of cytotoxic molecules (GZM-B and PERF) was observed between CD8+ T cells with P-Vb families and the whole CD8+ T cell population (Figure 5.C). Collectively, these data obtained after 6 hours of stimulation, an optimal timing for the analysis of antigen-experienced T cell responses, suggest that CD8+ T cells with P-Vb families do not exhibit the effector function characteristics of memory cells, including the expression of cytotoxic molecules and the ability to secrete multiple cytokines, and primarily behave as normal CD8+ T cells. P-Vb families are distinct from the Vb families of EBNA-3A-specific CD8+ T cells Because chronic immune responses against herpes viruses can shape the CD8 repertoire [8,20], and because HLA-B8 individuals have been shown to develop public anti-EBV responses [21] that can be studied with HLA-B*0801/EBNA-3A tetramers, we tested whether the observed TCR Vb repertoire alteration could be explained by EBV infection. An HLA-B*0801/EBNA-3A (FLRGRAYGL) pentamer was used to isolate EBV-specific cells. The FLRGRAYGL peptide, derived from the latent cycle antigen EBNA-3A, is among the strongest EBV latent cycle epitopes [22,23]. TCR Vb repertoire usage was compared between CD8+ T cells and HLA-B*0801/EBNA-3A CD8+ T cells (Figure 6). Highly restricted TCR Vb repertoires were observed in the HLA-B*0801/EBNA-3A CD8+ T cell fraction, characterized by the use of a limited number of Vb families and a high Vb/HPRT transcript ratio. A recurrent peak within the Vb6 family was observed in all patients (Figure 6), using a similar CDR3 length distribution (3 out of 4 patients exhibited a similar CDR3 length for the Vb6.1 family and 4 out of 4 for the Vb6.5 family; Figure S1). TCR sequencing of the Vb6.1 and 6.5 family PCR products identified two TCR sequences, sharing the TRBJ2-7 gene and two CDR3 sequences (AGC TT/CA or TCA GGA CAG GCC), similar to the public selection previously reported (the 6S8 gene analysis is shared between the Vb6.1 and Vb6.5 families when these families are analyzed by TcLandscape; Table S2) [24]. Of interest, none of the P-Vb families were found in the HLA-B*0801/EBNA-3A CD3+ CD8+ cell fraction (Figure 6).
[Table 2 legend: The same specific unsupervised statistical method was used to identify "unusual" usage of each Vb family at both the transcriptional and protein level. Vb families with an "unusual" usage at both the transcriptional and protein level are highlighted in bold. doi:10.1371/journal.pone.0021240.t002]
Discussion Significant changes in the blood CD8 T cell TCR Vb repertoire have been reported in normal mice and humans, and are usually associated with ageing of the immune system [2,6,25]. In some cases, CD8 Vb clonal expansions have been shown to occupy up to 50% of the total CD8 Vb T cell repertoire in the elderly [2] and up to 80% of the total CD8 Vb T cell repertoire in old mice [6]. It has been proposed that infection with persistent or transient viruses could induce the expansion of CD8 Vb clones [8,10,11]. We now report that healthy adult individuals also commonly exhibit alterations of the TCR Vb repertoire within the CD8 compartment, as identified by spectratyping. These P-Vb families are mainly associated with an increase in the corresponding percentage of Vb+ T cells.
Despite the use of a highly selected CDR3 length distribution, these CD8 + T cells do not exhibit a bias in their phenotype and maintain normal effector function after short-term polyclonal stimulation (in terms of cytokine secretion and expression of cytotoxic molecules). The ability to identify and to study CD8 + clonal expansions relies on the method used. CD8 + clonal expansions can be identified by measuring the size of the TCR Vb x CD8 + T cell population (using anti-Vb mAb) or by characterizing the magnitude of skewing of CDR3 length distribution. Most studies identify CD8 + clonal expansions when the percentage of a given TCR Vb CD8 + T cell in elderly individuals is 2 to 3 standard deviations above the mean of this TCR Vb in young individuals [2,6]. Spectratyping is used to confirm the clonality of the TCR within the CD8 + clonal expansion. However, the number of individuals included often precludes the use of criteria based on parametric statistics. Moreover, we observed that the distribution of the % of alteration of CDR3 length distribution and the Vb/ HPRT ratio was heterogeneous for some Vb families, suggesting a lack of Gaussian distribution in the usage of Vb families in blood T cells of healthy adults. Thus, we first developed an unsupervised and non-parametric methodology to undertake a more precise characterization of the T cell repertoire in healthy individuals. Spectratyping (to analyze CDR3 length distribution) and qRT-PCR (to measure Vb/HPRT ratio) were used to characterize the TCR Vb repertoire. Various analyses based on the use of flow cytometry can be used to measure the frequency of CD8 + T cells expressing a given TCR Vb as well as the expression of TCR Vb per cell, either at the cell surface or in the cytoplasm. This nonparametric analysis was designed to take into account the number of individuals included, the variability of usage between the different Vb families and the absence of normality in the usage of Vb families. Using this approach, we identified various usages of TCR Vb transcripts based either on the quantity of Vb/HPRT transcript ratio or the alteration of CDR3 length distribution. Modification of CDR3 length distribution was associated with an accumulation of transcripts in only half of cases. The characteristics of the CD8 + T cells we observed in our adult population are rather different from those published in aged populations. The average age of the healthy volunteers in this study was 39 years (age range 25-52 years; Table S1). Among the differences in their characteristics, the clonal CD8 + T cells exhibited a distribution in the various phenotype (naive, CM, EM and TEMRA cells), which was similar to that of the whole CD8 + T cell population. In contrast, previously published clonal CD8 + T cell expansions were mainly found to exhibit a memory phenotype. The phenotype of clonal CD8+ T cell expansions thus remained an open question. The origin of clonal CD8+ T cell expansions has not been identified and it is assume to be driven by various mechanisms. It is likely that memory CD8 T cell and clonal CD8+ T cell expansions represent two different entities. For instance, the memory compartment contributes to less than 1 percent of the total diversity of the TCR ab repertoire [26] and efficiently responds to a second antigeneic challenge. In contrast, the response to TCR stimulation to CD8 clonal expansion is diverse, ranging from an absence of response [7,11] to an efficient cytokine secretion ( figure 5 and [7]). 
The TCR reactivity of the CD8 clones found in our adult cohort remains to be elucidated, as for the clonal expansions of CD8 + T cells found in the elderly. As persistent viruses such as CMV or EBV are known to impact the CD8 repertoire, we investigated a possible reactivity for viral peptide of the CD8 + T cells with clonal expansions. It is important to note that whereas all 8 healthy individuals had tested positive for EBV, only one was positive for prior CMV infection. Thus, in our study, we excluded the possibility of CMV infection imprinting the TCR Vb repertoire. These clonal CD8 + T cells were not found in the HLA-B*0801/EBNA-3A CD3 + CD8 + cell fraction ( Figure 6). Moreover, stimulation with a pool of CMV, EBV and Flu peptides (CEF Class I Peptide Pool) did not elicit strong IFN-c production by clonal CD8 + T cells (Figure 7). Although this virus peptide pool triggers proliferation of the whole PBL population, it is noticeable that none of the P-Vb families studied was responsive. In addition, no advantage or disadvantage was found for the clonal expansions of CD8 + T cells in terms of cytokine response or expression of cytotoxic molecules upon polyclonal stimulation ( Figure 5). Our data suggest that the CD8 + clones are not directly related to the presence of persistent pathogens, as for previous studies on mouse CD8 + clonal expansions which occur in the majority of mice by 2 years of age even in a specific pathogen-free environment [3,6,27]. The distribution of CD8 + clonal expansions is random with a broad variety of TCR Vb chains detected in different CD8 + expansions even in a colony of genetically identical mice. It has to be noted that CD8 + clonal expansions have also been identified in mouse models in which viral agent had been successfully cleared by the immune system [11]. CD8 + clonal expansions induced by non-persistent antigen have been reported to retain effector functions [10]. Altogether, these observations suggest that, in healthy individuals, CD8 T cell clones escape postinfection homeostatic contraction. The specificity of these clones remains to be determined but it is likely that their specificity is towards non-persistent viruses. In conclusion, we show that the CD8 repertoire of normal adult humans is not Gaussian. The combined analysis of transcripts and proteins of the TCR Vb repertoire enabled the identification of different types of CD8 + T cell clone expansions or contractions in healthy individuals, a situation that appears more complex than previously described in aged individuals. Notably, these CD8 + T cell clones exhibit a diverse phenotype and differ from clonal CD8 + T cell expansions observed in the elderly in that they exhibit a normal response to polyclonal stimuli. Subjects and Ethics statement Peripheral blood lymphocytes (PBL) were collected from 8 HLA-B8 + EBV + healthy volunteers, enrolled by the Etablissement Français du Sang (EFS, Nantes, France) within the context of a research contract. All donors were informed of the final use of their blood and signed an informed consent. The approval of an ethical committee was thus not necessary. The demographic characteristics of the healthy volunteers are presented in Table S1. All patients were EBV + but only 1 out of 8 tested positive for CMV. Cell isolation PBL were separated by Ficoll density centrifugation (LMS Eurobio) or were enriched in CD3 cells by elutriation (DTC corefacility IFR 26, Nantes). 
Untouched CD8+ T cells were purified using the CD8+ T cell isolation kit (Miltenyi) according to the manufacturer's recommendations. To isolate HLA-B*0801/EBNA-3A+ T cells (referred to as HLA-B*0801/EBNA-3A CD8+ T cells), PBMC were stained with CD3-APC, CD8-FITC and PE-labeled HLA-B*0801/FLRGRAYGL (EBV EBNA-3A) Pro5™ MHC Pentamer (ProImmune). HLA-B8 EBV+ CD8+ T cells were then isolated using a high-speed cell sorter (FACSAria; BD Biosciences). Purity was greater than 98%.
[Figure 5 legend, fragment: seven-color flow cytometry; Boolean gating was done to separate the 7 distinct populations based on the production of IFN-γ, TNF-α and IL-2 in any combination. (B) Frequency of the various cytokine producers (none to three cytokines) in CD8+ T cells and CD8+ T cells with P-Vb families.]
Characterization of the TCR Vb repertoire Transcription. RNA was extracted from purified CD8+ T cells or HLA-B8 EBV+ CD8+ T cells using NucleoSpin RNA II (Macherey-Nagel) according to the manufacturer's procedure. Total RNA was reverse-transcribed using an Invitrogen cDNA synthesis kit (Boehringer Mannheim). The TcLandscape® analysis was performed as previously described by combining the CDR3 length distribution (CDR3-LD) with each normalized amount of Vb transcript [14,15]. Briefly, the CDR3-LD was determined by amplifying the cDNA by PCR using pairs of primers specific for each Vb gene [12], followed by length separation using a capillary sequencer (Applied Biosystems 3730) [28]. In parallel, the level of Vb family transcripts was measured by qRT-PCR and normalized by a housekeeping gene (HPRT). Vb frequency analysis. PBMC were stained with Alexa 700-conjugated anti-CD3 and Pacific Blue-conjugated anti-CD8 antibodies and the various anti-Vb family antibodies included in the IOTest® Beta Mark (PN IM3497; Beckman Coulter) according to the manufacturer's guidelines. The frequency of the 24 Vb families was evaluated by gating on CD3+ CD8+ T cells with FlowJo Software (TreeStar). Estimation of the TCR Vb expression per cell. The number of TCR Vb proteins expressed per CD8+ T cell was assessed using QuantiBRITE™ PE beads (BD Biosciences) according to the manufacturer's guidelines. For each anti-TCR Vb-specific mAb, the Fluorochrome/Protein ratio was obtained from the antibody provider (Beckman Coulter). Flow cytometry CD8+ T cells and specific Vbn CD8+ T cells were analyzed by multi-color flow cytometry. Samples were stained with labeled antibodies directed against cell surface markers for 30 minutes at 4°C. Antibodies raised against the following antigens were used to characterize the memory/activated phenotype of the CD8+ T cells: FITC-conjugated CD28; Alexa 700-conjugated CD3; Pacific Orange-conjugated CD8; PE-Cy7-conjugated CD197 (CCR7); PE-Cy5-conjugated CD45RA; Alexa 647-conjugated CD279; and PE-conjugated specific anti-Vb. All antibodies were purchased from BD Biosciences except for the anti-Vb antibodies, which were purchased from Beckman Coulter. To detect intracellular proteins, samples were first labeled with LIVE/DEAD Fixable Aqua stain according to the manufacturer's guidelines, and then labeled with PE-Cy7-conjugated anti-CD3 mAb, Alexa 700-conjugated anti-CD8 mAb and the various PE-conjugated anti-Vb mAb. The samples were then permeabilized and fixed with Perm/Fix reagent (eBiosciences), and finally stained for 30 min at 4°C with APC-conjugated anti-IFN-γ mAb, V450-conjugated anti-IL-2 mAb and FITC-conjugated anti-TNF-α mAb.
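For the per-cell TCR Vb quantification described above, a hedged sketch of a standard QuantiBRITE-style calculation is shown below; bead_mfi and bead_pe (measured fluorescence and known PE molecules of the four bead populations), cell_mfi and fp_ratio are assumed inputs, and this is an illustration rather than the analysis actually performed.

```r
# Convert PE fluorescence of anti-Vb-stained cells into bound mAb (hence TCR Vb)
# molecules per cell using a log-log calibration line fitted on the QuantiBRITE beads.
tcr_vb_per_cell <- function(bead_mfi, bead_pe, cell_mfi, fp_ratio) {
  fit <- lm(log10(bead_pe) ~ log10(bead_mfi))                   # calibration on log-log scale
  pe_per_cell <- 10^predict(fit, data.frame(bead_mfi = cell_mfi))
  pe_per_cell / fp_ratio                                        # divide by the fluorochrome/protein ratio
}
```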
Samples were acquired on a BD LSR II (BD Biosciences) and analyzed with FlowJo Software (TreeStar). Intracellular cytokine production Mitogenic stimulation. 1×10^6 T cells were cultured in 96-well U-bottom plates for 6 hours in complete medium in the presence of PMA (20 ng/mL) and ionomycin (400 ng/mL). After 1 hour, BFA (10 ng/mL) was added. Samples were stained as described in the "Flow cytometry" section to detect IFN-γ, TNF-α and IL-2 secretion. Detection of cytokine-secreting CD8+ T cells following stimulation with viral peptides Purified PBMC were thawed, resuspended in complete medium (+10 U/mL DNase I) at a final concentration of 2-4×10^6/mL and cultured overnight. Cells were stained with CFSE according to the manufacturer's recommendations and plated at 4×10^6 cells/well in complete medium with CEF Class I Peptide Pool "Plus" (2.5 µg/mL; Cellular Technology Ltd.), anti-CD28.2 mAb and anti-CD49d mAb (2 µg/mL) for 6 days. The CEF Class I Peptide Pool "Plus" covers HLA class I restricted T cell epitopes of CMV, EBV and Flu virus. On day 3, 1 mL of fresh medium was added. BFA (10 ng/mL) was added for the last 5 hours of stimulation. Samples were stained as described in the "Flow cytometry" section.
[Figure 6 legend: CD8+ T cells with P-Vb families are distinct from EBNA3 latent epitope-specific Vb families. CD8+ T cells (white bar) and HLA-B*0801/EBNA-3A CD8+ T cells (black bar) were FACS-sorted from 4 HLA-B8 healthy individuals (HV #01 to #04). TCR Vb repertoire usage was characterized by qRT-PCR (left column; arbitrary unit) and by spectratyping (right column; % of alteration). Black arrows indicate the P-Vb families previously identified. doi:10.1371/journal.pone.0021240.g006]
[Figure 7 legend: Stimulation by CMV/EBV/FLU peptides does not elicit the proliferation of CD8+ T cells with P-Vb families. CFSE-labeled PBL were cultured for 6 days in the presence of the CMV/EBV/FLU peptide cocktail (CEF peptide pool, 2.5 µg/mL) and anti-CD49d and anti-CD28.2 mAb (2 µg/mL). On day 3, 1 mL of fresh medium was added. On day 6, the CFSE dilution was analyzed by gating on live CD3+ CD8+ cells and live CD3+ CD8+ Vb+ cells. doi:10.1371/journal.pone.0021240.g007]
Statistical methods There is no gold standard to identify an abnormal % of alteration or Vbn/HPRT ratio. Most studies identify CD8+ clonal expansions when the percentage of a given TCR Vb CD8+ T cell population in aged individuals is 2 to 3 standard deviations above the mean of this TCR Vb in young individuals [2,6]. However, the methodology used in previous studies assumes normality in the usage of Vb families and a large number of individuals, and does not take into account the variability of usage between the different Vbn families. Of note, we observed that the distribution shape of the % of alteration and of the Vbn/HPRT ratio was heterogeneous for some Vbn families, suggesting a lack of Gaussian distribution. Thus, we proposed an unsupervised and non-parametric methodology, specifically devoted to the identification of "unusual" usage of Vb families within the TCR Vb repertoire. For each parameter measured for each Vb family, the 8 individual values were transformed by subtracting the median and dividing by the interquartile range (IQR). When all the transformed values, regardless of the Vb family, were plotted together, a skewed distribution was observed, with a threshold at the 75th percentile that identified the unusual values. No assumption was associated with this method, which is completely non-parametric and descriptive.
It would not have been possible to propose formal statistical tests, especially given the heterogeneity of the distributions and the high number of parameters in comparison to the number of individuals. All statistical analyses were performed using R software [29], and the figures were produced using GraphPad Prism.
Figure S1. HLA-B*0801/EBNA-3A CD8+ public clones. HLA-B*0801/EBNA-3A CD8+ T cells from 4 HLA-B8 healthy volunteers (HV #01 to #04) were FACS-sorted and the TCR Vb repertoire was analyzed by TcLandscape. (A) Spectratyping of Vb families 6.1 and 6.5 shows the usage of similar CDR3 lengths across the individuals. (B) Identification of two public sequences within the PCR products of Vb families 6.1 and 6.5. (EPS)
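To illustrate the transformation and thresholding rule of the Statistical methods section, a minimal R sketch is given below; x is an assumed matrix holding one parameter (for example the Vb/HPRT ratio) with the 8 individuals in rows and the Vb families in columns, and the function name is illustrative.

```r
# Robust per-family scaling (subtract the median, divide by the IQR), then a pooled
# 75th-percentile threshold over all families to flag "unusual" usage.
flag_unusual <- function(x) {
  z   <- apply(x, 2, function(v) (v - median(v)) / IQR(v))
  thr <- quantile(as.vector(z), probs = 0.75)
  z > thr   # logical matrix: TRUE marks an individual/family pair with unusual usage
}
```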
Prognostic Value of Vascular-Expressed PSMA and CD248 in Urothelial Carcinoma of the Bladder Background Urothelial carcinoma of the bladder (UCB) is a common cancer of the urinary system. Despite substantial improvements in available treatment options, the survival outcome of patients with advanced UCB is unsatisfactory. Therefore, it is necessary to identify new prognostic biomarkers for monitoring and therapy guidance of UCB. In recent years, prostate-specific membrane antigen (PSMA) and CD248 have been identified promising candidate bio7markers. Methods In this study, we first examined PSMA and CD248 expression in tissues from 124 patients with UCB using immunohistochemical and immunofluorescent staining. We then analyzed the association between the expression of the two biomarkers and other clinicopathological features and prognosis. Finally, we performed bioinformatic analysis of CD248 and FOLH 1 (PSMA) using the TCGA-BLCA dataset to explore the underlying mechanism of PSMA and CD248 in the progression of UCB. Results Among the 124 cases, PSMA and CD248 were confirmed to be expressed in tumor-associated vessels. Vascular PSMA and CD248 expression levels were associated significantly with several deteriorated clinicopathological features. Furthermore, using univariate and multivariate Cox analyses, high vascular PSMA and CD248 expression levels were observed to be associated significantly with poor prognosis in patients with UCB. As risk factors, both PSMA and CD248 expression showed good performance to predict prognosis. Furthermore, combining these vascular molecules with other clinical risk factors generated a risk score that could promote predictive performance. Bioinformatic analysis showed that both PSMA and CD248 might contribute to angiogenesis and promote further progression of UCB. Conclusion Both PSMA and CD248 are specifically expressed in the tumor-associated vasculature of UCB. These two molecules might be used as novel prognostic biomarkers and vascular therapeutic targets for UCB. INTRODUCTION Urothelial carcinoma of the bladder (UCB), the commonest type of bladder cancer (BLCA), is the 10 th most common cancer, with an estimated 549,000 new diagnoses and 200,000 deaths annually worldwide (1). More than 90% of bladder cancers are derived from the urothelium and are known for their ability to metastasize and recur, especially when they invade muscles (2). About half of the patients with muscle-invasive UCB die of the disease, and traditional chemotherapy cannot increase their survival significantly. Although the survival of patients with non-muscle-invasive UCB is relatively good, tumor recurrence might occur in about two-thirds of them (3). In recent years, remarkable advances have been made in clinical prognostic diagnosis and effective immunotherapies, for instance, immune checkpoint inhibitor therapeutics; however, the response rate of patients with advanced UCB was only 30%, and the extension of survival time remains limited (4). Theoretically, tissue biomarkers can be used in UCB to predict oncological outcomes, such as recurrence and progression, as well as the response to intravesical drug perfusion. P53, the most common tumor suppressor gene that is mutated in most human cancers, is associated with the most aggressive non−muscle −invasive bladder cancer (NMIBC) and muscle-invasive bladder cancer (MIBC) as a prognostic biomarker (5). 
However, it cannot predict the response to Bacillus Calmette-Guérin (BCG) therapy, and the heterogeneity of the included studies and limitations related to immunohistochemistry precluded any clear conclusions (6,7). Meanwhile, a phase III trial designed to evaluate the benefit of adjuvant cisplatin-based chemotherapy in patients with MIBC based on their p53 status could not confirm the prognostic value of p53 alteration (8). Apoptosis biomarkers, such as survivin, might be associated with outcomes in NMIBC (9,10); however, although survivin improved the accuracy of prediction of disease recurrence significantly in a subgroup of patients with pT2-3N0M0 disease (11), the evidence was insufficient for prognostic prediction in MIBC, and large prospective series are still lacking (12). Cell signaling pathway biomarkers such as ErbB (Erb-B2 receptor tyrosine kinase) and fibroblast growth factor receptor (FGFR) family members, angiogenesis biomarkers [vascular endothelial growth factor (VEGF), microvessel density (MVD), and hypoxia-inducible factor 1 alpha (HIF-1α)], and tumor cell invasion biomarkers (E-cadherin and N-cadherin) have been shown to be related to the outcomes of NMIBC and MIBC (13-15). However, none of the evaluated tissue biomarkers alone could be used to predict oncological outcomes with sufficient accuracy to change decisions in routine clinical practice (12,16). Thus, it is necessary to identify new independent prognostic biomarkers for monitoring and therapy guidance of UCB. Prostate-specific membrane antigen (PSMA), also known as folate hydrolase 1 (FOLH1) or glutamate carboxypeptidase II, is a type II transmembrane glycoprotein with folate hydrolase and neurocarboxypeptidase activity. PSMA is expressed specifically on prostate epithelial cells, and its expression is upregulated markedly in prostate cancer (PCa). In recent years, PSMA has also been found to be expressed in the vasculature of non-prostatic solid tumors, such as breast cancer (17,18), lung cancer (19,20), gastric cancer (21), colorectal cancer (21,22), kidney cancer (23), and glioblastoma (24,25), but not in normal vascular endothelial cells. Thus, PSMA has also been considered an effective target for cancers with vascular PSMA expression (26). CD248, also known as tumor endothelial marker 1 (TEM1) or endosialin, is a transmembrane glycoprotein that belongs to the C-type lectin-like receptor family (27). Importantly, CD248 is overexpressed specifically in tumor-associated fibroblasts and pericytes residing in tumor blood vessels, but is barely expressed in normal tissues (28,29), making CD248 an oncofetal protein with potential as a biomarker and therapeutic target. In our previous study, we found that PSMA and CD248 were both expressed specifically in the vasculature of hepatocellular carcinoma (HCC), and vascular-expressed PSMA and CD248 might be used as prognostic markers and vascular therapeutic targets for HCC (30,31). We also found that overexpression of CD248 in renal cell carcinoma (RCC) was related to poor prognosis (32). Therefore, we wondered whether vascular-expressed PSMA and CD248 could be promising biomarkers for UCB. Although several studies have shown that PSMA is expressed in the vasculature of bladder cancer (33-35), these studies examined only limited numbers of samples or did not systematically analyze the association between vascular PSMA expression and clinicopathological features and prognosis.
Besides, the vascular expression pattern of CD248 and its prognostic value in UCB remain unknown. The present study aimed to determine whether vascular-expressed PSMA and CD248 are biomarkers for UCB. First, we examined PSMA and CD248 expression in 124 UCB tissues using immunohistochemistry (IHC), and analyzed the association between the expression levels of the two biomarkers and other clinicopathological features and prognosis. Then, we constructed PSMA-based and CD248-based prognostic signatures by integrating multiple clinical variables, which might improve the predictive accuracy. Finally, we performed bioinformatic analysis of CD248 and PSMA using The Cancer Genome Atlas (TCGA)-BLCA dataset to explore the underlying mechanism of PSMA and CD248 in UCB progression. Patients and Follow-Up In this retrospective study, 162 UCB specimens were initially chosen from patients who underwent surgery [transurethral resection of bladder tumor (TURBT) or radical cystectomy (RC)] at Xijing Hospital from 2006 to 2018. Thirty-eight samples were excluded because (1) medical records or information was lacking; (2) the pathological diagnosis included other primary tumors; (3) follow-up data were missing; or (4) the patients or their families refused to provide pathological specimens for clinical research. Finally, a total of 124 UCB specimens from 124 patients with UCB were included in this study. This study was approved by the Ethics Committee of Xijing Hospital, and all of the participating patients gave their written informed consent. Representative formalin-fixed, paraffin-embedded tumor blocks were obtained from the Department of Pathology at Xijing Hospital. Patients were followed up from the date of surgery, with an average follow-up period of 42 months (range, 1-144 months). Detailed pathological diagnoses were provided by three experienced pathologists according to the American Joint Committee on Cancer (AJCC) Cancer Staging Manual (Version 9) (36) and the 2016 WHO Classification of Tumors of the Urinary System and Male Genital Organs (37). The clinicopathological features of the patients were obtained from the electronic medical records of Xijing Hospital. Immunohistochemistry Staining Four-micron-thick tissue sections were cut from representative wax blocks of UCB tissues. Slides were then subjected to IHC to evaluate PSMA and CD248 expression. Briefly, slides were deparaffinized in xylene and rehydrated through a graded alcohol series, before antigen retrieval was performed under high temperature and pressure in 10 mM citrate buffer (pH 6.0). Endogenous peroxidase activity was inactivated using 3% H2O2, and non-specific binding was blocked using non-immune serum. Primary antibodies were then applied and incubated overnight in a humidified chamber at 4°C. The next day, the slides were washed three times with phosphate-buffered saline (PBS), followed by incubation with horseradish peroxidase (HRP)-labeled secondary antibody at room temperature for 30 min. The slides were then washed three times with PBS, and visualization was performed using 3,3'-diaminobenzidine (DAB) chromogen for 2 to 3 min. The slides were counterstained with hematoxylin, rinsed in water, dehydrated in ascending concentrations of ethanol, cleared in xylene, and cover-slipped permanently for light microscopy. The primary antibodies used for IHC were anti-human CD248 (#ab204914, Abcam, Cambridge, UK), anti-human PSMA (#ab76104, Abcam), and anti-human CD31 (#3528, Cell Signaling Technology, Danvers, MA, USA).
The IHC kit was purchased from Fuzhou Maixin Reagent Co., Ltd. (Fuzhou, China). All procedures were carried out according to the manufacturer's instructions. Sections were analyzed under a light microscope (Leica DM2500; Leica Microsystems, Wetzlar, Germany), and images were acquired using a Leica DFC 490 system (Leica Microsystems). Evaluation of Staining CD31 staining in serial sections was used to identify the tumor-associated vasculature. To evaluate vascular PSMA and CD248 expression, we first randomly selected three fields under low magnification (100×) and counted the number of CD31+ vascular structures. Then, we chose the fields with a microvessel density (MVD) greater than 40 as hot-spot areas, and examined PSMA and CD248 expression in three of these fields under high magnification (200×). Vascular PSMA and CD248 expression was assessed in a semiquantitative manner. Lesions with no detectable PSMA or CD248 expression were scored as "0"; lesions with staining of PSMA or CD248 in 1-50% of the vasculature were scored as "1"; and lesions with staining of PSMA or CD248 in >50% of the vasculature were scored as "2". For statistical analysis, samples with a staining score of 0 or 1 were grouped as "low expression", and samples with a staining score of 2 were grouped as "high expression". Nomogram Construction Cox regression analysis and logistic regression analysis were adopted to construct vascular-CD248/PSMA-based signatures, accompanied by clinicopathological variables [i.e., age, sex, clinical grade, invasive stage, differentiation status, and Ki-67 (marker of proliferation Ki-67) expression]. The regression coefficients were used to weight the variables of the model, and a nomogram was constructed for visualization. Bioinformatic Analysis of CD248 and PSMA Using the TCGA-BLCA Dataset Data Source and Preprocessing First, 413 sets of BLCA data and 19 sets of non-tumor data were downloaded from the TCGA portal (https://portal.gdc.cancer.gov/). Then, transcriptomic data [RNA-Seq, as Fragments Per Kilobase of transcript per Million mapped reads (FPKM)] and clinical information were integrated according to their ID numbers, and within-array replicate probes were replaced with their average using the limma package of the R software. All data were processed and analyzed with the R software (https://www.r-project.org/). Prognostic Value Analysis The Microenvironment Cell Populations-counter (MCP-counter) R package was used to estimate "single sample" scores of endothelial cells from the TCGA-BLCA expression matrix. Then, patients in the TCGA-BLCA dataset were divided into high-expression and low-expression groups according to the median level of CD248, PSMA, and the endothelial cell score, respectively. Kaplan-Meier survival analysis was performed to evaluate the prognostic value, and the Pearson correlation coefficient test was employed to assess the correlation between endothelial cells and CD248 and PSMA. P < 0.05 was considered statistically significant. Gene Ontology Enrichment Analysis Differentially expressed genes (DEGs) between tumor and normal tissue were identified using Wilcoxon tests. The P-values were adjusted using the false discovery rate (FDR), with filter criteria of FDR < 0.05 and |log2 fold-change (FC)| > 1. Then, CD248- and PSMA-correlated DEGs (Cor-DEGs) were selected using the Pearson correlation coefficient test with filter criteria of |correlation coefficient| > 0.5 and P < 0.001.
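To make this selection step concrete, a minimal R sketch is given below; it uses a small synthetic expression matrix and placeholder object names rather than the study's actual TCGA data or code, but applies the same thresholds (FDR < 0.05, |log2FC| > 1, |r| > 0.5, P < 0.001).

```r
# Illustrative, synthetic FPKM-like matrix (genes x samples); not TCGA data.
set.seed(1)
group <- rep(c("tumor", "normal"), c(30, 10))
expr  <- matrix(rexp(200 * 40), nrow = 200,
                dimnames = list(c("CD248", "FOLH1", paste0("g", 3:200)), NULL))
expr[1:50, group == "tumor"] <- expr[1:50, group == "tumor"] * 4  # implant a tumor signal

# DEGs: per-gene Wilcoxon test, FDR < 0.05 and |log2 fold-change| > 1
p_raw <- apply(expr, 1, function(g)
  wilcox.test(g[group == "tumor"], g[group == "normal"])$p.value)
fdr   <- p.adjust(p_raw, method = "fdr")
logfc <- apply(expr, 1, function(g)
  log2(mean(g[group == "tumor"]) + 1) - log2(mean(g[group == "normal"]) + 1))
degs  <- rownames(expr)[fdr < 0.05 & abs(logfc) > 1]

# Cor-DEGs: DEGs whose expression correlates with CD248 (|r| > 0.5, P < 0.001);
# the same filter would be run with FOLH1 (PSMA) as the reference gene.
cor_stats <- sapply(degs, function(g) {
  ct <- cor.test(expr[g, ], expr["CD248", ], method = "pearson")
  c(r = unname(ct$estimate), p = ct$p.value)
})
cor_degs <- degs[abs(cor_stats["r", ]) > 0.5 & cor_stats["p", ] < 0.001]
```

Applied to the real TCGA-BLCA matrix with these thresholds, the study reports 3,126 DEGs and 343 Cor-DEGs (see the Results below).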
Subsequently, GO functional enrichment of the Cor-DEGs was performed using the clusterProfiler and enrichplot R packages and visualized as a chord plot using the GOplot R package. FDR < 0.05 was considered statistically significant. Transcription Factor-Based Regulatory Network The list of transcription factors (TFs) was retrieved from the Cistrome website (https://cistrome.org), and differentially expressed TFs (DETFs) were identified by matching with the DEGs. Meanwhile, univariate Cox regression analysis, with a threshold value of P < 0.01, was used to identify possible prognostic Cor-DEGs (PCor-DEGs). Subsequently, the correlation between DETFs and PCor-DEGs was analyzed, and a TF-based regulatory network was established using Cytoscape 3.6.0. A correlation coefficient > 0.3 and P < 0.05 were employed as the cutoff values. Statistical Analysis All statistical analyses were performed using IBM SPSS statistical software (version 26; IBM Corp., Armonk, NY, USA). Descriptive statistics, such as median, range, and absolute and relative frequencies, were calculated to define the characteristics of the study cohort. The chi-squared test was used to assess the association between PSMA or CD248 expression and various clinicopathological features. Survival time was defined from the day of surgery until death. Survival curves were generated using the Kaplan-Meier method and compared using the log-rank test. Hazard ratios (HR) with corresponding 95% confidence intervals (CI) were estimated using Cox proportional hazards models. A risk plot of the PSMA/CD248-based signature was prepared using the R software. The median risk score was used as the cutoff value. P values < 0.05 were considered statistically significant. Vascular Expression of PSMA in UCB and Its Comparison With Clinicopathological Parameters To evaluate PSMA expression in UCB, we performed IHC staining of UCB tissues from 124 patients. The clinicopathological parameters of the patients are summarized in Table 1. Among these patients, 62 (50.00%) showed PSMA expression in >50% of the tumor-associated vasculature (score 2), 22 (17.74%) showed PSMA expression in 1-50% of the tumor-associated vasculature (score 1), while 40 (32.26%) did not show detectable PSMA expression (score 0) (Table 2). Representative cases with different PSMA expression levels are shown in Figure 1A, and the vascular structure was confirmed by staining for CD31, a well-established endothelial cell marker. These results indicated that PSMA was specifically expressed in the vasculature of patients with UCB. Patients with PSMA expression in >50% of the tumor-associated vasculature were designated as the high PSMA expression group, while those with PSMA expression in ≤50% of the tumor-associated vasculature or no detectable PSMA expression were designated as the low PSMA expression group. We then analyzed the association between PSMA expression and clinicopathological features. The results showed that PSMA expression was associated with several clinicopathological features of UCB, such as the Ki-67 index, invasive stage, tumor metastasis, and clinical stage. On the basis of the Ki-67 index, we divided all cases into two groups (Ki-67 index >15% and Ki-67 index ≤15%), because it has been reported that the Ki-67 index of normal bladder tissue is less than 15% (38).
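As an illustration of how the dichotomized IHC scores and the 15% Ki-67 cut-off feed into the chi-squared association tests described above, a minimal R sketch with synthetic per-patient data follows; the variable names are placeholders, not the study's actual variables.

```r
# Illustrative, synthetic per-patient data (not the study's cohort).
set.seed(1)
cohort <- data.frame(
  ihc_score = sample(0:2, 124, replace = TRUE),  # vascular PSMA (or CD248) IHC score
  ki67_pct  = round(runif(124, 0, 60))           # Ki-67 index in percent
)
cohort$expr_group <- ifelse(cohort$ihc_score == 2, "high", "low")   # score 2 = high expression
cohort$ki67_group <- ifelse(cohort$ki67_pct > 15, ">15%", "<=15%")  # 15 % cut-off
chisq.test(table(cohort$expr_group, cohort$ki67_group))             # association test
```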
Patients with high PSMA expression had significantly higher Ki-67 indexes compared with those of patients with low PSMA expression (χ2 = 11.960, P = 0.0005) (Figure 1B and Table S1). For invasive stage, patients with muscle invasion had significantly higher PSMA expression levels than those with non-muscle invasion (χ2 = 13.410, P = 0.0003) (Figure 1C and Table S1). For tumor metastasis, patients with high PSMA expression were more likely to have lymph node metastasis and distant metastasis than patients with low PSMA expression (χ2 = 4.594, P = 0.0321) (Figure 1D and Table S1). Patients in clinical stage III-IV were more likely to have high PSMA expression than those in clinical stage I-II (χ2 = 7.942, P = 0.0048) (Figure 1E and Table S1). PSMA expression was not associated significantly with other clinicopathological parameters, such as age, sex, tumor differentiation, and tumor invasion (Table S1). Vascular Expression of CD248 in UCB and Its Comparison With Clinicopathological Parameters To evaluate CD248 expression in UCB, we further performed IHC staining of CD248 in UCB tissues from the same 124 patients. Among these patients, 78 (62.90%) showed CD248 expression in >50% of the tumor-associated vasculature (score 2), 38 (30.65%) showed CD248 expression in ≤50% of the tumor-associated vasculature (score 1), while 8 (6.45%) did not show detectable CD248 expression (score 0) (Table 2). Representative cases with different CD248 expression levels are shown in Figure 2A, and the vascular structure was confirmed by staining for CD31. These results indicated that CD248 was also specifically expressed in the vasculature of a subset of patients with UCB. CD248 expression differed significantly between the Ki-67 index groups: patients with high CD248 expression had a significantly higher Ki-67 index compared with that of patients with low CD248 expression (χ2 = 6.004, P = 0.0143) (Figure 2B). Correlation Between the Expression of CD248 and PSMA in UCB Vessels On the basis of the previous results, we performed Spearman correlation analysis on the data, and the results showed that the expression of CD248 in the tumor-associated vasculature correlated significantly and positively with PSMA expression (r = 0.3935, P < 0.0001) (Table 2). To evaluate CD248 and PSMA expression in the vessels of UCB intuitively, we performed IHC and immunofluorescence (IF) staining of UCB tissues. Figure 3A shows that, in serial paraffin sections of UCB tissues, using platelet endothelial cell adhesion molecule-1 (PECAM-1/CD31) as the positive control, the sites of positive PSMA and CD248 expression in blood vessels were basically the same. There was a positive correlation between the expression of CD31, PSMA, and CD248 in UCB, which was verified using data from the GEPIA database (http://gepia.cancer-pku.cn/) (Figure 3B). Furthermore, as shown in Figure 3C, by co-staining with CD31, both CD248 and PSMA were positive in the blood vessels of UCB, especially in areas of neovascularization, and their expression locations were basically the same. Prognostic Value of Vascular CD248 and PSMA Expression in UCB Analysis of data from the GEPIA database showed that overexpression of CD31 could partly predict worse prognosis of patients with UCB (Figure S1). Therefore, to determine the prognostic value of PSMA and CD248 expression in UCB, we first analyzed the association between vascular PSMA or CD248 expression and overall survival (OS).
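A minimal sketch of such a survival comparison, written in R with the survival package (the study itself used SPSS) and synthetic data, is shown below; the column names are illustrative only.

```r
library(survival)
# Illustrative, synthetic cohort: follow-up in months, death indicator,
# and dichotomized vascular expression groups.
set.seed(1)
cohort <- data.frame(
  os_months   = round(runif(124, 1, 144)),
  death       = rbinom(124, 1, 0.4),
  psma_group  = sample(c("high", "low"), 124, replace = TRUE),
  cd248_group = sample(c("high", "low"), 124, replace = TRUE)
)
survfit(Surv(os_months, death) ~ psma_group, data = cohort)    # Kaplan-Meier estimate
survdiff(Surv(os_months, death) ~ psma_group, data = cohort)   # log-rank test
survdiff(Surv(os_months, death) ~ cd248_group, data = cohort)  # same comparison for CD248
```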
Survival curves of patients with different CD248 and PSMA expression levels were generated. As shown in Figures 4A, B, patients with high PSMA and CD248 expression tended to have a shorter survival time compared with patients with low PSMA and CD248 expression, respectively (P < 0.001). Then, we performed receiver operating characteristic (ROC) curve analysis based on the semiquantitative scores and compared the results at the patient level to determine the ability of the two biomarkers, as expressed in UCB vessels, to predict the survival outcome of patients. As shown in Figure 4C, the ROC curves of PSMA expression and CD248 expression had sensitivities of 96.55% and 75.86%, respectively, and specificities of 64.21% and 74.74%, respectively. The area under the ROC curve (AUC) values of PSMA expression and CD248 expression were 0.804 (95% CI, 0.723-0.870) and 0.753 (95% CI, 0.668-0.826), respectively, which indicated that the two biomarkers might have similar prognostic prediction performance (P = 0.2807). Furthermore, we carried out Cox regression analysis to evaluate the prognostic factors for OS of UCB. Univariate analysis showed that vascular PSMA expression (P < 0.001), vascular CD248 expression (P < 0.001), age (P = 0.016), clinical stage (P < 0.001), invasive stage (P < 0.001), and Ki-67 index (P < 0.001) could be used as prognostic factors (Table 3). Multivariate analyses were then carried out between the other prognostic factors of UCB and the vascular expression of PSMA or CD248, respectively. Multivariate analysis 1 revealed that vascular PSMA expression (P < 0.001), clinical stage (P < 0.001), and Ki-67 index (P = 0.002) were associated with OS of UCB (Table 3); meanwhile, multivariate analysis 2 revealed that vascular CD248 expression (P < 0.001), clinical stage (P < 0.001), and Ki-67 index (P < 0.001) were associated with OS of UCB (Table 3). Vascular CD248/PSMA-Based Signature With an Enhanced Predictive Performance We weighted each prognostic factor according to the results of the multivariate analysis in Table 3. Specifically, high PSMA expression was coded as "2" and low expression as "1"; high CD248 expression was coded as "2" and low expression as "1"; clinical stage III-IV was coded as "2" and clinical stage I-II as "1"; and a Ki-67 index >15% was coded as "2" and an index ≤15% as "1". Subsequently, the risk score for each patient was calculated by summing the coded factor values weighted by the corresponding hazard ratios from the multivariate analysis. According to the median risk scores of PSMA (11.925) and CD248 (13.298), individuals were sorted into a high-risk (n = 67) and a low-risk group (n = 57) according to PSMA, and into a high-risk (n = 66) and a low-risk group (n = 58) according to CD248. The Kaplan-Meier survival analysis showed that the prognoses according to the PSMA-related risk score and the CD248-related risk score were both worse in the high-risk group than in the low-risk group (P1 < 0.0001, P2 < 0.0001, Figures 4D, E). Then, we ranked patients by their PSMA- and CD248-related risk scores and analyzed their survival status, respectively. The results showed a greater number of deaths in the high-risk group for both the PSMA- and CD248-related risk scores (Figures 4F, G). Then, we constructed ROC curves of the PSMA- and CD248-related risk scores and compared them with those of the corresponding PSMA and CD248 expression levels, respectively.
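The construction just described can be sketched in R with synthetic data and placeholder variable names: the multivariate hazard ratios serve as weights for the 1/2-coded factors, patients are split at the median risk score, and the AUC of the risk score is compared with that of expression alone using DeLong's test from the pROC package. This is an interpretation of the description above, not the authors' code.

```r
library(survival)
library(pROC)
# Illustrative, synthetic cohort; psma, stage and ki67 are coded 1/2 as described above.
set.seed(1)
cohort <- data.frame(
  os_months = round(runif(124, 1, 144)),
  death     = rbinom(124, 1, 0.4),
  psma      = sample(1:2, 124, replace = TRUE),
  stage     = sample(1:2, 124, replace = TRUE),
  ki67      = sample(1:2, 124, replace = TRUE)
)
fit <- coxph(Surv(os_months, death) ~ psma + stage + ki67, data = cohort)
hr  <- exp(coef(fit))                               # multivariate hazard ratios

# PSMA-related risk score: coded factors weighted by their hazard ratios
cohort$risk <- hr["psma"] * cohort$psma + hr["stage"] * cohort$stage + hr["ki67"] * cohort$ki67
cohort$risk_group <- ifelse(cohort$risk > median(cohort$risk), "high", "low")
survdiff(Surv(os_months, death) ~ risk_group, data = cohort)   # high- vs low-risk comparison

# Compare the AUC of expression alone with that of the combined risk score (DeLong test)
roc.test(roc(cohort$death, cohort$psma), roc(cohort$death, cohort$risk))
```

The CD248-related score can be assembled in exactly the same way from the second multivariate model.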
As shown in Figure 4H, the AUC value of the PSMA-related risk score was statistically significantly larger than that of PSMA expression (P = 0.0223). However, there was no statistically significant difference between the AUC values of CD248 expression and the CD248-related risk score (P = 0.1093) (Figure 4I). A Sankey plot intuitively illustrated that, as independent risk factors, PSMA- or CD248-related death was largely associated with the strongly positive expression group rather than the weakly positive or negative groups (Figure 4J). The nomogram of the vascular-CD248/PSMA-based signature constructed by Cox regression analysis could be used to predict the 5-year survival probability and median survival time of patients with UCB (Figures 4K, M). In addition, another nomogram constructed using logistic regression analysis could be employed to predict the risk of death (Figures 4L, N). Prognostic Value of CD248 and PSMA in TCGA-BLCA Survival analysis indicated that high CD248 expression was accompanied by a poor prognosis (P < 0.05, Figure 5A), while the expression level of PSMA did not significantly affect the overall survival of patients with BLCA (P > 0.05, Figure 5B). Additionally, patients with a high endothelial cell score exhibited a worse survival outcome (P < 0.05, Figure 5C). The endothelial cell score was related positively to the expression levels of CD248 and PSMA (P < 0.0001, Figures 5D, E), indicating that these two genes might contribute to angiogenesis, which was consistent with our aforementioned results. GO Enrichment Analysis of Cor-DEGs in TCGA-BLCA To explore the function of CD248 and PSMA in angiogenesis, GO enrichment analysis of the Cor-DEGs was performed. First, we selected 343 Cor-DEGs from among 3,126 DEGs in the TCGA-BLCA dataset, and the top 20 Cor-DEGs that correlated with CD248 and PSMA were used to develop a co-expression heatmap (Figures 5F, G; Supporting Data 1 and 2). Then, as shown in Figure 5H, the GO terms "Regulation of angiogenesis", "Endothelial cell proliferation", "Endothelial cell migration", "Endothelial cell differentiation", "Endothelial tube morphogenesis", "Sprouting angiogenesis", and "Vascular endothelial growth factor signaling pathway" were identified as significantly enriched by the Cor-DEGs (FDR < 0.05). TFs-Based Regulatory Network for PCor-DEGs To explore the underlying mechanism of the Cor-DEGs in BLCA angiogenesis and progression, 77 DETFs and 96 PCor-DEGs were screened from the DEGs and Cor-DEGs, respectively (Figures 5I, J; Supporting Data 3 and 4). Then, a TF-based regulatory network for the PCor-DEGs was visualized (Figure 5K; Supporting Data 5). In the regulatory network, several hub genes with maximum intramodular connectivity were identified (i.e., GATA6, SOX17, MEF2C, and SRF), which might provide novel insights into tumor angiogenesis. DISCUSSION To the best of our knowledge, this is the first study to illustrate the expression pattern of CD248 in UCB and to systematically clarify the relationship between the expression of two tumor-associated vascular biomarkers (PSMA and CD248) and the clinicopathological features, prognosis, and survival of patients with UCB. The tumor vasculature is the pathological basis for the growth, invasion, and metastasis of solid tumors; therefore, vasculature-targeted diagnosis and therapy have also been proven to be an effective antitumor strategy (39). Angiogenesis inhibitors have been shown to have effective antitumor activity in a broad spectrum of cancer types (40).
Although some traditional vascular markers, such as vascular endothelial growth factor (VEGF), vascular endothelial growth factor receptor 2 (VEGFR2), and delta-like 4 (DLL4), have been studied in the diagnosis and treatment of bladder cancer (41,42), their ability to predict clinical prognosis is limited, and their therapeutic effect in UCB remains unvalidated (43). Therefore, novel molecular markers specifically expressed in the tumor-associated vasculature, which can not only improve diagnosis and prognosis prediction but also provide novel treatment strategies for UCB, are urgently required. We first examined PSMA and CD248 expression in 124 patients with UCB using IHC staining and confirmed that both PSMA and CD248 are specifically expressed in the vasculature of UCB. The expression of PSMA was associated significantly with other clinicopathological features, such as metastasis, clinical stage, invasive stage, and the Ki-67 index, while the vascular expression of CD248 was significantly associated only with the Ki-67 index. The nuclear protein Ki-67 is generally expressed only in proliferating cells (44), and the Ki-67 index is a predictive indicator of tumor growth and progression (45). In this study, the association between CD248 expression and the Ki-67 index could partially indicate a correlation with tumor metastasis and tumor invasion, although the association was not statistically significant. Furthermore, vascular-expressed PSMA and CD248 could be used as independent risk factors for UCB. We then further analyzed the correlation between the expression of PSMA and CD248 in UCB and confirmed that there was a positive correlation between these two molecules, which was consistent with the results of analysis of the GEPIA database. Using CD31 as a positive control, serial section staining of UCB tissues intuitively confirmed the consistent expression sites of PSMA and CD248 in UCB vessels. The above results also confirmed that both PSMA and CD248 are expressed in UCB vessels and might serve as potential tumor-associated vascular biomarkers. In addition, vascular PSMA and CD248 expression levels were associated with the prognosis of patients with UCB. Patients with high vascular PSMA or CD248 expression tended to have a shorter OS than patients with low vascular expression. The predictive accuracy of the two biomarkers (AUC CD248 = 0.753, AUC PSMA = 0.804) was regarded as acceptable. Meanwhile, there was no significant difference between the prognostic predictive performance of vascular PSMA and CD248 expression (P = 0.2807). According to the results of the multivariate analysis, taking the corresponding hazard ratios as coefficients, we calculated the risk score of each patient using the PSMA- and CD248-based signatures. The prognostic predictive performance of the PSMA-related risk score was significantly better than that of PSMA expression alone (AUC risk score = 0.830, AUC PSMA = 0.804, P = 0.0223), while there was no significant difference between the CD248-related risk score and the corresponding expression. Therefore, combining PSMA expression with clinicopathological parameters might improve clinical practicability, while the CD248 expression level could be used independently to predict patient prognosis. To facilitate clinical application, nomograms of the PSMA- and CD248-based signatures were prepared. PSMA expression in the UCB vasculature has been confirmed previously. Mary et al. (33) and Chang et al.
(34) examined the PSMA expression pattern in various subtypes of bladder cancer and found that PSMA was expressed in the tumor vasculature of all the bladder cancer samples, which is consistent with the results of the present study. However, they did not further analyze the relationship between PSMA expression and patient prognosis. Gala et al. (35) examined PSMA protein expression in three normal tissues and four transitional cell carcinoma (TCC) tissues and found that both normal urothelium and TCC showed positive PSMA expression; however, they did not clarify vascular PSMA expression and did not analyze the clinical significance and prognostic value of vascular PSMA expression, and the number of samples was quite limited. Schreiber et al. (46) reported that PSMA was expressed in both the parenchyma and vessels of urothelial cell carcinoma (UCC) and that the expression was associated with tumor grade and stage, rather than with tumor recurrence and recurrence-free survival. However, they did not investigate the prognostic value of PSMA in terms of OS. To address the limitations of previous studies, the present study included a relatively larger number of cases and intuitively illustrated the vascular expression of PSMA and its prognostic value as a risk predictor. To date, the expression pattern of CD248 in UCB has not been studied systematically. CD248 has been reported to be expressed by tumor vessel-associated pericytes (47) and stromal fibroblasts (48) in a wide variety of human tumors with different histologies, but not in normal vessels (27). CD248 has also been classified as a marker of tumor vessel-associated pericytes (49) and as a selective endothelial precursor cell marker (50). Pericytes provide structural support for endothelial cells and thus stabilize the vasculature. In our previous studies, we found that CD248 was expressed specifically in HCC and RCC, and overexpression of CD248 was related to a poor prognosis (31,32). Davies et al. (51) reported that overexpression of CD248 correlated negatively with the clinical outcome of patients with breast cancer. The above results confirm the importance of vascular CD248 expression in tumors for predicting prognosis. In this study, using CD31 as the positive control, we evaluated and quantified the expression of CD248 in blood vessels and concluded that high vascular expression of CD248 is a significant predictive risk factor for poor prognosis in patients with UCB. To verify the results of our research, we performed bioinformatic analyses of CD248 and PSMA using the TCGA-BLCA dataset. The endothelial cell score was related positively to the expression levels of CD248 and PSMA, indicating that these two genes might contribute to angiogenesis, which was consistent with our aforementioned results. To explore the function of CD248 and PSMA in angiogenesis, GO enrichment analysis of the Cor-DEGs was performed. The results suggested that several angiogenesis-promoting GO terms were significantly enriched, including endothelial tube morphogenesis, endothelial cell proliferation, migration and differentiation, regulation of angiogenesis, vascular permeability, and blood vessel diameter, which might provide insights into the association with angiogenesis mentioned above.
To explore the underlying mechanism of the Cor-DEGs in UCB angiogenesis and progression, a TF-based regulatory network for the PCor-DEGs was visualized, and several hub genes with maximum intramodular connectivity (i.e., GATA6, SOX17, MEF2C, and SRF) were identified, which might play a vital role in PSMA/CD248-related angiogenesis regulation and in UCB progression. Apart from their prognostic value, the large extracellular domains of PSMA and CD248 can be recognized by antibodies, peptides, RNA aptamers, and small molecules, making them ideal molecules for targeted therapy. For example, a phase I trial of MORAb-004, a humanized monoclonal antibody engineered to target CD248, was performed in multiple solid tumor types, including pancreatic neuroendocrine tumors, hepatocellular carcinoma, and sarcoma (52). The specific expression of PSMA and CD248 in the vasculature of UCB will contribute to targeted therapy. In our previous studies, we confirmed that a PSMA-specific single-chain antibody fragment (scFv) termed gy1 (53) and a CD248-specific scFv named scFv78 (54) could specifically recognize the extracellular domains of PSMA and CD248, respectively. Furthermore, we reconstructed this PSMA scFv into a human monoclonal PSMA antibody (PSMAb) and provided evidence that PSMAb could be specifically internalized into PSMA+ prostate cancer cells with high binding affinity in vitro and in vivo. In addition, we confirmed that PSMAb could inhibit tumor growth through antibody-dependent cell-mediated cytotoxicity (ADCC) and complement-dependent cytotoxicity (CDC) in PSMA+ castration-resistant prostate cancer cell xenografts in vivo (55). Meanwhile, we further demonstrated that an endosialin-specific antibody, IgG78, could effectively inhibit the growth of HCC in both subcutaneous and orthotopic models (31). Based on the above research, both PSMA and CD248 might represent ideal therapeutic targets for UCB. This study had some limitations. First, the sample size was relatively small, especially for advanced or metastatic cases, which might be the reason why CD248 expression did not show a significant association with tumor metastasis and tumor invasion. A considerable number of patients with advanced or metastatic disease no longer had indications for surgical treatment; thus, their pathological specimens could not be obtained, and they could not be included in this study. Second, we mainly focused on UCB in this study and did not include other pathological subtypes, such as glandular and squamous carcinoma, even though UCB accounts for more than 90% of bladder cancers. Notably, the prognostic value of vascular-expressed molecules should be explored in future studies for a comprehensive investigation of different pathological subtypes. Finally, the underlying mechanism of PSMA and CD248 in the progression of UCB was explored only via a bioinformatics-based study. Therefore, GATA6-mediated tumor angiogenesis should be explored in further laboratory investigations. In summary, our study confirmed that PSMA and CD248 are expressed in the vasculature of UCB. Both were associated with deteriorated clinicopathological features and could be used as novel prognostic markers for UCB. Bioinformatic analyses further revealed the possible functions of PSMA and CD248 in angiogenesis regulation, which might partly explain the progression of UCB and provide potential diagnostic and therapeutic targets. DATA AVAILABILITY STATEMENT The datasets presented in this study can be found in online repositories.
The names of the repository/repositories and accession number(s) can be found in the article/Supplementary Material. ETHICS STATEMENT The study was approved by the Ethics Committee of the Xijing Hospital, Fourth Military Medical University. Informed consent was obtained from all participants included in the study. The patients/participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.
Reuse of Eggshell Waste and Recycled Glass in the Fabrication of Porous Glass–Ceramics: This study was conducted to fabricate and characterize glass–ceramic foams derived from soda-lime silica (SLS) glass waste and eggshell (ES) waste as a foaming agent, using the empirical formula [ES]x[SLS]100−x, where x = 1 wt %, 3 wt %, 6 wt %, and 9 wt %. The samples underwent a heat-treatment process at a temperature of 800 °C with a heating rate of 10 °C/min. The properties of the samples were assessed by average density measurement and linear expansion. The structural properties were studied by XRD, FESEM, and FTIR with respect to the different compositions of the foaming agent, while the mechanical properties were determined by compressive strength testing using a UTM. The lowest density and compressive strength achieved were 0.326 g/cm3 and 0.04 MPa, respectively, with the highest linear expansion of 77.33%, obtained by the addition of 3 wt % ES. Moreover, the cristobalite phase (SiO2) was identified after the heat-treatment process. The production of foam glass–ceramics using SLS glass and ES can be applied to prepare different types of porosity, which benefits the environment and energy usage. Introduction Globally, organic and inorganic waste is increasing as a result of human population growth and changing lifestyles. Discarded waste such as waste glass has a negative impact on the environment. For instance, discarded cathode ray tube (CRT) panels from electronic components and soda-lime-silica glass from the automotive industry are polluted with heavy metals [1,2]. To overcome this problem, both raw materials are utilized to make valuable products such as stoneware tiles, concrete blocks, glass-ceramic materials, and foam glass-ceramics [3]. Thermal insulation materials are applied in a wide range of sectors, such as the construction industry, the biochemistry industry, materials engineering, and military security [4]. Foam glass-ceramic is a thermal insulation material that has attracted much attention nowadays; it functions to retard heat flow owing to properties such as high thermal resistance. Generally, foam glass is a solid bulk material shaped by mixing glass powder with a foaming agent and then sintering to form a porous structure. Once the softening-point temperature of the glass is reached, the solid phase turns into a viscous liquid that allows bubbles to grow inside the glass melt; hence, foam formation occurs when the glass melt solidifies. The glass provides an amorphous phase, which leads to a better appearance of the thermal insulation material. The pores are generated by the gas released from the foaming agent during heat treatment. The raw materials used in this research were waste SLS glass bottles, ES as a foaming agent, and polyvinyl alcohol (PVA) as a binder. The SLS glass waste was collected from sauce glass bottles (Brand: Life) from Pizza Hut, IOI City Mall, Putrajaya, Malaysia. The waste glass is usually contaminated with dirt and must be washed before crushing. The ES was collected from recycling containers located at restaurants in Sri Serdang, Selangor, Malaysia, and was then rinsed with water and dried for 24 h in a drying oven before crushing. This process was used to completely remove all dirt on the ES. The waste SLS glass and ES were crushed into small glass cullet. The glass cullet was then placed into a milling jar with ceramic balls and left in a milling machine (US Stoneware Corp.) for 24 h at a speed of 50 rpm to obtain a fine glass powder.
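As a concrete illustration of the mixing rule, the masses of ES and SLS powder implied by the empirical formula [ES]x[SLS]100−x can be tabulated directly; the R sketch below assumes a 10 g batch, a value chosen for illustration and not taken from the study.

```r
# Batch masses implied by the empirical formula [ES]x[SLS](100-x).
# The 10 g total batch mass is an assumed, illustrative value.
batch_g <- 10
for (x in c(1, 3, 6, 9)) {
  es_g  <- batch_g * x / 100          # mass of eggshell foaming agent (g)
  sls_g <- batch_g * (100 - x) / 100  # mass of SLS glass powder (g)
  cat(sprintf("x = %d wt%%: ES = %.2f g, SLS = %.2f g\n", x, es_g, sls_g))
}
```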
Then, the glass powder was sieved through a stainless steel sieve to obtain a powder with a particle size of 45 µm, while the ES was ground using a pestle and mortar and sieved to 45 µm. Figure 1 shows the flowchart of the sample preparation. First, the SLS glass and ES powders were prepared at different weight percentages and weighed using a digital analytical balance. Both powders were combined in a bottle according to the empirical formula and mixed using a pestle and mortar. Then, the homogeneous powder was pressed into pellets of 13.10 mm diameter using a hydraulic pressing machine (Specac, Ltd., Orpington, UK) with an applied load of 5 MPa for 10-15 min. The pellets were placed into an alumina boat for the heat-treatment process. All samples were heat-treated at 800 °C for 1 h in an electric furnace. After the heat-treatment process, the pellets were ground to 45 µm and sent for characterization. Wavelength dispersive X-ray fluorescence (WD-XRF; Bruker S8 Tiger, Billerica, MA, USA) was used to analyze the SLS and ES powders and identify the chemical composition of both raw materials. A thermogravimetric analyzer (Mettler Toledo TGA/SDTA 851, Columbus, OH, USA) was used to determine the weight-loss decomposition of the ES over the temperature range 27-1000 °C. The results are shown in Table 1 and Figure 2 for the XRF and TGA analyses, respectively. The bulk density (ρ_b) of the produced foam was measured by the Archimedes principle using the formula shown in Equation (1):

ρ_b = W_air / (W_air − W_water) × ρ_water (1)

where W_air is the weight of the sample in air, W_water is the weight of the sample in water, and ρ_water is the density of water. The porosity was evaluated by comparing the Archimedes bulk density with the true density obtained from Brunauer-Emmett-Teller (BET) surface analysis, using the ratio between the densities as follows:

Porosity (%) = (1 − ρ_b/ρ_t) × 100 (2)

where ρ_b is the bulk density and ρ_t is the true density. The linear expansion (LE) of the produced material was obtained from the diameter of the glass-ceramic pellet after heat treatment (D_f) and the diameter before treatment (D_i), measured using a digital vernier caliper (Mitutoyo 500-196-20, Kanagawa, Japan). The percentage linear expansion of the treated foam glass-ceramic sample was determined using Equation (3):

LE (%) = (D_f − D_i)/D_i × 100 (3)
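To make Equations (1)-(3) concrete, the short R sketch below evaluates them for a single pellet. The weights and the post-treatment diameter are assumed, illustrative values (chosen so that the bulk density matches the 1.215 g/cm3 reported below for 1 wt % ES and the expansion is close to the maximum reported value); they are not measurements from the study.

```r
# Equation (1): bulk density from Archimedes weighing (illustrative weights)
w_air   <- 1.215   # weight of the pellet in air (g)    -- assumed value
w_water <- 0.215   # weight of the pellet in water (g)  -- assumed value
rho_w   <- 1.0     # density of water (g/cm^3)
rho_b   <- w_air / (w_air - w_water) * rho_w   # -> 1.215 g/cm^3

# Equation (2): porosity from bulk and true (powder) density
rho_t    <- 2.5                        # true density used in the study (g/cm^3)
porosity <- (1 - rho_b / rho_t) * 100  # -> 51.4 %

# Equation (3): linear expansion from pellet diameters
d_i <- 13.10                           # as-pressed pellet diameter (mm)
d_f <- 23.23                           # diameter after heat treatment (mm) -- assumed, ~77 % expansion
le  <- (d_f - d_i) / d_i * 100
```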
X-ray diffraction (XRD) was performed on an X'Pert PRO MPD diffractometer (PANalytical/Philips PW 3040/60; Almelo, The Netherlands, and Malvern, UK) with Cu-Kα radiation over the 2θ range from 20° to 80°. X'Pert HighScore Plus software (PANalytical) was used to identify the crystal phases of the foam glass-ceramics. The crystal phase content, or crystallinity (%), was evaluated using Origin 8 software based on the semicrystalline XRD pattern, according to

C (%) = A_C / (A_C + A_A) × 100 (4)

where C is the crystallinity or crystal content (%), A_C is the diffraction peak area of the crystalline phase, and A_A is the scattering peak area of the amorphous phase. The diffraction peak area of the crystalline phase was determined from the full width at half maximum (FWHM). The samples underwent FTIR analysis to identify their functional groups from spectra recorded using an FTIR Nicolet 6700 spectrometer (Thermo Nicolet, Waltham, MA, USA) over the wavenumber range 400-4000 cm−1. The morphology and microstructure of the foam glass-ceramics were analyzed using a FESEM Nova NanoSEM 230 (FEI, Hillsboro, OR, USA), and the pore diameters were evaluated with ImageJ software. The compressive strength was measured using a Universal Testing Machine (Instron 5566, Norwood, MA, USA) with a 10 kN compression load and a crosshead speed of 0.5 mm/min. Before testing, the inhomogeneous surfaces of the foam glass-ceramics were refined using a hacksaw and SiC abrasive paper, and all samples were cut to a standardized disc size of ~14.0 mm × 8.0 mm. The compressive strength was determined by stress-strain curve analysis, and the average of each sample set (n = 5) was recorded. Composition of SLS Glass and ES The precursor SLS glass and ES were each chemically analyzed using wavelength dispersive X-ray fluorescence (WD-XRF). The elemental analysis was reported in oxide form, as tabulated in Table 1. The chemical composition of the SLS glass waste in this research was similar to that of typical SLS glass bottles, and the ES used as a foaming agent mainly contained calcium carbonate (CaCO3). From Table 1, SiO2, CaO, and Na2O accounted for about 95 wt % and were the main constituents of the precursor glass, while the remaining oxides were Al2O3 and MgO. The ES powder underwent TGA analysis to determine the weight-loss decomposition of calcium carbonate (CaCO3) to carbon dioxide (CO2) and calcium oxide (CaO) in a 1:1 molar ratio. Figure 2 illustrates the TGA analysis of the ES, showing the mass loss of the calcium carbonate. In the first stage, the calcium carbonate lost about 6% of its mass by 680 °C; in the second stage, the mass loss intensified, with about 41% released as carbon dioxide (CO2) and the remaining 53% retained as calcium oxide (CaO) at 810 °C. The percentage mass of calcium oxide agreed with the XRF analysis shown in Table 1. Above 810 °C, no further decomposition of calcium carbonate occurred. The weight loss of the carbonate relative to temperature in this research is similar to a previous study, which states that the decomposition temperature of eggshell is in the range of 700-850 °C [14]. Therefore, 800 °C was selected as the sintering temperature in this research.
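The two-stage TGA result can be cross-checked against the stoichiometry of CaCO3 decomposition; the sketch below computes the theoretical mass split, which is broadly consistent with the roughly 41% CO2 loss and 53% CaO residue reported above once the ~6% first-stage loss is taken into account.

```r
# Theoretical mass balance for CaCO3 -> CaO + CO2 (molar masses in g/mol)
m_caco3 <- 100.09
m_cao   <- 56.08
m_co2   <- 44.01
frac_cao <- m_cao / m_caco3 * 100   # ~56 % of the starting mass remains as CaO
frac_co2 <- m_co2 / m_caco3 * 100   # ~44 % of the starting mass is lost as CO2
round(c(CaO = frac_cao, CO2 = frac_co2), 1)
```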
Physical and Mechanical Analysis The bulk densities of the foam glass-ceramics for 1, 3, 6, and 9 wt % ES were 1.215, 0.326, 0.421, and 0.822 g/cm3, respectively. The trend for the bulk density was similar to that of the compressive strength values. Figure 3 indicates the trend of bulk density and porosity of the foam glass-ceramics at different ES contents. The porosity was calculated with a constant powder density of ρ_powder = 2.5 g/cm3 and was inversely proportional to the bulk density. Figure 3 shows that the minimum density was achieved with 3 wt % ES, with 87% porosity. Meanwhile, the maximum density was 1.215 g/cm3, from 1 wt % ES, with the lowest porosity of 51.4%. These values are in the range of commercial foam glass, which is 0.1-1.2 g/cm3 [15]. Linear expansion describes the change in the size and dimensions of a sample; a higher linear expansion percentage therefore corresponds to a larger sample volume. The linear expansion curve shown in Figure 4 indicates that the linear expansion values of the 1 wt % and 9 wt % ES samples differ only insignificantly. The maximum linear expansion, 77.33%, was obtained from 3 wt % ES, while the minimum linear expansion, 59.62%, was obtained from 9 wt % ES. Therefore, a decrease in the volume of the foam glass-ceramic leads to a higher density. The differences in density were caused by the different ES contents: a higher ES content produces a larger amount of crystallites and hence increases the viscosity of the system [14].
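As a quick cross-check, the porosities quoted above follow from Equation (2) and the reported bulk densities; the sketch below reproduces the 51.4% and 87% values (the porosities for 6 and 9 wt % ES are implied by the same formula rather than quoted in the text).

```r
# Porosity from Equation (2) using the reported bulk densities and rho_t = 2.5 g/cm^3
rho_t <- 2.5
rho_b <- c(`1 wt%` = 1.215, `3 wt%` = 0.326, `6 wt%` = 0.421, `9 wt%` = 0.822)
porosity <- (1 - rho_b / rho_t) * 100
round(porosity, 1)   # 51.4, 87.0, 83.2, 67.1 (%)
```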
The compressive strength of the foam glass-ceramics was determined by stress-strain curve analysis, as shown in Figure 5. From the stress-strain curve, the ultimate strength point was determined just before cracking of the samples [16,17]. The curve for 3 wt % ES is barely visible because of its minimal strength of 0.04 MPa. The maximum compressive strength, 0.81 MPa, was obtained from 9 wt % ES. Figure 6 shows the relation between porosity and compressive strength for the different foaming agent contents. From the graph, the compressive strength decreased drastically from 1 wt % to 3 wt % ES and increased slightly with further addition of ES to 6 wt % and 9 wt %. The lowest strength, 0.04 MPa, was obtained from 3 wt % ES, which had the highest porosity, indicating that the low strength is caused by the high porosity attributed to the expansion of the samples. Meanwhile, there was a significant difference in compressive strength between 1 wt % and 3 wt % ES. An insufficient amount of CO2 gas may have been generated from 1 wt % ES; in fact, a high SiO2 content can cause a lack of liquid phase and thus no expansion [18].
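For orientation, the reported strengths can be converted into approximate failure loads using the disc geometry given in the methods (~14.0 mm diameter), on the assumption that the quoted strength is simply the load divided by the disc cross-sectional area; the values below are illustrative estimates, not reported data.

```r
# Convert reported compressive strengths (MPa) into approximate failure loads (N),
# assuming stress = load / cross-sectional area of the ~14.0 mm diameter disc.
d_mm     <- 14.0
area_mm2 <- pi * (d_mm / 2)^2                   # ~153.9 mm^2
sigma    <- c(`3 wt%` = 0.04, `9 wt%` = 0.81)   # reported strengths (MPa = N/mm^2)
load_N   <- sigma * area_mm2                    # ~6 N and ~125 N
round(load_N, 1)
```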
Structural Analysis As discussed for the physical properties of the foam glass-ceramics above, the findings can be verified through microstructure analysis, as shown in Figure 7, which presents the distribution of closed pores in the foam glass-ceramics at different ES contents. Figure 7a shows that small pores appeared and were homogeneously distributed; the average pore diameter was 682 µm. Increasing the ES content to 3 wt % gives a significant difference in pore size, as shown in Figure 7b: the pore diameters from 3 wt % ES were distributed in the range of 870-1500 µm. The pores revealed for 6 wt % ES, shown in Figure 7c, were not significantly different in distribution from those of 3 wt % ES. However, the microstructure shows an inhomogeneous pore distribution, with small pores of 250-350 µm diameter alongside large pores of 861-1200 µm diameter. This phenomenon occurs because of the high internal pressure inside the pores; indeed, a high content of foaming agent increases the gas pressure, which outweighs the strength of the pore walls, thus creating open pores at the surface [19].
Moreover, previous research using banana leaves as a foaming agent stated that samples containing a high content of banana leaves could cause the release of gases, with the result that pore walls break and pore coalescence occurs [20]. Meanwhile, on increasing the amount of ES to 9 wt %, a decrease in pore size to 508-700 µm diameter can be seen, as shown in Figure 7d, owing to the high viscosity of the system; the change in viscosity is influenced by the crystallization process inside the glass melt [21]. A previous study using coal fly ash and CaCO3 revealed that homogeneously sized pores were obtained with 6% calcium carbonate [22]. Figure 8 shows the XRD patterns of 1 wt %, 3 wt %, 6 wt %, and 9 wt % ES at 800 °C. There was a sharp peak of cristobalite (SiO2, 96-900-8228) at 1 wt % ES because of the high SiO2 content, together with a little growth of wollastonite (CaSiO3, 96-900-5778), giving 33.26% crystallinity. The percentage crystallinity of the semicrystalline phase was determined and evaluated: the amorphous region exists as a broad hump in the range of 21-35° 2θ, while the crystalline phase is formed on top of the amorphous phase. The observed crystalline peaks were compared with reference patterns from the Crystallography Open Database (COD). The peaks found in this research were cristobalite (SiO2, 96-900-8228), located at ~22° 2θ, and wollastonite (CaSiO3, 96-900-5778), located at 27°, 29°, 30°, and 32° 2θ. On increasing the ES content to 3 wt %, 6 wt %, and 9 wt %, the crystallinity was 37.89%, 38.19%, and 66.80%, respectively. The high crystal content at 9 wt % contributes to the high viscosity of the glass melt and consequently affects the foam expansion. In the FTIR spectra, homogeneous silica is formed only by Si-O-Si bonds, whereas heterogeneous silica is formed with bonds other than Si-O-Si. The asymmetric stretching of the silica network increases with the presence of heteroatoms, while the symmetric silica bonding decreases. The Ca-O band at 600 cm−1 was slightly more intense for 6 wt % and 9 wt % ES than for 1 wt % and 3 wt % ES, owing to the growth of CaSiO3 found in the XRD analysis; meanwhile, the symmetric Si-O band (νs) at ~750 cm−1 decreased because of heteroatom Ca-Si-O bonding. The asymmetric Si-O band (νa), located in the range of 800-1250 cm−1, is the main strong peak, rich in phase, because of the presence of silica in the SLS glass powder. Moreover, the broader band at 950 cm−1 was caused by the superposition of absorption peaks related to the stretching vibrations of bridging and non-bridging Si-O chemical bonds [24].
Homogeneous silica is formed only by Si-O-Si bonds, whereas heterogeneous silica contains bonds to atoms other than silicon. The asymmetric stretching of the silica network increases in the presence of heteroatoms, while the symmetric silica bonding decreases. The Ca-O band at 600 cm−1 was slightly more intense for 6 wt % and 9 wt % ES than for 1 wt % and 3 wt % ES, owing to the growth of CaSiO3 found in the XRD analysis, whereas the symmetric stretching band (νs) of SiO2 at ~750 cm−1 decreased because of bonding to the heteroatom, Ca-Si-O. The asymmetric stretching band (νa) of Si-O, located in the range of 800-1250 cm−1, is the main strong peak and is rich in this phase because of the silica present in the SLS glass powder. Moreover, the broader band at 950 cm−1 was caused by the superposition of absorption peaks related to the stretching vibrations of bridging and non-bridging Si-O chemical bonds [24]. Consequently, this promotes the migration of Ca ions and changes the structural silica network. The band at 1440-1450 cm−1 is assigned to the carbonate (C-O) group. As the ES content increased to 9 wt %, this band became strongly intense at 1450 cm−1 compared with the bands at 1-6 wt % ES. This is attributed to the partial decomposition of calcium carbonate from the ES to carbon dioxide at 810 °C, as discussed in the TGA analysis. Only a small amount of calcium carbonate remains at high ES content, since a previous study reported that calcium carbonate is undetectable above 824 °C [25]. Lastly, a slight peak in the range of 2050-2250 cm−1 represents ambient carbon dioxide, CO2, the main gas responsible for foaming in this work [26].

Conclusions
In the present work, foam glass-ceramics with low bulk density and relatively high compressive strength were produced from a mixture of waste SLS glass and ES as a foaming agent by heat treatment at 800 °C for 60 min. The ES content affects the bulk density, porosity, linear expansion and compressive strength; foam glass-ceramics with low bulk density (0.421 g/cm3) and high compressive strength (0.22 MPa) were obtained at 6 wt % ES.
In terms of structural properties, increasing the ES content led to a smaller number of pores because of the high densification of the glass matrix. Moreover, a high content of crystal phases such as wollastonite hinders foam formation and affects the properties of the foam glass-ceramics, which are influenced by the crystallization process in the glass melt. In a nutshell, preparing foam glass-ceramics as porous structures derived from waste materials benefits both the economy and the environment, and the material is useful for various industrial applications depending on the type of porous structure required.
6,948.2
2020-08-05T00:00:00.000
[ "Materials Science", "Environmental Science" ]
Nanocomposites of Polyaniline and Cellulose Nanocrystals Prepared in Lyotropic Chiral Nematic Liquid Crystals Stable lyotropic chiral nematic liquid crystals (N-LCs) of cellulose nanocrystals (CNs) were prepared via hydrolysis using sulfuric acid. The lyotropic N-LCs were used as an asymmetric reaction field to synthesize polyaniline (PANI) onto CNs by in situ polymerization. As a primary step, we examined the mesophase transition of the N-LCs of CNs suspension before and after in situ polymerization of aniline (ANI) by polarizing optical microscopy.The structure of nanocomposites of PANI/CNs was investigated at a microscopic level using Fourier transform infrared spectroscopy and X-ray diffraction. Influence of the CNs-to-ANI ratio on the morphology of the nanocomposites was also investigated at macroscopic level by scanning electron and transmission electron microscopies. It is found that the weight ratio of CNs to aniline in the suspension significantly influenced the size of the PANI particles and interaction between CNs and PANI. Moreover, electrical properties of the obtained PANI/CNs films were studied using standard four-probe technique. It is expected that the lyotropic N-LCs of CNsmight be available for an asymmetric reaction field to produce novel composites of conjugated materials. Introduction Conductive polymers have been extensively investigated and widely used in such products as electrolytic capacitors and secondary batteries [1].Polymers have become essential for lightweight, high performance batteries used in notebook computers, cellular phones, and other portable equipment.Conductive polymers are also being studied for their use as materials in molecular devices, called the ultimate electronic devices [2].Conductive polymers offer the promise of achieving next-generation displays and energy sources [3].Thus, although many conductive polymers have been developed for various applications, polyaniline (PANI) has received great attention due to its simple and facile synthesis, good environment stability, and controllability. Nanocomposites of conducting polymer and cellulose have attracted much attention because it has recently been shown that it is possible to manufacture redox polymer-based electrodes and batteries with high capacities and very good recycling performances [4][5][6]. Cellulose is the most abundant and renewable biopolymer in the world, which is widely distributed in many higher plants, some marine animals (e.g., tunicates), algae, fungi, bacteria, and so on, so it is considered as a prime candidate for replacing oil-based feedstock and an almost inexhaustible source of the raw material for the increasing demand of environmentally friendly and biodegradable products [7][8][9].The higher plants such as cotton particularly possess high cellulose content and the CNs of the plants with diameters in the range of 5-20 nm and aspect ratio of about 1 to 100 times are commonly produced through acid hydrolysis [10,11].The CNs possess several advantages such as low cost, low density, nontoxicity, renewable nature biodegradability, and in particular they can form stable lyotropic N * -LCs phase above a critical concentration [12][13][14].It has been reported that the N * -LCs phase was used as a hard template to synthesize other new materials with chiral nematic structures [15,16].In this thesis, we attempted to prepare novel nanocomposites of conductive polymer and CNs in the cellulose N * -LCs reaction field for in situ asymmetric polymerization. 
The composites of PANI and cellulose have been received great attention in recent years due to their interdisciplinary character with new properties and applications [17][18][19].Yin et al. [17] prepared the PANI-cellulose composites using microcrystalline cellulose extracted from corn straw powder as raw material, and mainly investigated the electrical properties of the composites activated with various acids.Mattoso et al. [18] successfully produced electrically conductive nanocomposites made from PANI and cellulose nanofibrils (CNFs) by the in situ polymerization of aniline onto CNF.Shi et al. [19] developed a new route to construct supramolecular complex of PANI and cellulose through the noncovalent interaction.The composite films displayed highly homogeneous structure and improved mechanical properties as a result of good miscibility between PANI and cellulose.The electrical conductivity of the composite films could be enhanced significantly via doping of acid and the carbon black.Akagi [15] reported the polymerization of acetylene in an asymmetric reaction field constructed with chiral nematic LCs and showed that polyacetylene films formed from helical chains and fibrils can be synthesized.However, as far as we know, the systematic research on the preparation of the nanocomposites of PANI and CNs in the lyotropic N * -LCs reaction field is rarely exploited.The purpose of this study is to prepare stable lyotropic N * -LCs of CNs to be used as an asymmetric LC reaction field to synthesize PANI onto CNs by in situ polymerization.As a primary step, we examined the mesophase transition of the N * -LCs of CNs suspension before or after in situ polymerization of aniline by polarizing optical microscopy (POM).The structure of nanocomposites of PANI/CNs was investigated at a microscopic level using Fourier transform infrared spectroscopy and X-ray diffraction.Influence of the CNs-to-aniline ratio on the morphology of the nanocomposites was also investigated at macroscopic level by scanning electron and transmission electron microscopies.Moreover, electrical properties of the obtained PANI/CNs films were studied using standard fourprobe technique. Preparation of CNs Suspension. CNs suspension was prepared by acid hydrolysis according to [10,11].The cotton cellulose (Whatman CF11) was mixed with 64 wt% sulfuric acid, stirred at 45 ∘ C for 3 h, and diluted with cold distilled water to stop the reaction.Then the acid was removed through centrifugation and prolonged dialysis with distilled water until the PH outside dialysis bag was neutral.The sample thus obtained was concentrated by osmotic compression using dialysis bag with molecular weight cutoffs of 14000 and a 15 wt% poly(ethylene glycol) ( = 20000) solution, and the nanocrystal aggregates were disrupted by sonication about 10 minutes under ice-water bath.The cellulose suspension with desired concentration was obtained through dilution with distilled water.The concentration of the sample suspension was measured gravimetrically before and after evaporation of the water. Preparation of PANI/CNs Nanocomposites. 
A 1 M aqueous hydrochloric acid (HCl) solution was prepared by adding concentrated HCl to distilled water under stirring and cooling it to about 0 °C in a freezer. Then 0.25 g of ammonium peroxydisulfate (APS) and 0.1 mL of aniline (ANI) were dissolved in 50 mL and 67 mL of 1.0 M HCl solution, respectively. The APS/HCl solution was added to 50 mL of the CNs suspension. The mixture was then brought to the desired temperature and the ANI/HCl solution was added to start the polymerization of ANI. The polymerization was carried out under magnetic stirring in an ice-water bath. Before the start of the polymerization, the prepared suspension was white. The color of the suspension then turned gradually from white to blue and finally to dark green, after which it stabilized; dark green is the characteristic color of PANI in the emeraldine oxidation state. Excess HCl, APS, and byproducts were removed by centrifugation and dialysis for 3 days. Nanocomposite suspensions of different concentrations were diluted with distilled water and dispersed by sonication. The nanocomposite films were prepared by casting the as-doped PANI/CNs suspensions onto microscope glass slides and drying at room temperature for 48 h. The thickness of the dried conducting polymer composites ranged from 0.05 to 0.1 mm depending on the polyaniline concentration relative to CNs.

The structure and morphologies were characterized using a scanning electron microscope (SEM, JEOL, JSM-6700F), a transmission electron microscope (TEM, JEOL, JEM-2100), and X-ray diffraction (XRD, Rigaku Corporation, D/MAX-2500/PC). None of the samples were stained. The ultraviolet-visible (UV-Vis) spectra of the nanocomposites in doped and dedoped states were recorded with a UV-Vis spectrophotometer (SHIMADZU, UV-2400PC). Fourier transform infrared (FT-IR) spectra of the PANI/CNs nanocomposites, PANI, and cellulose were recorded with an FT-IR spectrometer (Bruker Corporation, VERTEX70) by grinding the film thoroughly with KBr powder and pressing it into pellets. The optical textures of the CNs and PANI/CNs suspensions were observed under a polarizing optical microscope (POM, Jiangnan Optics XP213) equipped with a digital camera. The conductivities of the PANI/CNs films were measured by the four-probe technique; each sample was measured five times and the average value was taken.

Results and Discussion
Figure 1 shows POM images of CNs and PANI/CNs suspensions. A fingerprint texture characteristic of the N*-LCs phase was observed (Figure 1(a)). The distance between the striae corresponds to half the helical pitch of the N*-LCs. Note that as the concentration of CNs in the suspension increases, the helical pitch observed by POM decreases gradually. The pitch was about 15 µm, which is consistent with the reported results [11]. After in situ polymerization of ANI in the N*-LCs, the fingerprint texture could no longer be observed by POM, as shown in Figure 1(b). However, the fingerprint texture was still maintained after a drop of ANI/HCl solution was added to the N*-LCs suspension for several days, and the pitch became smaller than 7 µm (Figure 1(c)). This is probably because the N*-LCs are disrupted by the introduction of "impurities", namely the monomer, catalyst, or the resulting polymer, so the domain size of the chiral nematic phase decreased. The PANI/CNs films, which are reasonably flexible and strong, were produced in the doped state using HCl, as shown in Figure 1(d). The FT-IR spectra of cellulose, PANI, and the PANI/CNs film are shown in Figure 2.
The peaks near 3411, 3270, 2900, 1060, 710 cm −1 in Figure 2(a) were associated with cellulose. A broad band at 3411 cm −1 was assigned to the stretching of hydroxyl groups.The peaks at 2900 and 1060 cm −1 arose from the C-H stretching, and the C-O-C pyranose ring skeletal vibration, respectively.The peaks at 3270 and 710 cm −1 were attributed to the phase of cellulose.Figure 2(b) was the FT-IR spectrum of PANI in emeraldine oxidation state.The peaks of 1566 and 1487 cm −1 originated from the stretching vibration of N=Q=N and N-B-N structures, respectively (B and Q represent benzenoid and quinoid moieties in the PANI chains), and the peaks at 1299 and 1140 cm −1 were assigned to the stretching of the C-N band and the aromatic C-H in-plane bending.Moreover, these characteristic peaks of CNs and PANI were also found in those of the PANI/CNs composites.The results indicated that the in situ polymerization of ANI in the N * -LCs still retained their chemical structures well. The UV-Vis spectra of the doped and dedoped forms of the PANI/CNs nanocomposites showed that after dedoping the peak of excitonic transition was shifted from 622 nm to 764 nm as shown in Figures 3(a) and 3(b), indicating higher electronic transition energy.After redoping the sample with HCl, it is found that the peaks were similar to those in original suspension, and this confirmed that this procedure between doping state and dedoping state was reversible. The influence of the ratio of CNs to ANI by weight on the structure and morphologies of the PANI/CNs nanocomposites was investigated by SEM, as shown in Figure 4.As shown in Figure 4(a), the length of rod-like CNs ranges from 100 to 200 nm and width average is 20 nm; the axial ratio is 5 to 10.All of the nanofibrils form a nematic phase which performs as the finger-print texture on the microdomain.And after polymerization, the micrometer-scale domain of the nematic phase decreased to nanometer-scale.It is very interesting to note that the ratio of CNs to ANI significantly affected the size of PANI particles and interaction between PANI and CNs.The size of PANI particles was about 200 nm and the PANI particles were separated from CNs surface when the ratio of CNs to ANI was 5.69 (Figure 4(b)), while the PANI particles were significantly decreased to about 20 nm in size and absorbed on the surface of CNs when the ratio of CNs to ANI was 56.9 (Figure 4(c)).The hydrogen bands between hydroxyl groups of CNs and amine groups of aniline might serve as a traction force to assist the growing of the PANI over cellulose and avoid the large-scale aggregate formation [20].The increase in the ratio of CNs to ANI prevented the primary aggregation of PANI particles and the CNs acted as the separant and dispersant during the in situ polymerization of ANI which eventually decreased the PANI size [21]. 
In order to further support this interpretation, we used TEM to characterize the PANI/CNs nanocomposites. Figure 5(a) shows that the rod-like CNs are about 200 nm in length and about 20 nm in width, consistent with the SEM results. Interestingly, much smaller particles of about 3 nm were observed on the surface of the CNs, as shown in Figure 5(b); according to previous work [22-24], these are likely PANI nanoparticles. PANI particles were deposited on the surface of the CNs with little PANI aggregation, especially under the optimized reaction conditions. Being lighter, smaller PANI particles are more easily adsorbed on the cellulose surface. This demonstrates that the nanofibrils work as a good template for polymerization.

Figure 6 shows the crystal structure of the CNs and PANI/CNs films by XRD. Both films exhibited three peaks at 2θ = 14.84°, 16.44°, and 22.74°, assigned to the (1-10), (110), and (200) planes of crystalline cellulose I, respectively [25]. The results also indicate that the crystallinity of the composite film was lower than that of the CNs film, which could be due to the amorphous PANI and the association of PANI with CNs.

Conductivities as high as 10−2 S/cm, in the semiconductor range, were obtained for the PANI/CNs films, as shown in Table 1, and the conductivity generally decreased as the CNs-to-ANI ratio increased. The results are consistent with those reported for PANI and cellulose nanofibrils [18] when the polymerization was carried out under diluted conditions similar to those used in the present work.

Conclusions
The PANI/CNs nanocomposites were prepared for the first time by in situ polymerization in the lyotropic N*-LCs of CNs. UV-Vis spectra of the PANI/CNs nanocomposites verified that the PANI was in the doped emeraldine salt form. It was found that the PANI particle size and the conductivity of the nanocomposites decreased with increasing CNs-to-ANI ratio, and that the interaction between CNs and PANI was affected by this ratio. Although the N*-LCs mesophase was unstable during the in situ polymerization of ANI, the goal of this work was to develop a promising method for in situ asymmetric polymerization of ANI in the N*-LCs of CNs and to expand the possible applications of PANI/CNs nanocomposites. Many open questions, such as the stability and controllability of the helical sense and twisting power of the lyotropic N*-LCs and the possible influence of the N*-LCs used for preparing PANI/CNs, remain to be answered experimentally and theoretically to fully understand the properties of the PANI/CNs nanocomposites.

Figure 1: POM images of the cellulose suspension after hydrolysis (a), the suspension of PANI/CNs nanocomposites (b), the cellulose suspension after adding a drop of aniline/HCl (c), and a photograph of a flexible film of PANI/CNs doped with HCl (d).
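For context on the conductivities reported in Table 1, a four-probe reading is usually converted to conductivity through the sheet resistance. The sketch below assumes the common collinear thin-film correction factor pi/ln(2) (valid when the film is much thinner than the probe spacing); the paper does not state which geometric correction was applied, and the numbers in the example are invented, chosen only to fall in the reported 10−2 S/cm range.

```r
# Hypothetical four-point-probe conversion for a thin film:
# sheet resistance Rs = (pi / ln 2) * V / I, resistivity rho = Rs * t,
# conductivity sigma = 1 / rho (S/cm when thickness is given in cm).
four_probe_conductivity <- function(V, I, thickness_cm) {
  Rs  <- (pi / log(2)) * (V / I)   # ohm per square
  rho <- Rs * thickness_cm         # ohm * cm
  1 / rho                          # S/cm
}

four_probe_conductivity(V = 0.10, I = 1e-4, thickness_cm = 0.0075)  # ~0.03 S/cm
```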
3,395
2013-04-09T00:00:00.000
[ "Materials Science" ]
Stratifying risk of disease in haematuria patients using machine learning techniques to improve diagnostics Background Detailed and invasive clinical investigations are required to identify the causes of haematuria. Highly unbalanced patient population (predominantly male) and a wide range of potential causes make the ability to correctly classify patients and identify patient-specific biomarkers a major challenge. Studies have shown that it is possible to improve the diagnosis using multi-marker analysis, even in unbalanced datasets, by applying advanced analytical methods. Here, we applied several machine learning algorithms to classify patients from the haematuria patient cohort (HaBio) by analysing multiple biomarkers and to identify the most relevant ones. Materials and methods We applied several classification and feature selection methods (k-means clustering, decision trees, random forest with LIME explainer and CACTUS algorithm) to stratify patients into two groups: healthy (with no clear cause of haematuria) or sick (with an identified cause of haematuria e.g., bladder cancer, or infection). The classification performance of the models was compared. Biomarkers identified as important by the algorithms were also analysed in relation to their involvement in the pathological processes. Results Results showed that a high unbalance in the datasets significantly affected the classification by random forest and decision trees, leading to the overestimation of the sick class and low model performance. CACTUS algorithm was more robust to the unbalance in the dataset. CACTUS obtained a balanced accuracy of 0.747 for both genders, 0.718 for females and 0.803 for males. The analysis showed that in the classification process for the whole dataset: microalbumin, male gender, and tPSA emerged as the most informative biomarkers. For males: age, microalbumin, tPSA, cystatin C, BTA, HAD and S100A4 were the most significant biomarkers while for females microalbumin, IL-8, pERK, and CXCL16. Conclusions CACTUS algorithm demonstrated improved performance compared with other methods such as decision trees and random forest. Additionally, we identified the most relevant biomarkers for the specific patient group, which could be considered in the future as novel biomarkers for diagnosis. Our results have the potential to inform future research and provide new personalised diagnostic approaches tailored directly to the needs of the individuals. 
Introduction Haematuria is defined as the visible presence of red blood cells (RBCs) in urine (gross haematuria) or at least three RBCs in highpowered field upon microscopic evaluation of a urine sample.Prevalence of microhaematuria among the general population is relatively high.It was estimated that in 2.4 -31.1% of total urine samples, RBCs are detectable in concentrations exceeding a fixed reference threshold (1)(2)(3)(4).A haematuria patient population can be heterogenous with differences in age, gender, risk factors, geographical diversity etc., and it can have very different aetiology, including the presence of genitourinary malignant diseases.While most commonly cases of haematuria are non-malignant (e.g., infection, kidney or bladder stones, benign prostate enlargement, menstrual blood contamination), first-stage assessment should always be focused on the physical examination and collection of patient history, current treatment (e.g., anticoagulants) (5), lifestyle (e.g., smoking, alcohol consumption, strenuous physical activity), occupational hazards and risk factors (6).Dipstick urine analysis can be performed to confirm or exclude some causes of haematuria, for example, infection.For non-obvious cases, further investigation should be performed.Currently, cystoscopy together with urine cytology is the gold standard for bladder cancer diagnosis.Cystoscopy is an invasive procedure which is not without risk e.g., infection, bleeding, and pain.Computed tomography (CT) urography is warranted for patients who require upper urinary tract investigation, which raises concerns of radiation exposure (7).A retrospective study by Georgieva et al. (8) compared the benefits, harms, and costs of different haematuria evaluation guidelines and showed that guidelines which missed the fewest cancers also generated the highest number of radiation-induced cancers, falsepositive cases, and diagnostic procedures costs (Table 1).They also showed that uniform CT imaging for patients is associated with a limited increase in cancer detection, high personal cost and is generally uneconomical. 
Given the high prevalence of haematuria, the numerous potential causes, and the significant human and financial costs involved, the development of non-invasive diagnostic tests, based on biomarkers from urine or blood samples, would be a major step forward.However, this presents a significant challenge.To date, only two biomarkers -nuclear matrix protein (NMP22) and bladder tumour antigen (BTA) -have been approved by the Food and Drug Administration (FDA) for the detection and monitoring of bladder cancer.Unfortunately, commercially available tests for both biomarkers have low specificity and high false-positive rates (12,13).Data shows that combining biomarker screening (NMP22) with cytology may improve patient screening (14), but current guidelines do not recommend the use of urinary tumour biomarkers or cytology in the initial evaluation of microhaematuria.To improve the diagnostic pathway, current research has focused on shifting towards a multi-biomarker approach.This approach has been proven to provide improvements in cancer detection (15, 16) while also being cost-effective in differentiating patients with benign and malignant disease (17).The complexity of the diverse causes of haematuria necessitates studies with a large number of possible biomarkers, with the associated challenge of identifying the most informative without creating false discoveries.This makes multibiomarker studies more complex and less tractable, creating a need for computational tools to generate personalised insights from the available data (18-20). Numerous studies have proved that by using advanced analytical methods, it is possible to create algorithms that can improve patient diagnosis with multiple biomarker analysis (15,21,22).Machine Learning (ML) (23,24), especially, has been able to produce unique insights using different data sources (25)(26)(27).One of the major challenges of traditional ML models is poor generalisation, due in part to low robustness to unbalanced distribution of classes within a dataset, which is a common scenario in medical data.These models pay equal attention to the majority and minority classes.As a result, they often perform poorly on the minority class, especially when the imbalance in the data is extreme (28).Data dimensionality is another major challenge for ML algorithms, especially when dealing with small datasets where the number of features exceeds the number of samples and where different types of data (e.g.continuous or categorical) are present.Non-meaningful parameters need to be separated to subtract hidden information and provide actionable insights to clinicians.This could be achieved at the level of domain experts and datadriven features that could be incorporated into the model design.The final challenge for ML, and currently a requirement for any clinical decision support system, is explainability.Explainability is a property of an AI algorithm that allows a human to understand why a particular decision was made.In practice, explainability can either be an inherent property of an algorithm, or it can be approximated by other methods.Many modern ML methods can outperform humans in certain analytical tasks (e.g., pattern recognition), but they lack explainability, so the explanation must be approximated.On the other hand, the performance of traditional explainable methods is usually inferior to modern state-of-the-art methods such as neural networks, so the trade-off between performance and explainability is a major challenge for modern clinical decision support 
systems. The Haematuria Biomarker (HaBio) dataset ( 22) is a unique collection of data illustrative of a patient population presenting with haematuria and includes an extensive range of biomarkers preselected based on literature searches and clinical experience.At the same time, HaBio presents all the challenges for ML described above.Considering the need for novel biomarker discovery for haematuria patients' stratification and ensuring the models explainability, we analysed the HaBio cohort using various ML algorithms, including the recently developed CACTUS explainable classification algorithm (29, 30).To facilitate the diagnosis procedure and provide actionable insights for clinical patient management, we have provided a selection of biomarkers that could be useful in clinical practice, along with their possible decision boundaries. HaBio cohort The HaBio Study was a three-way collaborative project between Queen's University Belfast, Northern Ireland Health Trusts and Randox Laboratories Ltd. HaBio was funded by Invest Northern Ireland and Randox Laboratories Ltd. Ethical approval was obtained from the Office for Research Ethics Committee Northern Ireland (11/NI/0164) to recruit patients who satisfied the HaBio study inclusion criteria (22).The protocol for HaBio was also reviewed by hospital review boards and was conducted according to the Standards for Reporting of Diagnostic Accuracy (STARD) (31).A total of n=677 patients were recruited to HaBio, of which n=2 patients were excluded due to incomplete data.Therefore, the complete dataset is available for n=675 patients (n=485 males and n=190 females).There are significantly more males (2.5:1 ratio of males to females) which reflect "real world" urology patterns of presentation to haematuria clinics at the time of recruitment.This observation is borne out by the large number of men with benign prostatic hyperplasia (BPH) as a cause for haematuria.Within each gender there was a 2:1 ratio of non-cancer versus cancer (males 1.9:1 (319:166); females 2.7:1 (139:51), Figure 1). Biomarker analysis At the time of recruitment, a research nurse or clinician measured each patient's height, weight and blood pressure while also recording details of medical history, lifestyle/behaviours, and occupations before collecting urine (25ml) and blood (35ml) samples.In the collected samples, 80 biomarkers previously indicated as potential biomarkers of urinary tract diseases, representing a range of biological pathways, were measured (Supplementary Materials, Table A).Patient samples were analysed in triplicate and the results were expressed as a mean ± SD. 
In the study, chosen biomarkers were analysed with several different techniques.At recruitment, patient urine samples were collected prior to cytoscopic examination and evaluated using the POC test for NMP22 (BladderChek, Alere, US).Osmolarity (mOsm) was determined using a Löser Micro-osmometer according to manufacturer's instructions (Löser Messtechnik, Berlin, Germany).Total urinary protein levels (mg/ml) were measured by Bradford assay (Pierce, Rockford, IL, USA).For multimarker analysis Biochip Array Technology was used Classical approach In the study, due to differences in the typical causes of haematuria and the prevalence of malignant diseases we analysed data separately for male and female participants, in addition to the entire cohort.In the pre-processing step, as the data were characterised by a highly skewed distribution, we performed a log transformation of the biomarker measurement results for further analysis to reduce the skewness and replaced missing data with the median value for the given biomarker.For analysis, urine and serum biomarkers were used; if the same biomarker was analysed in both serum and urine samples, serum results are indicated by the word "serum" in the biomarker name. Firstly, we performed k-means clustering to assess if analysed features could be linearly separated.For k-means clustering, we iteratively tested the number of clusters from 1 to 20 and used the silhouette width to select the best configuration.We observed that for all three data subsets, the optimal value of clusters for k-means clustering was 2, showing that the distribution of features does not follow clear macro patterns or reflect the underlying number of causes of haematuria (Figure 2).As it was not possible to distinguish the number of clusters reflecting the number of underlying classes of final diagnosis, based on clinical evaluation and experience we decided to stratify patients into two subgroups, sick and healthy.The sick population had any of the following possible causes for their haematuria: chronic kidney diseases, infection, other benign diagnosis, bladder cancer, history of bladder cancer or other types of cancer (e.g., prostate cancer, renal cell carcinoma).The healthy population included every patient with no causes identified for their haematuria. 
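The cluster-number search described above (k-means over a range of k, scored by the average silhouette width) can be sketched as below. The matrix `X`, the 25 random restarts and the Euclidean distance are assumptions made for illustration; the study reports only that cluster numbers up to 20 were tested and that k = 2 scored highest.

```r
library(cluster)  # silhouette()

# 'X' stands for the log-transformed, median-imputed biomarker matrix.
choose_k <- function(X, k_max = 20) {
  d <- dist(X)                                   # Euclidean distances
  avg_sil <- sapply(2:k_max, function(k) {       # silhouette is defined for k >= 2
    km <- kmeans(X, centers = k, nstart = 25)
    mean(silhouette(km$cluster, d)[, "sil_width"])
  })
  data.frame(k = 2:k_max, avg_silhouette = avg_sil)
}

# sil_scores <- choose_k(X)
# sil_scores$k[which.max(sil_scores$avg_silhouette)]   # reported optimum: k = 2
```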
The initial analysis included logistic regression and an assessment of balanced accuracy for each biomarker separately. For the logistic regression, we tested two approaches: a linear model and, to account for any possible non-linear relationship between the biomarkers and the outcome, a model fitted with natural cubic splines. The results of the two approaches were compared by ANOVA and the better-performing model was selected as the final regression analysis. Afterwards, we applied binary decision trees and random forest models. For both models we performed 10-fold cross-validation repeated three times. As the random forest is not an inherently explainable method, in contrast to the decision trees, we applied the local interpretable model-agnostic explanations (LIME) algorithm (32) to explain the classification process and to understand the biomarkers' influence on the final class prediction. LIME focuses on explaining the model's prediction for individual cases. It generates a new dataset consisting of perturbed samples and the corresponding predictions, and then trains an interpretable model (a regression) on this new dataset, weighted by the proximity of the sampled cases to the case of interest. Because a linear model is inherently interpretable, the fitted weights can be inspected and viewed as proxies for feature importance, and based on the proximity of the values to the perturbed data point, cut-off values for individual features can be provided.

All the analyses described in this section were performed in R (33).

CACTUS classification
To model the healthy and sick classes, we used the CACTUS algorithm (29, 30). In the first step, fully anonymized abstractions of the quantitative and qualitative biomarker data were generated by transforming the raw biomarker data into two-stage data abstractions (flips) based on receiver-operator curve (ROC) theory (34). These flips were encoded with the last letter of the label for each biomarker: up (U) abstracts raw data above, and down (D) below, the calculated cut-off values. For each biomarker, significance was determined from the node's conditional probability P(f|c_i) of the flip f, given the class c_i (sick or healthy). To assess how the conditional probability P(f|c_i) changes across the N considered classes, and to infer their importance for the classification process, ranks (R_xf) were calculated for each biomarker according to Equation 1.

To assess the accuracy of the network's patient classification, we calculated the cost function (Equation 2). For every patient in the given state "s" (sick or healthy), the cost function (C_s) was calculated based on the corresponding node significance (s_s,i) of each biomarker (x_i). The cost function with the greater value determined whether the patient was classified as sick or healthy. The obtained classifications were compared with the real diagnosis groups, marked as true positive (TP), false negative (FN), true negative (TN) or false positive (FP), and used to calculate specificity (Equation 3.a), sensitivity (Equation 3.b) and accuracy (Equation 3.c) for all tested models. Due to the much higher number of sick patients in our study groups, we used balanced accuracy (a metric which is robust to unbalanced datasets) to assess model performance.
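The rank and cost functions (Equations 1 and 2) are specific to CACTUS and are not restated here; Equations 3.a-3.c and the balanced accuracy, however, are the standard confusion-matrix quantities and are sketched below. The function name and the placeholder class labels are ours and are not part of the CACTUS implementation (29, 30).

```r
# Standard definitions behind Equations 3.a-3.c plus balanced accuracy.
# 'truth' and 'pred' are character/factor vectors with values "sick"/"healthy"
# (placeholders, not the study data).
classification_metrics <- function(truth, pred, positive = "sick") {
  TP <- sum(pred == positive & truth == positive)
  TN <- sum(pred != positive & truth != positive)
  FP <- sum(pred == positive & truth != positive)
  FN <- sum(pred != positive & truth == positive)
  sensitivity <- TP / (TP + FN)                     # Equation 3.b
  specificity <- TN / (TN + FP)                     # Equation 3.a
  accuracy    <- (TP + TN) / (TP + TN + FP + FN)    # Equation 3.c
  c(sensitivity = sensitivity,
    specificity = specificity,
    accuracy = accuracy,
    balanced_accuracy = (sensitivity + specificity) / 2)
}
```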
Comparison of tested models
We evaluated the performance of each model using the χ2 test, which assesses whether the performance of the model is better than random chance. To compare the performance of the tested models, we performed a pairwise comparison of the model results using the McNemar test on the classification results, with a significance level of 0.05.

Results
K-means clustering was performed to assess how linearly separable the results were and whether it is possible to distinguish the final diagnostic group from the structure of the clusters. To select the best number of clusters, we used the silhouette score, for which the highest value was obtained for two clusters in all analysed data subsets (k = 2). To visualise how patients with different final diagnosis groups are distributed within the clusters, we plotted individual points labelled by final diagnosis in Figure 2. The graph shows that different final diagnosis groups fall within the same clusters, with significant overlap between clusters. Assessment of the balanced accuracy of the logistic regression showed that single biomarkers were not specific enough to discriminate between healthy and sick patients (Supplementary Materials, Tables B-D). For both genders, the highest accuracy was obtained for urine cystatin C (0.580, cubic spline), soluble tumour necrosis factor receptor I (sTNFRI, 0.572, linear model), and progranulin (0.516, linear model). For females, the three biomarkers with the best performance were phospho-extracellular signal-regulated kinase (pERK, 0.673, linear model), microalbumin (0.672, linear model) and chemokine (C-X-C motif) ligand 16 (CXCL16, 0.667, linear model). In the male data subset, which is highly unbalanced, no single biomarker gave an accuracy higher than 0.5.

Decision trees provided simple rule-based models based on a maximum of 14 biomarkers (including gender) for patient classification. The most complicated tree was built for the dataset with patients of both genders. The first branch was built on male gender, so the subsequent branches were gender specific. The male and female decision trees were similar to the branches of the tree built for both genders, with some additional branches. In the case of males, stratification was improved by adding decision boundaries based on serum hyaluronic acid (HAD) and pERK levels, allowing additional healthy individuals to be distinguished. In the female decision trees, the situation was reversed: the classification was performed with a smaller number of features, and some of the branches of the both-genders tree, such as vascular endothelial growth factor (VEGF), were pruned. The highest balanced accuracy of decision tree classification was obtained when both genders were analysed together (0.640, Figure 3A), even though the first split was on gender. Separate stratification for males and females gave lower balanced accuracy (0.551, Figure 3B and 0.623, Figure 3C), and better significance and specificity were obtained for females as that data subset was more balanced (Table 2).
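The pairwise comparison reported later in Table 4 uses McNemar's test on the paired classification results. A minimal sketch of that comparison is shown below; the vectors and the example call are placeholders, not study outputs.

```r
# Paired comparison of two classifiers on the same patients (as in Table 4):
# cross-tabulate whether each model classified each patient correctly and
# apply McNemar's test at alpha = 0.05.
compare_models <- function(truth, pred_a, pred_b) {
  correct_a <- pred_a == truth
  correct_b <- pred_b == truth
  mcnemar.test(table(correct_a, correct_b))
}

# compare_models(diagnosis, pred_cactus, pred_rf)   # hypothetical call
```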
The effectiveness of the random forest classification was also insufficient to discriminate between sick and healthy individuals (Table 2).The highest value of balanced accuracy was obtained for the female data subset (0.665), lower for both genders (0.627) and the lowest for the male subset (0.512, not statistically significant, pvalue = 0.135).The corresponding values of sensitivity and specificity showed a bias towards the more prevalent class (sick), which is most visible in the case of the male data subset (sensitivity: 1.00, specificity: 0.024), showing that all cases of sick individuals were correctly classified and only one healthy individual was correctly classified.We also extracted the top 10 features for random forest classification (Figure 4A).As can be seen in the graph, two of the most important features are gender-specific (serum total prostate specific antigen (tPSA) and gender), justifying the need to separate female and male cases for analysis purposes.Several biomarkers such as: microalbumin, osmolarity, sTNFRI, cystatin C, CXCL16, pERK, progranulin, and patient age were of high importance for two or three data subsets.Although the biomarkers were common to the data subsets, as the LIME analysis showed, the decision boundaries (levels of the biomarkers) and their contribution (weights) to the final model were different.For example, for age, which is one of the most important characteristics, the lowest cut-off value in the case of both genders was 60 and has a slightly higher influence on the classification of the healthy category than the sick category; for the male data subset only, this cut-off value was 61, with the same influence on the classification.In the case of all analysed data subsets, there were some biomarkers within certain ranges that had a clear positive or negative influence on the classification (Figure 4B, and Table 3).CACTUS classification gave a higher balanced accuracy than the models described above for all analysed data subsets (0.747 both genders, 0.803 males, 0.718 females).Moreover, the obtained values of sensitivity and specificity were more balanced, although the sensitivity was lower, which indicated a higher false negative rate.The CACTUS specificity was higher than the specificity for decision trees and random forests, showing that the classification was not biased towards the predominant group (sick individuals) (Table 2). The 10 biomarkers with the highest CACTUS ranks for sick and healthy individuals in all groups is shown in Figure 5.The ranks provide information about the average difference between the classes (sick and healthy) for the probability of the biomarkers being in each state ('U' or 'D'), meaning that the higher the rank value, the greater the difference in at least one of the probabilities.Like random forest, CACTUS confirmed the need to stratify patients into subgroups based on gender, as gender was indicated by CACTUS as the most important factor for the whole population studied.Additionally, the second most important biomarker was serum tPSA, a gender-specific biomarker of prostate health and therefore important in the classification process.Microalbumin was reported as the third most important biomarker for both genders, but also received a high score in the gender stratified analysis (second for men and first for women). 
In the male population, age is the highest ranked factor and was not reported in any of the other subsets analysed.In addition, as in the random forest analysis, several biomarkers such as serum tPSA, microalbumin, CXCL16, urinary protein, monocyte chemoattractant protein-1 (MCP-1), and progranulin were present with a high score in more than two groups and are therefore more sensitive to discovering the differences in flips which were more prevalent in the healthy class, than the sick class.This was visible as a higher difference in between probabilities of nodes in the healthy.Interestingly, microalbumin was the only common biomarker in both male and female results, suggesting a gender-specific mechanism for haematuria development. In CACTUS analysis, the flip probabilities indicated whether the biomarker was generally below ('D') or above ('U') the calculated cutoff values; we observed that the distribution of some of the biomarkers changed significantly between classes.For example, when looking at the dataset for both genders, we can see that male, having a serum tPSA above the cut-off value and having high levels of microalbumin are important factors for classification as sick.We have also observed that some features were only important for classification in one class and for the second class there was an equal or almost equal probability of the flip probabilities.This was the case in the both genders dataset for sTNFRI, which had the same probability of flip for healthy individuals (0.5, 0.5), but showed higher probability (0.836) for sTNFRI being in state ('U') for the patients classified as sick.In the male population, the probability of flips indicated that being older (0.721), with higher levels of MCP-1 (0.797), serum tPSA (0.649) and sTNFRII (0.836) and reduced levels of D-dimer (0.676) were important factors in being sick.It is interesting to note that in the male population, CACTUS classification detected higher differences in the flip's probability for healthy individuals than in sick.In the female dataset we observed the gradual change in the flip probabilities, i.e., the highest difference in the flip probability, which was most important for healthy individuals (microalbumin), had at the same time the lowest difference for sick individuals and vice versa. Discussion In the study, we analysed the HaBio cohort, which contains data from patients presenting with haematuria.One of the challenges related to the analysis of this dataset is the unbalanced structure of data, both in terms of gender (male predominance) and the different number of patients in each disease category.The data structure reflects the real-world structure of the patients reporting to a clinician with haematuria and is related to differences in diagnostic processes and potential risks.Males, older patients, and smokers have significantly higher malignancy risk (36)(37)(38)(39).On the other hand, women do not receive the same diagnostic attention, which leads to delays in urological consultation and poorer oncological outcomes in bladder cancer (22,40,41).It is therefore crucial to provide genderspecific blood or urine biomarkers which could reduce the time and harm associated with the current methods, while being affordable and addressing gender inequalities in the diagnostic process. 
A second element contributing to the unbalanced structure of the dataset was the different number of patients in each category. As k-means clustering showed, it was not possible to distinguish the final diagnostic group (bladder cancer, benign prostate enlargement, infections, incidental haematuria, other cancers, and benign disease) by the structure of the clusters. The best results were obtained when clustering into two, highly unbalanced clusters (Figure 2). Although it is possible to computationally balance datasets during analysis (42-44), these data reflect the true distribution of patients presenting to clinicians with haematuria, so no pre-processing techniques were used to balance the class distribution. Additionally, initial stratification into healthy and sick groups could expedite the diagnostic process by referring patients to the most appropriate specialist or for more targeted diagnostic and less invasive testing, which could be beneficial for patients and clinicians.

FIGURE 4 Random forest results. (A) Top 10 most important features selected by random forest. (B) LIME analysis results; the cut-off values are presented as raw values for each biomarker. The value of the mean weight of a biomarker indicates whether the biomarker, within the specified range, has a positive (positive value) or negative (negative value) influence on being classified as a sick (S, red colour) or healthy (H, blue colour) individual. In the analysis of the both-genders dataset there were 4 biomarkers with a positive and 1 with a negative influence on the sick class. In the male data subset, there were 9 biomarkers with a positive and 2 with a negative influence on the sick class. In the female data subset, there was 1 biomarker with a positive and 1 with a negative influence towards the sick class and 1 with a positive influence towards the healthy class. For the LIME results, only the mid-ranges are shown (upper and lower ranges are presented in Supplementary Materials, Table E); the decision boundaries made by LIME are: below the indicated range, within the range presented in the table, and above the indicated range. The units of the values presented are: BTA (U/ml), serum_CD44 (ng/ml), serum_CEA (ng/ml), Clusterin (ng/ml), Creatinine (mmol/L), serum_CRP (mg/ml), CXCL16 (ng/ml), Cystatin B (ng/ml), Cystatin C (ng/ml), serum_Cystatin C (ng/ml), D-dimer (ng/ml), EGF (pg/ml), serum_HAD (U/l), serum_IL-4 (pg/ml), IL-7 (pg/ml), IL-8 (pg/ml), MCP-1 (pg/ml), Microalbumin (mg/l), Midkine (pg/ml), NGAL (ng/ml), Osmolarity (mOsm), pERK (pg/ml), Progranulin (ng/ml), serum_tPSA (ng/ml), Protein (mg/ml), serum_S100A4 (ng/ml), TGF-β1 (pg/ml), sTNFRI (ng/ml), serum_VEGF (pg/ml), serum_HDL (mmol/l). "-": biomarker was not selected by the given algorithm.

FIGURE 5 CACTUS analysis results for all three data subsets: (A) both-genders dataset, (B) males dataset, (C) females dataset. The top panel presents the ten most important biomarkers according to the rank values. The bottom panel presents the probability of flips for sick and healthy in descending order of rank value.
In the case of decision trees and random forests we observed a strong influence of the unbalanced nature of the dataset on the classification process, while CACTUS was the most robust.As the imbalance between sick and healthy increased (111:79 for females, 152:523 for both genders and 41:444 for males) the discrepancies between the model metrics (specificity, sensitivity, accuracy, and balanced accuracy) also increased (Table 2).This was particularly evident for the male random forest analysis, where the balanced accuracy was 0.512 and the specificity 0.024, meaning that in this case only one healthy individual was classified as healthy.This result was not statistically significant, meaning that there was no difference between the classification result and random chance.A possible explanation for this was that random forests build each constituent tree from a bootstrap sample of the training data.There was a significant chance that bootstrapped samples from extremely unbalanced datasets could contain few or even none of the minority class, resulting in a model with poor performance.On the contrary, CACTUS despite the high prevalence of sick classes in the males dataset, obtained very high specificity (0.829), meaning that the algorithm was able to detect a high number of healthy patients and could potentially exclude them from subsequent invasive diagnostic procedures.The high performance of CACTUS was a result of its design.The classification process was based on the probability of each feature being in the state "U" or "D" for the given class which was not influenced by the number of cases in each class.Therefore, when the imbalance was high (both genders and males dataset) CACTUS generates the statistically significant improvement in the classification (Table 4) when comparing to random forest and decision trees. Logistic regression, showed that single biomarkers were not effective in identifying sick or healthy patients (Supplementary Materials, Tables B-D).It has been shown that the use of multiple biomarkers can improve the stratification of patients with bladder cancer (22), which has been confirmed by our analysis.The highest improvement in balanced accuracy was obtained for the male subset with the CACTUS classifier (0.803 versus 0.500 for single biomarker analysis).We also observed improvements in balanced accuracy for both genders (from 0.572 for the best single marker to 0.747) and for females (from 0.672 to 0.718) when using CACTUS.Interestingly, this improvement was not as significant using the other two methods (Table 2). 
The aim of the study was not only to classify patients, but also to identify potential biomarkers and their decision boundaries (Table 3).As shown in Table 3, the most important biomarkers differ widely between the algorithms and the data subsets tested.For the dataset of both genders, the most important features for all algorithms were gender and microalbumin.Microalbumin has been described in the literature as a marker of renal dysfunction (45,46).There is some evidence that elevated levels of microalbumin may be associated with some types of cancer, including cancer of the urinary tract (47).In the literature, values of microalbumin below 20 mg/mL are considered physiologically normal, but according to our analysis, the decision boundaries could be much lower, >5.38 mg/mL or >13.43 mg/mL depending on the model and dataset (Table 3).The values above the decision boundaries are classified as important for the stratification process and are more indicative of sick individuals.Therefore, when using the official reference values, it is possible to miss some individuals with developing pathology. Another important biomarker selected by random forest and CACTUS algorithms for both genders dataset was serum tPSA.The decision boundaries for serum tPSA were underestimated when analysed in the both genders dataset due to the presence of female samples, where in most cases the serum tPSA level was below the detection limit.For the male dataset, serum tPSA was only indicated by CACTUS, with a level of 1.03 ng/mL being indicative of a pathological state.This is well below the reference values even in the youngest men.PSA is prostate specific antigen and elevated levels of PSA could be caused by conditions that lead to disruption of the epithelial cells of the prostate basal membrane, such as prostatitis, benign prostatic enlargement (BPE), prostate biopsies and surgery or decreased by medication, including 5-alpha reductase inhibitors (48)(49)(50)(51).As the male dataset includes patients with different underlying causes of haematuria, not all of which affect PSA levels, observed values may be lower than reference levels even in the presence of BPE in the study group.Gender stratification is also strongly associated with age, which was identified as one of the important features by decision trees when analysing the whole dataset, and by CACTUS and random forest when analysing males only.It is known that biomarkers (such as cytokines, lipids or organ-specific biomarkers such as PSA (52-54)) change with age, as does the likelihood of developing age-related conditions such as prostate enlargement (55) or bladder cancer (56).According to our results patients over 60 years of age, and especially males over 63 (CACTUS estimation) should receive special attention during the diagnostic process, as risk of developing disease increases.These results are in line with current American Urological Association (AUA) guidelines (4), which place male patients over the age of 60 at high risk of malignancy. 
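Purely as an illustration of how the quoted boundaries could be screened against, the sketch below combines three of the cut-offs discussed above (microalbumin > 13.43, age > 60, and serum tPSA > 1.03 ng/ml for males). This is not a validated clinical rule; the function, the choice of which boundary variant to use, and the example values are ours.

```r
# Illustrative pre-screening flags based on boundaries quoted in the Discussion;
# units follow the source (microalbumin in mg/l as in Figure 4, tPSA in ng/ml).
flag_for_review <- function(age, sex, microalbumin, serum_tpsa = NA) {
  c(
    microalbumin_high = microalbumin > 13.43,   # upper boundary variant quoted above
    age_over_60       = age > 60,               # AUA-aligned age threshold
    tpsa_high         = (sex == "male") & !is.na(serum_tpsa) & serum_tpsa > 1.03
  )
}

flag_for_review(age = 67, sex = "male", microalbumin = 18, serum_tpsa = 1.4)
# hypothetical patient: all three flags TRUE
```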
Several biomarkers important for bladder cancer screening were also indicated in the models in the male dataset, i.e., BTA (CACTUS), HAD (random forest and decision tree), and S100 calcium-binding protein A4 (S100A4) (random forest), however they were not among the most important features in the respective models.This may be due to the wide variety of diseases underlying haematuria in the datasets.BTA (12,13), HAD (57) and S100A4 (58) are closely related to tumourigenesis, so the addition of samples from individuals without malignant disease could influence the distribution of these features, making them less important for the classification process. For the female stratification process, many of the most important characteristics differ from the male and both genders datasets.One of the selected biomarkers is interleukin-8 (IL-8) measured in urine.IL-8 is an angiogenic factor associated with inflammation and carcinogenesis.It has been shown that elevated urinary levels of IL-8 are associated with urothelial cell carcinoma (59,60).In the study of Urquidi et al. ( 61) it has been shown that the urinary level of IL-8 in patients is elevated when compared to healthy controls with the median value of 128.43 pg/ml vs. 0 pg/ml, respectively.Our analysis set the decision boundaries at 68.12 pg/ml (CACTUS) and above 20.89pg/ml (random forest), which is comparable to previously obtained data and allows a more detailed classification of patients.It is important to note that IL-8, as a pro-inflammatory cytokine, is also elevated in the samples of patients with urinary tract infections (59), so it should be used more as a biomarker of pathological conditions rather than specific diseases. Other biomarkers that are important in stratifying women are the phosphorylated form of ERK and epidermal growth factor (EGF).ERKs are members of the mitogen-activated protein kinase (MAPK) family and are involved in cell cycle regulation and tissue proliferation.MAPK signalling is active in both early and advanced stages of tumourigenesis and promotes tumour proliferation, survival, and metastasis (62).EGF has also been shown to activate the MAPK/ERK pathway (63, 64).EGF, acting through the EGF receptor, promotes cancer development (65).EGF has been shown to promote bladder cancer cell proliferation (66).To the best of our knowledge, this is the first time that EGF has been described as a potential biomarker for the detection of the pathology related to urinary tract cancer, providing an initial estimate of the possible concentration of the biomarker for decision making. Several biomarkers were common to more than one group including CXCL16, cystatin C and microalbumin (described above).CXCL16 is a cholesterol receptor and a chemokine with a potential role in vascular injury, angiogenesis, and inflammation.CXCL16 has previously been described to be elevated in patients with urothelial cancer (67, 68) and diabetic kidney disease (69).As CXCL16 is not a routinely studied biomarker, reference values for it have not yet been described, but according to our studies, elevated levels are associated with the pathological causes of underlying haematuria.Urinary levels of CXCL16 higher than 0.1 ng/mL or 0.3 ng/mL (depending on the gender and the model, Table 3), may be of use in the stratification of patients presenting with haematuria. 
Cystatin C was also suggested as a potential biomarker by several models, when measured in urine and serum (Table 3). Cystatin C is a biomarker produced by all nucleated cells and is freely filtered by the kidney, with almost complete reabsorption in the proximal tubule and no significant urinary excretion. It has been postulated that serum cystatin C levels may be a more stable alternative to creatinine for estimating the glomerular filtration rate (GFR) (70) and a potential new biomarker of renal dysfunction (71, 72). In addition, some studies have shown that decreased serum cystatin C levels may be present in bladder cancer (73). There is also some evidence of increased expression of CST3 mRNA in higher-risk prostate cancer patients compared with those at lower risk (74), but the utility of cystatin C (both serum and urine) requires further study. Our analysis showed that the decision boundary for urinary cystatin C could be set between 0.83 ng/ml and 1.2 ng/ml (decision trees) or at 6.84 ng/ml (random forest), with values above the boundary indicative of disease status. For men, the values are 1.2 ng/ml and 20.99 ng/ml (depending on the gender and the model, Table 3). As there are no officially established values for urinary cystatin C, reference values have been suggested at the level of 0.119-0.213 mg/L (75) or 0.06-0.16 mg/L (76), which is much higher than our study suggested. There are well-established reference values for serum cystatin C, which are around 0.58-1.02 mg/L (77). This is similar to the decision limits given by CACTUS for males (0.98 mg/L) and decision trees for both genders (0.84 mg/L).

In the study, we identify several biomarkers that have not been studied in relation to haematuria, or biomarkers without established reference values. Although this is a retrospective study, it may point the way for future research. We believe that several of the selected biomarkers (CXCL16 for both genders, HAD and S100A4 for males, and IL-18, pERK and EGF for females) may have the potential to be introduced into routine diagnostics in the future, but this will require further work not only to establish reference values but also to better understand the underlying mechanisms.

Notably, some guidelines no longer recommend invasive testing for microscopic haematuria, and this seems to improve general patient management (78, 79). Given the challenges described in the diagnostic process, including the high cost (economic and personal), the proposed pre-stratification of patients with biomarker screening could be a further improvement. However, for people with macroscopic haematuria, cystoscopy is still recommended. In the HaBio cohort, 48% of patients with macroscopic haematuria did not have malignancy and had to undergo invasive diagnosis. Noninvasive methods based on biomarker screening could change the approach to the initial assessment of haematuria, reducing the number of false-positive and false-negative cases and providing affordable and time-efficient diagnostic procedures.
Conclusions In this work, we addressed the challenging problem of diagnosing patients presenting with haematuria into two subclasses (healthy or sick), which could enable the introduction of improvements in patient management, allowing for a more efficient use of healthcare resources. With multiple possible causes and large variations in the number of patients with each condition, we addressed the problem of analysing unbalanced datasets in a medical setting and showed that, by carefully selecting the models applied, it is possible to perform meaningful analysis even on challenging datasets. We focused on both classification and explanatory power to aid decision making. Although we were able to classify patients with satisfactory accuracy and provide decision boundaries for each of the biomarkers, our analyses were based on a retrospective study and further work is required to introduce the proposed biomarkers into clinical practice. Nevertheless, the classification obtained and the selection of biomarkers provided could be used to inform guidance for healthcare professionals to develop less invasive, faster and more economical strategies for patient disease management.

FIGURE 2 K-means algorithm applied to the input features for both genders (A), males (B) and females (C) used for classification. For all data subsets, the highest silhouette score was obtained for k = 2. Visualisation of the clustering process showed overlap within the final diagnostic group of the clusters, indicating poor compactness in the clusters. Individual patients are presented as a description of the final diagnosis.

TABLE 1 Comparison of different haematuria guideline outcomes simulated on the modelled haematuria patient cohort (22). (... Antrim, Northern Ireland, UK); other biomarkers were measured using commercially available ELISA kits. A detailed description of the analytical procedures is provided in the Supplementary Materials. When data were below the Limit of Detection (LOD) or the Mean Detectable Dose (MDD) for any given test, 90% of the LOD or the MDD was used in lieu of the actual value for analysis (22).

TABLE 2 Comparison of tested models' performance; statistical analysis was performed with the χ2 test, with a significance level of 0.05.

TABLE 3 Comparison of the decision boundaries for each algorithm. The cut-off values are shown as raw measurement results.

TABLE 4 Pairwise comparison of the models' performance for the data subsets with the McNemar test.
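The Figure 2 and Table 1 captions above mention two concrete analysis steps: substituting 90% of the limit of detection for below-LOD readings, and selecting the number of k-means clusters by silhouette score. A minimal sketch of both steps is given below; the column names, LOD values, and synthetic data are assumptions for illustration only, not the study's pipeline.

```python
# Sketch of two steps referenced in the captions (placeholder data and LODs):
# (1) replace below-LOD measurements with 90% of the LOD,
# (2) pick the number of k-means clusters by silhouette score.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
data = pd.DataFrame({
    "IL-8": rng.exponential(50, size=200),
    "CXCL16": rng.exponential(0.2, size=200),
})
lod = {"IL-8": 5.0, "CXCL16": 0.05}   # hypothetical limits of detection

for col, limit in lod.items():
    data.loc[data[col] < limit, col] = 0.9 * limit   # 90%-of-LOD substitution

X = StandardScaler().fit_transform(data)

for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, round(silhouette_score(X, labels), 3))
# The k with the highest silhouette score (k = 2 in the study) is retained.
```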
8,986.6
2024-05-08T00:00:00.000
[ "Medicine", "Computer Science" ]
Double-stranded RNA-dependent Protein Kinase Phosphorylation of the α-Subunit of Eukaryotic Translation Initiation Factor 2 Mediates Apoptosis* As the molecular processes of complex cell stress signaling pathways are defined, the subsequent challenge is to elucidate how each individual event influences the final biological outcome. Phosphorylation of the translation initiation factor 2 (eIF2α)atSer51 is a molecular signal that inhibits translation in response to activation of any of four diverse eIF2α stress kinases. We used gene targeting to replace the wild-type Ser51 allele with an Ala in the eIF2α gene to test the hypothesis that translational control through eIF2α phosphorylation is a central death stimulus in eukaryotic cells. Homozygous eIF2α mutant mouse embryo fibroblasts were resistant to the apoptotic effects of dsRNA, tumor necrosis factor-α, and serum deprivation. TNFα treatment induced eIF2α phosphorylation and activation of caspase 3 primarily through the dsRNA-activated eIF2α kinase PKR. In addition, expression of a phospho-mimetic Ser51 to Asp mutant eIF2α-activated caspase 3, indicating that eIF2α phosphorylation is sufficient to induce apoptosis. The proapoptotic effects of PKR-mediated eIF2α phosphorylation contrast with the anti-apoptotic response upon activation of the PKR-related endoplasmic reticulum eIF2α kinase, PERK. Therefore, divergent fates of death and survival can be mediated through phosphorylation at the same site within eIF2α. We propose that eIF2α phosphorylation is fundamentally a death signal, yet it may promote either death or survival, depending upon coincident signaling events. One of the most perplexing problems in modern biology is to understand how the cell chooses between adaptation and apoptotic demise in response to stressful insults. Because there are multiple interacting anti-apoptotic and pro-apoptotic signaling pathways, it is assumed that the sum of these signaling cascades dictates the final outcome. When one pathway becomes predominant, a delicate balance is perturbed and either an adaptive or a lethal response ensues. Advances in our knowledge of how this commitment occurs will lead to a greater understanding of cell growth and differentiation as well as the etiology of various disease states. Numerous phosphorylation events are known to regulate the overall rate of protein synthesis or translation of selective mRNAs. However, the most dominant influence is mediated through phosphorylation at Ser 51 on the ␣-subunit of heterotrimeric eukaryotic translation initiation factor 2 (eIF2␣) 3 (1). eIF2 is required to deliver Met tRNA i to the 40 S ribosomal subunit. Physiological conditions that induce eIF2␣ Ser 51 phosphorylation regulate global as well as specific mRNA translation. Phosphorylation of eIF2␣ at Ser 51 inactivates eIF2 and reduces the efficiency of AUG initiation codon recognition, thereby attenuating translation initiation. However, reduced AUG initiation codon recognition can increase the initiation efficiency at selective AUG codons, thereby altering initiation site utilization to regulate both the quantity and quality of proteins produced (2). 
Four protein kinases phosphorylate eIF2␣ at Ser 51 in response to different stress stimuli: 1) the dsRNA-activated protein kinase PKR is a major component of the interferonmediated antiviral response and is activated by binding to dsRNA produced during viral infection (3); 2) the general control of nitrogen metabolism kinase GCN2 responds to amino acid depletion (4); 3) the heme-regulated inhibitor kinase HRI responds to heme deprivation to couple globin synthesis with available heme (5); and 4) the PKR-related endoplasmic reticulum (ER) kinase PERK responds to the accumulation of unfolded proteins in the ER in a subpathway of the unfolded protein response (6). Generally, eIF2␣ phosphorylation provides a fundamental mechanism to couple the rate of protein synthesis with the capacity to fold proteins under conditions of different physiological stress, such as nutrient deprivation or viral infection. Although the mechanism by which phosphorylation of eIF2␣ inhibits protein synthesis is well characterized, the cellular responses to eIF2␣ phosphorylation remain elusive. Recent studies support the idea that eIF2␣ phosphorylation promotes survival under conditions of oxidative stress and accumulation of unfolded proteins in the lumen of the ER (7,8). In contrast, eIF2␣ phosphorylation was proposed to mediate apoptosis in response to PKR activation (9 -12). In this study, we addressed how eIF2␣ phosphorylation influences the balance between survival and apoptosis upon activation of PKR. Treatment of cells with interferon and dsRNA is cytotoxic, and data support the hypothesis that this toxicity is mediated through PKR activation and induction of apoptosis (13). Although a number of different signal transduction and transcriptional programs are influenced through PKR activation (for review see Ref. 14), the most well characterized PKR substrate is eIF2␣. The growth suppressing activity mediated through eIF2␣ phosphorylation is an evolutionarily well conserved cell response. Either inactivation of the PKR pathway (12,(15)(16)(17)(18)(19)(20)(21)(22) or overexpression of a nonphosphorylatable S51A mutant eIF2␣ (10,12) protects from stress-mediated apoptosis. These studies provide compelling evidence for an anti-proliferative effect of PKR-mediated eIF2␣ phosphorylation in growth inhibition. However, the interpretation of these results is confounded because of the diverse effects that PKR activation has on multiple stress signaling pathways. Therefore, to date there is no direct evidence to support the hypothesis that eIF2␣ phosphorylation is necessary and/or sufficient for apoptosis. We propose that PKR activation with subsequent eIF2␣ phosphorylation is a primary mechanism that 1) inhibits initiation of protein synthesis, and 2) contributes to apoptosis in response to a variety of physiological and environmental stimuli. To test this hypothesis, we have studied apoptosis induced by dsRNA, TNF␣, or serum deprivation in cells that harbor a homozygous S51A knock-in mutation at the phosphorylation site in eIF2␣ (23). Here, we show that apoptosis induced by TNF␣, the interferon pathway, and serum deprivation requires PKR-mediated phosphorylation of eIF2␣. In addition, expression of a Ser 51 to Asp phospho-mimetic mutant of eIF2␣ was sufficient to activate caspase 3 in the absence of any apoptosisinducing stimuli. The results demonstrate that translational inhibition through eIF2␣ phosphorylation contributes to and can be sufficient to activate an apoptotic response. 
MATERIALS AND METHODS Plasmid DNA Transfection-The poly(ADP-ribose) polymerase (PARP) cDNA cloned in pCDNA3 was kindly provided by Dr. M. Keifer, (LXR Biotechnologies). The eIF2␣ expression vectors were previously described (24). Transfection of eIF2␣ expression plasmids into HeLa cells was performed as described (25). After transfection, the cells were washed twice with Dulbecco's modified essential medium (DMEM) and incubated at 5% CO 2 for 2 days in DMEM with 10% fetal bovine serum-containing antibiotics. The cells were washed twice with phosphate-buffered saline (PBS), harvested using Nonidet P-40 lysis buffer (1% Nonidet P-40, 150 mM NaCl, 50 mM Tris, pH 8.0) with complete protease inhibitors (Roche), and incubated on ice for 15 min followed by centrifugation at 10,000 rpm for 10 min. The supernatant was collected, and protein concentrations were determined by the Bradford method (26). In Vitro Transcription and Translation of PARP-In vitro transcription and translation of PARP was performed in the presence of [ 35 S]methionine/cysteine (Redivue PRO-MIX, Amersham Biosciences.) using the TNT kit (Promega Biotech) following the manufacturer's instructions. Cleavage of in vitro translated PARP was previously described (27). The reaction products were analyzed by SDS-PAGE and autoradiography using EN 3 HANCE (Dupont). Briefly, a 50-l reaction containing 50 mM HEPES, pH 7.4, 100 mM NaCl, 0.1% CHAPS, 10% sucrose, 20 g of cell extract protein from transfected COS-1 cells, and 2 l of in vitro translated [ 35 S]methionine/cysteinelabeled PARP were mixed and incubated at 37°C for 2 h. Then, 25 l of 3ϫ SDS-PAGE sample buffer was added to each sample followed by heating at 90°C for 5 min. The reaction products were analyzed by SDS-PAGE under reducing conditions. After electrophoresis, the gels were fixed, soaked for 45 min in EN 3 HANCE (Dupont), dried, and subjected to autoradiography. ImageJ (Version 1.31 for Mac OS X, NIH) was used to quantitate band intensities. Immunoblot Analysis-Cells were treated as described and harvested using Nonidet P-40 lysis buffer containing 150 mM NaFl, complete protease inhibitors (Roche), and 100 g/ml phenylmethylsulfonyl fluoride. Lysis buffer additionally included 500 mM ␤-glycerol phosphate, 50 mM sodium orthovanadate, and 1ϫ phosphatase inhibitor (Sigma P2850) (Fig. 3, E and F; supplemental Fig. S2). Samples were centrifuged at 10,000 rpm, and supernatants were collected for SDS-PAGE and transfer to nitrocellulose. The eIF2␣ Ser 51 phosphospecific antibody was obtained from BioSource (Camarillo, CA) and PKR antibody was kindly provided by Dr. Bryan Williams (Cleveland Clinic). Phosphospecific PKR antibody (3075) and PKR antibody (3072), (Fig. 3, E and F, supplemental Fig. S2), were obtained from Cell Signaling. The antibody that recognizes total eIF2␣ was previously described (23). Anti-TNFR1 antibody (SC-8436) was obtained from Santa Cruz Biotechnology. All Western blotting was performed with chemiluminescence detection and quantitation of film band intensities was performed with ImageJ (Version 1.31 for Mac OS X, NIH). Measurement of Translation Rates-MEFs were cultured as described above. After overnight culture, subconfluent cultures were treated with culture medium containing TNF␣, okadaic acid (OA), or poly(rI-C) as described. 
Cells were washed two times with PBS and incubated in methionine/cysteine-free medium including 200 Ci/ml [ 35 S]methionine/cysteine (Redivue PRO-MIX, Amersham Biosciences) in the continued presence of the described stimulus for 15 min. Cells were washed two times with ice-cold PBS and cell lysates were prepared in Nonidet P-40 lysis buffer containing complete protease inhibitors as described above for immunoblot analysis. Protein concentration was determined by the Bradford method (26). Trichloroacetic acid precipitation was performed by spotting samples on Whatman filter paper with subsequent washing in ice-cold 20% trichloroacetic acid, 10% trichloroacetic acid, and 100% ethanol. Filters were dried and liquid scintillation counting was performed. Real-time Quantitative RT-PCR-Total RNA was isolated from MEFs at 12 h after cell plating using the TRIzol method (Invitrogen), and RNA was dissolved in diethylpyrocarbonatetreated water containing 1 unit/l RNase inhibitor (Roche). Reverse transcription reactions were performed with i-Script (Bio-Rad) and then diluted 25-fold with water for real-time PCR in an i-Cycler machine using 9 l of diluted reverse transcriptase product and iQ SYBR Green Supermix in a 20-l reaction (Bio-Rad). The amplification primers used for TNFR1 detection were forward (5Ј-CATCCCCAAGCAAGAGTC-ATG-3Ј) and reverse (5Ј-GCTACAGACGTTCACGATGC-3Ј) and the primers used for ␤-actin amplification were forward (5Ј-CCTCTATGCCAACACAGTGC-3Ј) and reverse (5Ј-GTACTT-GCGCTCAGGAGGAG-3Ј). Cell Survival-Cells were cultured on 10-cm tissue culture dishes. At 24 h after plating, apoptosis was induced by treatment with culture medium containing 100 g/ml poly(rI-C) (Amersham Biosciences) for 16 -18 h or 1 ng/ml TNF␣ (Invitrogen) for 18 -21 h including 50 ng/ml actinomycin D (Act D) (Sigma) during both incubations. Serum deprivation was performed by washing cells three times with serum-free DMEM followed by 14 -23 h of incubation in DMEM containing 0.01% serum. In the morphological studies, cells were cultured on coverslips coated with 1% gelatin, treated as described, and fixed with 10% formalin (Sigma) prior to phase contrast microscopy. Cell viability was quantified by trypan blue dye exclusion. For nuclear staining, cells were plated, treated for induction of apoptosis, and then fixed with cold 70% ethanol at 4°C for 1 h. The cells were then washed with PBS and incubated in ice-cold PBS containing 0.5 mg/ml RNase and 50 g/ml propidium iodide for 15 min in the dark. Samples were mounted onto glass slides with ProLong Gold (Molecular Probes) and viewed using an Olympus BX51 microscope. Caspase 3 Assay-Adherent and floating cells were washed three times with PBS, collected at 1,200 ϫ g and resuspended in 100 -200 l of 25 mM HEPES, pH 7.5, 5 mM MgCl 2 , 5 mM EDTA, 5 mM dithiothreitol, 2 mM phenylmethylsulfonyl fluoride, 10 g/ml pepstatin A, and 10 g/ml leupeptin. The samples were lysed by four freeze-thaw cycles, centrifuged at 10,000 rpm, and the supernatant was collected for caspase 3 fluorometric assay using 70 -80 g of protein extracts as described by the supplier (Promega CaspACE Fluorometric Assay System, Madison, WI). Addition of the supplied caspase 3 inhibitor peptide to the cell lysates inhibited all activity (supplemental Fig. S1). A SPECTRAmax Gemini XS spectrofluorometer (Molecu-lar Devices, Sunnyvale, CA) was used with excitation, emission, and cutoff wavelengths of 368, 467, and 420 respectively. Protein concentrations were determined using a detergentcompatible assay (Bio-Rad). 
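Two of the quantitative read-outs described in this methods section reduce to simple normalisations: translation rates from TCA-precipitable 35S counts per microgram of protein relative to an untreated control, and TNFR1 mRNA levels normalised to β-actin from the real-time RT-PCR. The paper does not state its exact quantification formulas, so the sketch below is an assumption; the 2^-ΔΔCt convention in particular is one common way to express such qPCR data, not necessarily the authors' analysis, and all numbers in the example are invented.

```python
# Illustrative normalisations for two read-outs described above (invented numbers;
# the paper does not specify its exact formulas, so treat these as assumptions).

def relative_translation(cpm_treated, ug_treated, cpm_control, ug_control):
    """35S incorporation per microgram of protein, as percent of untreated control."""
    return 100.0 * (cpm_treated / ug_treated) / (cpm_control / ug_control)

def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """2^-ddCt fold change of a target gene (e.g., TNFR1) normalised to a
    reference gene (e.g., beta-actin), relative to a calibrator sample."""
    dd_ct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
    return 2.0 ** (-dd_ct)

# Example: protein synthesis at ~40% of control after treatment, and a roughly
# two-fold lower target mRNA level in a hypothetical mutant line.
print(relative_translation(cpm_treated=18_000, ug_treated=50,
                           cpm_control=45_000, ug_control=50))      # 40.0
print(relative_expression(ct_target=26.6, ct_ref=17.0,
                          ct_target_cal=25.6, ct_ref_cal=17.0))     # 0.5
```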
IKK Assay-IKK activity was measured by an immune complex kinase assay as previously described (31,32). Briefly, cell lysates were immunoprecipitated with anti-IKK␣ antibody and the immune complexes used for phosphorylation of a GST-IB␣-(1-54) peptide substrate. Phosphorylation of eIF2␣ Is Sufficient to Activate Caspase 3- To elucidate whether PKR activation and/or eIF2␣ phosphorylation are sufficient to activate apoptosis through activation of caspase 3, wild-type, and mutant forms of eIF2␣ and PKR were transiently transfected into HeLa cells in the presence of a procaspase 3 expression vector. Because transiently co-transfected cells express both plasmid DNAs, using this approach it is possible to measure caspase 3 activation in the subpopulation of co-transfected cells that transiently express wild-type or mutant forms of PKR or eIF2␣. Western blot analysis demonstrated that significant levels of procaspase 3 were detected only in cells that received the procaspase 3 expression vector (Fig. 1A, lanes 1-3, 7-9 versus 4 -6). However, the total amount of procaspase 3 was 3-fold lower in cells co-transfected with either wild-type PKR or S51D phospho-mimetic mutant eIF2␣ expression vectors compared with the other transfectants ( Fig. 1A, lanes 2 and 9). This is consistent with cleavage and activation of procaspase 3 or with the translational inhibition observed upon overexpression of wild-type PKR or the S51D mutant eIF2␣ (33). At 48-h post-transfection, caspase 3 activa- tion was measured in cell lysates using a PARP cleavage assay. In the presence of vector alone or in the presence of vectors expressing K296P trans-dominant-negative mutant kinase PKR or S51A non-phosphorylatable mutant eIF2␣, a low level of PARP cleavage was detected (Fig. 1B, lanes 1 and 2 and 10 and 13). In contrast, co-transfection with vectors that express either wild-type PKR or S51D mutant eIF2␣, increased the amount of PARP cleavage 3-5-fold (Fig. 1B, lanes 3 and 12). These results show that expression of either wild-type PKR or a S51D mutant eIF2␣ activates caspase 3 compared with expression of K296P mutant PKR or S51A mutant eIF2␣. Therefore, we conclude that phosphorylation of eIF2␣ is sufficient to activate caspase 3. Interferon ␣and dsRNA-induced Apoptosis Requires eIF2␣ Phosphorylation-We then tested whether PKR-mediated apoptosis requires eIF2␣ phosphorylation by studying MEFs that harbor a knock-in replacement of Ser 51 for Ala in the endogenous eIF2␣ gene (23). Wild-type (S/S) and homozygous (A/A) eIF2␣ mutant MEFs were treated with interferon ␣ and poly(rI-C) to strongly activate the PKR pathway. Whereas treatment with interferon ␣ and poly(rI-C) increased the level of phosphorylated Ser 51 eIF2␣ in the wild-type S/S MEFs 1.9-fold ( Fig. 2A, lanes 1 and 2), phosphorylated eIF2␣ was not detected in the homozygous A/A mutant MEFs ( Fig. 2A, lanes 3 and 4), consistent with the presence of the homozygous eIF2␣ mutation in these cells. Where treatment of Pkr ϩ/ϩ MEFs with interferon ␣ and poly(rI-C) also increased levels of eIF2␣ phosphorylation 1.4-fold over that observed under control conditions, there was no increase in the Pkr Ϫ/Ϫ MEFs ( Fig. 2A, lanes 5-8). Western blot analysis confirmed that the Pkr Ϫ/Ϫ MEFs did not express PKR (Fig. 2A). These results support the notion that PKR is the major eIF2␣ kinase activated under these conditions. Morphological analysis of cells treated with poly(rI-C) indicated a distinct difference in survival. 
Act D was included at a low concentration to prevent the anti-apoptotic response mediated by NF-κB activation under these conditions (11). Act D is necessary to elicit an apoptotic response in cultured MEFs. Cycloheximide is also an apoptotic sensitizer that may be used in conjunction with TNFα (34). Compared with both wild-type Pkr+/+ and wild-type eIF2α S/S MEFs that did not survive this treatment, the survival of homozygous eIF2α A/A mutant MEFs was not compromised (Fig. 2B). Analysis of viability by trypan blue dye exclusion was consistent with the morphological observations (Fig. 2B, legend). These results demonstrate that inactivation of either the eIF2α kinase PKR or mutation at the Ser51 phosphorylation site in eIF2α produced substantial resistance to poly(rI-C)-induced death. To quantitatively monitor a direct marker of apoptosis, caspase 3 activity was measured in prepared cell lysates. Where a 16-h treatment with poly(rI-C) increased caspase 3 activity in wild-type MEFs, the activation of caspase 3 was significantly impaired in MEFs that harbor the homozygous S51A mutant eIF2α (A/A) (Fig. 2C). In addition, caspase 3 activation was reduced 50% in the Pkr−/− MEFs, consistent with earlier findings that Pkr−/− cells are resistant to apoptotic stimulation (9-12, 18). These data suggest that eIF2α phosphorylation contributes to PKR-mediated cell death, although it is not absolutely required.

FIGURE 2 legend (fragment): Wild-type and mutant MEFs were pretreated with 400 units/ml interferon-α overnight and then with poly(rI-C) for 8 h. Cell extracts were prepared for Western blot analysis with anti-phosphopeptide-specific eIF2α, total anti-eIF2α, or anti-PKR antibodies. The intensities of eIF2α phosphorylation relative to total eIF2α levels are indicated. B, Pkr−/− and eIF2α A/A MEFs are resistant to dsRNA-induced cell death. MEFs were treated with poly(rI-C) (0.1 mg/ml) and Act D (10 ng/ml) for 16 h and then analyzed by light microscopy. Act D treatment alone did not significantly affect cell morphology. Trypan blue dye exclusion indicated that the viable cell counts of treated wild-type Pkr+/+ and eIF2α S/S MEFs were ~20% of control vehicle-treated cultures. In contrast, the viable cell counts were ~60 and 45% in the Pkr−/− and the eIF2α A/A mutant MEFs, respectively. C, procaspase 3 activation is reduced in eIF2α A/A and Pkr−/− MEFs. MEFs were treated with poly(rI-C) and Act D for 20 h, and then cell extracts were prepared for analysis of caspase 3 activity as described under "Materials and Methods." Caspase 3 activation was significantly reduced in eIF2α A/A MEFs and Pkr−/− MEFs compared with their respective controls. ***, p < 0.001.

eIF2α Phosphorylation Is Required for TNFα-induced Apoptosis-To determine the role of eIF2α phosphorylation in response to another inducer of apoptosis, the response to TNFα was measured. Although previous studies suggest that TNFα induces eIF2α phosphorylation (9-11), this has not been directly demonstrated, nor has the role of PKR and possibly other eIF2α kinases in this process been established. Treatment of subconfluent wild-type MEFs with TNFα induced apoptotic bodies in 36% of the cells analyzed by propidium iodide staining (41/271 fragmented apoptotic nuclei, 57/271 pyknotic nuclei). In contrast, the same treatment produced no apoptotic nuclei in cells that harbor the homozygous S51A mutation in eIF2α (A/A) (Fig. 3A).
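For orientation, the 36% value quoted above is simply the combined fraction of fragmented and pyknotic nuclei among all nuclei scored; the one-line check below uses the counts given in the text.

```python
# Fraction of apoptotic nuclei in TNFα-treated wild-type MEFs, from the counts
# reported in the text (41 fragmented + 57 pyknotic out of 271 nuclei scored).
fragmented, pyknotic, total_scored = 41, 57, 271
print(f"{(fragmented + pyknotic) / total_scored:.0%}")   # ≈ 36%
```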
TNFα treatment increased caspase 3 activity 6-fold and 3-fold in wild-type eIF2α S/S MEFs and wild-type Pkr+/+ MEFs, respectively (Fig. 3B). In contrast, TNFα treatment did not markedly increase caspase 3 activity in the Pkr−/− and homozygous S51A eIF2α mutant MEFs (Fig. 3B). The activity measured was specifically caspase 3, as inclusion of the aldehyde noncleavable DEVD peptide fully blocked the activities measured in the lysates. Although the induced caspase 3 activity was primarily dependent upon TNFα, there was detectable caspase 3 induction upon treatment with Act D alone (supplemental Fig. S1). In every experiment using TNFα as an apoptotic stimulus, caspase 3 activity was significantly increased 2-5-fold in lysates from Pkr+/+ cells compared with Pkr−/− cells. However, the maximal level of activity in Pkr+/+ cells was reproducibly lower than that measured in lysates from TNFα-treated eIF2α S/S cells (Figs. 3B and 4, A-C). The difference may result from the different genetic backgrounds of the two strains of mice. The Pkr+/+ and Pkr−/− MEFs were derived from C57Bl/6J, and the eIF2α S/S and eIF2α A/A MEFs were derived from C57Bl/6J X 129/Sv. To verify the validity of the protective effect of the eIF2α mutation, two additional wild-type MEF isolates were derived from independent litters of the eIF2α mouse strain (S/S-2 and S/S-3) and analyzed. Both lines displayed remarkably similar degrees of caspase 3 activation upon treatment with TNFα compared with the original control MEFs (eIF2α S/S, S/S-1) (Fig. 3C). Additionally, an alternative preparation of homozygous eIF2α mutant MEFs (A/A-2) was also markedly impaired in caspase 3 activation in response to TNFα. To evaluate the requirement for the three additional eIF2α kinases in TNFα signaling of eIF2α phosphorylation and apoptosis, caspase 3 activity was analyzed in Perk−/− (29), Hri−/− (5), and Gcn2−/− (35) MEFs and their respective wild-type control MEFs. TNFα treatment significantly elevated caspase 3 activity in both the wild-type and all knock-out MEFs, indicating that the TNFα-dependent caspase 3 activity does not require the PERK, HRI, or GCN2 eIF2α kinases (Fig. 3D). We next explored the relationship between TNFα receptor signaling, PKR activation, and eIF2α phosphorylation. We tested whether TNFα treatment leads to PKR activation, eIF2α phosphorylation, and translational inhibition. Western blot analysis using a phosphopeptide-specific antibody detected an ~2.5-fold increase in activated PKR-Thr451-P in wild-type MEFs treated with TNFα for 2 or 4 h (Fig. 3E, lanes 3 and 4). These treatments did not alter the steady-state level of PKR (data not shown and supplemental Fig. S2). PKR-Thr451-P was not detected in Pkr−/− MEFs, indicating specificity of the antibody (Fig. 3E, lane 1). In addition, TNFα stimulation induced an ~2.5-fold increase in eIF2α-Ser51-P (Fig. 3E).

Figure legend (fragment): (A and B). Act D is not required for induction of caspase 3 activity under these conditions. In comparison to wild-type control MEFs, caspase 3 activation was significantly reduced in Pkr−/− and eIF2α A/A MEFs. ***, p < 0.001. C, proteasome inhibition protects from TNFα-induced apoptosis. Cells were treated the same as above (Fig. 3, A and B), except that lactacystin (LC) was present during the TNFα treatment. Lactacystin treatment decreased caspase 3 activation in wild-type MEFs treated with TNFα and Act D. ***, p < 0.001.
The TNF␣mediated increases in PKR-Thr 451 -P and eIF2␣-Ser 51 -P were similar to those observed upon treatment of MEFs with poly(rI-C), which is a very strong stimulus for PKR activation (Fig. 3E, lane 6). These increases in phospho-PKR and phospho-eIF2␣, were reproducibly detected in independent experiments (supplemental Fig. S2). When TNF␣ stimulation was performed in the presence of the phosphatase inhibitor okadaic acid, slightly larger increases in PKR phosphorylation and eIF2␣ phosphorylation were observed. In contrast, poly(rI-C), and TNF␣ did not increase eIF2␣ phosphorylation in Pkr Ϫ/Ϫ MEFs (Fig. 3F). These results demonstrate that TNF␣ signaling activates PKR and is required to elicit eIF2␣ phosphorylation under these conditions. Because TNF␣ induces PKR activation and eIF2␣ phosphorylation, we asked whether protein synthesis is inhibited upon TNF␣ treatment. After 4 h, TNF␣ inhibited protein synthesis to ϳ40% in wild-type MEFs, but had little effect in Pkr Ϫ/Ϫ MEFs (Fig. 3G). Treatment with okadaic acid alone or okadaic acid with TNF␣ reduced protein synthesis to ϳ25% in wild-type MEFs, consistent with the increased eIF2␣ phosphorylation observed in the presence of okadaic acid. In contrast, TNF␣ treatment only modestly reduced protein synthesis to ϳ90% in the Pkr Ϫ/Ϫ MEFs, consistent with the reduced level of eIF2␣ phosphorylation measured (Fig. 3F). Overall, the same conditions that elicit eIF2␣ phosphorylation measured by Western blot analysis also inhibit translation in a PKR-dependent manner. Therefore, TNF␣ treatment inhibits protein synthesis through the PKR-eIF2␣ pathway. TNF␣-induced Apoptosis Requires Protein Synthesis Inhibition-Phosphorylation of eIF2␣ inhibits protein synthesis at the level of initiation. Our results support the hypothesis that phosphorylation of eIF2␣ is required for apoptosis induced by poly(rI-C) and TNF␣. To test the requirement for protein synthesis in the TNF␣ apoptotic response, we measured the effect of protein synthesis elongation inhibition on caspase 3 activation induced by TNF␣. Increasing time of cycloheximide (CHX) treatment in the presence of TNF␣ very rapidly increased caspase 3 activation in the eIF2␣ wildtype and Pkr ϩ/ϩ MEFs (Fig. 4, A and B). In contrast, caspase 3 activation was significantly reduced in the eIF2␣ A/A and Pkr Ϫ/Ϫ mutant MEFs. However, in these mutant MEFs, 6 h of CHX treatment restored activation of caspase 3. Therefore, the protective effect of the S51A mutant eIF2␣ allele or PKR deletion could be partially reversed by general inhibition of protein synthesis. These results suggest that eIF2␣ phosphorylation may inhibit the translation of a short lived inhibitor of apoptosis, such as p53 (36) or inhibitors of caspase activation (IAPs). Most IAPs contain a C-terminal RING-Zinc finger domain that has ubiquitin ligase (E3) activity and is responsible for their rapid degradation mediated by the proteasome (37). Therefore, inhibition of proteasome activity to prevent p53 and/or IAP degradation may also protect cells from the caspase activation. Indeed, treatment with the proteasome inhibitor lactacystin did partially prevent caspase activation in the wild-type cells (Fig. 4C), as previously described (12). eIF2␣ Phosphorylation Is Not Required for TNF␣ Signaling and Activation of IKK-Under conditions of ultraviolet light or ER stress, eIF2␣ phosphorylation facilitates activation of NF-B by decreasing translation of IB (38,39). 
Because PKR is also known to activate IKK in response to dsRNA (40), we determined the requirement for eIF2α phosphorylation in this response. Treatment with poly(rI-C), dsRNA, TNFα, or IL-1 activated IKK to a similar degree in the wild-type and homozygous eIF2α mutant A/A MEFs (Fig. 5A). These results demonstrate that eIF2α phosphorylation is not required for IKK activation by these stimuli and are consistent with reports that the catalytic activity of PKR is not required to signal NF-κB activation and target gene activation (11, 40-42). In addition, transcriptional activation of a luciferase reporter gene under control of three NF-κB binding sites was not altered in the homozygous eIF2α A/A mutant MEFs (data not shown). These studies demonstrate that impaired receptor signaling is not the reason eIF2α mutant A/A MEFs are resistant to TNFα-mediated apoptosis. Previous studies suggested that TNFR1 mRNA is downregulated in cells that express a trans-dominant-negative PKR mutant (15). Indeed, mRNA analysis by real-time quantitative RT-PCR demonstrated that homozygous eIF2α A/A and Pkr−/− MEFs did express lower levels of TNFR1 mRNA (Fig. 5B). However, Western blot analysis of TNFR1 protein demonstrated similar levels of expressed TNFR1 protein, relative to the loading control eIF2α (Fig. 5C). In conclusion, these results demonstrate that although TNFR1 mRNA was reduced in the mutant MEFs, the levels of TNFR1 protein were not altered and that signaling from the TNFα receptor to NF-κB activation was functional in the mutant MEFs.

FIGURE 5. eIF2α phosphorylation is not required for PKR-mediated activation of IKK. Cells expressing wild-type or designated mutants of PKR or eIF2α were assayed for IKK activity after treatment with poly(rI-C), TNFα, or IL-1 (A). Real-time quantitative RT-PCR for TNFR1 mRNA (B) and Western blot analysis (C) for TNFR1 and eIF2α was performed using lysates from logarithmically growing cells as described under "Materials and Methods."

PKR-mediated eIF2α Phosphorylation Is Required for Serum Deprivation-induced Apoptosis-Because previous studies suggested that serum deprivation induces apoptosis through PKR-mediated phosphorylation of eIF2α (10), we analyzed the response to serum deprivation in the wild-type, Pkr−/−, and eIF2α A/A MEFs. Where serum deprivation induced eIF2α phosphorylation by greater than 5-fold in wild-type MEFs (Fig. 6A), significantly less eIF2α phosphorylation occurred (1.8-fold) in Pkr−/− MEFs. Serum deprivation activated caspase 3 by 5-7-fold in wild-type MEFs. In contrast, serum deprivation did not activate caspase 3 in the Pkr−/− or eIF2α A/A MEFs (Fig. 6, B and D). Therefore, PKR-mediated phosphorylation is required for apoptosis induced by a different stimulus in the absence of the transcriptional blockade with Act D.

We have studied the role of eIF2α phosphorylation by analysis of cells that express S51A or S51D mutants of eIF2α. The following results support the hypothesis that eIF2α phosphorylation is alone sufficient to activate apoptosis and, in addition, is required for the apoptotic response to PKR activation. First, transient overexpression of wild-type PKR or S51D mutant eIF2α induced caspase 3 activation (Fig. 1). Second, caspase 3 activation in cells that harbor a knock-in S51A mutation in eIF2α was significantly reduced in response to TNFα, poly(rI-C), as well as serum deprivation (Figs. 2 and 3).
Our studies directly demonstrate that TNF␣ activates PKR to phosphorylate eIF2␣ and inhibit translation. TNF␣-mediated apoptosis required eIF2␣ phosphorylation, whereas TNF␣-dependent activation of IKK did not require eIF2␣ phosphorylation. This is consistent with findings that demonstrate PKR signals activation of IKK in a manner that does not require PKR kinase activity (11, 40 -42). Furthermore, TNF␣-induced eIF2␣ phosphorylation was exclusively dependent on PKR, and not any of the other known eIF2␣ kinases. Finally, treatment with CHX partially restored caspase activation in the S51A eIF2␣ A/A mutant cells, suggesting that eIF2␣ phosphorylation mediates its apoptotic effects through translational inhibition (Fig. 4). These findings support the idea that apoptosis does not require new protein synthesis and that all the machinery required for cell death preexists in the cell. This finding is consistent with a requirement for continued protein synthesis to maintain a pool of short-lived anti-apoptotic factors, such as p53 or IAPs (Fig. 7) (36). The latter was also supported by the protective effect observed by proteasomal inhibition, conditions that should stabilize short-lived protective molecules. Our results indicate that eIF2␣ phosphorylation is necessary and sufficient for the PKR apoptotic response. These findings are in contrast to conclusions recently derived from observations using an inducible overexpression system to produce S51A and S51D mutants of eIF2␣ (56). These studies did not detect a complete reduction in protein synthesis that would be expected with S51D eIF2␣ expression (33). In addition, although cell number was significantly reduced at 24 h postinduction, apoptosis was not measured at this time. When analyzed at 6 days after induction of the S51D mutant eIF2␣, apoptosis was not detected. These results suggest that the robust apoptotic effect of eIF2␣ phosphorylation may be transient, with subsequent survival of a subpopulation of cells that activate adaptive mechanisms. In contrast, our apoptosis studies using transient DNA transfection of S51D mutant eIF2␣ in HeLa cells were performed at 24-h post-transfection, early after expression of S51D eIF2␣ commenced. Previous studies support the idea that apoptosis through PKR activation is rapid, occurring within 24 h (9, 57). Our studies are consistent with additional findings that support the conclusion that PKR-mediated phosphorylation of eIF2␣ promotes apoptosis. First, overexpression of S51A mutant eIF2␣ protected from vaccinia virus-, TNF␣-, and serum deprivation-induced apoptosis (10,12). In addition, macrophages from S51A eIF2␣ homozygous mutant mice were resistant to apoptosis induced by lipopolysaccharide treatment in the presence of p38 MAPK inhibition (9). This apoptotic response is mediated through toll-like receptor 4 and PKR. In contrast to the requirement for eIF2␣ phosphorylation for apoptosis mediated through PKR activation, S51A mutation in eIF2␣ or deletion of the eIF2␣ kinase PERK in MEFs dramatically increased sensitivity to agents that disrupt protein folding and produce stress in the ER (23,29). Therefore, it was surprising that the same S51A eIF2␣ A/A mutant MEFs were resistant to apoptotic stimuli that signal through PKR activation. It is unknown how two different stress stimuli that signal through phosphorylation at the same site in eIF2␣ result in opposing responses. We propose that upon ER stress, cells stimulate the death-inducing property of eIF2␣ phosphorylation. 
However, the outcome is survival because eIF2α phosphorylation decreases the protein-folding burden on the ER to relieve the stress. In addition, ER stress induces auxiliary pathways to reverse eIF2α phosphorylation so that the eIF2α phosphorylation is transient (58). Thus, the ER-stressed cell may benefit from acute reduction of biosynthetic load, while escaping apoptosis in the long term through eIF2α dephosphorylation. We hypothesize that the delicate balance between cell survival and death upon a stress stimulus is determined by the strength of the primary death-inducing stimulus and the input of auxiliary and compensatory pathways that are coordinately activated. Some of these secondary signals may assist or be required for death, while others may be protective. We propose that eIF2α phosphorylation is fundamentally a death-promoting signal. Our data show that TNFα-induced eIF2α phosphorylation inhibits translation and possibly mediates apoptosis by inhibiting the synthesis of anti-apoptotic cellular factors, such as IAPs (Fig. 7). Under these conditions, the primary death signal to elicit caspase activation is increased. In addition, eIF2α phosphorylation also thwarts the adaptive transcriptional response through translational inhibition to prevent synthesis of protective factors.

FIGURE 7. Phosphorylation of eIF2α contributes toward dsRNA-, TNFα-, and serum deprivation-induced apoptosis. TNFR1 activation initiates apoptosis through caspase 8, leading to caspase 3 activation. TNFR1 occupancy also activates PKR, leading to phosphorylation of eIF2α. Translation of anti-apoptotic factors is inhibited to promote apoptosis. Inhibition of protein synthesis by CHX can complement the requirement for eIF2α phosphorylation to promote apoptosis in TNFα-treated cells. Lactacystin inhibits apoptosis by preventing degradation of anti-apoptotic factors. In parallel, PKR mediates activation of IKK in a manner that does not require kinase catalytic activity.

Given the observations of the importance of PKR and eIF2α phosphorylation in apoptosis, it was interesting and curious that homozygous mutation of S51A in eIF2α, Pkr deletion, or expression of a trans-dominant-negative mutant PKR did not have an obvious developmental phenotype in the mouse (23, 28, 59, 60). These findings would support the idea that eIF2α phosphorylation is not an essential apoptotic signal in mammalian embryonic development, a process where apoptosis plays a central role. This is in contrast to the dramatic phenotypes observed in mice harboring deletions in essential caspase genes or key modulators of apoptosis (61, 62). However, it is consistent with the absence of embryonic lethality in mice lacking the known TNFα receptor family members (63). Thus, death receptor signaling has a lesser role in development than essential apoptosis effectors. Although TNFα receptor signaling is not required for embryonic apoptosis, there are a number of circumstances where TNFα-induced apoptosis is physiologically important, including lipopolysaccharide-mediated apoptosis in the liver (64), hepatotoxicant-induced apoptosis (65), suppression of acute HSV-1 viral infection (66), limitation of T cell number during chronic LCMV viral infection (67), infarction-induced myocardial rupture and ventricular dysfunction (68), and death of malformed embryos (69). Adenovirus delivery of TNFα induced apoptosis in esophageal cancer cells in a manner that required PKR (70), suggesting the utility of this approach to promote apoptosis in transformed cells.
The delineation of TNF␣ signaling to eIF2␣ phosphorylation and apoptosis established by our studies suggests eIF2␣ phosphorylation may be an important death signal in these physiologically important apoptotic events. Conditional homozygous eIF2␣ A/A mice with tissue specific expression will provide important tools for future studies on TNF␣-induced apoptosis. Regulation of eIF2␣ phosphorylation could provide an attractive target for therapeutic intervention. Agents that inhibit eIF2␣ phosphorylation could promote survival under desired conditions, for example to inhibit macrophage apoptosis upon viral infection (9) or prevent ischemic cell injury (71). Alternatively, direct targeting of therapeutic agents to induce eIF2␣ phosphorylation may accentuate apoptosis of virus-infected cells (72). Future analysis of potential therapeutics to increase eIF2␣ phosphorylation likely will lead to death promoting applications in anti-tumor or anti-microbial targeted therapeutics as well as to protective functions in ER stressrelated disease.
8,050
2006-07-28T00:00:00.000
[ "Biology", "Chemistry" ]
Prediction on X-ray output of free electron laser based on artificial neural networks Knowledge of x-ray free electron lasers’ (XFELs) pulse characteristics delivered to a sample is crucial for ensuring high-quality x-rays for scientific experiments. XFELs’ self-amplified spontaneous emission process causes spatial and spectral variations in x-ray pulses entering a sample, which leads to measurement uncertainties for experiments relying on multiple XFEL pulses. Accurate in-situ measurements of x-ray wavefront and energy spectrum incident upon a sample poses challenges. Here we address this by developing a virtual diagnostics framework using an artificial neural network (ANN) to predict x-ray photon beam properties from electron beam properties. We recorded XFEL electron parameters while adjusting the accelerator’s configurations and measured the resulting x-ray wavefront and energy spectrum shot-to-shot. Training the ANN with this data enables effective prediction of single-shot or average x-ray beam output based on XFEL undulator and electron parameters. This demonstrates the potential of utilizing ANNs for virtual diagnostics linking XFEL electron and photon beam properties. Recent advances in X-ray free-electron lasers (XFELs) [1][2][3][4][5][6] at world-wide facilities such as SLAC 7 , SACLA 8 , PAL-XFEL 9 , SwissFEL 10 , and the European XFEL 11 have demonstrated innovative capabilities and operational configurations that are expected to greatly impact a wide range of proposed science experiments 12 .Tunable devices such as variable gap undulators and phase shifters have been integrated into the XFEL to tailor and control the electron beam 13 , opening up fresh opportunities for science experiments.However, as the number of electron beam control parameters increases, so does the complexity of accelerator optimization and tuning.This, along with the shot-to-shot variations from the self-amplified spontaneous emission (SASE) process of XFELs, make it essential to understand the relationship between the electron beam parameters and the actual X-ray beam properties delivered to a sample. To understand this relationship, several options are possible.First, the wavefront and spectrum of the XFEL pulse can be determined computationally, though this is a challenging task due to the complexity of the underlying physics, discrepancies between real-world and computational models, and the multitude of variables and parameters involved especially with the more recent generation XFELs.Second, real-time nondestructive measurements of the energy spectral and spatial wavefront properties of the XFEL pulse delivered to a sample could also be implemented.One method to do this involves splitting the X-ray pulse into reference and experimental beams using a beam splitter and taking measurements on both beams from shot to shot.This, however, can increase experimental complexity, require additional instrumentation, which may not be feasible depending on the physical constraints of the experimental setups, and reduces photon flux.In addition, accuracy would be highly determined by the quality and performance of the X-ray beam splitter optic. 
To overcome these challenges, we develop a virtual diagnostics model based on artificial neural networks (ANNs) and shot-to-shot measurement data of both electron and X-ray beam parameters.ANNs are powerful tools for modeling complex nonlinear relationships, and exploration of their utility to overcome the limitations of conventional methods for accelerator optimization, tuning, and modeling is underway [14][15][16][17] .The majority of machine learning models for XFELs have primarily focused only on the electron beam for tasks such as accelerator and undulator tuning and optimization 18,19 , with one study incorporating X-ray spectrometer data 20 .These studies were made possible due to the single-shot diagnostics of the electron beam implemented in the XFEL.With the recent development of highaccuracy single-shot X-ray wavefront sensors for both soft and hard X-rays at XFELs [21][22][23] and the development of single-shot soft X-ray spectrometers based on off-axis zone plates for spectral measurements 24,25 , X-ray properties can now be characterized routinely.These diagnostic tools enable us to measure the spatial amplitude and phase, as well as the spectral qualities of the X-ray beam and allow us to further combine the X-ray diagnostics data with that of the electron beam diagnostics data into a model based on ANNs. In the following set of experiments, we modulate the electron beam parameters via different accelerator operational configurations in the XFEL, including both that of routine operations with full normal electron beams and exploration of the effect of detuning, tapering, and kicking of slotted electron beams, record the electron and X-ray beam parameters on a single-shot basis, and then train an ANN-based model using the data.Detuning plays a critical role in determining XFEL modes through the dispersion relation equation 2,3,[26][27][28][29] and can excite high-order modes 30 .In conjunction with tapering of the undulators, amplification of these high-order modes are expected.Both the routine case and the specialized cases of detuning, tapering, and kicking were chosen to demonstrate and understand the utility and limitations of the ANN-based virtual diagnostics model. Results These experiments were conducted at the Time-resolved atomic, Molecular and Optical Science (TMO) instrument 31 at LCLS as illustrated in Fig. 1a.LCLS was operated in self-amplified spontaneous emission (SASE) mode, producing ~530 eV X-rays at a repetition rate of 120 Hz.Data from a total of 13 XFEL configurations were recorded, 12 different configurations using the slotted electron beam, and 1 configuration representing routine operations using the normal full SASE beam.In the 12 different configurations of the slotted electron beam, an energy chirp along the electron bunch was introduced for detuning and the taper and kicking parameters were varied.A slotted foil was used to create a short, coherent spike in the electron bunch by spoiling the majority of it when incident upon the foil, leaving an ultrashort unspoiled portion through the slot in the foil 32 , as shown in Fig. 1a.The unspoiled portion then produces an ultrashort XFEL pulse through lasing.The undulator sections were set to two different states: no taper and optimal taper 33 , and for each of these states, the electron bunch was kicked at various locations in the undulator, n sections before the final section, with n = 0, 1, 3, 5, 7, and 9 where 0 indicates no kicking, as illustrated in Fig. 
1b.This resulted in a total of 12 different configurations.For each configuration, we recorded the single-shot wavefront intensity, phase, and spectrum, as well as electron parameters from the undulators (spectrum and wavefront were measured separately for the same 12 configurations).In addition to the slotted electron beam, the full SASE beam in routine operation was used to study shot-to-shot wavefront phase variations, with similar recordings of wavefront phase and electron bunch parameters.The X-ray wavefront was measured using a Talbot wavefront sensor, and the spectrum was recorded using an off-axis zone plate on a yttrium aluminum garnet (YAG) screen, shown in Fig. 1c.See the "Methods" section for further details on the XFEL configurations and data acquisition. In Fig. 2, we present the average spectra, wavefront intensity, and phase for each configuration, including variations with and without taper and kicking at different points along the undulator.The results show that different configurations result in distinct spectra and wavefronts.For instance, the spectra from taper configurations exhibit a higher energy tail and reduced low energy components compared to that of the no taper cases.The intensity also increases as the electrons are kicked further downstream.The differences among the twelve phase maps indicate the wavefront's evolution with different kick locations and taper settings. Indeed, the experiments revealed interesting XFEL physics when certain parameters of the electron bunch and the undulators are varied.In the no-taper case, due to the fact that the electrons are continuously losing energy, the radiation spectrum is skewed toward the red-shift side.For the taper case, the taper was over-tapered to introduce a detuning to set the resonant frequency in the blue-shift side compared to the radiation frequency in the exponential growth region, i.e., before the tapered region.Thus, the microbunching will now support high-order modes according to the dispersion relation discussed below in the "Methods" Section.In our case, the donut mode is excited, as shown in the intensity plot in Fig. 2, while the spectrum shows spectral tails at high energy, as seen in the normalized spectrum plot in Fig. 2. We conducted an investigation into the correlations between X-ray properties and electron parameters by computing Pearson correlation coefficients between recorded electron beam parameters and our X-ray measurements (e.g., Zernike coefficients for wavefront phase).As shown in Fig. 1d, we created a correlation matrix to demonstrate the relationship between electron beam parameters and Zernike coefficients.The correlation matrix highlights that electron parameters exhibit intricate correlations with the resulting X-ray wavefront.These relationships are often implicit yet complex, involving a multitude of parameters that become challenging to depict and solve through conventional methods. ANNs can solve real-world problems, such as regression or classification, by receiving inputs, performing complex calculations, and providing outputs.To map both X-ray and electron properties, we employed a conventional multilayer perceptron (MLP) model to predict X-ray outputs based on electron parameter readings.The MLP we used in this paper is depicted in Fig. 
1e and is comprised of an input layer, multiple hidden layers, and an output layer.The inputs are electron parameters and the outputs are X-ray properties like wavefront or spectrum.Electron parameters consist of readings from bunch length monitors, beam position monitors at various sections, and electron attributes such as position, peak current, bunch charge, coordinates, pulse energy, etc.The X-ray wavefront phase is represented as Zernike coefficients obtained by decomposing the phase into Zernike polynomials.The X-ray beam spectrum is represented as 50 numbers obtained through binning.See the "Methods" section for further details on model training. We demonstrate the effectiveness of our trained models by presenting predictions for (1) different configurations from different runs with slotted electron beam varying kicking locations and taper states, and (2) shot-to-shot variation within a single run with a full electron beam.Predictions are all single shots, and the averages are calculated based on the predicted single shots.These predictions are discussed in the following subsections. Analysis of predictions from the slotted electron beam configurations In Fig. 3, we present a comparison between the measured and predicted average wavefront phase in Zernike coefficients for various configurations.The measurements and predictions are nearly identical, with only minor phase differences observed.The root-meansquare (RMS) prediction error for the average wavefront phases was determined to be 0.0169 rad.Furthermore, the standard deviation of wavefront phase from case to case was found to be 0.236 rad.Based on these values, the estimated relative error for predicting average caseto-case fluctuations is ~7%.Refer to the "Methods" section for further information on the prediction error and accuracy evaluation.The model accurately captured the differences and changes in wavefront phase caused by varying electron parameters and accurately predicted the resulting X-ray wavefront phase.With the single-shot measurements of a comprehensive collection of electron parameters, we can determine the X-ray beam wavefront phase delivered to the end station. Similarly, in Fig. 
4, we compare the measured and predicted average spectra for various configurations. There is very little difference between the two. The good agreement observed in the figures is due to the fact that they represent comparisons of the averages. The model effectively captured the differences and changes in the X-ray spectrum caused by varying electron parameters and accurately predicted the resulting X-ray spectra. For instance, kicking at a more upstream location results in more symmetrical spectrum curves, and taper leads to spectral tails at high energy, while no taper results in low energy components in the spectra. The mean similarity between the predicted and measured spectra is 0.999 for average spectra and 0.924 for single-shot spectra. Refer to the "Methods" section for further information on the prediction error and accuracy evaluation. With the single-shot measurements of a comprehensive collection of electron parameters, we can determine the overall spectrum of the X-ray beam delivered to the end station. The spectral resolution relies on the measurements obtained from the zone plate spectrometer, as detailed in the "Methods" section on data acquisition. It is worth mentioning that the spikiness observed in a single-shot spectrum is a random occurrence and cannot be predicted due to the stochastic nature of XFEL startup and the inability to make measurements at the single-electron level. However, what holds significance is the envelope of the single-shot spectrum, as it provides information about the central frequency, bandwidth, and spectral tails at high energy for tapered cases and the tails at low energy for no taper cases. These distinctive features are illustrated in Fig. 4.

Fig. 1 caption (fragment): ...X-ray to TMO endstation 31, where the X-ray diagnostics were located downstream of the instrument's Kirkpatrick-Baez (KB) focusing mirrors. b The electron bunch was kicked at various locations in the undulator, with n sections before the final section, where n = 0, 1, 3, 5, 7, and 9, with 0 indicating no kicking. c Single-shot X-ray wavefronts were measured using a Talbot wavefront sensor 21-23. The single-shot X-ray spectra were measured using an off-axis X-ray zone plate spectrometer 24,25. d From the recorded single-shot electron and X-ray data, a heat map displaying the Pearson correlation coefficients is produced, highlighting the relationship between the electron parameter inputs and the X-ray wavefront parameter outputs (Zernike coefficients of the X-ray wavefront phase). Each cell represents a correlation coefficient, with red indicative of a positive correlation and blue of a negative one. The heat map shown is a subset of the data due to the large number recorded. e The electron and X-ray data were then used to train an artificial neural network (ANN). An illustrative diagram is shown representing a multilayer perceptron (MLP) model. The architecture consists of an input layer, several hidden layers, and an output layer, with the inputs being electron parameters and the outputs being X-ray beam properties. The diagram does not reflect the actual numbers of layers or nodes. The "Methods" section describes all parameters used.
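The prediction task described in this section (electron diagnostics in, Zernike coefficients or a 50-bin spectrum out) is a standard multi-output regression problem. The sketch below shows one way such a model could be set up with a scikit-learn MLP and evaluated with a cosine-style spectral similarity; the synthetic data, layer sizes, scaling, and the particular similarity metric are assumptions, not the authors' architecture, training procedure, or evaluation code.

```python
# Illustrative MLP virtual-diagnostics sketch (synthetic stand-in data):
# electron-beam parameters -> 50-bin X-ray spectrum. Layer sizes, scaling and
# the similarity metric are assumptions, not the paper's exact configuration.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n_shots, n_electron_params, n_bins = 2000, 40, 50

E = rng.normal(size=(n_shots, n_electron_params))             # electron parameters per shot
W = rng.normal(size=(n_electron_params, n_bins))
S = np.abs(E @ W) + 0.1 * rng.normal(size=(n_shots, n_bins))   # surrogate spectra

E_train, E_test, S_train, S_test = train_test_split(E, S, test_size=0.2, random_state=0)

scaler = StandardScaler().fit(E_train)
model = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=1000, random_state=0)
model.fit(scaler.transform(E_train), S_train)

S_pred = model.predict(scaler.transform(E_test))

# One possible "similarity" measure: cosine similarity per shot, then averaged.
num = np.sum(S_pred * S_test, axis=1)
den = np.linalg.norm(S_pred, axis=1) * np.linalg.norm(S_test, axis=1)
print("mean spectral similarity:", np.mean(num / den))
```

The same scaffolding applies to the wavefront case by swapping the 50 spectral bins for the vector of Zernike coefficients used to represent the phase.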
Furthermore, we also built and trained neural network models to perform classification tasks. We used either the wavefront phase Zernike coefficients or the electron parameters to predict the operation configuration from among the twelve options. The prediction accuracy is remarkable, reaching 99% when given the electron parameters and 87% when given the wavefront phase Zernike coefficients at the single-shot level. Shot-to-shot variations We utilized a similar technique to predict shot-to-shot variations in the single-shot X-ray wavefront phase within a single run using full SASE beams. Specifically, we employed a neural network to map electron parameter readings from the undulators to the measured single-shot X-ray wavefront phase. The results, depicted in Fig. 5a, illustrate the standard deviations of (1) the measured wavefront phase, (2) the predicted wavefront phase, and (3) the RMS prediction errors of the wavefront phase over all shots in the test dataset. The measured and predicted wavefront phases exhibit similar shot-to-shot variations, as evidenced by their comparable standard deviations for each Zernike term, particularly the two primary Zernike terms that contribute the most to shot-to-shot variations. The decrease in the variation of the difference between the measured and predicted wavefront phases indicates that the model has learned the systematic component of the variation, reducing the difference to a level below shot-to-shot variations; the remaining variation is likely due to shot-to-shot noise. Based on the single-shot wavefront phase data, the RMS prediction error between the predicted and measured wavefront phase is determined to be 0.141 rad. Additionally, the standard deviation of the wavefront phase from shot to shot is calculated to be 0.269 rad. Consequently, the estimated relative error for predicting shot-to-shot fluctuations is ~52%. Refer to the "Methods" section on prediction error and accuracy evaluation for further details. Figure 5b presents the measurement and prediction results from the test dataset, based on Zernike coefficients (Z3-Z8) versus an example electron beam parameter (electron x coordinate from a beam position monitor). It is worth noting that while a single electron parameter is depicted against Zernike coefficients in this figure, these coefficients are multivariate and rely on the complete set of electron parameters. The figure demonstrates how Zernike coefficients change as electron beam parameters vary and how the model's predictions compare to the measured data. Figure 5b indicates that the model has captured the correlations between Zernike coefficients and that selected single electron parameter, as well as the variation or dispersion among shots that arises from other electron parameters. The slight reduction in variation or dispersion from the prediction in Fig. 5b and the difference between measured and predicted wavefront phase in Fig.
5a may both be indications of noise sources (either systematic or measurement noise) that were not learned by the model. Single-shot prediction is vital for XFEL X-ray imaging that relies on wavefront phase, as well as for any other experiment that depends on the X-ray intensity or spectrum on the sample. This capability enables us to determine the wavefront phase in cases where direct, single-shot, in-situ wavefront measurements are not feasible, particularly for the exact pulse being used for single-shot imaging, given the shot-to-shot variations of XFEL pulses. Although using a grating to split XFEL X-ray beams and measure the wavefront phase and spectrum to determine the X-ray delivered to experiments is possible, it would significantly increase the complexity of the experimental setup, consume more time and space, and result in a loss of photon flux. Discussion Our recent experiments at LCLS have confirmed that ANN models can be trained on experimental data to accurately predict XFEL pulse properties such as wavefront and spectra using electron bunch parameters as inputs. The study aims to emphasize the valuable insights provided by electron diagnostics in predicting X-ray output. While acknowledging the complexity of XFEL physics, the study demonstrates the efficacy of the MLP model in capturing the nonlinear relationships between electron parameters and X-ray characteristics. This capability will simplify virtual diagnostics for single-shot X-ray pulses and facilitate electron diagnostics, optimization, and tuning to achieve optimal or desired X-ray output. Optimal performance in ANN training and tuning necessitates a large dataset encompassing a diverse sample space. In this work, we utilized readily available shot-to-shot recorded electron beam parameters while measuring the XFEL beam, without investing additional effort in obtaining innovative electron measurements. However, to explore further avenues for improvement, it is worth considering the introduction of additional parameters that provide a more comprehensive and in-depth characterization of the electron information. By incorporating such parameters, the method presented here has the potential to improve the model's robustness, reliability, and overall performance. For instance, a Convolutional Neural Network (CNN) can serve as a subnet for processing 2D electron parameters, specifically electron time-energy distribution images obtained from the X-band Transverse CAVity (XTCAV) diagnostic system 34 . By leveraging its ability to recognize learned patterns in these 2D inputs, the CNN can effectively extract relevant features. Moreover, to capture temporal pulse-to-pulse correlations, alternative models such as recurrent neural networks or transformers can be employed. These sequential models excel at extracting features related to the contextual information within the pulses, thereby providing a more comprehensive understanding of the data.
Similarly, further improvements can be made in the area of X-ray diagnostics. Improvements in the performance of existing diagnostic tools, as well as the introduction of additional measurement capabilities in the future, for example the ability to measure the temporal characteristics of the X-ray beam, can improve the overall performance of this type of model. Incorporating performance models of the various instrument optics, together with their optomechanical or other tuning parameters specific to each instrument, would allow the integration of information from any optics-induced characteristics or fluctuations in the beam prior to interaction with the sample. This can lead to higher-fidelity predictive capability in the model as well as improved overall tuning of the accelerator and optics systems for an experiment. The slotted electron beam configurations To understand the mechanism behind the presence of high-order modes in XFEL pulses, we intentionally generate short electron bunches that resemble a single coherent spike. If we used a long electron bunch, it would result in many (order of 100) coherent spikes 4 , which would correspond to different transverse eigenmodes in the post-saturation regime of the XFEL. Observing these different modes becomes difficult when many spikes interfere with each other as they hit the wavefront sensor. We utilized a slotted foil to spoil the majority of the electron bunch, leaving only a small, ultrashort portion 32 , as shown in Fig. 1a. This ultrashort, unspoiled portion lases and generates an ultrashort XFEL pulse, which allows us to manipulate the electron bunch properties and undulator configuration to excite different high-order eigenmodes. Additionally, to effectively excite high-order modes, we perturb the electron orbit in the undulator by kicking it at specific locations, shown in Fig. 1b. The kicking occurs at n sections before the final undulator section with n = 0, 1, 3, 5, 7, and 9, where 0 means no kicking. In the high-gain XFEL, the slowly varying envelope function of the electric field has the form: where the dimensionless variables measuring spatial and temporal variations are: with r being the transverse coordinates, z the longitudinal coordinate, t the time, v_0 the electron bunch longitudinal velocity, ω_w = k_w c = (2π/λ_w)c with λ_w being the undulator period, c being the speed of light in vacuum, and k_r = k_0 + k_w = 2π/λ_0 + k_w with λ_0 being the radiation wavelength. The eigenfrequencies Ω = Ω_n(q_∥) and the eigenfunctions ψ = ψ_n(q_∥, x) are determined by the dispersion relation 26 : where α = n_0 μ_0 e^4 A_w^2 / (2 m^3 γ_0^3 ω_w^2), with n_0 being the peak density of the electron bunch, γ_0, e, and m being the Lorentz factor, the charge, and the mass of the electron, respectively, μ_0 being the vacuum permeability, and A_w being the vector potential of the undulator.
It is now clear that to excite high-order eigenmodes ψ_n(q_∥), the system should be detuned to support that particular eigenfrequency Ω_n(q_∥). In our experiment, we therefore introduced an energy chirp along the electron bunch to efficiently excite high-order modes. Besides introducing an energy chirp along the electron bunch for detuning, we can also adjust the taper of the undulator: since the XFEL wavelength is λ_FEL = λ_w (1 + K^2/2) / (2 γ_0^2), tapering the undulator strength K will directly detune λ_FEL. In the experiment, we study the evolution of high-order modes by setting the undulator sections in two states: no taper and optimal taper 33 . On top of these states, the undulator can be over-tapered to introduce the proper effective detuning for efficient excitation, guiding, and amplification of high-order eigenmodes. Data acquisition and preparation The single-shot data were recorded as two distinct datasets: one for the X-ray beam wavefront/spectra on the photon side and another for the electron parameter readings from the undulators on the accelerator side. Both datasets recorded the single-shot pulse energies, which were used to synchronize the two datasets on a single-shot basis, thus ensuring that the X-ray and electron data are aligned for each individual shot. The X-ray data include the X-ray wavefront and spectrum. Highly accurate wavefront measurements were conducted using a Talbot wavefront sensor 35 , which has recently been successfully demonstrated with XFEL radiation [21][22][23] . The number of Zernike terms required for an accurate representation of a wavefront phase depends on the complexity of the wavefront and the desired level of accuracy. For most XFEL experiments, the most important photon beam characteristics are the focused beam position and profile, which typically fluctuate shot to shot in current-generation XFELs due to the SASE nature of lasing. Low-order Zernike terms (up to Z15-Z21) can effectively capture the aberrations associated with those fluctuations, allowing a reasonably accurate determination of the beam features. In our specific case, considering both the absolute values and standard deviations of the higher-order Zernike coefficients to be very small compared to the dominant terms, we retrieved the wavefront phase and decomposed it into 21 Zernike coefficients (Z0-Z20) following the OSA/ANSI convention. By utilizing these Zernike coefficients, we were able to represent and characterize the wavefront phase. Each coefficient corresponds to a specific property of the wavefront, such as oblique and vertical astigmatism (Z3, Z5), defocus (Z4), trefoil (Z6, Z9), and coma (Z7, Z8). Decomposing the wavefront into Zernike polynomials serves as a featurization step, converting diverse forms of data into numerical representations suitable for basic machine learning algorithms. For spectral measurements, we utilized an off-axis zone plate and captured the spectra on a YAG screen using a CCD camera. The spectrometer demonstrated a sub-eV spectral resolution (0.5-0.7 eV) in the vicinity of 530 eV. On the CCD, the pixel-to-eV ratio was 29 pixels per eV around 530 eV. To facilitate training and prediction, the resulting spectrum was subsequently binned into 50 values at a 6:1 ratio (equivalent to 0.2 eV per value after binning), offering a comprehensive representation of the overall spectrum shape.
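To make the binning step concrete, the following is a minimal sketch, assuming the raw spectrum arrives as a 1-D array of CCD pixel intensities along the dispersion axis; the 6:1 averaging and the 50-value output follow the numbers quoted above, while the array names and the synthetic example spectrum are illustrative only.

```python
import numpy as np

def bin_spectrum(pixels, n_bins=50, ratio=6):
    """Reduce a raw 1-D spectrum (CCD pixel intensities) to n_bins values.

    Each output value is the mean of `ratio` adjacent pixels (6:1 binning,
    roughly 0.2 eV per bin at ~29 pixels/eV), and the result is normalized
    so that only the spectral shape is retained.
    """
    roi = np.asarray(pixels, dtype=float)[: n_bins * ratio]  # region of interest on the screen
    binned = roi.reshape(n_bins, ratio).mean(axis=1)         # 6:1 average binning
    return binned / binned.max()                             # keep shape only, drop intensity

# Example: a synthetic 300-pixel spectrum becomes a 50-value training target.
raw = np.exp(-0.5 * ((np.arange(300) - 150) / 20.0) ** 2)
target = bin_spectrum(raw)
print(target.shape)  # (50,)
```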
We did not intentionally choose specific electron parameters and attributes; instead, we utilized all the directly accessible single-shot parameters. The model relied on a total of 192 parameters to generate the X-ray output. These electron parameters encompass readings from a range of sources such as bunch length monitors and beam position monitors at different sections (undulator soft line, linac-to-undulator soft line, electron dump soft line) and include electron beam positions (x and y coordinates), bunch charges, peak current, raw waveform, X-ray pulse energy, etc. Model training To prepare the data for model training, we initially screened the pulse energy data to eliminate outliers by removing shots that were exceptionally weak or empty. In order to capture the intricate relationship between the electron beam parameters as input and the X-ray output, we employed an MLP model. The MLP functions as a black box, taking the electron input and generating predictions for the corresponding X-ray output. Its focus is on establishing a nonlinear mapping rather than simulating the complex physics of XFEL systems. The architecture of the MLP comprises several layers, including an input layer, three hidden layers with 256, 128, and 64 nodes respectively, and an output layer. The number of nodes in the input layer corresponds to the 192 electron beam parameters, while the output layer consists of either 18 nodes for the wavefront phase or 50 nodes for the spectrum. The electron parameters, which encompass parameters of the electron bunch and the undulators, serve as the input for the neural network. Prior to training, these parameters are normalized to enhance performance. The output of the network is either the wavefront phase, represented by Zernike coefficients, or the normalized spectrum numbers. To ensure that the model accurately captures the nonlinear relationship and maintains generalization capability, we carefully select hyperparameters to prevent both underfitting and overfitting. The MLP utilizes the hyperbolic tangent (tanh) activation function, which allows for output normalization within the range of (−1, 1), effectively capturing both positive and negative influences from the input data. For training the model, we employ the mean squared error (MSE) as the loss function, along with dropout regularization (rate of 0.1) to prevent overfitting. An Adam optimizer and a batch size of 256 are utilized during the training process. We trained the model using 80% of approximately 10,000 total shots, while the remaining 20% was reserved for evaluating its predictive capabilities. To ensure the reliability of the model, we performed 5-fold cross-validation. This process involved dividing the data into 5 subsets and conducting training and evaluation on different combinations of these subsets. The consistently minimal errors observed during cross-validation indicated that the model was not prone to overfitting or selection bias. Prediction error and accuracy evaluation When we have two 2D wavefront phase maps, the RMS difference between these wavefronts can be computed as the square root of the mean of the squared pointwise phase difference over the aperture.
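The sketch below illustrates the training setup described above. The layer sizes (256, 128, 64), tanh activation, dropout rate of 0.1, MSE loss, Adam optimizer, batch size of 256, and 80/20 split follow the text; the choice of PyTorch, the learning rate, the number of epochs, and the random stand-in arrays are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Illustrative stand-ins: X holds 192 (already normalized) electron parameters per shot,
# Y holds 18 Zernike coefficients per shot (use 50 columns instead for the spectrum model).
X = torch.randn(10_000, 192)
Y = torch.randn(10_000, 18)

model = nn.Sequential(                      # 192 -> 256 -> 128 -> 64 -> 18, tanh + dropout(0.1)
    nn.Linear(192, 256), nn.Tanh(), nn.Dropout(0.1),
    nn.Linear(256, 128), nn.Tanh(), nn.Dropout(0.1),
    nn.Linear(128, 64),  nn.Tanh(), nn.Dropout(0.1),
    nn.Linear(64, 18),
)

n_train = int(0.8 * len(X))                 # 80% train / 20% held-out test split
train = DataLoader(TensorDataset(X[:n_train], Y[:n_train]), batch_size=256, shuffle=True)
loss_fn = nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # learning rate is an assumption

for epoch in range(100):                    # epoch count is an assumption
    for xb, yb in train:
        opt.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        opt.step()
```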
Here, ΔX represents the phase difference within the circular aperture. Alternatively, this difference can be expressed in terms of Zernike coefficients as ||ΔZ||_2 = sqrt(Σ_j ΔZ_j^2), where ΔZ_j signifies the discrepancy in each Zernike coefficient. This formula is used to calculate the RMS error between the measured wavefront and the predicted wavefront. Additionally, we can assess the shot-to-shot or case-to-case variations by taking ΔZ_j to be the standard deviation of each Zernike coefficient. By dividing the RMS prediction error by the standard deviation of the wavefronts, the wavefront prediction error can be evaluated as a relative error. To evaluate the accuracy of spectrum shape prediction, we measure the similarity between the predicted and measured spectra using the cosine similarity formula S_C(A, B) = A·B / (||A||_2 ||B||_2). This calculation allows us to quantify the level of resemblance between the predicted and measured spectra, providing a metric for assessing the accuracy of the prediction.
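A minimal sketch of the two evaluation metrics defined above, using illustrative random arrays in place of real measured and predicted Zernike coefficients and spectra; the helper names are arbitrary.

```python
import numpy as np

def wavefront_rms(dz):
    """RMS wavefront difference from per-term Zernike discrepancies: ||dZ||_2 = sqrt(sum_j dZ_j^2)."""
    return np.sqrt(np.sum(np.square(dz)))

def cosine_similarity(a, b):
    """Cosine similarity between two spectra: S_C(A, B) = A.B / (||A||_2 ||B||_2)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in data: 18 Zernike coefficients and 50-bin spectra per shot.
z_meas, z_pred = np.random.rand(18), np.random.rand(18)
sigma_z = np.random.rand(18) * 0.1            # per-term standard deviation across shots or cases

rms_error = wavefront_rms(z_pred - z_meas)    # RMS prediction error for one shot
relative_error = rms_error / wavefront_rms(sigma_z)  # relative error, cf. ~7% (case) and ~52% (shot)

s_meas, s_pred = np.random.rand(50), np.random.rand(50)
print(rms_error, relative_error, cosine_similarity(s_pred, s_meas))
```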
Fig. 1 | Overview of experimental setup and data analysis. a A schematic of the experiment in the detuning configuration with undulators and a slotted foil to produce short electron bunches. The X-ray pulses were delivered to the Time-resolved atomic, Molecular and Optical Science (TMO) instrument 31 , where the X-ray diagnostics were located downstream of the instrument's Kirkpatrick-Baez (KB) focusing mirrors. b The electron bunch was kicked at various locations in the undulator, with n sections before the final section, where n = 0, 1, 3, 5, 7, and 9, with 0 indicating no kicking. c Single-shot X-ray wavefronts were measured using a Talbot wavefront sensor [21][22][23] . The single-shot X-ray spectra were measured using an off-axis X-ray zone plate spectrometer 24,25 . d From the recorded single-shot electron and X-ray data, a heat map displaying the Pearson correlation coefficients is produced, highlighting the relationship between the electron parameter inputs and the X-ray wavefront parameter outputs (Zernike coefficients of the X-ray wavefront phase). Each cell represents a correlation coefficient, with red indicative of a positive correlation and blue of a negative one. The heat map shown is a subset of the data due to the large number recorded. e The electron and X-ray data were then used to train an artificial neural network (ANN). An illustrative diagram is shown representing a multilayer perceptron (MLP) model. The architecture consists of an input layer, several hidden layers, and an output layer, with the inputs being electron parameters and the outputs being X-ray beam properties. The diagram does not reflect the actual numbers of layers or nodes. The "Methods" section describes all parameters used.
Fig. 2 | X-ray spectra and wavefronts for different kicking locations and tapers. The X-ray wavefront and spectrum were measured for various operating configurations, and each subplot displays the average measurement result of a particular case. The six columns correspond to six different kicking positions along the undulator, with n = 0, 1, 3, 5, 7, and 9 sections before the final undulator section, where 0 indicates no kicking. The six rows are grouped into three categories, displaying the normalized X-ray spectra, wavefront intensities, and phases, respectively. Within each category, the two rows show the results with and without undulator taper, respectively. The color bar at the bottom is only applicable to the phase maps. The wavefront phase plots have had the defocus and astigmatism terms removed and used the no taper, no kick case as the reference to enhance the illustration of high-order phase differences. The differences in spectrum, wavefront intensity, and phase among cases are evident. Please note that all plots are displayed in pixel units and are not calibrated to energy or length units due to experimental limitations during the run. For the purpose of this study, such calibration is not required, but it would be desirable for future studies.
Fig. 3 | Measured and predicted average Zernike coefficients for different kicking locations and tapers. The Zernike coefficients obtained from both the measurements and predictions were averaged for each case, and each subplot displays a single Zernike term. The kicking locations are marked on the x-axis, with tapered cases shown in blue and green, and non-tapered cases in orange and red. The blue and orange lines indicate the measured Zernike coefficients, while the red and green lines represent the predicted Zernike coefficients from the trained model. The Zernike coefficients of the no kick, no taper case (leftmost orange point in each subplot) are set to zero as a baseline for better illustration and comparison. The predicted and measured Zernike coefficients almost completely overlap with each other.
Fig. 4 | Measured and predicted average spectra for various configurations. The intensity of the spectrum data was normalized, so only the spectral shape was considered, without the intensity information. The spectra were plotted against pixels on the yttrium aluminum garnet (YAG) screen, without any calibration to energy units, as it was not necessary. A downstream kick resulted in a larger variation in the spectrum compared to an upstream kick. Moreover, the taper and non-taper cases exhibit opposite skewness in the spectrum.
Fig. 5 | Measured and predicted single-shot Zernike coefficients and shot-to-shot variations. a The standard deviations (SD) of measurements, predictions, and RMS prediction error for the Zernike coefficients of all single shots from the test dataset are displayed for a single routine run with a full SASE beam. The measurement and prediction variations are similar, particularly for the two primary Zernike terms, Z4 and Z8. The smaller RMS prediction errors compared to the SD of measurement indicate that the model has accurately learned the systematic shot-to-shot variations. b The figure shows examples of measurements and predictions from the test dataset for Zernike coefficients (Z3-Z8) versus an example electron parameter (electron x-coordinate from a beam position monitor at a specific location within the Linac-To-Undulator Soft Line area, in units of standard deviation). The plot illustrates how the Zernike coefficients change as the electron parameters vary. The predictions not only capture the correlations between the Zernike coefficients and electron parameters shown but also demonstrate the variation or dispersion that results from other electron parameters.
7,519.4
2023-11-08T00:00:00.000
[ "Physics" ]
Environmental Conditions during Breeding Modify the Strength of Mass-Dependent Carry-Over Effects in a Migratory Bird In many animals, processes occurring in one season carry over to influence reproductive success and survival in future seasons. The strength of such carry-over effects is unlikely to be uniform across years, yet our understanding of the processes that are capable of modifying their strength remains limited. Here we show that female light-bellied Brent geese with higher body mass prior to spring migration successfully reared more offspring during breeding, but only in years where environmental conditions during breeding were favourable. In years of bad weather during breeding, all birds suffered reduced reproductive output irrespective of pre-migration mass. Our results suggest that the magnitude of reproductive benefits gained by maximising body stores to fuel breeding fluctuates markedly among years in concert with conditions during the breeding season, as does the degree to which carry-over effects are capable of driving variance in reproductive success among individuals. Therefore while carry-over effects have considerable power to drive fitness asymmetries among individuals, our ability to interpret these effects in terms of their implications for population dynamics is dependent on knowledge of fitness determinants occurring in subsequent seasons. Introduction A central challenge in population ecology is identifying the factors that drive variation in observed reproductive success among individuals. Reproductive success can be expressed as a function of both processes occurring within the current season, such as summer climate affecting hatching success [1], as well as processes from previous seasons whose effects have persisted into the current time period, so-called 'carry over effects' [2,3]. Carry-over effects have been shown to be powerful drivers of fitness asymmetries among individuals [4], and have been described in numerous taxa including birds, mammals and reptiles (reviewed in 3). For example, carry-over effects can be mediated by body mass prior to reproduction, where individuals that have experienced superior resource access in the season before breeding have higher body mass during the breeding season, and consequently are able to invest more in reproduction [5,6]. Mass-dependent carry-over effects are likely to be particularly pronounced in migratory capital-breeders [3,7], which must accrue resources prior to the breeding season to fuel both travel to the breeding grounds and reproduction upon arrival. Accordingly there is a wealth of evidence demonstrating mass-dependent carry-over effects in Arctic-nesting bird species, where differences among individuals in rates of pre-migration mass storage have been linked to variation in both migratory timing [8] and probability of breeding [5,6,9,10]. While evidence for the presence of carry-over effects in a multitude of taxa is growing, only recently have studies begun to investigate how the strength of carry-over effects from prior to breeding may interact with, and be modified by, processes during the breeding season to influence reproductive success [4,10]. It is difficult to quantify the potential for carry-over effects to drive variance among individuals in their reproductive success without also quantifying the potential for the strength of those effects to be magnified or reduced by events occurring during breeding. Legagneux et al. 
[10] provided evidence of mass-dependent carry-over effects in Greater Snow Geese (Anser caerulescens atlanticus), but also demonstrated that the advantage of increased body mass for reproduction could be negated in some years by favourable environmental conditions during breeding. Though we may infer that the strength of interaction between pre-breeding body mass and environmental conditions during breeding is likely to vary among years [10], the direction(s) in which they act may not be universal among species. Empirical evidence describing the strength and form of these interactions between separate periods of the annual cycle is lacking, and the manner in which they affect demographic processes remains poorly understood, largely because such estimates require that we track individuals between seasons within and among years [4,[11][12][13]. While the influence of environmental conditions during breeding on reproductive success in Arctic-nesting species is fairly well characterised [1,14,15], past studies have measured reproductive success at the population level by counting the proportion of juveniles in flocks the season following breeding. Population-level measures preclude the quantification of carry-over effects, which are by definition an individual-level phenomenon [2,3]. Without individual-level data (e.g. pre-breeding mass, migratory departure date, or number of offspring produced by a single individual), one cannot readily discern whether the observed per capita breeding output of the population is the product of either carry-over effects, or density-dependent seasonal compensation [2,16], both of which have different implications for our understanding of the mechanisms that regulate population dynamics. When investigating the interaction between carry-over effects from prior to breeding and environmental conditions during breeding, there are two broad hypotheses whereby such interactions may occur: i) in years of poor environmental conditions during breeding, only those individuals with the largest endogenous energy reserves are likely to breed [10]; or ii) only in years of favourable breeding conditions are the asymmetries in reproductive success among individuals of high and low body mass fully realized, because only these years provide an opportunity for individuals to utilize stored mass to finance reproduction. Conversely, it may be the case that there is no interaction between two processes occurring in season t and season t+1 respectively, and as such they simply operate in an additive and independent manner. Here we combine 6 years of data on body mass in the season prior to breeding and environmental conditions during breeding to examine how mass-dependent carry-over effects and environmental conditions may interact to influence reproductive success in an Arctic-nesting migratory bird, the light-bellied Brent goose (Branta bernicla hrota). Ethics Statement In the UK all work was carried out under UK home office licence, Environment and Heritage Service (NI) wildlife licence and BTO cannon netting permits. In Ireland all work was carried out under National Parks and Wildlife Licence and BTO cannon netting permits. In Iceland work was carried out in conjunction with the Icelandic Natural History Society regulations. All field procedures were approved by the University of Exeter Ethics and Health and Safety Committees. All work was carried out with land owners' permission.
Study Species and Data Collection The East Canadian High Arctic (ECHA) population of light-bellied Brent geese (Branta bernicla hrota) overwinters around the coast of Ireland from late August to April, subsequently staging for one month on the west coast of Iceland before breeding in the Canadian Arctic [17]. Brent geese feed preferentially on high-quality marine resources such as Zostera spp. and green algae (Enteromorpha spp. and Ulva lactuca L), but also shift to lower-quality terrestrial grassland during the overwinter period once the density of the preferred food sources has diminished beyond a level that is profitable to exploit [6]. Satellite telemetry data have shown that the mean arrival date in the Canadian Arctic breeding grounds is 1st June (K. Colhoun & G. Gudmundsson, unpublished data), with clutch initiation occurring roughly 7-10 days after arrival [18]. Modal clutch size is 4 eggs (range 2-6; [18]). The Irish Brent Goose Research Group and collaborators have marked over 3500 light-bellied Brent geese to date from across the entire range (Ireland, Iceland and Canada). In both Ireland and Iceland, geese were caught in cannon nets, while in Canada flightless adults (during moult) and juveniles were herded into enclosures. Birds were sexed either by cloacal examination, or using molecular markers as described in Harrison et al. [19], fitted with individually-coded colour leg rings and had morphometric data collected (mass, wing length and skull length). The resighting database currently contains over 95,000 records of marked birds, many of which include information on adult associations (breeding pairs) and number of juveniles in a family group. Previous work has verified that these familial associations represent parents and true genetic offspring [20]. Details of assignment of family groups and breeding pairs can be found in Inger et al. [6]. Our analyses used 6 years of data from 213 female light-bellied Brent geese for which we could quantify breeding success in the year of capture when mass was measured, by counting the number of offspring they returned to the wintering grounds with the following year. Of these, 92 females were observed to return with offspring the year after capture. The remaining 121 birds did not successfully breed in the year of capture, returning to the wintering grounds without offspring. Birds were only assigned as non-breeders if there were 3 or more records for the year after capture where they had been recorded without juveniles, and if they had been recorded in the year of capture as having an adult associate, to avoid noise caused by assigning potential singletons as non-breeders. Body mass was standardised using the scaled mass index (SMI), which scales the mass of all individuals to that expected if they were all of identical body size. We scaled all birds to the mean skull length (89.6 mm), using a standardised major axis (SMA) slope of 3.4, calculated as the ratio of the slope of a least-squares regression of log(Mass) on log(Body Size) (1.003) to the correlation coefficient between those two variables (0.29, [21]). We note that this is not a measure of body condition based on mass-length residuals, which have been heavily criticised [22], but a metric that standardises mass among individuals based on an inherent power law between mass and size calculated from the data [21].
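For illustration, the sketch below computes a scaled mass index along the lines described above, using the standard formulation in which each bird's mass is rescaled to the mean skull length via the SMA exponent (OLS slope divided by the correlation coefficient); the measurement values are invented and the function name is arbitrary.

```python
import numpy as np

def scaled_mass_index(mass, skull, l0=89.6):
    """Scaled mass index: the mass each bird would have at the mean skull length l0 (mm).

    b_SMA is the standardised major axis slope of log(mass) on log(skull),
    computed as the OLS slope divided by the Pearson correlation coefficient
    (~1.003 / ~0.29 = ~3.4 in the text).
    """
    log_m, log_l = np.log(mass), np.log(skull)
    b_ols = np.polyfit(log_l, log_m, 1)[0]   # OLS slope of log(mass) on log(skull)
    r = np.corrcoef(log_l, log_m)[0, 1]      # correlation between the two variables
    b_sma = b_ols / r                        # SMA slope
    return mass * (l0 / skull) ** b_sma

# Toy usage with made-up measurements (grams, millimetres).
mass = np.array([1420.0, 1510.0, 1385.0])
skull = np.array([88.1, 91.0, 87.5])
print(scaled_mass_index(mass, skull))
```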
As with many capital breeders, light-bellied Brent geese show strong nonlinear seasonal trajectories in body mass, whereby they rapidly increase the size of their fat stores prior to migration to fuel migratory flight and finance reproduction [6]. We therefore corrected the point estimates of the scaled body mass index for seasonal trajectory by fitting a 2nd-order polynomial term for day of the annual cycle (F2,465 = 122.5, p < 0.001, r2 = 0.34). Residuals of these models were taken and added to the mean mass for females in the sample (1464.2 g) to provide a seasonally corrected mass estimate, independent of body size, for each female to be used in subsequent analyses [10], which we refer to hereafter as 'mass'. We extracted the corrected mass estimate for the 213 females for which we could confidently assign reproductive status the year after capture (see above) to use in subsequent analyses. There was no evidence that mean mass differed by year in our sample (Figure S1 in File S1). Environmental Data We obtained data for the North Atlantic Oscillation (NAO) Index from the Climate Prediction Centre [23], and extracted the monthly mean values for June. We used NAO for June as this is the period when Brent geese nest and lay clutches [18]. The June NAO data showed a significant non-linear temporal pattern (2nd-order polynomial for Year, F2,60 = 5.82, p = 0.005; Figure S2 in File S1), and so we de-trended the data by taking the residuals of the non-linear model of June NAO over time. If there is a true causal relationship between variables, then the residuals should still be correlated independently of any correlation between the original variables [24]. We used the detrended residuals for June for each breeding cycle from 2004/5 to 2009/10 in our analysis. The NAO index is representative of weather conditions throughout the Brent goose's breeding range, and its ecological effects are well characterized [25,26]. Large-scale climatic predictors such as the NAO have often been shown to have superior explanatory power compared to local environmental variables such as rainfall or temperature [27], and indeed have been shown to correlate with local weather conditions in the Canadian Arctic [15]. An additional advantage of the NAO index is that it is a composite proxy measure of multiple non-independent climatic variables including rainfall, temperature and winds, and so its use in predictive models avoids the potential problems of autocorrelation among multiple explanatory variables [26]. Positive NAO indices are generally associated with intense low pressure over Iceland [26] and, as a result, an increase in the severity of westerly winds, storms and precipitation in the Arctic [25,26]. Conversely, negative NAO values generally represent favourable environmental conditions. Values from June represent conditions during breeding, as it is at this time that light-bellied Brent geese initiate nesting and lay clutches [18]. Several recent studies have found that large-scale climatic predictors such as the North Atlantic Oscillation (NAO), when used as a proxy for local environmental conditions, have a significant effect on the reproductive success of Arctic-nesting bird species [14,15].
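The same quadratic-residual correction is applied twice above (to scaled mass against day of the annual cycle, and to June NAO against year); a generic sketch, with made-up numbers and an arbitrary helper name, might look as follows.

```python
import numpy as np

def detrend_quadratic(x, y, add_back=0.0):
    """Fit a 2nd-order polynomial of y on x and return the residuals plus an optional constant.

    Used here both to correct scaled mass for day of the annual cycle (adding back the
    female mean mass, 1464.2 g) and to de-trend June NAO against year.
    """
    coeffs = np.polyfit(x, y, 2)
    residuals = y - np.polyval(coeffs, x)
    return residuals + add_back

# Toy example for the mass correction (day of annual cycle vs. scaled mass index).
day = np.array([200, 230, 260, 290, 310])
smi = np.array([1350.0, 1420.0, 1500.0, 1560.0, 1580.0])
corrected_mass = detrend_quadratic(day, smi, add_back=1464.2)

# Toy example for de-trending June NAO against year (placeholder NAO series).
year = np.arange(1950, 2010)
june_nao = np.random.randn(year.size)
nao_detrended = detrend_quadratic(year, june_nao)
```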
Mechanisms by which environmental conditions are expected to influence reproductive success include: i) lower temperatures causing reduced juvenile survival/growth due to increased thermoregulatory costs and/or reduced food availability [1,15]; ii) poor pre-breeding conditions affecting food availability/ability to sequester endogenous capital for breeding [28]; iii) poor weather increasing migration costs, which can also reduce the capital available for breeding [29]; and iv) longer migration times due to delays caused by poor weather reducing the time available to successfully raise young before the return of unfavourable late-summer environmental conditions, or causing a mismatch between peak offspring nutritional requirements and food availability [30]. Statistical Analysis All models were fitted in the software R v2.15 [31]. We used a GLMM with Poisson errors and log link to investigate variables affecting the number of offspring produced, using the package 'lme4' [32]. We evaluated the support in the data for 8 competing models designed to explain variation in reproductive success as a function of carry-over effects from staging body mass, summer environmental conditions, or a combination of both (Table 1). To account for unequal sample sizes among years we included year of measurement as a random intercept term in the models. All variables were z-transformed prior to analysis to have mean 0 and standard deviation of 1 to put all predictors on a common scale and make main effects interpretable in the presence of interactions [33]. Model selection was performed using an information-theoretic approach, using the R package 'MuMIn' [34] to rank all models based on AICc. We considered all models within Δ6 AICc units of the top model as the best-supported models, but also applied the 'nesting rule' [35,36] whereby models in the Δ6 AICc set were not retained if they were more complex versions of nested (simpler) models with better AICc support (higher up in the table). The nesting rule prevents the retention of overly complex models that do little to improve the fit to the data [37]. Those models present in the Δ6 AICc set after application of the nesting rule were selected for model averaging using the function 'model.avg' in the MuMIn package. We present model-averaged predictions from these models alongside predictions from the top model parameterised under a Bayesian framework, which gives predicted means and 95% credible intervals that are exact for the given sample size [38]. R code for these models is available in the Supporting Information (Code S1 in File S1). We calculated r2 for the top model using the methods detailed in Nakagawa & Schielzeth [39] for calculating r2 for mixed models. Multivariate Models. The use of derived variable analyses (i.e. correcting body mass for seasonal trends) using best linear unbiased predictors (BLUPs) has been criticised for its anti-conservatism [40]. To test the robustness of our results, we verified our analyses using a multivariate mixed model framework. The advantage of such an approach is that it allows the estimation of the posterior correlation between mass and reproductive success (the carry-over effect), while simultaneously controlling for confounding effects such as body size, or when individuals were measured, and evaluating support for predictors such as June NAO (summer environmental effects). It thus prevents the need to use predicted values from prior models in subsequent models, which can cause bias in results [40].
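Returning to the information-theoretic model selection described above, the fitting itself was done in R with lme4 and MuMIn; purely as an illustration of the AICc ranking and the Δ6 cut-off, the following sketch uses hypothetical model names, log-likelihoods, and parameter counts.

```python
import numpy as np

def aicc(log_lik, k, n):
    """Small-sample corrected AIC: AICc = -2*logLik + 2k + 2k(k+1)/(n - k - 1)."""
    return -2.0 * log_lik + 2 * k + (2.0 * k * (k + 1)) / (n - k - 1)

# Hypothetical candidate models: (name, log-likelihood, number of parameters).
candidates = [("mass^2 * NAO", -310.2, 7), ("mass * NAO", -312.5, 5),
              ("NAO only", -316.1, 3), ("mass only", -320.4, 3)]
n = 213  # number of females in the sample

scores = sorted((aicc(ll, k, n), name) for name, ll, k in candidates)
best = scores[0][0]
top_set = [(name, round(s - best, 2)) for s, name in scores if s - best <= 6.0]
print(top_set)  # models within Δ6 AICc of the best model (before applying the nesting rule)
```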
We used the R package 'MCMCglmm' [41] to fit a bivariate response model with number of juveniles and mass (raw mass at capture) as Poisson- and Gaussian-distributed responses, respectively. We modelled mass as a function of both time (2nd-order polynomial, as above in the 'Body Mass' section) and skull size. We modelled number of juveniles as a function of June NAO. We then estimated the posterior correlation between mass and juveniles following Harrison et al. [42], whereby a significant positive correlation (95% credible intervals do not cross zero) is representative of a carry-over effect from winter mass after controlling for other confounding factors. Models were run for
Table 1. Eight competing models investigating the factors affecting the reproductive success of light-bellied Brent geese, measured as the number of offspring females returned to the wintering grounds with following autumn migration from the breeding quarters in the Canadian High Arctic.
Factors Influencing the Number of Offspring Produced There were 4 models in the Δ6 AICc candidate set (Table 2). The best-supported model contained terms for a quadratic effect of body mass prior to migration to breeding, June NAO (representative of environmental conditions during breeding) and their interaction. In years of favourable environmental conditions during breeding (negative June NAO), individuals of higher body mass have much greater reproductive success than lower-mass birds. However, in years of poor breeding conditions, the advantage of higher body mass is largely negated (Figure 1), suggesting the strength of the carry-over effect is greatly reduced. Using the nesting rule, we retained only 3 models: the top model, a model containing a linear effect of mass and its interaction with June NAO (ΔAICc = 0.43), and a model containing only the effect of June NAO (ΔAICc = 5.27). Model-averaged estimates from these three models are presented in Table 3. Predictions from the top model are presented in Figure 1 alongside model-averaged predictions. The marginal r2 (variance explained by the fixed effects) of the top model was 53%. Multivariate Models Results from the multivariate models were in agreement with the information-theoretic approach (Table 4). Both a 2nd-order polynomial term for day of year and a linear term for skull size significantly affected mass, and number of offspring was significantly influenced by June NAO. We observed a significant, positive posterior correlation between mass and number of offspring (0.22, 95% credible interval 0.015-0.47, Table 4), after controlling for the effects of skull size, day of year and June NAO. This correlation is representative of a carry-over effect of body mass, where individuals that are heavier prior to migration to breed are predicted to return to the wintering grounds with more offspring the following year. Discussion Our results provide evidence that the reproductive success of light-bellied Brent geese is a function of carry-over effects driven by winter body mass, but that the strength of these carry-over effects is modified by environmental conditions during breeding. Individuals with higher body mass prior to migration to the breeding grounds have higher reproductive success than lower-mass birds, but only when the environmental conditions during breeding are favourable. Conversely, in years of poor conditions during breeding, all individuals suffer a greatly reduced reproductive success, with no advantage of higher body mass.
There are two important consequences of such a pattern: i) the advantage of accruing large endogenous resource stores prior to breeding fluctuates markedly among years in concert with the severity of environmental conditions during breeding; and ii) individuals will fail to realise their maximum potential reproductive success unless the years in which they accrue the largest body stores prior to migration also coincide with the years where the opportunity to utilise those stores during breeding is highest. Variance in reproductive success will therefore be greatest among individuals that consistently arrive at the breeding grounds in the best condition, and those that either arrive in consistently poor condition across years, or whose peak body mass mismatches the timing of the most favourable summer weather. The relative success of consistently carrying large fat reserves depends on the survival cost of carrying those stores in years of unfavourable weather where flight costs may be high [43]. In years of poor weather it may be that birds of intermediate mass are most successful [10], possessing sufficient energy stores to complete the migration and breed, but without suffering the increased flight costs that would result from excessively large fat stores. Light-bellied Brent geese cross the Greenland ice cap during migration from Iceland to the breeding grounds [17]; this route is 1000km shorter compared to having to fly around the ice cap, but the increased altitude requires such high flight muscle power output that Brent geese likely fly near or beyond the limits for aerobically sustained muscle performance [17]. Therefore in years of particularly poor weather it may be extremely disadvantageous to carry unnecessary fat stores that will only increase the demand on flight muscles during this phase of migration and severely constrain flight performance. The patterns we present here are consistent with previous, population-level studies that have examined the determinants of reproductive success in Arctic-nesting species. Ebbinge [44] found that in years where mean body mass of female Dark-Bellied Brent geese (Branta bernicla bernicla) prior to migration to breed was higher, breeding success, measured as the proportion of juveniles in winter flocks the following year, was also higher. Our results build upon those of Ebbinge [44] by quantifying the individual-level consequences of prior body mass by linking mass to number of offspring the following year. Madsen et al. [28] found that in years of high snow cover at the start of breeding, Pink-footed Geese (Anser brachyrhynchus) were forced to delay egg laying and suffered lower breeding success, indicating that individuals had to wait for suitable nesting sites to be exposed as the snow recedes, and subsequently the time in which they have to successfully rear offspring within the short Arctic summer was greatly reduced. Early nesting and clutch initiation is crucial to successful reproduction in Arctic-nesting species [28,45], and those individuals that arrive early on the breeding grounds usually have higher reproductive success [46]. However, late snowmelt can delay nesting for all birds [28,45] to the extent that there is no advantage to early arrival, and the low number of offspring produced in years of positive NAO most likely reflects the stochastic manner in which birds manage to secure nesting sites after poor weather conditions. 
During late-nesting years, higher female body mass is unlikely to confer any advantage in reproductive success, as survival prospects of offspring decline rapidly with lay date, and geese most likely modify their clutch size based on the expected survival value of offspring given lay date [8]. Conversely, in years of favourable weather, those individuals that arrive early and with the largest mass stores will be able to nest early, as nest sites will most likely not be limited [28]. Early-arriving females are then able to maximise investment in clutch size and have sufficient time to rear offspring prior to the autumn migration back to the wintering grounds, explaining why we observe that in years of negative June NAO, those females with greater fat stores during winter produced more offspring. Using a multivariate response model, we validated the results of our models run using a derived variable of mass corrected for body size and seasonal trajectory. We detected a significant, positive posterior correlation between mass and number of offspring produced, after controlling for body size and seasonal trajectory within a single modelling framework. This correlation represents a carry-over effect of mass sequestered on the Icelandic staging grounds prior to migration to breed, in addition to the significant effect of June NAO that was also recovered by modelling number of offspring as a function of environmental conditions during breeding. As birds were measured once within years, and not multiple times across years, we lack the ability to estimate how the strength of the posterior correlation between mass and number of offspring fluctuates in concert with the severity of the environmental conditions experienced during breeding. The large credible interval around the correlation most likely results from constraining the model to estimate a single variance-covariance matrix for the mass and offspring variables across all years, where in fact it is likely that the magnitude of this correlation varies considerably among years in a similar fashion to the patterns demonstrated in Figure 1.
Table 3. Model-averaged estimates for the 3 models in the Δ6 AICc top model set remaining after the nesting rule had been applied (see Table 2).
By employing multiple observations of individuals from across years, future work will focus on quantifying the degree to which the posterior correlation between mass and number of offspring changes in concert with fluctuations in the magnitude of the June NAO. Our measure of reproductive success was the number of juveniles females returned to the wintering grounds with in the year after capture. Therefore, females in our sample recorded as having no offspring could have failed to breed successfully either because they i) failed to lay eggs, ii) successfully laid a clutch and reared offspring, but those offspring were subsequently lost to predation or died due to poor food availability prior to autumn migration [25], or iii) lost offspring during the lengthy migration back to the wintering grounds. Stochastic mortality will add noise to our data, as for example a female with large mass stores may have laid a large clutch but subsequently lost all offspring during autumn migration, variation that would not be captured by our measure of either mass or NAO.
While clutch size may be a more accurate reflection of carry-over effects from mass-dependent reproductive investment, our results are clearly robust to the stochastic mortality events outlined above, as we still detect carry-over effects from winter body mass. More importantly, our results represent a more accurate measure of true reproductive success, and reflect true variance in reproductive success among individuals within and between years that results from the interaction between carry-over effects and breeding conditions. Our results suggest that in a migratory capital breeder, the advantage of higher capital stores for reproduction is not uniform across years, but instead varies according to environmental conditions experienced during the breeding season. The interaction we have described here between a carry-over effect from winter and within-season effects of weather during breeding are likely more common than currently reflected in the literature (but see 10), and highlight the importance of gathering individual-level data, both between seasons and among years. Seasonal interactions as described here are powerful drivers of fitness asymmetries among individuals [4], and knowledge of how processes from different periods of the annual cycle interact to influence reproductive success will have important implications for our understanding of the forces regulating the dynamics of animal populations. Supporting Information File S1. Online Supporting Information. Figure S1. Body mass variation by year for light-bellied Brent geese calculated using 6 years of data. Figure S2. Temporal variation in June NAO from a 60 year dataset. Code S1. R code for Bayesian Hierarchical models used for prediction. Code S2 R code for multivariate response model used to validate the derived variable analyses.
6,218.8
2013-10-15T00:00:00.000
[ "Biology", "Environmental Science" ]
Low power, less occupying area, and improved speed of a 4-bit router/rerouter circuit for low-density parity-check (LDPC) decoders Background: Low-density parity-check (LDPC) codes are more error-resistant than other forward error-correcting codes. Existing circuits exhibit high power dissipation, lower speed, and larger area. This work aimed to propose a circuit with better design and performance, even in the presence of noise in the channel. Methods: In this research, the design of the multiplexer and demultiplexer was achieved using pass transistor logic. The target parameters were low power dissipation, improved throughput, and smaller delay with a minimum area. Among the essential connecting circuits in a decoder architecture are the multiplexer (MUX) and demultiplexer (DEMUX) circuits. The design of the MUX and DEMUX contributes significantly to the performance of the decoder. The aim of this paper was the design of a 4 × 1 MUX to route the data bits received from the bit update blocks to the parallel adder circuits and a 1 × 4 DEMUX to receive the input bits from the parallel adder and distribute the output to the bit update blocks in a layered-architecture LDPC decoder. The design uses pass transistor logic and achieves a reduction in the number of transistors used. The proposed circuit was designed using the Mentor Graphics CAD tool for 180 nm technology. Results: Power dissipation, area, and delay were considered crucial parameters for a low-power decoder. The circuits were simulated using computer-aided design (CAD) tools, and the results showed a significantly low power dissipation of 7.06 nW and 5.16 nW for the multiplexer and demultiplexer, respectively. The delay was found to be 100.5 ns (MUX) and 80 ns (DEMUX). Conclusion: This decoder's potential use may be in low-power communication circuits such as handheld devices and Internet of Things (IoT) circuits. Introduction Low-density parity-check (LDPC) codes are considered more error-resistant when compared to other forward error-correcting codes. Their error-correcting performance has been proven even in the presence of noise in the channel. 1 Hence, LDPC decoders have been used more actively for communication applications. Different approaches may be used in the design of an LDPC decoder. One such structure is the layered approach, consisting of a layered design, memory unit, computational block, full adders, parity check unit, bit update unit, and router/reverse router circuits. 2 The decoding process begins with data being received into the decoder through the bit update block. The bit update block receives data, arranges them into vectors according to the system requirements, and stores them. These data are routed to the parallel adder through the routing circuit and the data bus. The parallel adder then combines the values stored in the memory block from the previous iteration with the new vector. The output of the computation is checked for errors using the parity checker. 3 The result goes through another computation process to generate the original vector stored in the bit update unit for the next iteration. Also, new values after the parity check are stored in the memory block. Routers are integral to this architecture, sending data bits through the decoder's different layers. Routers are multiplexer or demultiplexer circuits that select appropriate data to be sent or distribute the received data bits to other units.
Multiplexers (MUX) and demultiplexers (DEMUX) form the basic units of data paths. They are used in applications such as processor buses in CPUs, network switches, and digital signal processing stages involving resource sharing and graphics controllers. In large-scale systems, multiplexers aid in reducing the number of integrated circuits used in some designs. In this research, the design of the multiplexer and demultiplexer is achieved using pass transistor logic. 4 According to existing authors of multiplexer, demultiplexer, and LDPC encoder circuits, a higher number of transistors lengthens the critical path and results in higher power dissipation. 5 The proposed method reduced the number of transistors in the design and used a regular arrangement of transistors, thereby reducing the critical path. The target was low power dissipation, improved throughput, and smaller delay with a minimum area. Low-power design is essential when this circuit is used along with many other components for communication purposes. Pass Transistor Logic (PTL) can reduce the number of transistors by eliminating redundant transistors. Here the transistors act as switches to pass different logic levels between nodes of a circuit. This paper's main objective was to design and develop routers and bit update blocks for the LDPC decoder. Proper design of the router, rerouter, and LDPC circuit reduces the critical path and power dissipation and increases speed. This paper reviews the related work in designing multiplexers and demultiplexers and describes the design methodology used in the proposed circuits. The results obtained from the simulation are analyzed, and conclusions are then made regarding the proposed circuits. Literature review Unlike the main building blocks, such as adders and parity checkers, routers form a crucial support system for the decoder. The routers, mainly comprised of multiplexers and demultiplexers, help arrange data bits according to the system configuration and pass the information through appropriate layers. Binary signals control multiplexers. 2 An analogue MUX/DEMUX was designed using ternary inverters to control the circuits, and CMOS transmission gates were used. [6][7][8] The design improved and proved excellent for ternary inverters. With the idea of switching activities suggested by Anitha and Javachitra, 9 adiabatic logic reduces the power by returning the stored energy to the supply, and this was used for a 16:1 multiplexer and a 1:16 demultiplexer. The results indicated that they had less power dissipation than conventional CMOS circuits. An 11 Gb/s CMOS demultiplexer using redundant multi-valued logic (RMVL) was proposed by Ahn and Kim (2006). 10 The circuit received serial binary data, which were converted to parallel redundant multi-valued data. The converted data were then reconverted to parallel binary data. This makes it possible to achieve higher operating speeds than conventional binary logic. The implemented DEMUX consisted of eight integrators and was designed with a 0.35 μm standard CMOS process. The DEMUX achieved a maximum data rate of 11 Gb/s and an average power consumption of 69.43 mW. This circuit was expected to operate faster than 11 Gb/s in a deep-submicron process at higher operating frequencies. A demultiplexer has been designed with 36 transistors using 90 nm CMOS technology. 7
An auto-generation technique and semi-custom layout design were integrated. There was an improvement in power consumption and area due to the semi-customized demultiplexer layout. Methods The router circuit in a decoder is a bank of MUX and DEMUX that forwards the appropriate estimate terms from memory to the corresponding bit update circuit. Logic simulations of the proposed MUX, DEMUX, bit update circuit, and LDPC circuits were executed mainly to validate the circuits' functionality. The designed circuits had the required logic behaviour. In the layout, the memory cell's charging and discharging were validated by the aspect ratio factor and expressed with current scaling methods. The proposed circuits were validated against reliable, optimum values of the design parameters. Modern communication systems demand high reliability and optimum data rates, which makes the standards for future communication technology move towards methods of error correction that enable high-throughput decoding with optimum performance based on the Shannon capacity. Multiplexer (MUX) The multiplexer is a combinational logic circuit that selects an appropriate analogue or digital signal from several input signals and forwards it to a single output line. 11 A multiplexer has several input lines and a single output line. The selection of the appropriate input is based on unique control lines called select lines. Figure 1 depicts a basic multiplexer with four inputs, I 0 , I 1 , I 2 , I 3 , and a single output line (Z). Multiplexers can be designed for 2^n inputs. In this design, we used a 4 × 1 MUX because it is simpler to cascade these circuits for many inputs, and the decoder was also for 4-bit data. There are two select lines, S 0 and S 1 , which are the circuit's control lines. The MUX is 4 × 1, representing four inputs and one output. An additional set of input lines controls each input line's selection according to these control inputs' binary conditions, which indicate 'HIGH' (1) or 'LOW' (0). Multiplexers have 2^n data input lines and n control inputs sufficient to address them. The output Z is obtained from the Boolean expansion Z = S1'S0'I0 + S1'S0 I1 + S1S0' I2 + S1S0 I3 (1), where the prime denotes the complement of a select line. Equation (1) was expanded using the associative and commutative laws to obtain an appropriate, optimized circuit equation for implementing the multiplexer. 11 Any single input line is selected, depending on the combination applied to the select lines, to be connected to the output Z. Adding more control (address) lines (n) allows the multiplexer to switch among 2^n inputs. Still, each control line configuration will connect only one input to the output. In our proposed circuit, the multiplexer is optimized using pass transistor logic.
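For reference, the selection behaviour expressed by equation (1) can be written as a small behavioural model. The sketch below is illustrative only (plain Python rather than a hardware description); the signal names mirror the I 0 -I 3 , S 0 , S 1 and Z labels used above, and the way the second-stage select lines of the 16 × 1 cascade are driven is an assumption, since the text only specifies the shared first-stage select lines.

```python
def mux4(i0, i1, i2, i3, s0, s1):
    """Behavioural model of the 4x1 MUX of equation (1):
    Z = S1'S0'I0 + S1'S0 I1 + S1S0' I2 + S1S0 I3."""
    ns0, ns1 = 1 - s0, 1 - s1          # complemented select lines (the inverters)
    return (ns1 & ns0 & i0) | (ns1 & s0 & i1) | (s1 & ns0 & i2) | (s1 & s0 & i3)

# Worked example from the text: S0 = 0, S1 = 1 selects input I2.
inputs = dict(i0=0, i1=0, i2=1, i3=0)
assert mux4(s0=0, s1=1, **inputs) == 1   # Z = I2

def mux16(data, s0, s1, s2, s3):
    """A 16x1 MUX composed from five 4x1 MUXes, as described above:
    s1s0 select within each group of four inputs, while s3s2 (an assumed
    second pair of select lines) choose which group reaches the output."""
    stage1 = [mux4(*data[k:k + 4], s0, s1) for k in range(0, 16, 4)]
    return mux4(*stage1, s2, s3)
```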
A 4 × 1 MUX was designed, as shown in Figure 1, and the input to the multiplexer in this circuit was from a bit update block (BUB), part of the LDPC decoder structure. The inputs were from the 4-bit update units used in the decoder circuit designed for this research. The multiplexer aimed to receive the updated data bits from the bit update unit and rearrange the vectors according to the circuit's requirements. 12 The multiplexer circuit was designed using pass transistor logic. The MUX comprised NMOS and PMOS transistors for the inverters and only NMOS transistors for the remaining circuit. The inverters complemented the select input signals S 0 (S A ) and S 1 (S B ). The multiplexer was configured to have series-connected switches so that, based on the input combination of S 0 and S 1 , one of the inputs was selected to pass to the output. The multiplexer passed a signal when the controlling voltage was logic low. The circuit used NMOS because electron mobility is better than hole mobility, so the performance is better. The inputs I 0 , I 1 , I 2 , and I 3 fed from the 4-bit update circuits carried the bit update unit's computation values. The selection of the input given to the router was based on the select inputs S 1 and S 0 . One of the inputs I 0 , I 1 , I 2 , or I 3 was chosen to connect to the output line Z. Assume the select inputs had the combination S 0 = 0 and S 1 = 1. The S 0 input was fed to an inverter circuit formed by the pass transistors, which passed the value '0' to the circuit, and S 1 with a logic '1' was given to the other inverter circuit. The NMOS controlled the ground and the output in one inverter circuit, while the PMOS connected the supply V DD and the output. 13 The transistors then did what they are best suited for: the NMOS passed a logic '0', and the PMOS passed a logic '1'. Each inverter acted like a 2 × 1 MUX whose inputs are logic 0 and logic 1, with the input variable acting as the control signal that determined which value was sent to the output. Hence, combining both inverters at the input helped select the signal sent to the output, which would be either I 0 , I 1 , I 2 , or I 3 . In our example, I 2 was fed to the output, so Z = I 2 . The multiplexer design can be enlarged to many more inputs using the basic multiplexer circuits. A 16 × 1 MUX can be designed using 2 × 1, 4 × 1, and 8 × 1 MUXes. In the basic arrangement, 4 × 1 multiplexers are used at the first stage, so 16 inputs are available. Inputs I 0 to I 3 (for bits zero to three) go to the first multiplexer (to PMOS), I 4 to I 7 (for bits four to seven) to the second, and so on, where the last multiplexer has inputs I 12 to I 15 (for bits 12 to 15). Every multiplexer's select inputs are combined in parallel into two main selection lines that connect all four multiplexers. 14,15 The output from each multiplexer is then fed as four inputs to another 4 × 1 multiplexer. The output from this multiplexer becomes the main output of the circuit. Demultiplexer (DEMUX) A demultiplexer is a combinational circuit that routes a single input line to multiple digital output lines. A demultiplexer with 2^n outputs has n select lines to select which output line needs to be connected to the input. 13,14 In simple terms, it is a data distributor. The demultiplexer is a 1 × 4 unit, implying a single input line Y and four output lines, D 0 , D 1 , D 2 , and D 3 . There are two select lines, S 0 and S 1 . The select lines decide to which output line the input line Y should be connected.
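The distribution behaviour just introduced can be sketched in the same behavioural style as the multiplexer. This is again only an illustration: the standard binary mapping of S 1 S 0 onto the output lines D 0 -D 3 is assumed here and may differ from the labelling used in the schematic of Figure 2.

```python
def demux4(y, s0, s1):
    """Behavioural 1x4 DEMUX: the single input Y is routed to exactly one
    of D0..D3 according to the select lines S1S0 (00, 01, 10, 11)."""
    ns0, ns1 = 1 - s0, 1 - s1          # complemented select lines
    d0 = ns1 & ns0 & y
    d1 = ns1 & s0 & y
    d2 = s1 & ns0 & y
    d3 = s1 & s0 & y
    return d0, d1, d2, d3

# With Y = 1, each select combination activates a different output line.
for s1 in (0, 1):
    for s0 in (0, 1):
        print(f"S1S0={s1}{s0} ->", demux4(1, s0, s1))
```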
The select lines are controlled by the binary combination of 0 and 1. The select lines S 0 and S 1 can take on 00, 01, 10, and 11. These are the four possible combinations for two input signals, and hence there are four possible output lines. This combination determines the connection of input Y to the output lines D 0 , D 1 , D 2 , and D 3 . The data input connected to a particular output line is obtained from equation (2): D0 = S1'S0'Y, D1 = S1'S0 Y, D2 = S1S0' Y, D3 = S1S0 Y (2), where the prime denotes the complement of a select line. By adding more address line inputs, it is possible to switch more outputs, giving 1-to-2^n data line outputs. 16 The proposed demultiplexer was also a 1 × 4 demultiplexer constructed using pass transistor logic, as shown in Figure 2. In the figure, two inverter circuits form the input point for the DEMUX. The inverters were constructed with opposite-polarity Metal Oxide Semiconductor Field Effect Transistors (MOSFETs) with their gates connected to form the input voltage, shown as S A and S B . The drain terminals of both MOSFETs were connected to form a common output. 17 These MOSFETs were connected in a complementary fashion so that only one MOSFET conducts when the input has a low or a high input voltage. The gate-source voltage of the NMOS is V GS = V in , and the source-gate voltage of the PMOS is V SG = V DD - V in , where V DD is the supply voltage; the input voltage can take values from 0 to V DD . When S A = V in = V DD , the PMOS transistor is cut off while the NMOS conducts, current flows to the ground terminal, and the output voltage is '0'. The '0' volts are then applied to one of the inputs of transistor T5, which is in series with T6. If input S B had an input value of '0' volts, the NMOS of that inverter was cut off while the PMOS conducted to give a path to the power supply, and the output then had a value of V DD . The second input to transistor T5 was '0'. The transistor had inputs 0 and 1 and gave an output '0', indicating that line D A had been selected to distribute the input from the parity check circuit of the layered decoder circuit. Hence, for other input combinations of S A and S B , the other lines D B , D C , and D D were selected to receive that input. The input fed at line D (Y in the truth table) was distributed to any of the four outputs represented by D 0 , D 1 , D 2 , and D 3 . The distribution was based on the select inputs S 0 (S A ) and S 1 (S B ). In Figure 2, the select lines are connected to two inverters at the first stage of the DEMUX. Each inverter created the complemented terms used in equation (2). One inverter was driven by S 0 ; if S 0 was '0', the inverter output was '1', and similarly for the S 1 input. The following transistors drove the input to the outputs based on the bit pattern of S 1 S 0 . Bit update circuit The bit update circuit is integral to many circuits where temporary storage and periodic updates of stored data are required. These circuits have memories that will store some predetermined subset of codeword bits, though only one at a time. The circuit uses basic building blocks: an EXOR (exclusive-OR) gate, a latch, a multiplexer, and an inverter. It operates like a loop: received input data bits are fed into the multiplexer and compared with the data previously stored in the latch. The EXOR gate helps identify new data, and its output is given to the MUX, where the select inputs ensure the new data are stored in the latch. This recently stored data is then sent to the next section of a larger application circuit. In the proposed circuit, the data input came from the DEMUX circuit, which transmitted the received data bits.
The bit update circuit ensured that new data received were always updated and stored and then distributed through the reverse router to the parallel adder blocks in the decoder through the data bus. The bit update circuit usually works in tandem with two memories, one acting as an accumulator for a new data set while the other supplies the last iteration's data. 18 These two memories act in an alternating manner. A multiplexer worked like a cross switch to facilitate their alternating operation. The proposed bit update circuit was designed using pass transistor logic to reduce the number of transistors. The delay needed to be reduced in the circuit; hence, the technology used was adequate. The circuit shown in Figure 3 comprises a 2 × 1 multiplexer circuit with a latch. The latch acted as the temporary storage or memory for the data bits. The data bit stored in the latch was given to an EXOR gate connected to an AND-gate delay circuit. This was to create a delay so that the bits reached the multiplexer within the clock pulse. The EXOR output was also fed to the MUX as one of the select inputs. The proposed LDPC decoder circuit A proposed decoder architecture is described in this section, which follows layered component decoding. The top-level architecture is shown in Figure 4. One decoding technique is layered component decoding. It generally processes the rows of a parity check matrix layer by layer. 16 Each layer is processed sequentially, and the processing of each layer depends on data processed in the immediately previous layer. Decoders using the layered technique are designed to have an inbuilt latency for processing the data between layers. For example, before a layer in the parity check matrix can be processed, the data processed by the previous layer must first be received. It may be, however, that these data are not yet available because they are still being processed in the previous layer or are still on the data bus and have yet to reach their destination. Latency such as this has an impact on the performance of the decoder. Problems like this need to be addressed in layered decoding methods. In the proposed circuit, improvements were made to a layered component decoding approach. The method proposed used a plurality of parallel computation blocks coupled to the memory, multiple parity check blocks connected to the computation blocks, and multiple bit update blocks connected to the parity check block. Each bit update block had a memory. The received codeword was split in this system, and at least one column/row was grouped and processed at a time. A low-density parity-check code suitable for efficient hardware implementation was designed with a belief propagation decoder circuit. Codes were arranged according to a sample H matrix whose rows and columns represented the parity check matrix. The decoder circuit had a memory holding parity check value estimates, which could be arranged in groups and was logically connected to different data lengths and depths. A parallel adder generated approximate values that were fed to the parity check circuit. The new bitstream generated new values of the estimates. These values were then stored back in the memory and fed to the bit update circuit. The bit update circuit then updated the new value for the subsequent input data received. Here, layered component decoding was performed by applying the decoding algorithm to each successive layer. Since no particular algorithm was developed, we used a standard one to show how the improved decoder works.
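The intended behaviour of the bit update block described above (Figure 3) can be modelled at a high level as a register that overwrites its stored word only when the incoming word differs from it. The sketch below is a behavioural assumption for illustration, not a transcription of the transistor-level circuit.

```python
class BitUpdateBlock:
    """Behavioural model: a latch holds the current word; an XOR-style
    comparison flags changed bits, and the multiplexer stores the new word."""

    def __init__(self, width=4):
        self.latch = [0] * width          # temporary storage (the latch)

    def update(self, new_bits):
        changed = [a ^ b for a, b in zip(self.latch, new_bits)]  # EXOR stage
        if any(changed):                  # MUX select: pass the new data through
            self.latch = list(new_bits)
        return list(self.latch)           # word forwarded to the next stage

bub = BitUpdateBlock()
print(bub.update([1, 0, 1, 1]))   # new data stored: [1, 0, 1, 1]
print(bub.update([1, 0, 1, 1]))   # unchanged data: latch keeps its value
```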
Applying a decoding algorithm for a particular layer included the use of calculations done in previous layers. The decoding was done using parallelized decoding hardware, and hence its performance may be better than the conventional approach. The memory block was a local RAM for storing the estimates derived within the iteration. These estimates were stored in the memory to save chip area. The storage memory had one output coupled to one input of the parallel adder. This was connected to the negative input of the parallel adder to provide a subtrahend for the subtraction that took place in the parallel adder. The output of the parallel adder was applied to the parity check update circuitry. This block performed the updating of estimates obtained from memory for each of the parity check nodes. The output of the parity check circuit was applied back to the memory to store updated values. It was also applied to the router circuit to update the input nodes' Log-Likelihood Ratio (LLR). The router circuitry was a collection of multiplexers and demultiplexers that forward the appropriate estimate terms from memory to the corresponding bit update circuit. The bit update circuits were accumulators through which the current values of the LLR of the input nodes were maintained from one iteration to the next. LDPC operation Referring to Figure 4, the soft data received were routed into the decoder system through the data bus. The received data were first routed into the bit update block. Here the data were initialized into the components of a vector. Let us assume the vector for the received data is 'L'. We defined, for each row 'm', the set of bit columns for which the H matrix has a one in row 'm'. This set defines the checksum for a row over a finite field. The LDPC decoder helps detect errors in the received data when the checksum is evaluated for every row in the matrix. When data are received, the values may not be precisely binary values of 1 or 0 but some fractional values represented by several bits. Hence, the probability of whether a bit is 1 or 0 can be represented using the LLR, given by LLR(r j ) = ln[P(bit = 0 | r j ) / P(bit = 1 | r j )], where r j is the input bit value. As every input bit arrives, the estimated value is written based on the LLR. Initially, an estimate was assumed for the LLR based on the type of channel being used. A vector 'R mj ' was stored in the SRAM. These were estimates stored in the SRAM after every iteration or cycle of the decoding process and updated in the next iteration. The memory stores a few corresponding rows of values of R mj , representing vector R values for m rows and j columns of the parity check matrix. For every row, the vector L used for the checksum was formed. The vector was then stored in the BUB. The data were fed into the reverse router block by data buses, where the data were rearranged as required by the system from the BUB. The values of the vector L were given as input to the parallel adder (PA). The other input to the parallel adder came from the memory with the values of the data stored in the form of the components of vector R. The parallel adder performed the approximation operations and the subtraction of vector R from L. The results of this subtraction operation, in the output 'sum', were given as input to the parity check circuit and the second set of parallel adders (PA2). A checksum, a sequence of values used to detect errors introduced during data transmission, was computed in the parity check block.
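The quantities introduced above can be made concrete with a short sketch: it computes channel LLRs from received soft values, forms the set of bit columns participating in each row of H, and evaluates the per-row checksum. The BPSK/AWGN form of the channel LLR and the hard-decision checksum are illustrative assumptions only; as noted above, the text does not fix a particular decoding algorithm.

```python
import numpy as np

def channel_llr(r, noise_var=1.0):
    """Assumed BPSK-over-AWGN channel LLR: L_j = 2 * r_j / sigma^2."""
    return 2.0 * np.asarray(r, dtype=float) / noise_var

def row_sets(H):
    """For each row m, the bit columns j for which H has a one in row m."""
    return [np.flatnonzero(row) for row in H]

def parity_checks(H, llr):
    """Hard-decide each bit from its LLR and compute the checksum of every
    row over GF(2); an all-zero result means no detected errors."""
    hard = (llr < 0).astype(int)              # LLR < 0 -> bit decided as 1
    return H.dot(hard) % 2

# Toy example with a small parity check matrix.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
r = [0.9, -1.1, 0.8, 1.2, -0.7, 1.0]          # received soft values
L = channel_llr(r)
print(row_sets(H))          # column indices participating in each row checksum
print(parity_checks(H, L))  # per-row checksums (0 = satisfied)
```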
The results of this operation were then fed to the second set of parallel adder blocks and to the memory block for storage. In PA2, the results of the earlier subtraction (R) and the checksum were added to regenerate the vector L. The new values of L were then sent to the router block to be rearranged into components of vector L. These values were given to the BUB to be stored for the next iteration. Results and discussion The DEMUX and MUX circuits developed here were tested as part of the decoder circuit. The results obtained after simulations at different voltage values and using 180 nm technology are highlighted below, with improvements. Demultiplexer (DEMUX) The 1 × 4 demultiplexer for the LDPC decoder was constructed to have one input D and four outputs, D 0 , D 1 , D 2 , and D 3 . The demultiplexer had two select inputs, S 0 and S 1 . The select inputs formed the decision-maker to connect the input to a selected output. The selection was based on the four possible combinations of the select inputs, namely, S 0 = 0 and S 1 = 0, S 0 = 0 and S 1 = 1, S 0 = 1 and S 1 = 0, and finally, S 0 = 1 and S 1 = 1, representing the binary forms 00, 01, 10, and 11. The proposed demultiplexer was simulated to check its characteristics using Mentor Graphics PADS VX.2.7 x86, a CAD tool, for 180 nm technology (open-access software that can perform an equivalent function is DSCH version 2.7 for schematic design and MICROWIND version 2.0 for layout analysis). The string of data bits was given as input D with the select inputs S 0 and S 1 varied over the four possible combinations. It should also be noted that the voltage rises and falls in Figures 5(a) to 5(c) are not exactly zero or one. There was some signal distortion, but the voltage levels were sufficient to be read as 0 or 1. The voltage variation of 1 V, 1.3 V, and 1.5 V did not significantly affect the output waveforms, with only a slight variation in the peak voltage values. The waveforms shown in Figures 5(a) to 5(c) represent the distribution of bits received from the adder circuit (refer to Figure 4). The data choice is based on S 0 (S A ) and S 1 (S B ). The waveforms of D 0 , D 1 , D 2 , and D 3 also show the effect of the gates' switching characteristics and the peak voltage drops, which are slightly due to the capacitive effect at the input nodes. As the output voltage increases in time, the biasing voltages decrease. A decreasing value of the gate-source voltage reduces the charge density and reduces the output voltage, which does not reach V DD . The output voltage was observed to be delayed in reaching its final value. This was due to the parasitic capacitance, the gate-channel capacitance between the gate-source and gate-drain terminals. Any switching action in the device leads to the formation of parasitic capacitance. A sudden change of voltage from zero to a high value creates a capacitive effect, which can be modelled as an RC circuit. Resistance is created, and the device consumes more power to drive the circuit, which manifests as a delay in the device's output voltage, even when it drives zero load. The parasitic delay grows linearly with the number of inputs. This effect was seen in the waveforms for the demultiplexer, which displayed a slowly increasing ramp voltage. According to the simulation result, the demultiplexer area is 10.5 × 25.555 μm². Multiplexer (MUX) The reverse router had a multiplexer to transmit data bits from the bit update circuit to the parallel adder through the data bus.
The multiplexer's characteristic was choosing a particular input to be connected to the output. The selection of the input was based on the two select signals. In Figure 3, the schematic of the multiplexer is shown. The multiplexer had four inputs, I A , I B , I C , and I D , and a single output, Z. The select inputs were S A and S B . Hence the multiplexer was a 4 × 1 MUX. Since there are only two select lines, the possible input lines were four, and the possible combinations were S B S A = 00, S B S A = 01, S B S A = 10, and S B S A = 11. The schematic in Figure 3 was simulated using the test bench. The 180 nm technology was used for the simulation, with voltage values of 1 V, 1.3 V, 1.5 V, and 2.5 V. Here the threshold voltage loss restricts the output voltage to the range [0 V, V DD - V Tn ]. The proposed multiplexer circuit was simulated for voltage versus time using 180 nm technology for input voltages of 1 V, 1.3 V, and 1.5 V, and the output waveforms are shown in Figures 6(a) to 6(c), respectively. Figures 6(a) to 6(c) show the output voltage of the selected input to be given to the output. Even though the output waveform represented the correct selected input, it was delayed in reaching the maximum voltage. For some inputs, it did not reach the minimum zero value. The delay caused by the inverter and the threshold voltage loss restricted the maximum voltage. Charging the output to a logic-one voltage was very slow compared to the transition to a logic 0. The parasitic capacitance increased the charging time from low to high since charging current was diverted from the output node. The charging of the output capacitance was time-dependent, beginning roughly linearly as (t/2τ n ) and then levelling out. Since V out (t) increases in time, the device bias voltage V GS = V DD - V out (t) = V DS decreases with time. A decreasing value of V GS reduces the channel charge density, while a smaller V DS reduces the drain-source electric field. This indicates that passing a logic 1 voltage through the n-channel transistor is difficult. The spikes seen in the output were caused by the capacitive coupling of the input to the output through the gate-drain capacitance. As the input suddenly increased from 0 V to V DD , the capacitance did not have enough time to drop its voltage instantly. Hence, it retained some charge, which is seen as voltage spikes. The proposed multiplexer circuit area is 9.9 × 32.155 μm². The multiplexer and demultiplexer circuits were simulated using the SilTerra CEDEC pyxis project of the Mentor Graphics CAD tool PADS VX.2.7 x86. The simulation environment used input voltage values of 1 V, 1.3 V, and 1.5 V for 180 nm technology, tabulated in Table 1. The results showed a low power dissipation in nanowatts. This is because of the pass transistor logic, which reduced the number of transistors used and is reflected in the results. A reduced number of transistors (12,14) led to lower power dissipation and a reduced layout area. The delay is only 80 ns and 130 ns for the DEMUX and MUX, respectively. Table 2 shows a comparison of the proposed circuit with various published research. It can be seen that the proposed circuit performs better. The proposed multiplexer circuit has a power dissipation of 7.067 nW, whereas Bousseaud and Negra 7 had a value of 5 mW. The approach used by Bousseaud and Negra 7 used a transmission gate, while pass transistor logic is used in the proposed circuit. Pass Transistor Logic (PTL) provides an advantage in the design of circuits by eliminating redundant transistors.
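As a rough numerical illustration of the threshold-voltage loss and the slow charging behaviour described above, the sketch below evaluates a commonly used first-order model of an NMOS pass transistor charging a load capacitance: the output can only reach V DD - V Tn , and the rise starts roughly linearly in t/(2τ n ) before levelling off. The specific model form and all parameter values are assumptions for illustration, not values extracted from the reported simulations.

```python
VDD, VTN = 1.5, 0.45          # assumed supply and NMOS threshold voltage (V)
TAU_N = 20e-9                 # assumed time constant of the pass transistor (s)

def vout_pass_nmos(t):
    """First-order estimate of an NMOS pass transistor charging a capacitor:
    V_out(t) = (VDD - VTn) * x / (1 + x), with x = t / (2 * tau_n).
    The output saturates at VDD - VTn (threshold-voltage loss)."""
    x = t / (2.0 * TAU_N)
    return (VDD - VTN) * x / (1.0 + x)

for t in (10e-9, 40e-9, 100e-9, 400e-9):
    print(f"t = {t * 1e9:5.0f} ns -> Vout ~ {vout_pass_nmos(t):.3f} V "
          f"(limit {VDD - VTN:.2f} V)")
```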
When the number of transistors was reduced, the circuit had a lower power dissipation, as each transistor occupies some area and dissipates power. For the DEMUX circuit, the power dissipation reported by Saseendran and Mehra 6 had a value of 142 µW; for the proposed circuit, it was 5.14 nW. The input voltage also tended to be at a lower value of 1.5 V. There was a huge difference in the number of transistors used in the designs. Bit update circuit The bit update circuit receives new data, arranges them into vectors, and routes them to the multiplexer as input to the parallel adder. In each iteration of the decoder circuit, the bit update circuit stored new data values after rewriting the data received from the router circuit with data from the transmitter received through the data bus. The bit update circuit was simulated for voltage versus time using 180 nm technology for input voltages of 1 V, 1.3 V, and 1.5 V. The carry inputs to the second set of parallel adders are also shown as check 0 to check 3. The output was measured at various points of the circuit, that is, the output of the memory unit (Vo1), the output of the adder (Vo2), the output of the parity check (Vo3), the output of the router (Vo4), and finally at the reverse router (VoF). It was observed that at the initial check points the output voltage did not suffer from any signal loss. As the circuit became larger, all effects of power loss came into play due to the different circuits. At the final output (VoF), glitches were observed at regular intervals. This happened in off pass transistors whose source and drain were initially high and were then pulled low. The output of the router circuit shows that the waveform reached the peak voltage but did not reach the zero line. This represents the presence of some minimum voltage that did not allow the output to reach zero. Practically, the drain current of a CMOS transistor does not reach zero once the voltage of the gate terminal goes below the threshold voltage. These values, from the parity check unit block (PUCB), are the most up to date and are the values used for the next iteration. 23 The flow of data into the circuit, with the received data input at the bit update circuit, was tested with bits of data given using the rows from a standard H matrix. Every stage of the movement of the bits through each layer, namely the bit update block, the reverse router through the data bus to the first parallel adder, from the adder to the parity check block, the second set of parallel adders, and the stored data in the memory, has been simulated and the outputs observed. Tabulated results of the proposed LDPC decoder The results of individual layers and the entire decoder are tabulated in Table 3. Various input voltages were given to observe the effect on the decoder. The decoder circuit designed achieved low power dissipation and a reasonable delay improvement. Comparison of results In Table 4, the obtained results for the LDPC decoder are compared and analyzed with other published work. The proposed circuit performed better in power dissipation than the work done by Lee et al. 21 The power dissipated by the proposed circuit is in nanowatts, while all references are in milliwatts (19,21). This may be because the proposed circuit was designed using pass transistor logic, which reduced the number of transistors. CMOS circuits dissipate power during switching; hence, reducing the switching activity reduced the power dissipation. Compared with other studies, 19 the performance at 1.5 V was much better in power dissipation and throughput.
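Using the figures quoted above, the relative improvements can be computed directly. The short sketch below only restates the reported values (7.067 nW vs. 5 mW for the multiplexer, 5.14 nW vs. 142 µW for the demultiplexer) as percentages, in the spirit of the percentage-of-improvement comparison in Table 2; as noted in the conclusion, the reference designs use different technology nodes, so these figures should be read as indicative only.

```python
def improvement_percent(reference, proposed):
    """Percentage reduction of the proposed value relative to the reference."""
    return 100.0 * (reference - proposed) / reference

mux_ref, mux_new = 5e-3, 7.067e-9        # W: Bousseaud and Negra vs. proposed MUX
demux_ref, demux_new = 142e-6, 5.14e-9   # W: Saseendran and Mehra vs. proposed DEMUX

print(f"MUX power reduction:   {improvement_percent(mux_ref, mux_new):.4f} %")
print(f"DEMUX power reduction: {improvement_percent(demux_ref, demux_new):.4f} %")
# Both work out to well above 99 %, consistent with the comparison discussed above.
```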
At lower voltages, the noise margin becomes critical. The area of the proposed circuit is also smaller than that of Bhargava et al. 21 (Table 4). Conclusion The proposed router circuit, which includes the multiplexer and demultiplexer circuits, was designed using pass transistor logic. The proposed circuit gave better power dissipation and throughput performance than existing circuits due to the reduced critical path. The circuits were simulated using the Mentor Graphics CAD tool for the design and layout. The results show significant improvement in power dissipation, area, and delay. For the multiplexer, the improvement in power was 99%, although there was a difference in the technology used. The number of transistors used in the proposed circuit was also significantly reduced, which was the intention of this work. The delay obtained was 80 ns, and the areas of 10.5 × 25.55 μm² for the demultiplexer and 9.9 × 32.15 μm² for the multiplexer were considered small. The designed circuit's silicon area utilization ensured reduced delay and power dissipation, making the router circuitry fitting for use in the decoder circuit. The multiplexer and demultiplexer circuits can be used in an LDPC decoder that uses the layered approach. The multiplexer received input from the bit update block based on the state of the select inputs. The select inputs chose which data bits needed to be routed to the parallel adder block for the next iteration. Data availability All data underlying the results are available as part of the article and no additional source data are required.
9,279.2
2022-01-05T00:00:00.000
[ "Computer Science", "Engineering" ]
Protection against Doxorubicin-Induced Cardiac Dysfunction Is Not Maintained Following Prolonged Autophagy Inhibition Doxorubicin (DOX) is a highly effective chemotherapeutic agent used in the treatment of various cancer types. Nevertheless, it is well known that DOX promotes the development of severe cardiovascular complications. Therefore, investigation into the underlying mechanisms that drive DOX-induced cardiotoxicity is necessary to develop therapeutic countermeasures. In this regard, autophagy is a complex catabolic process that is increased in the heart following DOX exposure. However, conflicting evidence exists regarding the role of autophagy dysregulation in the etiology of DOX-induced cardiac dysfunction. This study aimed to clarify the contribution of autophagy to DOX-induced cardiotoxicity by specifically inhibiting autophagosome formation using a dominant negative autophagy gene 5 (ATG5) adeno-associated virus construct (rAAV-dnATG5). Acute (2-day) and delayed (9-day) effects of DOX (20 mg/kg intraperitoneal injection (i.p.)) on the hearts of female Sprague–Dawley rats were assessed. Our data confirm established detrimental effects of DOX on left ventricular function, redox balance and mitochondrial function. Interestingly, targeted inhibition of autophagy in the heart via rAAV-dnATG5 in DOX-treated rats ameliorated the increase in mitochondrial reactive oxygen species emission and the attenuation of cardiac and mitochondrial function, but only at the acute timepoint. Deviation in the effects of autophagy inhibition at the 2- and 9-day timepoints appeared related to differences in ATG5–ATG12 conjugation, as this marker of autophagosome formation was significantly elevated 2 days following DOX exposure but returned to baseline at day 9. DOX exposure may transiently upregulate autophagy signaling in the rat heart; thus, long-term inhibition of autophagy may result in pathological consequences. Introduction Doxorubicin (DOX) is a highly effective chemotherapeutic agent used to reduce tumor burden in a wide variety of cancers [1]. Unfortunately, the clinical use of DOX is restricted due to the development of cardiotoxicity [2][3][4]. Current practices attempt to limit the cumulative dose of DOX in an effort to reduce the incidence of congestive heart failure (CHF) [5]. However, retrospective studies evaluating DOX-related cardiac outcomes indicate that CHF occurs with greater frequency and with lower cumulative doses than previously recognized [5,6]. The onset of cardiomyopathy following DOX chemotherapy negatively impacts long-term cardiac outcomes in cancer survivors and also severely limits treatment options for patients with relapsed or refractory disease [5]. Despite the prevalence and gravity of DOX-induced cardiac dysfunction there are currently no clinically-approved preventative strategies or standard of care practices for the management of DOX-related cardiomyopathy in cancer patients and survivors. Thus, detailed understanding of the molecular mechanisms that drive DOX cardiotoxicity is necessary for the development of cardioprotective therapies. A dynamic role for autophagy has been proposed in the development of DOX cardiotoxicity [7][8][9]. Although controversy exists regarding autophagy's complex interaction with pathological processes and its potential to disrupt cellular homeostasis, it is understood that DOX alters the regulation of autophagy within the heart [8]. 
Under homeostatic conditions autophagy functions to degrade and recycle damaged or senescent organelles, proteins and cellular components. However, continued balance of autophagic signaling is necessary for the sustained health and function of cardiomyocytes [10,11]. Following DOX exposure, the prevailing consensus suggests that autophagy is impaired as the result of an acute increase in autophagosome formation and corresponding suppression of autophagic flux [12]. Although aberrant proteolytic processing by autophagy has been consistently reported in cardiomyocytes exposed to DOX, the physiological consequences of this dysfunction remain unclear [8]. Given the potential for impaired autophagy to promote reactive oxygen species (ROS)-induced cellular damage and mitochondrial dysfunction, [9] targeting autophagy may be an effective strategy to reduce oxidative damage to cardiomyocytes and limit myocardial injury. Thus, this study aimed to clarify the contribution of autophagy to DOX-induced cardiac dysfunction and elucidate the relationship between autophagy and disturbed redox balance by preventing early autophagosome initiation in the heart. Our results uncovered acute benefits to autophagy inhibition that were not observed at a later timepoint. Biological Response to dnATG5 and Doxorubicin Exposure Body weight did not differ initially (initial weight) or four weeks following treatment with autophagy gene 5 (ATG5) recombinant adeno-associated virus (rAAV-dnATG5) or saline (treatment weight) among experimental groups in either the 2-or 9-day experiments (Table 1). Additionally, final weight and heart weight for groups euthanized 2 days following DOX or saline administration were not different. At 9 days, all DOX-treated rats receiving either rAAV-dnATG5 or saline, weighed considerably less and had a significantly reduced heart weight compared to saline-treated rats. Validation of the Experimental Treatment Expression of GFP and the ATG5-ATG12 conjugation product in cardiac tissue were determined to confirm the efficacy of our intervention. GFP expression was exclusively detected in the rAAV-dnATG5-treated rats ( Figure 1A,B). At the 2-day timepoint, expression of the ATG5-ATG12 conjugate was significantly elevated in the Saline-DOX group compared to Saline-Saline and dnATG5-DOX groups ( Figure 1C). At the 9-day timepoint ATG5-ATG12 conjugation was significantly reduced in the dnATG5-DOX rats compared to Saline-Saline and Saline-DOX ( Figure 1D). Inhibition of Autophagosome Formation Protects against Acute DOX-Induced Cardiomyopathy Similarly to previous findings [13,14], our data show that administration of a single injection of DOX (20 mg/kg intraperitoneal injection (i.p.)) results in the rapid development of cardiac dysfunction.
Evaluation of left ventricle (LV) systolic function 2 days following DOX revealed a significant reduction in fractional shortening in Saline-DOX rats compared to Saline-Saline and dnATG5-DOX (Figure 2A), and a reduction in posterior wall shortening velocity (PWSV) in Saline-DOX rats compared to Saline-Saline ( Figure 2B). Contrastingly, preservation of cardiac function was not retained in the dnATG5-DOX group 9 days post DOX administration. At day 9, fractional shortening was significantly reduced in the Saline-DOX rats compared to Saline-Saline ( Figure 2C), and PWSV was diminished in both the Saline-DOX and dnATG5-DOX groups compared to Saline-Saline ( Figure 2D). Assessment of LV global diastolic and systolic function revealed a significant increase in myocardial performance index (MPI) in Saline-DOX rats compared to all other groups 2 days post DOX treatment ( Figure 3A). However, MPI was significantly increased in the Saline-DOX and dnATG5-DOX groups compared to Saline-Saline at 9 days ( Figure 3B). In addition, independent of saline or dnATG5 treatment, DOX resulted in reduced LV septal wall thickness during systole and diastole as well as reduced LV posterior wall thickness during systole compared to Saline-Saline rats at 2 days (Table 2). No differences existed between groups for any parameter of wall thickness at the 9-day timepoint ( Table 2). Inhibition of Autophagosome Formation Protects against Acute DOX-Induced Mitochondrial Dysfunction and ROS Production Cardiac mitochondrial uncoupling and enhanced rate of ROS generation are directly related to the development of cardiac dysfunction following DOX exposure [13].
Our results confirm these previous findings 2 days following DOX exposure; Saline-DOX rats had a significantly decreased respiratory control ratio (RCR) compared to all other groups, as the result of increased mitochondrial state 4 respiration (Table 3). In addition, permeabilized cardiac muscle fiber bundles of Saline-DOX animals produced a significantly greater amount of H 2 O 2 compared to both other groups ( Figure 4A). A total of 9 days following DOX exposure, RCR was decreased in both the Saline-DOX and dnATG5-DOX rats compared to Saline-Saline, with no significant differences in state 3 or state 4 respiration among groups (Table 3). H 2 O 2 emission at 9 days was elevated in the Saline-DOX and dnATG5-DOX groups compared to Saline-Saline ( Figure 4B). Discussion DOX is one of the most widely utilized antineoplastic agents that is unfortunately linked to the development of severe cardiac pathology [15][16][17][18][19]. Despite extensive investigation into the molecular mechanisms responsible for DOX cardiotoxicity, a precise understanding remains indeterminate. In particular, autophagy is implicated in the cardiac response to DOX as a result of increased oxidative damage to mitochondria [7][8][9][10][11]. However, conflicting results exist regarding the contribution of autophagy to DOX-induced cardiomyopathy [9]. Therefore, this study was designed to further discern the role of aberrant autophagy signaling following DOX administration by evaluating the acute (2-day) and delayed (9-day) effects of autophagy inhibition on the heart.
Our results demonstrate that autophagy signaling is upregulated acutely following DOX administration and that preventing the DOX-induced activation of autophagy resulted in a cardioprotective phenotype. However, sustained inhibition of cardiac autophagy abolishes the beneficial effects of autophagy inhibition due to the transient nature of autophagy signaling upregulation following DOX exposure. DOX-Induced Autophagy Signaling Autophagy is a catabolic process responsible for maintaining cell homeostasis through recycling of dysfunctional and long-lived proteins and organelles by lysosomal proteases. Proper regulation of the steps responsible for autophagosome formation, autophagosome-lysosome fusion and autolysosome degradation is required to support controlled degradation of intracellular proteins within this system [20,21]. In the heart, investigation into the role of autophagy during basal conditions has shown that deficiency of lysosome-associated membrane protein 2 (LAMP2), a protein required for formation of the autolysosome, resulted in accumulation of autophagic vacuoles, impaired protein degradation and the development of cardiomyopathy [22,23]. Similar results have been demonstrated with deficient autophagosome formation via ATG5 deletion, indicating an important role for autophagy in the preservation of cardiomyocyte structure and function [24,25]. In contrast, excessive autophagy is also detrimental to cardiac function and is established to promote cardiac dysfunction [26]. While the role of autophagy in DOX-induced cardiac dysfunction remains unclear, the growing consensus is that DOX stimulates the initiation of autophagosome formation in cardiac cells [9]. However, temporal evaluation of autophagy upregulation following DOX administration has begun to reveal fluctuation in the expression of autophagy markers when compared at early and delayed timepoints [12,27,28]. Our results support the hypothesis that upregulation of autophagy markers is transient following DOX administration. Specifically, 2 days following DOX injection we show a significant upregulation of ATG5-ATG12 conjugation in the heart. However, when measured 9 days following injection this marker of autophagosome formation was not elevated as reported at 2 days. These data highlight the importance of time course studies in the development of clinical strategies to prevent DOX cardiac dysfunction, as manipulation of ATG5-ATG12 conjugation using rAAV-dnATG5 was sufficient to prevent the DOX-induced increase at day 2, while the same dose reduced ATG5-ATG12 conjugation below basal levels at day 9. Cardiac Function and DOX-Induced Autophagy A direct link between autophagy and DOX cardiotoxicity was first established by Lu et al., when they showed that administration of 3-methyladenine (3-MA), a class III phosphatidylinositol 3-kinase (PI3K) inhibitor, in combination with DOX attenuated cardiac dysfunction [29]. Similar findings using 3-MA to inhibit autophagy have been reported in cultured cardiomyocytes, further supporting the notion that autophagy dysregulation is required for DOX-induced pathology [30][31][32][33][34]. In our study, cardiac function examined following DOX administration in rAAV-dnATG5 treated rats demonstrated similar cardioprotection when compared to pharmacological inhibition of autophagy at the 2-day timepoint. 
This acute cardioprotection is consistent with the hypothesis that inhibition of autophagosome formation prevents DOX cardiotoxicity by maintaining normal autophagic flux and decreasing demand on the lysosomes [12]. However, we also show cardiac function is not preserved in DOX administered rats when autophagosome formation is knocked down below basal levels in the rat heart. The loss of cardioprotection at the delayed timepoint may be related to temporal changes in autophagy signaling as prevention of pathological ATG5-ATG12 conjugation at 2 days presents as a deficiency in conjugation at 9 days. Relationship between Autophagy and Oxidative Stress The accumulation of mitochondrial ROS is involved in the progression of DOX cardiomyopathy via the regulation of UNC-51-like kinase 1 (ULK1) phosphorylation at multiple binding sites [35]. Conversely, cardiac function is preserved when mitochondrial ROS production is attenuated, in part by inhibition of autophagy [36]. Specifically, supplementation with antioxidant compounds in conjunction with DOX therapy has proven effective in the preclinical treatment of DOX cardiotoxicity [13]. ROS production in the cardiomyocytes is proposed to induce autophagy as a means to remove damaged mitochondria [37,38]. However, it is also hypothesized that DOX can directly stimulate autophagy, which in turn can jeopardize the cellular defenses against ROS production [12]. Evidence of this has been shown in skeletal muscle where prevention of DOX-induced autophagy in the soleus was associated with enhanced transcription of antioxidant response element-related genes and increased antioxidant capacity [39]. These beneficial modifications to muscle redox balance resulted in the attenuation of mitochondrial dysfunction and ROS emission [39]. Reduced autophagy initiation in the heart via Beclin 1 haploinsufficiency resulted in a similar attenuation in ROS production, which was associated with a decreased need for autolysosomal protein degradation and improved myocardial performance [12]. Furthermore, transgenic overexpression of Beclin 1 promoted ROS production and exacerbated cardiac dysfunction in DOX treated mice [12]. The relationship between autophagy and ROS accumulation is unclear, but may be related to lysosomal dysfunction and the accumulation of damaged proteins [40,41]. In addition, augmented degradation of functional organelles by accelerated autophagic degradation has also been proposed as the link between autophagy and oxidative stress [42,43]. While further work is necessary to determine the exact interaction between these two processes, our results are consistent with the idea that a regulatory cross-talk exists between autophagy and ROS production. In particular, our data show that preservation of basal autophagy signaling in DOX administered rats prevents mitochondrial oxidative damage, and the reduction of autophagosome formation below basal levels impairs redox balance in the heart. Experimental Animals Young adult (~6-month-old) female Sprague-Dawley rats were used in these experiments. The current study utilized two experimental endpoints to determine acute (2-day) and delayed (9-day) effects of DOX exposure and autophagy on the heart. Animals in the 2-day and 9-day DOX exposure studies were randomly divided between experimental groups. Autophagy was inhibited via tail vein injection of a recombinant adeno-associated virus expressing a dominant negative mutation of ATG5 (dnATG5) (10 11 vg). 
The dnATG5 recombinant adeno-associated virus (rAAV-dnATG5) was created via a K130R mutation that prevents the conjugation of ATG5 to ATG12 [44]. The cytomegalovirus (CMV) promoter and AAV serotype 9 were used to drive gene expression of the rAAV-dnATG5 construct, and the vector was tagged with green fluorescent protein (GFP) to verify its presence in the myocardium. Efficacy of this construct has been previously demonstrated by our group [39,42]. Saline was used as the vehicle and was administered identically to dnATG5. Four weeks following rAAV-dnATG5 or vehicle treatment, DOX (20 mg/kg) or saline (equal volume) were administered as a single intraperitoneal (i.p.) injection. This DOX treatment protocol induces reproducible cardiac dysfunction in female rats, which develops two days following administration [13,14]. All procedures were carried out in compliance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals [45], and were approved by the Institutional Animal Care and Use Committees of the University of Florida (IACUC protocol #201207739; 8 January 2013; 2-day DOX exposure study) and University of South Carolina (IACUC protocol #2387-101272-100417; 4 October 2017; 9-day DOX exposure study). Echocardiography Transthoracic echocardiography was performed to assess cardiac function (Aplio XV, Toshiba Medical Systems, Tokyo, Japan for 2-day DOX exposure study and LogiQe NextGen, SOUND Technologies, Carlsbad, CA for 9-day DOX exposure study). Under anesthesia with inhaled isoflurane, two-dimensional ultrasound images and M-mode tracings of the left ventricle (LV) were obtained in the parasternal short-axis view at the level of the papillary muscles. Measurements were then performed using techniques as reported previously [13]. In brief, LV fractional shortening and PWSV were used to assess LV systolic function, and the MPI was used as a measurement of combined LV diastolic and systolic function. Measurements of diameters, thicknesses and time intervals were performed in ImageJ (NIH) on 10-15 cardiac cycles and averaged for each rat. Analysis and confirmation of cardiac function was performed by researchers blinded to the experimental groups. Following echocardiography animals were euthanized via overdose of inhaled isoflurane and hearts were excised. Cardiac Muscle Permeabilization Permeabilized cardiac muscle fiber bundles were used to measure mitochondrial function and ROS production [13]. Briefly, 5-7 mg sections of LV were placed in a plastic petri dish containing ice cold buffer X (50 mM K-Mes, 35 mM KCl, 7.23 mM K 2 EGTA, 2.77 mM CaK 2 EGTA, 20 mM imidazole, 0.5 mM dithiothreitol (DTT), 20 mM taurine, 5.7 mM ATP, 15 mM PCr and 6.56 mM MgCl 2 , pH 7.1). Muscle fibers were gently but thoroughly separated by a single blinded researcher in ice-cold buffer X to maximize surface area. Permeabilization of the fibers occurred by treatment with 50 µg/mL saponin diluted in buffer X and rotated by full inversion continuously for 30 min at 4 • C. Following permeabilization, fiber bundles were washed for 3 × 5 min in ice-cold buffer Z (105 mM K-Mes, 30 mM KCl, 1 mM EGTA, 10 mM K 2 HPO 4 , 5 mM MgCl 2 -6H 2 O, 0.005 mM glutamate, 0.02 mM malate and 0.5 mg/mL BSA, pH 7.1) by continuous inversion rotation. Mitochondrial Respiration Mitochondrial oxygen consumption rate was measured polarographically in water-jacketed respiration chambers maintained at 37 • C (Hanstech Instruments, King's Lynn, UK) [13]. 
Following calibration, permeabilized fiber bundles were incubated with 1 mL of buffer Z containing 20 mM phosphocreatine to saturate creatine kinase. Flux through complex I was measured using 2 mM pyruvate and 2 mM malate. The ADP-stimulated respiration (state 3) was initiated by adding 0.25 mM ADP to the respiration chamber. Basal respiration (state 4) was determined in the presence of 10 µg/mL oligomycin to inhibit ATP synthesis. RCR was calculated by dividing state 3 by state 4 respiration. Mitochondrial ROS Emission ROS emission in permeabilized muscle fibers was determined using Amplex Red (Molecular Probes, Eugene, OR, USA) [13]. This assay is based on the concept that horseradish peroxidase (HRP) catalyzes the H 2 O 2 -dependent oxidation of non-fluorescent Amplex Red to fluorescent resorufin red. Superoxide dismutase was added to the preparation to convert all superoxide into H 2 O 2 . Although this assay measures all H 2 O 2 produced in the fiber, previous work has indicated that the predominant amount of ROS production in the permeabilized muscle fiber preparation is released from mitochondria [46,47]. Western Blot Analysis Cardiac muscle samples were homogenized 1:10 (wt/vol) in 5 mM Tris (pH 7.5) and 5 mM EDTA (pH 8.0) with a protease inhibitor cocktail (Sigma-Aldrich, St. Louis, MO, USA) and centrifuged at 1500 g for 10 min at 4 • C. Supernatant was separated from the pellet and supernatant protein content was assessed using the Bradford method (Sigma-Aldrich). An amount of 20-40 µg of protein were separated using polyacrylamide gel electrophoresis, transferred to nitrocellulose membranes and subsequently incubated with primary antibodies directed against conjugated ATG5-ATG12 (1:500; #4180) and GFP (1:1000; #2956) (Cell Signaling Technologies, Danvers, MA, USA) diluted in Odyssey blocking buffer (LI-COR Biosciences, Lincoln, NE, USA). GAPDH (1:1000; sc47724) (Santa Cruz Biotechnology, Dallas, TX, USA) was used to control for equal protein loading and transfer. Membranes were exposed to Alexa Fluor 680 IgG or 800 IgG (LI-COR Biosciences) secondary antibodies. Imaging and analysis were performed using the Odyssey CLx imaging system and Image Studio software (LI-COR Biosciences). Statistical Analysis Results were evaluated by ANOVA with Tukey's post hoc tests performed to determine differences between the means where appropriate. Significance was established at p < 0.05 and all values are reported as means ± SEM. Conclusions DOX accumulation within the myocardium creates a toxic environment that fosters the development of cardiac dysfunction. Although dysregulation of autophagy is an established complication associated with DOX cardiotoxicity, the results from this study offer evidence that manipulation of autophagosome formation does not provide extended benefits as a result of temporal changes in autophagy signaling following DOX administration. Nevertheless, our data do support the accepted view that ROS is a key contributing factor to DOX cardiotoxicity and that a direct interaction between autophagy and oxidative stress exists. Finally, as a result of the transient nature of proteolytic activity following cellular injury, this study emphasizes the need for future work focused on differences in acute versus delayed proteolytic signaling in the development of strategies to combat DOX cardiac dysfunction.
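For completeness, the two quantitative steps described in the Methods above, the respiratory control ratio (state 3 divided by state 4 respiration) and the one-way ANOVA with Tukey post hoc comparisons, can be sketched as follows. The group labels and all numerical values are placeholders for illustration only, not data from this study.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def rcr(state3, state4):
    """Respiratory control ratio: state 3 divided by state 4 respiration."""
    return np.asarray(state3, dtype=float) / np.asarray(state4, dtype=float)

# Placeholder RCR values for three groups (illustrative only).
groups = {
    "Saline-Saline": [4.1, 3.9, 4.3, 4.0],
    "Saline-DOX":    [2.6, 2.9, 2.7, 2.5],
    "dnATG5-DOX":    [3.8, 3.6, 3.9, 3.7],
}

# One-way ANOVA across groups, followed by Tukey's HSD for pairwise differences.
f_stat, p_value = f_oneway(*groups.values())
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```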
5,441.8
2020-10-30T00:00:00.000
[ "Medicine", "Biology" ]
Joint analysis of sequence data and single-nucleotide polymorphism data using pedigree information for imputation and recombination inference We developed a general framework for family-based imputation using single-nucleotide polymorphism data and sequence data distributed by Genetic Analysis Workshop 18. By using PedIBD, we first inferred haplotypes and inheritance patterns of each family from SNP data. Then new variants in unsequenced family members can be obtained from sequenced relatives through their shared haplotypes. We then compared the results of our method against the imputation results provided by Genetic Analysis Workshop organizers. The results showed that our strategy uncovered more variants for more unsequenced relatives. We also showed that recombination breakpoints inferred by PedIBD have much higher resolution than those inferred from previous studies. Background Next-generation sequencing (NGS) technologies have profoundly changed the landscape of genetic studies [1]. Although the cost of sequencing is becoming more affordable, increasingly more studies are choosing NGS as the primary platform to collect data, either at the whole genome level or for targeted regions. However, costs of sequencing thousands of individuals and the downstream analysis are still prohibitively high. On the other hand, many projects have already accumulated single-nucleotide polymorphism (SNP) data from previous studies. In such cases, researchers only need to sequence a small subset of family members (e.g., proband and parents) to reduce the costs. By jointly analyzing sequence data from a subset of family members together with SNP data from the families, computational approaches may fully recover variant information in unsequenced members. The data distributed by Genetic Analysis Workshop 18 (GAW18) provide an excellent example based on this design strategy. Many of the pedigrees are very large, and all of them have a significant number of members without SNP genotypes, which makes the imputation computationally very challenging. Our laboratory has recently developed an efficient haplotype inference algorithm called PedIBD, which is designed specifically for large pedigrees with many untyped individuals [2]. By taking advantage of haplotypes inferred by PedIBD using SNP data, we developed a procedure to computationally impute variants for unsequenced individuals based on haplotype sharing between them and their sequenced relatives. The advantage of our approach over the imputation provided by GAW lies in the fact that whereas our approach can take each pedigree as a whole when inferring haplotype or inheritance, GAW had to partition big pedigrees into smaller families. Our approach thus will provide more complete and more accurate results. In addition, based on the provided SNP data, we can also provide inferred recombination breakpoints with high resolution within each pedigree. Data We focused our analysis on chromosome 3 of the GAW18 dataset. The dataset consists of 1389 individuals from 20 families. Among them, 959 individuals were genotyped using SNP chips. In addition, a subset of 464 genotyped individuals were also sequenced. The total number of SNPs from the chip data is 65,519. Because only rs numbers of these SNPs were provided, we obtained their map positions from the NCBI dbSNP database (Build 37). Nineteen SNPs were removed because they either had no matched rs numbers or the SNPs with the same rs numbers were mapped to a different chromosome. 
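The marker clean-up just described, dropping SNPs that have no matching rs number or whose rs number maps to a different chromosome after the dbSNP lookup, amounts to a simple table join. The pandas sketch below is only an illustration of that step; the file names and column names are hypothetical and are not part of the GAW18 distribution.

```python
# Illustrative sketch only: drop chip SNPs whose rs numbers are missing from
# dbSNP or map to a chromosome other than 3. File/column names are hypothetical.
import pandas as pd

chip = pd.read_csv("chip_snps.txt", sep="\t")            # columns: rsid, ...
dbsnp = pd.read_csv("dbsnp_b37_positions.txt", sep="\t") # columns: rsid, chrom, pos

merged = chip.merge(dbsnp, on="rsid", how="left")
keep = merged["chrom"].astype(str).eq("3") & merged["pos"].notna()

print(f"removed {(~keep).sum()} SNPs; kept {keep.sum()} for analysis")
filtered = merged.loc[keep].sort_values("pos")
```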
Sequence data was converted to A/C/G/T format using VCFtools [3]. The total number of SNPs called from sequence data is approximately 1.75 million per individual. After removing SNPs with a high missing rate (>5%), the total number of sequence variants that used in our analysis is approximately 1.69 million. Analysis Our family-based imputation approach works in several steps. First, recombination breakpoints are inferred and haplotypes are assigned at each recombination-free segment for each individual with SNP chip data using PedIBD. Some individuals without chip data may also be assigned some unique haplotypes based on the inferred inheritance (e.g., untyped parents with typed children). Then, for newly discovered SNPs from sequenced individuals, at each individual locus, the allele on a haplotype can be determined if a sequenced individual sharing the same haplotype is homozygous at this locus. After all homozygous SNPs have been processed, the information can be propagated to heterozygous SNPs if the allele on one haplotype has already been assigned. Genotypes of unsequenced individuals can then be imputed based on their assigned haplotypes (see Figure 1 for the framework and an example). Conflicts may occur when the algorithm tries to assign different allele types to the same haplotype. Conflicts reflect inconsistency between inferred inheritance from chip data and observed SNPs from sequence data. Although there is a possibility that the inferred inheritance could be wrong, a significant majority of conflicts are actually due to high genotype calling errors from sequence data. One should notice that under the assumption that genotyping errors are randomly distributed among all SNPs in sequenced individuals, the total number of loci with conflicts will be proportional to the number of SNPs as well as the number of individuals with genotyping errors even when the typing error rate is a constant. Therefore, the total number of loci with conflicts increases with the size of a pedigree and can thus be substantial in large pedigrees. Figure 2 shows the pedigree structure of family 21 and one haplotype segment inferred by PedIBD. There are several characteristics of the proposed algorithm. First, because information from the whole pedigree has been used, it is possible that haplotypes for individuals with no data at all can be recovered (e.g., individual 949). It is also possible that only one of the two haplotypes of an Figure 1 The imputation framework (left) and an example illustrating the imputation procedure (right). In the example, individuals with grey color have single-nucleotide polymorphism (SNP) chip data from genome-wide association studies (GWAS), and individuals with black color have both chip data and sequence data. The haplotypes in this segment are labelled using different colors and they are inferred based on GWAS data. Notice that both haplotypes of individual 949 and one haplotype of individual 957 can be recovered based on the information of their children (the missed haplotype is illustrated using a thin black bar). However, only one haplotype can be recovered for 957 because he only has one child. The two variants are from sequence data (1 and 2 are alleles, and 0 is missing). For the first variant, because member 974 is homozygous genotype (1, 1), the alleles on its two haplotypes (pink and dark blue) can be assigned. 
Subsequently, the alleles on the light blue haplotype of member 940, the yellow haplotype of member 956, and the green haplotype of member 939 can be resolved (all three have sequence data). For all the other members, their alleles can be imputed based on the color of their haplotypes. However, haplotype light green (in members 949, 959, and 960) cannot be imputed because it has not occurred in any sequenced individual, thus showing missing one allele. For the second variant, our algorithm will identify a conflict because member 974 assigns allele 2 to the pink haplotype, and member 939 assigns allele 1 to the pink haplotype. individual with no data can be recovered (e.g., individual 957). Second, loci with inconsistent genotypes called from sequence data can be identified (e.g., locus 2 in Figure 1, right). Third, at a variant locus identified from sequence data, if there are no sequenced individuals with homozygous genotypes, the phase at this position cannot be determined. However, because most new variants from sequence data are rare, the probability of having no homozygous genotypes is extremely low. For each offspring in a family, a switch on its haplotype assignment indicates a recombination event. We collected all recombination events on chromosome 3 and examined the resolution of recombination breakpoints. Results Inconsistency between single-nucleotide polymorphism chip data from genome-wide association studies and sequence data Among 65,500 SNPs from genome-wide association studies (GWAS) data, 63,803 of them were rediscovered from prefiltered whole genome sequencing (WGS) data. Because of high accuracy of chip data, we treated the genotypes from GWAS data as ground truth and first examined the SNP-calling accuracy of WGS data on this subset. Families 14, 15, 23, and 25 were excluded from our analysis because they did not have any sequenced individuals. The inconsistency rate is about 5.50% on average (Table 1). After eliminating families 7, 9, and 11, which had unusual high rates of missing in GWAS data, the inconsistency rate is about 2.25%. We anticipated the issue that the allele types from GWAS data and from sequence data may be encoded differently (i.e., different strands) and did not include discrepancies when alleles are A and T (or G and C). Among the inconsistent genotypes (excluding families 7, 9, and 11), 34.26% were caused by missing genotypes in GWAS, 60.17% were caused by missing in WGS, and the remaining (5.57%) were mismatches. The very high inconsistency for families 7, 9, and 11 was mainly caused by high missing rates of GWAS data in these families. The average missing rate was 2.0% for WGS and 0.72% for GWAS data (excluding families 7, 9, and 11). Both measures indicate that for joint analysis of SNP and sequence data, one should not only impute variants in unsequenced or untyped individuals but also impute these missed or incorrectly called SNPs. For the remaining analysis, we have replaced the incorrect genotypes from sequence data using the genotypes from GWAS data. Comparison of imputation results between Genetic Analysis Workshop and our approach We compared our imputation results with the GENO dataset provided by GAW, which recovered 1.2 million variants for 813 individuals (including sequenced individuals themselves). 
GAW took a 2-step procedure for imputation: a preliminary imputation based on population level information alone in the first step and an additional imputation procedure using pedigree information in the second step using SimWalk2 and Merlin [4,5]. Neither program can handle pedigrees as large as the GAW18 families, so both required large families to be partitioned into smaller subfamilies. In contrast, by taking each pedigree as a whole, our method was able to recover approximately 1.53 million SNPs for 1011 individuals (these include an additional 198 individuals without sequence data or GWAS data), which accounts for 90.6% of the total 1.69 million variants. Both the number of imputed SNPs and the number of imputed individuals by our approach are substantially higher than those given by the GAW. For the 9.4% remaining variants, our imputation method found that 7.39% had conflicts that are similar to the one in Figure 1 (right, second locus), caused by calling errors from sequence data. About 2% of them were located between haplotype segments. For the remaining 0.01%, all of the sequenced individuals in each pedigree were heterozygous; therefore, genotypes of unsequenced individuals cannot be imputed. Among 1,070,318 common variants imputed by both methods for the 813 individuals, we found 0.15% genotype mismatch between the two sets. Our approach has imputed 467,485 more variants than the GENO dataset but missed 145,081 SNPs. The majority of the missed SNPs (>80%) are due to conflicts discovered by our program. This is consistent with the genotype-calling error rate from sequence data. The remaining SNPs were missed because their positions were outside haplotype segment regions. The summary of results can be found in Figure 3. Imputation accuracy We further evaluated the imputation accuracy by assuming sequence data of some individuals were unknown [6]. We selected 5 individuals from family 21 (Figure 2 and Table 2). For each of them, we masked all of their genotypes (i.e., all genotypes were set to be "missing"), performed the imputation procedure, and then assessed the imputation accuracy as the proportion of correctly imputed alleles. Table 2 shows the pedigree relationship information for masked individuals and imputation accuracy. Each individual represents a distinct relationship within the family. Results show that imputation on individual 946 has the highest accuracy. Four of her close relatives (three children and one half-sibling) have been sequenced, and the average missing rate of the sequenced relatives was 2.13%. Most of her genotypes can be inferred because even if there is a missing variant in one of the sequenced relatives at a locus, the other relatives may provide enough information to derive her genotypes. Individual 977 has the lowest accuracy, although both parents have been sequenced. Theoretically, one should be able to infer a child's genotypes from both parents if the inheritance is given. However, in this case, not only does she have a smaller number of sequenced relatives, but the missing rate of the father (4.17%) is also much higher, both of which contribute to the low accuracy. Recombination breakpoints The haplotypes and recombination breakpoints have been obtained from all families based only on GWAS data. Overall, there are a total of 3089 recombination events identified. (Asterisks (*) and carets (^) indicate numbers averaged across families, with (*) and without (^) families 7, 9, and 11.)
Among them, a fraction still cannot be determined from their parental sources because of missing genotypes in parents. After filtering out recombination events with unknown parental origins, our final dataset had 1361 maternal and 933 paternal recombination events. Because of homozygous genotypes, recombination breakpoints cannot always be within two adjacent SNPs. Still, the resolution of our inferred recombination breakpoints is very high, with more than 94% of them within 20-kb range, and the median length is about 8 kb, which is a great improvement over previous results [7,8]. Discussion In this study, we have proposed a computational framework to infer haplotypes and recombination breakpoints and finally impute genotypes based on a subset of sequenced members in a pedigree. Results on GAW18 data have demonstrated that (a) our approach is efficient for extremely large pedigrees and (b) we imputed more variants and more individuals than the one provided by GAW organizers. Our approach can be further improved in several directions. First, data quality, including missing and genotyping errors, can have a substantial effect on the final results. Many genotyping errors are actually Mendelian consistent, which makes error detection a challenging task. With the development of sequencing technologies as well as SNP calling algorithms, we expect the quality of genotyping calling from sequence data will improve, which in turn will improve our imputation results (e.g., reduce the number of conflicting loci). Second, given the high density of SNPs, population-level linkage disequilibrium can be used in imputation even for family data. Investigating approaches that can jointly consider information within families and information at the population level will be our future work. Third, our haplotype segments are defined based on all observed recombination events in a family. Therefore, the haplotype segments of a particular individual may have been cut short unnecessarily from recombination breakpoints of other individuals, resulting in some variants between haplotype segments being dropped. We will define haplotype segmentations of each individual based on her or his own recombination breakpoints, which will reduce the number of dropped variants. Last, our results show that the strategy of sequencing only a small subset of family members and imputing others is very effective. However, the final imputation results may depend on many factors, such as number and type of relationships of sequenced relatives, as well as the quality (e.g., missing rate) of sequence data. A truly important decision is how researchers select individuals to sequence to optimize the amount of information acquired within the constraints of a budget.
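For a concrete picture of the allele-propagation step described in the Analysis section above, the sketch below walks through a single locus: alleles are first fixed on the haplotypes of homozygous sequenced individuals, then propagated through heterozygous ones, and finally copied to unsequenced relatives, with a conflict flagged when two assignments disagree. This is a simplified illustration written for this summary, not the PedIBD implementation; the data structures are assumed, the heterozygous pass is run only once rather than to convergence, and the toy values only loosely echo Figure 1.

```python
# Simplified, single-locus illustration of the haplotype-based imputation idea
# described in the Analysis section (not the PedIBD implementation).
# `haplotypes` maps each individual to two haplotype labels inferred from chip
# data; `seq_genotypes` holds unphased genotypes of sequenced members only.
def impute_locus(haplotypes, seq_genotypes):
    hap_allele = {}          # allele currently assigned to each haplotype label
    conflict = False

    def assign(hap, allele):
        nonlocal conflict
        if hap in hap_allele and hap_allele[hap] != allele:
            conflict = True  # sequence call disagrees with inferred inheritance
        hap_allele.setdefault(hap, allele)

    # 1) Homozygous sequenced individuals fix the allele on both haplotypes.
    for ind, (a1, a2) in seq_genotypes.items():
        if a1 == a2:
            for hap in haplotypes[ind]:
                assign(hap, a1)

    # 2) Heterozygous individuals are resolved once one haplotype is known
    #    (a single pass here; the real procedure keeps propagating).
    for ind, (a1, a2) in seq_genotypes.items():
        if a1 != a2:
            h1, h2 = haplotypes[ind]
            for known, other in ((h1, h2), (h2, h1)):
                if known in hap_allele and other not in hap_allele:
                    if hap_allele[known] not in (a1, a2):
                        conflict = True
                    else:
                        assign(other, a2 if hap_allele[known] == a1 else a1)
                    break

    # 3) Unsequenced individuals inherit whatever their haplotypes now carry;
    #    None means the haplotype never occurred in a sequenced relative.
    imputed = {ind: tuple(hap_allele.get(h) for h in haps)
               for ind, haps in haplotypes.items() if ind not in seq_genotypes}
    return imputed, conflict

# Toy data loosely following Figure 1: "974" is a homozygous sequenced member,
# "939" is heterozygous and sequenced, "949" is unsequenced but shares "pink".
haps = {"974": ("pink", "darkblue"), "939": ("green", "pink"),
        "949": ("pink", "lightgreen")}
seqd = {"974": ("1", "1"), "939": ("1", "2")}
print(impute_locus(haps, seqd))   # -> ({'949': ('1', None)}, False)
```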
3,550.4
2014-06-17T00:00:00.000
[ "Computer Science", "Biology" ]
Analyzing the Preparation and Properties of Silver Nanoparticles ; A PhotoAcoustic Study Metal nanoparticles have garnered recent attention due to their potential for use in various mechanical, electrical, chemical and optical applications. This study aimed to investigate the synthesis of silver nanoparticles using pulsed photo-acoustic (PA) spectroscopy techniques. The results indicated a linear relationship between absorbance and concentration. Additionally, stability of silver colloids was seen at room temperatures, with no aggregation. The nanoparticles were spherical and between 2-40 nm in diameter. Nanoparticle size and PA signal were inversely proportional. Furthermore, lack of nanoparticle stability was found to weaken the PA signal. Lastly, nanoparticle absorption was inversely proportional to fluorescence. Further studies are needed for exploring the rationale in the relationship between fluorescence and absorption of the nanoparticles. Introduction Recent years have seen the use of metal nanoparticles in a wide variety of electrical, mechanical, optical, biological, and chemical applications (Stepanov, 2016).Metal nanoparticles have highly diverse uses due to their unique physical and chemical properties that occur due to the surface effect (Zhang et al., 2014).The optical properties of metallic nanoparticles are of great importance because of their ability to interact efficiently with optical fields over length scales smaller as compared to the diffraction limit and their sensitivity to changes within their local environment (Olson et al., 2015). Among the noble-metal nanoparticles, silver nanoparticles have gained attention due to their varied uses as catalysts (Bindhu & Umadevi, 2015) and photosensitive components (Dahiya, et al., 2015), in addition to their applications in surface-enhanced Raman spectroscopy (Grass et al., 2015).Furthermore, their property of optical absorption enables them to be used as highly competent contrast agents in photoacoustic imaging (Homan et al., 2010).Silver nanoparticles may be prepared in variety of ways; such as, chemical reduction by utilizing a reducing agent (Iravani, et al., 2014), electrochemical reduction (Nasretdinova et al., 2015), photochemical reduction (Iravani, et al., 2014) and finally thermal evaporation (Kibis et al., 2010), which is inclusive of chemical vapor deposition.Preparation may additionally be carried out physically by evaporating atoms from the surface of a metal by means of employing a high energy laser and then cooling to form nanoparticles (Abbasi et al., 2016). 
A number of studies highlighted the effective role of the photoacoustic (PA) technique in determining the rate of nanoparticle production and concentration of those particles (Valverde-Alva et al., 2015).The use of this technique is highly recommended due to its ability to non-invasively identify concentrations of substances at high spatial resolutions (Taruttis & Ntziachristos, 2015).Since nanoparticles are greatly size-dependent, there are dramatic changes in the electrical, optical, and magnetic properties because of reduced number of free electrons, smaller particle sizes, increased surface areas, and quantum confinement effect at the nano-scale (Zhou et al., 2017).Therefore, these properties need to be greatly investigated, so that silver nanoparticles may further be engineered at the appropriate sizes (Stepanov, 2016).As a result, silver nanoparticles with desired properties may be produced.However, there is a lack of studies that incorporate the use of this photoacoustic spectroscopy to identify the properties of silver nanoparticles.Therefore, this study has aimed to investigate the synthesis of silver nanoparticles using the laser-induced PA technique as a means to contribute to existing limited research in this context.Specifically, the concentration and size of the silver nanoparticles suspended in the dispersion were studied. Literature Review In previous literature, it was discussed that the PA technique effectively combined ultrasound and optical imaging modalities to study nanoparticle properties (Homan et al., 2010).The integration of PA with ultrasound modalities had been recommended by this study due to the complementary nature of these modalities to provide spatial images of high quality (Homan et al., 2010).The study found that the level of concentration of nanoparticles was in direct proportion to the intensity of the PA signal.Other studies have investigated the efficacy of this technique in studying the quantity and presence of nanoparticles within cells and tissues (Cook, Frey & Emelianov, 2013).Additionally, the PA technique had been used to study the thermophysical characteristics of Al-doped zinc nanoparticles (El-Brolossy, Saber & Ibrahim, 2013).This has earlier been corroborated by Homan et al. 
(2010), who suggested that PA was highly effective in detecting nanoparticles which were deep within tissue structures.Additionally, the use of pulsed PA has been noted in studying the optical properties of silver colloid nanoparticles through the use of laser ablation (Aldama-Reyna et al., 2018).The applicability of pulsed PA in studying the generation of silver nanoparticles in ethanol through the use of laser ablation had previously been highlighted (Valverde-Alva et al., 2015).This had not been seen in earlier studies, which had synthesized silver nanoparticles through the use of electrochemical means rather than PA (Nasretdinova et al., 2015).Additionally, the use of salt agents for synthesizing silver nanoparticles had previously been seen (Tolaymat et al., 2010).Moreover, previous studies had recruited the use of in situ reduction of hydroquinone in order to synthesize silver nanocomposities (Bao, Zhang & Qi, 2011).Therefore, it was seen that the use of physical and chemical techniques was predominantly noted in order to effectively produce silver nanoparticles.This was corroborated by a more recent study, which highlighted that such techniques were inclusive of chemical vapor deposition, hydrothermal techniques, sol process, pyrolysis, chemical precipitation and micelle (Abbasi et al., 2016).The study had further discussed that the use of these techniques was significantly associated with complexities in the form of maintenance of stability of the nanoparticles and in achieving an appropriate size for the nanoparticles (Abbasi et al., 2016). Silver nanoparticles were discussed to present comparatively better light absorptive properties as compared to gold nanoparticles; it therefore followed that the use of silver nanoparticles would be associated with comparatively stronger PA signals (Homan et al., 2010).Silver nanoparticles were additionally discussed to have therapeutic properties which were highly relevant to biomedical applications (Wei et al., 2015).Furthermore, the same study highlighted their applicability as anticancer and antiviral agents, and radio-and photo-sensitizers.Furthermore, they may be utilized for environmental and food safety applications due to their anti-microbial characteristics (Abbasi et al., 2016).The effectiveness of silver nanoparticle systems as competitive contrast agents in photoacoustic imaging had previously been highlighted (Homan et al., 2010).However, the use of silver nanoparticles may pose toxicity due to the increased level of silver being utilized, as highlighted by a toxicology study (Stensberg et al., 2011).Therefore, appropriate care is suggested when utilizing such nanoparticles.Further studies recommended the utilization of green strategies in order to synthesize silver nanoparticles (Andrade et al., 2016;Ahmed et al., 2016;Dhand et al., 2016). Hypotheses Development It may be clear that silver nanoparticle synthesis has gained immense attention due to their unique properties and potential for being utilized in a wide number of applications.However, there is still a gap in research pertaining to more explorations of silver nanoparticle synthesis through PA techniques.Thus, the following hypothesis has been developed. H 0 : The properties of silver nanoparticles cannot be effectively determined using the PA technique. H 1 : The properties of silver nanoparticles can be determined effectively using the PA technique. 
Materials and Methods An ultrapure water purification system (Milli-Q Advantage A10, Millipore, USA) was used to prepare the aqueous solutions in triple-distilled water. Silver nitrate (AgNO3, 99%, LA Container Inc., USA) and sodium borohydride (NaBH4, 97%, Fluka, Switzerland) were used for this study. Synthesis of Silver Nanoparticles To effectively reduce the ionic silver and stabilize the formed silver nanoparticles, a large excess of sodium borohydride is needed. Specifically, the initial concentration of sodium borohydride must be twice that of silver nitrate. When NaBH4 was varied from 2.0 mM, while using 1.0 mM of AgNO3, the breakdown of the product took place in less than an hour. The weights of AgNO3 and NaBH4 in the solution were then found. Following this, the concentration of the silver nanoparticles in the solution was calculated. To measure the sample masses, a balance (AL-204, D-SCALE INDONESIA) was used. Furthermore, ultrasonic equipment (Portable Cleaner, NXPC-1505, ultrasonic frequency: 40 kHz, temperature range: 0~70 °C, KODO, Korea) was used to adequately mix the materials for approximately 5 minutes. In this way, a homogeneous distribution of the particles was ensured. In order to cool down these materials, they were refrigerated through the use of an ice pan for approximately 10 minutes. In order to effectively homogenize the solution, a heating magnetic stirrer with timer (AREX.T, VELP, Italy) was utilized. Following this, a 10 mL volume of 1.0 mM AgNO3 was added in a drop-wise manner at the rate of 1 drop/second to 30 mL of 2.0 mM NaBH4 solution. This solution had been cooled using an ice pan. A magnetic stir plate was used to carry out vigorous stirring of this mixture. Following the complete addition of AgNO3, the solution turned a bright yellow color. PA Spectroscopy Technique For conducting this experiment, a Nd:YAG laser (model LQ 129, Solar Laser System) was used. This had a pump energy equal to 23 J, a pulse energy of 280 mJ at 532 nm, an output energy of 280 mJ at 532 nm, a pulse repetition rate between 1-10 Hz, a pulse width of 12 ns at 532 nm, horizontal polarization at 532 nm, a delay of the output pulse relative to the pump pulse equal to 154 μs, a beam divergence of 1 mrad and a rod diameter of 8 mm. A third harmonic generator (LG 103, Solar Laser System) was used with the laser. Furthermore, the association between the wavelength and the photoacoustic signal was evaluated through the use of a titanium-doped sapphire laser (LX 325, Solar Laser System). This had an output energy equivalent to 70 mJ at 755 nm and 40 mJ at 885 nm. Furthermore, it had a tuning range between 694-935 nm and 866-1012 nm and a divergence equivalent to 1.5 mrad at 755 nm. Lastly, the linewidth was less than 0.8 nm at the maximum of the tuning curve. In order to adequately obtain the UV-VIS absorption spectra of the nanoparticles, a UV-visible spectrophotometer (Lambda 40, Perkin Elmer, USA) was used. This obtained the spectra in the range of wavelengths between 190 nm and 1100 nm. Furthermore, a halogen and a deuterium lamp were used as sources of radiation, to encapsulate the spectrophotometer's range of working wavelengths. Additionally, five mirrors were used. The first, fourth and fifth mirrors were plane mirrors, whereas the second mirror was a toroidal mirror. Lastly, the third mirror was a spherical mirror. To procure the emission, excitation and synchronous fluorescence spectra, a luminescence spectrometer (LS45, Perkin Elmer, USA) was utilized.
To increase the power of PA signal, a photodiode amplifier (PDA 6424, ILX Lightwave, USA) was used.Furthermore, a two-channel digital real-time oscilloscope (TDS 380, Tektronix Inc., USA) was used for observing the precise shape of the electrical signal received from the photodiode amplifier.To carry out spectral recording, an intensified charge coupled device (ICCD) camera (Andor iStar DH720-18F-03, Lot Oriel Instruments, USA) was utilized.This aids in achieving an enhanced response from 180-850 nm, 1024×256 pixels and 18 mm Gen 2 image intensifiers.Coupling of this ICCD camera was carried out with a monochromator (Oriel MS 257, model 77702, Lot Oriel Instruments, USA), a spectrograph with different gratings (300 and 1200 lines per mm).To analyze the size of the nanoparticle distribution within the dispersion, a Microtrac S3500 tri-laser particle size analyzer (Microtrac S3500, Measuring Range 0.02 to 2800 Microns, Lasers Wavelength 780nm, Japan) was utilized.Furthermore, the nanoparticle shape, size and distribution were analyzed using a transmission electron microscopy (JEM-2100F field emission electron microscope, JEOL, Japan).The device is characterized by accelerating voltage between 80 to 200 kV and a magnification level ranging from 50 to 1,500,000.For analyzing and quantifying the phase, x-ray diffraction (X'Pert Pro, wavelength 1.54056 Å, PANalytical, Netherlands) was used.The diffraction technique was further used to ascertain the crystal structure of the sample using a high score. Results This study has aimed to investigate the synthesis of silver nanoparticles in aqueous solution through the use of PA-induced laser excitation.The UV-Vis spectrum was attained with respect to a variety of sample concentrations.Additionally, various experimental parameters had been used in order to investigate the PA effect of silver nanoparticles.Figure 1 highlights the UV-Vis spectra of silver nanoparticles at various concentrations after 2880 minutes.was discussed in an additional study which observed similar results (Kriel & Priest, 2016).The study highlighted that according to Beer-Lambert's law, the absorbance of a solution is linearly proportional to the concentration.The results obtained by the present study are in accordance with this law. Additionally, the slight red shift of the observed peaks over time was indicative of larger particles with no aggregation being formed; thereby, suggesting the stability of silver colloids at room temperature conditions.This was similarly highlighted in a study by Iravani et al. (2014), which explored the chemical, physical and biological means to synthesize silver nanoparticles.The lack of nanoparticle aggregation as seen in the present study is greatly advantageous, since it has been reported that the performance of nanoparticles is adversely impacted by increasing aggregation (Shi et al., 2015).However, the TEM results of the present study indicated that aggregation was present at higher nanoparticle concentrations, a finding which was consistent with that observed by Gharagozloo and Goodson (2010).The aforementioned study emphasized that increases in nanoparticle aggregation were significantly noted at higher concentrations and temperatures. 
The TEM results were further elucidatory in that they indicated the nanoparticles to be approximately 2 to 40 nm in diameter and spherical in morphology. The presence of spherical silver nanoparticles had earlier been observed in a study by Logeswari, Silambarasan and Abraham (2015), which had utilized chemical means to synthesize these particles. A further study highlighted that nanoparticles generally had sizes ranging between 10 and 20 nm. It was further seen that an increase in the PA signal was directly correlated with increases in silver nanoparticle concentration until saturation, a finding which was consistent with an additional study by He et al. (2017). Furthermore, the present study found decreases in the size of nanoparticles to be significantly associated with increases in the PA signal, a finding which was in line with that observed by Hatef et al. (2015). Furthermore, the results from the absorbance and fluorescence of the PA signals illustrated a lack of stability of the silver nanoparticles immediately following preparation, as a result of which a weak PA signal resulted. The close association between nanoparticle stability and increases in PA signals had earlier been explored in a study by Masim et al. (2016), which reported similar results. Conversely, it was also seen in the present study that the absorption of the silver nanoparticles in solution was maximized under the lowest levels of fluorescence. Further exploration is necessary for ascertaining the association between fluorescence and absorption of the silver nanoparticles. To this end, this study filled the gap in existing research by suggesting a close association between variables such as absorbance and concentration, fluorescence and absorption, nanoparticle concentration and aggregation, PA signal and nanoparticle concentration, and nanoparticle size and PA signals. However, it was suggested that further studies are necessary for conducting a more comprehensive exploration of the relationship between fluorescence levels and the absorption of silver nanoparticles.
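The absorbance-concentration linearity invoked above via Beer-Lambert's law is straightforward to check numerically with a least-squares fit. The sketch below uses invented placeholder values rather than the study's measurements; an r-squared close to 1 at the plasmon peak is what the reported linearity corresponds to.

```python
# Check the linear absorbance-vs-concentration relationship (Beer-Lambert law)
# at the plasmon peak. Values below are placeholders, not the study's data.
import numpy as np

conc = np.array([0.05, 0.10, 0.15, 0.20, 0.25])    # mM silver
absorb = np.array([0.21, 0.43, 0.62, 0.85, 1.04])  # absorbance near 400 nm

slope, intercept = np.polyfit(conc, absorb, 1)
r = np.corrcoef(conc, absorb)[0, 1]
print(f"A = {slope:.2f}*C + {intercept:.3f}, r^2 = {r**2:.4f}")
# An r^2 close to 1 is consistent with Beer-Lambert linearity.
```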
3,462.2
2018-11-30T00:00:00.000
[ "Materials Science", "Physics", "Chemistry" ]
Adaptive Language Processing Based on Deep Learning in Cloud Computing Platform With the continuous advancement of technology, the amount of information and knowledge disseminated on the Internet every day has been growing several times over. At the same time, a large amount of bilingual data has also been produced in the real world. These data are undoubtedly a great asset for statistical machine translation research. Based on the dual-sentence quality corpus screening, two corpus screening strategies are proposed first, based on the double-sentence pair length ratio method and the word-based alignment information method. The innovation of these two methods is that no additional linguistic resources such as bilingual dictionary and syntactic analyzer are needed as auxiliary. No manual intervention is required, and the poor quality sentence pairs can be automatically selected and can be applied to any language pair. Secondly, a domain adaptive method based on massive corpus is proposed. The method based on massive corpus utilizes massive corpus mechanism to carry out multidomain automatic model migration. Under this framework, each domain learns its intradomain model independently, and different domains share the same general model. Through the method of massive corpus, these models can be combined and adjusted to make the model learning more accurate. Finally, the adaptive method of massive corpus filtering and statistical machine translation based on cloud platform is verified. Experiments show that both methods have good effects and can effectively improve the translation quality of statistical machine translation. Introduction Currently, corpus-based translation systems rely on large-scale bilingual parallel corpora, use the translation model to estimate probabilities, and select the final translation result based on the translation probability. The advantage of the corpus-based translation method over the rule-based translation method is that it does not require much human and material participation in the construction of the model. The researchers themselves do not need to master the two languages at the level of linguistic experts. The threshold is not so high, which allows more interested scholars and researchers to invest in it. Depending on the specific translation strategy, corpus-based machine translation can be divided into statistical-based machine translation and instance-based machine translation. The statistical-based method is the mainstream method of current machine translation. The early stages of statistical machine translation development used only some coarse-grained features, such as bidirectional phrase translation probabilities [1][2][3], bidirectional lexical translation probabilities [4], vocabulary length penalties [5], phrase length penalties [6], language model [7], and sequence model [8][9][10]. Many systems use only these 10-20 features to complete the translation process and use the minimum error rate training (MERT) method [11][12][13] to perform feature weight adjustment. With the development of statistical translation models and the widespread use of massive data, researchers have found that the use of fine-grained features [14] can further improve the accuracy of the translation system. However, the use of a large number of fine-grained features poses a great challenge to the adjustment of feature weights. The traditional MERT method can only adjust the weights of dozens of features but cannot do anything for a translation system with thousands of features.
References [15][16][17] proposed a training algorithm based on max-violation perceptron and forced decoding [18], which can be used to translate the system by using all bilingual training data, large-scale discriminative training, and support for tens of millions of sparse features. Compared to the MERT and PRO methods, this approach can bring very significant performance improvements [19,20] and further maximizes the use of perceptual machine training methods. e hierarchical phrase translation system has also achieved good results. e traditional statistical machine translation domain adaptive method usually migrates the model for a single domain. For example, the training data is news corpus, and the test data is network corpus. However, most practically in the application scenario, it is necessary to perform model migration on multiple domains at the same time. For example, for online translation services, the user's input is usually text from various fields, which requires the statistical machine translation model to process automatically according to the actual input. e field adaptive research of deep learning translation is still relatively few, and the existing work has not given a clear domain label. However, the actual translation of scientific and technological literature often faces multiple professional fields, and the use of existing knowledge to organize information, such as the keywords of the paper, the scientific and technological word system, and other knowledge to obtain more clear semantic tags, helps to divide the corpus more finely. In view of this, this paper mainly studies the multidomain adaptive method of statistical machine translation based on massive corpus under the cloud computing platform. Firstly, two corpus screening strategies are proposed, based on the double-sentence pair length ratio method and the word alignment information based method. e innovation of these two methods is that no additional linguistic resources such as bilingual dictionary and syntactic analyzer are needed as auxiliary. No manual intervention is required, and the poor quality sentence pairs can be automatically selected, and can be applied to any language pair. Secondly, a domain adaptive method based on massive corpus is proposed. e method based on massive corpus utilizes massive corpus mechanism to carry out multidomain automatic model migration. In this domain, each domain learns the intradomain model independently, and different domains share the same general model. rough the method of massive corpus, these models can be combined and adjusted to make the model learning more accurate. Finally, the adaptive method of massive corpus filtering and statistical machine translation based on cloud platform is verified. Experiments show that both methods have good effects and can effectively improve the translation quality of statistical machines. Cloud Computing Platform Framework. e Hadoop Distributed File System (HDFS) can be deployed on a large number of inexpensive machines to store up terabytes and petabytes of data in a highly fault-tolerant and reliable manner. It combines well with the MapReduce model to provide high-throughput data access. e structure of DFS is shown in Figure 1. As can be seen in Figure 1, an HDFS cluster that consists with a NameNode and multiple DataNodes was discussed. e metadata and the DataNode are actual data. e application accesses the NameNode to get the metadata of the file, and the actual I/O operation is directly interacting with the DataNode. 
The NameNode is the primary control server responsible for managing the file system namespace and coordinating application access to files, recording any changes to the namespace or changes to their properties. The DataNode is responsible for storage management on the physical node where the file is located. The feature of HDFS is that the data is "write once, read many times." The files of HDFS are generally divided into data blocks according to a certain size, and each data block is dispersed across different DataNodes as much as possible. In addition to completing the namespace operations of the file system, the NameNode also determines the mapping of data blocks to DataNodes. Massive Corpus Screening Strategy. For statistical machine translation systems, the intuitive understanding is that increasing the size of the training data can help improve system performance. Massive data is easier to obtain in today's information environment than ever before. Scholars have built knowledge bases such as parallel sentence pairs and bilingual dictionaries by crawling bilingual web pages [21]. There are more and more sources of corpora, from multilingual websites, comparable bilingual corpora, human-translated text, and more. The scale of parallel corpus construction has become large, and such corpora can be used for statistical machine translation system training. However, too many errors inevitably affect statistical machine translation systems, which rely on data quality. Given that there has been no qualitative change in the current statistical models, model features must still be acquired by training on the corpus. Therefore, in order to train a high-performance statistical machine translation system, it is necessary to process and screen the training data. In this paper, two methods are used to filter noise sentence pairs from the bilingual parallel corpus: the method based on the sentence-pair length ratio and the method based on word alignment information. Method Based on Sentence-Pair Length Ratio. In general, the lengths of a pair of sentences that are translations of each other should stand in a roughly fixed ratio. However, most parallel corpora contain sentence pairs that do not match this length ratio. These sentence pairs are usually noise in the corpus. Noise phenomena reflected in the length ratio include monolingual errors, alignment errors, and the inclusion of unknown tags (html tags, etc.). These phenomena have been observed, and many noise sentence pairs whose length ratios do not conform to the regularity are found in the experimental corpus. Some examples are listed in Table 1. In order to remove such erroneous sentence pairs, we set a length rule that defines the length ratio as r(f, e) = |f| / |e|, where f is the source sentence, e is the target sentence, and |f| and |e| are the numbers of words in the source and target sentences. Methods based on the length ratio usually rely on linguistic knowledge, with upper and lower limits on the ratio set by hand. This paper assumes that noise sentence pairs in the corpus are far fewer than normal sentence pairs; that is to say, there is a continuous range of ratios that contains the majority of the normal sentence pairs. Therefore, the threshold is set according to the statistical distribution of the length ratio; that is, sentence pairs whose ratio falls outside the range covering a given proportion of the total corpus are filtered out. This has the advantage that thresholds can be set for different language pairs without the need for specific linguistic knowledge.
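A minimal sketch of the length-ratio filter described above is given below: rather than hard-coding linguistic limits, it keeps sentence pairs whose source-to-target word-count ratio falls inside the central band of the empirical ratio distribution. The 90% retention band and whitespace tokenization are illustrative assumptions, not values fixed by the paper.

```python
# Length-ratio corpus filter sketch: keep sentence pairs whose |f|/|e| ratio
# lies inside the central band (here 90%) of the empirical distribution,
# so no language-specific limits have to be set by hand.
import numpy as np

def filter_by_length_ratio(pairs, keep_fraction=0.90):
    """pairs: list of (source_sentence, target_sentence) strings."""
    ratios = np.array([len(f.split()) / max(len(e.split()), 1) for f, e in pairs])
    tail = (1.0 - keep_fraction) / 2.0
    lo, hi = np.quantile(ratios, [tail, 1.0 - tail])
    kept = [p for p, r in zip(pairs, ratios) if lo <= r <= hi]
    return kept, (lo, hi)

# Usage (parallel_corpus loaded elsewhere as a list of sentence pairs):
# clean_pairs, (lo, hi) = filter_by_length_ratio(parallel_corpus)
```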
Method Based on Word Alignment. e word alignment problem is the task of finding the alignment of words in a given two-state pair. It is a key step in statistical machine translation. e word alignment model has been studied for a long time, and people use different methods for bilingual word alignment. Run the IBM model from both directions and merge the results of the two word alignments. In general, the intersection contains relatively reliable alignment points; that is, the alignment point is highly accurate but does not contain all reliable alignment points; and the assembly contains most of the desired alignment points; that is, the recall rate is high but introduces additional errors. A good alignment point is adjacent to other good alignment points. erefore, the algorithm starts from aligning the intersections. In the expansion step, adjacent alignment points located in the union but not in the intersection are added, and finally points that are not aligned in both directions are added. e pseudocode for this algorithm is given in Table 2. e position of the two sentences in the corpus is adjacent. e occurrence of this situation is the automatic noise extraction of the parallel sentence to the technology, because it is impossible to judge the correct alignment sentence pair (the correct alignment sentence corresponds to <discountedant opinions, discordant opinions>, or <discondant opinions> which is the second sentence in the table). A similar situation has occurred many times in the corpus we use. Figure 2 shows the alignment matrix of the two sets of sentence pairs. As shown in Figure 2, it shows the results of two unidirectional alignments in English and Chinese, and the bidirectional alignment matrix on the right is obtained by the grow-diag-and-final algorithm. It can be seen that the intersection of two unidirectional alignments is only a matter of discretion, discordant; that is to say, when the grow-diag-and-final extension is performed, there is only one alignment result that is originally considered reliable. After the expansion, the result of the obvious error alignment is obtained. is error not only affects the alignment quality of itself, but also affects the rule extraction result of the translation system. For example, in the phrase system, the above sentence pairs are extracted from the rules of discriminating, inconsistent. is kind of rule does not play any role in translation decoding; even if the decoder selects the rule, it will only reduce the translation quality. erefore, similar problems should be avoided as much as to improve the quality of the translation or to reduce the size of the rule set. To this end, we propose a sentence-pair filtering method based on the grow-diag-and-final extension method. We consider expanding the number of alignment results EC and the number of alignment results of the intersection alignment IC. When the extended alignment result exceeds the intersection alignment result by a certain amount, we think that the alignment result is unreliable. We set the filtering rules based on the word alignment extension and use the following to judge whether the word alignment results are reliable: Statistical Machine Translation Adaptation. is section introduces a domain adaptive approach based on massive corpus. 
e main idea is that the training of our model is mainly divided into three steps: first, selecting the data in the domain is according to the predefined domain; second, training the domain model and the general model is to construct the statistical machine translation system; third, using massive corpus technology makes joint adjustments to multiple domain systems. According to the above, the first step in this work is to select the in-domain bilingual control data from all the bilingual training data to train the translation model. Since the monolingual data in a specific field can be obtained in large quantities, we draw on the method of bilingual cross section data selection [22] to obtain bilingual data in the field: is bilingual cross-entropy-based criterion tends to choose a sentence pair that is more similar to the data distribution in the domain but different from the general data distribution. erefore, this method considers that the sentence pair with larger cross-entropy difference should be selected. In the second step, we use the training data in the selected domain to build a statistical machine translation system based on the hybrid model. Specifically, we adopted the idea of a hybrid model to build N machine translation systems for N predefined fields; each of which is a log-linear model. For each system, the optimal translation result f is given by For each machine translation system, two translation models and two language models are included. e translation model of a specific field is trained by the bilingual data selected by the data selection method introduced in the previous section, and the translation model of the general domain is trained using all bilingual data. For the language model, we reuse the language-specific and general-language models of the specific domain trained for data selection in the previous section. Compared to a translation system that does not do domain migration, this system with a hybrid model can better balance the general translation knowledge and domain-specific translation knowledge and can benefit from two aspects. In the third step, it is necessary to adjust the feature weights in different machine translation systems. e traditional method of arranging is generally directed to a single system. e method described in this section regards translation systems in different fields as related translation tasks, and joints are coordinated under the framework of massive corpus. ere are two reasons for using massive corpora: (1) e translation system of a specific domain shares the same general domain translation model and language model, and the massive corpus mechanism can make better use of the common translation knowledge of translation tasks in different fields. (2) By forcing the general domain translation model and the language model to behave the same in different fields, massive corpus provides a regularization mechanism to prevent model overfitting. Formally, the objective function of using massive corpus to adjust parameters is represented by the following formula: In order to be able to efficiently coordinate the parameters, we have improved an asynchronous stochastic gradient descent algorithm to optimize and borrowed the idea of pairwise ranking to use the perceptron algorithm to update the feature weights. We first use the machine translation system to generate the N best translation result candidates (N-best), which are reordered and combined into pairs by scoring with smooth sentence level BLEU. 
Specifically, similar to the asynchronous gradient descent algorithm, we divide the N best translation result candidates into three parts: the best 10% (high), the middle 80% (middle), and the worst 10% (low). ese three parts of the translation result candidates are used for two-two sorting, in which we choose "high one," "medium one low," and "high one low" to combine in pairs, but will not select two of the same part Candidate combinations that are paired. e basic idea of constructing a sample in this way is that the algorithm can better have the discriminability of distinguishing between high quality and low quality translation results. Neural Network Deep Fusion Model. e algorithm based on domain knowledge uses the explicit discrete features of domain knowledge, and the deep labeling algorithm uses the hidden continuous features of deep learning. e sentence domain probability vectors obtained by the two methods are different. Combining the domain labeling algorithm based on domain knowledge and the domain labeler based on deep learning, a multilayer perceptron based on the top layer is designed as a deep fusion model of the neural network. e architecture is shown in Figure 3. e preprocessing of the sentence to be labeled is mainly word segmentation and garbled filtering. e preprocessed results are input to the knowledge-based domain tagger and deep learning-based domain tagger to obtain the domain knowledge-based probability vector and probability vectors for deep learning. e top-level neural network deep fusion model is a twolayer perceptron, and the hidden layer is two receiving fourdimensional vectors. Neuron, the activation function, is set to the ReLU (Rectified Linear Unit) function. e deep mixed neural model obtained through this fusion will well combine explicit and invisible knowledge, merging the advantages of discrete features and continuous features, and make the probability vector and decision category of each sentence more accurate. ereby, the adaptation problem in the field of machine translation is better improved, and, for the data to be translated in a specific field, a higher quality translation output will be obtained. Massive Corpus Screening Experiment Verification. e experimental part of this paper runs on a separate server. e specific software and hardware configuration is shown in Table 3. Because Hadoop installation is a stand-alone mode, the comparison and analysis of experimental results focus on the impact of the proposed method on translation quality. Under the cloud translation platform, bilingual parallel corpora are a wide range of sources, such as translated manuscripts completed by translators, officially published bilingual materials, and automatic extraction of multilingual web pages. e quality of corpus is uneven. erefore, in order to test whether the method is effective, the training First, we count the distribution of the length ratio of the English-Chinese sentence in the training set. e result is shown in Figure 4. e ordinate indicates the number of sentence pairs, and the abscissa indicates the ratio of the length of the sentence between English and Chinese. We can find the ratio of the length of the sentence to a certain distribution law. When the ratio is 1.0, the sentence number is the most, and there are 297,341 pairs of sentences. e figure shows that the highest point of the ratio (1.0) is also relatively large in number of sentences. 
is verifies our hypothesis that the length ratios of the two languages conform to the law in a continuous range. As shown in the above figure, the contrast value of the sentences appearing in the corpus is 0. To this end, we screened the corpus training comparison system for the different ratio ranges of more than 90% of the total corpus and compared the BLEU scores of the systems on the test set. As shown in Table 5, the statistical distribution of the percentage of total corpus pairs in different ratio ranges is listed. e first line represents the corpus used, and the remaining lines represent the number of sentence pairs contained in the different ratio ranges and the percentage of the corpus. Table 5 shows the number of sentence pairs retained and the proportion of the total corpus when the ER filters different pairs of sentences. e higher the score of ER, the better the effect of word alignment, and the more reliable we think the result is. Based on the method of word alignment information screening corpus, we consider two cases: use the filtered sentence to align the retraining words and then get the translation model; directly use the filtered sentence pairs and alignment information to train the translation model. In the first case, the filtered noise information may affect the calculation of the word alignment probability during the iterative process of word alignment. Realigning after filtering out may improve the word alignment quality and improve the translation effect. In the second case, we use ER to retain the word alignment that is considered reliable, so reword alignment or different alignment results may occur due to the change of word alignment probability, and there may be unreliable alignment results in these results. e experimental results are shown in Table 6. It can be seen from the experimental results that the BLEU scores of each test set are improved in both cases. As far as the overall effect is concerned, it is better not to retrain the word alignment. However, in both cases, the improvement effect on the nist03 and nist05 test sets is not very obvious, the effect of reword alignment on nist03 is slightly better than the latter, and the opposite is on nist45. Use ER to determine whether the word alignment is reliable. When ER is lower than the given threshold, we think that the word alignment result of the sentence pair is not reliable overall. We will filter out the sentence pair, that is, the sentence pair. All alignment information is deleted. In fact, in the word alignment result of the sentence pair, there will be some correct word alignment information; that is, the correct word alignment information is also deleted while deleting the error alignment information. Although the wrong information is not useful for translation tasks, the correct information to be deleted may be helpful for translation tasks. erefore, there is also the possibility of reducing the BLEU score. As shown in Figure 5, we find an instance from the filtered sentence pair to illustrate. e thick solid line in the figure is the intersection of two aligned directions, the thin solid line is the correct result of the extended alignment, and the dashed line is the wrong result of the extended alignment. e sentence in Figure 4 is correct for itself, but its alignment information is incorrect; only partial alignment is correct, and its ER value is −0.22; it can be seen that there is some correct word alignment information, and this part of information can be extracted. 
However, because the ER value is lower than the given threshold, all alignment information of the sentence pair is filtered out; since the correct information is discarded as well, translation quality may degrade. Adaptive Experimental Verification of Statistical Machine Translation Based on Massive Corpus. We compared the impact of the number of retrieved documents and of the hidden layer length on the accuracy of the translation system. The results are shown in Figure 6. As shown in Figure 6, we found that, for most results, the optimal translation accuracy was obtained when the number of retrieved documents was N = 10. This result confirms that extending the source language input by means of information retrieval is very helpful for determining topic information, and it plays an important role in the selection of translation rules. However, when N is large, for example N = 50, the translation performance drops drastically. This is because, as the number of retrieved documents increases further, topic-independent documents are introduced into the neural network, and irrelevant documents bring in unrelated content words, thus degrading the neural network learning. Figure 7 shows that when L is small, the translation system is relatively accurate; in fact, for L < 600 the differences in translation performance are small. However, when L = 1000, the translation accuracy is worse than in the other cases. The main reason is that the number of parameters in the neural network becomes so large that it cannot be learned well. When L = 1000, there are 100,000 × 1,000 parameters between the linear and nonlinear layers of the network. The current training data size is not enough to support training at this parameter level, so the model is likely to fall into a local optimum and yield unreliable topic representations. As shown in Figure 8, the topic similarity feature on the source language side is slightly better for the system than the target-language-side similarity feature, and the gains they bring accumulate, which means that the neural network trained on bilingual data helps the statistical machine translation system better disambiguate among translation candidates. Further, on top of the similarity feature, the topic sensitivity feature of the translation rule brings additional improvement, because topic-specific translation rules are usually more sensitive: when similarity scores are close, the system tends to choose topic-specific translation rules rather than general ones. Finally, our method performs best when using all topic-related features, with an average of 0.39 BLEU points over the LDA-based method. As shown in Figure 9, we use information retrieval to extend the original input, thus avoiding restrictions tied to bilingual documents. We use neural network technology for topic modeling; the algorithm is practical and scales well. Under the deep learning framework, our method directly optimizes the bilingual topic similarity, so that the learned topic representation can be easily integrated into statistical machine translation.
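As a quick arithmetic check of the hidden-layer size discussion above (assuming, as stated, a 100,000-dimensional input to the hidden layer):

```python
# The dense connection between a 100,000-dimensional input and a hidden layer of
# length L alone contributes 100,000 * L weights; at L = 1000 this is 10^8
# parameters, which the available training data cannot support.
for L in (100, 300, 600, 1000):
    print(L, 100_000 * L)
```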
Domain Labeling Performance. From the data set used by the domain tagger, 1% is randomly selected as the test data for the domain tagging performance experiment. The training data for deep learning are selected from the remaining 99% of the data set. The knowledge-based domain labeler and the trained deep learning domain labeler are both used to label the test data, yielding a category and a probability vector each; the results are then fused, the category with the highest probability in the fused probability vector is selected as the final judgment category, and this category serves as the basis for computing the precision, recall, and F1 value. Four subexperiments are performed; the results are shown in Table 7. The results show that using only the domain knowledge tagger causes misjudgments and omissions because the self-built domain knowledge base lacks some feature words, so its score is not high, although its judgment is efficient; using only the deep learning domain tagger can mine tacit knowledge and continuous features but requires a large amount of training data, trains slowly, and does not incorporate prior knowledge; the simple linear fusion model weights the probability vectors of the first two taggers proportionally and thereby combines explicit and tacit knowledge, but this simple fusion corrects few of the misjudgments and missed judgments and brings little improvement; the final neural network deep fusion model deepens the integration through a multilayer neural network, gives full play to the advantages of both taggers, greatly reduces misjudgments and omissions, and improves the scores significantly. Conclusion. When the training corpus is small, some training sentence pairs related to the test text may be filtered out, which affects translation quality; when the training data are large enough, such problems hardly occur. In addition to learning a domain model independently for each domain, the different domains share the same general model. Through the massive-corpus method, these models can be combined to make model learning more accurate. The experimental results show that this method can significantly improve the translation accuracy of multiple domains in large-scale machine translation tasks. In addition, the performance of this joint tuning method is better than independent model migration. The result can also be applied easily to an online translation system: train different models for a predetermined number of domains, determine the domain from the input source-language text, and select the corresponding domain model or the general model for translation. The experimental results also show that, when the filtering problem does not occur, the method of this paper can effectively improve the translation quality of the statistical machine translation system. The work that still needs to be improved in this study is as follows. (1) Place the domain-adaptive mechanism inside the architecture of neural machine translation: compute different domain vectors and improve the attention mechanism to make it domain adaptive. (2) Study how to integrate deep learning methods and prior knowledge to improve system performance, a question that will arise in every area of natural language processing; later, we will try different ways of adding prior domain knowledge to neural machine translation to improve translation quality for different domains.
Data Availability. The data used to support the findings of this study are included within the article. Conflicts of Interest. The authors declare that they have no conflicts of interest.
7,018.2
2020-06-19T00:00:00.000
[ "Computer Science", "Linguistics" ]
Mitigation of installation-related effects for small-scale borehole-to-surface ERT Small-scale resistivity inhomogeneities can result from the local distribution of water and the water and nutrient uptake of plants. Small-scale Electrical Resistivity Tomography (ERT) measurements in the field come with a set of particularities, especially when including borehole electrodes for a better resolution with depth. We apply small-scale borehole-to-surface ERT over a palaeochannel. Combining surface ERT with detailed borehole-to-surface ERT profiles along the measurement line allows a delineation of finer layering within the coarser lithology. Our field setup includes a borehole electrode tool with 20 ring electrodes, electrically coupled to the ground via a conductive mud. Two main points are addressed in this publication: 1. In the field, we electrically coupled the borehole electrodes to the ground by filling the cavities around the tool with a soil mud, i.e., we need to account for the unknown conductive borehole filling in the inversion. If not incorporated, the mud has a considerable influence on the resistivities close to the borehole tool, but also on the region around the surface electrodes. Consequently, alongside a 3D inversion scheme representing the electrodes with the Complete Electrode Model (CEM), we include the mud as a separate and uncoupled region. We model the geometry of the mud layer around the tool and do not allow an influence of this region on the rest of the model. 2. Due to the small electrode distances and the overall small-scale nature of the array, the depth of installation of the borehole electrode tool must be known accurately in the inversion model. However, it is not easy to measure the tool depth in the field with the required accuracy, due to small-scale surface roughness, e.g., from a weathered loose soil layer at the surface or from vegetation. We also investigate the influence of a tilted tool installation and optimise for the depth and installation angle of the borehole tool before inverting for resistivities. Accurate knowledge of the borehole electrode positions is crucial for a reliable and precise inversion result. The surface electrodes establish a coordinate system around the borehole tool on the surface, with an angle φ describing the direction around the tool in the top view. The sensitive plane (in-plane) is defined as the x-z plane cutting through φ = 0° and φ = 180°. A tilting of the tool from the vertical direction is described by a tilting angle θ. A tilting of the borehole tool within the sensitive plane manifests in an increased misfit between data points on both sides of the tool, i.e., at φ = 0° and φ = 180°. We use this difference to optimise on the tool angle. The true depth of the borehole tool is found by searching for a minimum of the objective function, describing the goodness of the found model, while assuming different tool depths in each inversion. We see a minimum of the objective function, which can be attributed to the correct depth range, as shown by a synthetic study. Through our optimisations, we can determine a tilting of the tool, i.e., the angle θ, with an accuracy of 2° to 3° and the tool depth with an accuracy of a few centimetres, depending partially on the subsurface resistivities, i.e., our optimisation works mainly in predominantly horizontally layered soils.
A tilting in directions out of the sensitive plane (out-of-plane) can be projected onto the sensitive plane, since the out-of-plane tilt has a negligible influence on the data. After this optimisation, we can determine layer resistivities from our field data. Introduction. Field- to plot-scale soil heterogeneity is an important factor for the availability and transport of water and nutrients, but also for the distribution of contaminants (see e.g. Pätzold et al., 2008). Detailed knowledge about this heterogeneity can be used in precision agriculture and to parametrise local flow models. There exists a close link between weed growth and soil heterogeneity (Pätzold et al., 2020). Local heterogeneity also plays an important role in Earth System Models, e.g., regarding infiltration-runoff partitioning and recharge in vegetated regions (Fatichi et al., 2020), and can also be integrated in a stochastic manner for accurate water flow modelling (Nezhad et al., 2011). The influence on vertical transport of even smaller heterogeneity, such as thin clayey or gravelly layers on the scale of a few cm, and its detection, has been the subject of only a few studies. Kollet (2009) proposes an up-scaling and averaging approach for sub-grid scale soil heterogeneity, i.e., smaller than the resolution of the model grid, in coupled groundwater flow models. According to the author, soil heterogeneity plays an important role especially in dry periods, but is negligible for water availability in wet periods. The author also points out that a detection of small-scale soil heterogeneity is difficult and usually not implemented in flow models. Studies working on this scale often rely on soil sampling, like in Rosenfeld et al. (2017), who investigate soils on a microscopic scale, or applied to urban soil heterogeneity, as described in Greinert (2015). While helpful for determining the ground truth, soil sampling can miss thin layers, which then remain unsampled. With geophysical methods, however, we can image the subsurface. Therefore, a combination of both approaches is ideal for accurately depicting the subsurface. In the context of soil water, Vanella et al. (2018) show the feasibility of time-lapse small-scale 3D Electrical Resistivity Tomography (ERT) for monitoring the soil-root interaction and root water uptake at the cm scale and investigate the influence of different irrigation regimes on tree roots. They utilise surface, buried, and borehole electrodes for their measurements. The detection of small-scale water barriers with geophysical methods solely from the surface is desirable but difficult and requires a high resolution at the depth of interest. Geophysical imaging methods such as ERT usually suffer from a decrease of resolution with depth, making it difficult to accurately image the boundaries of very thin layers (see e.g. Ochs & Klitzsch (2020) and references therein). Also, highly conductive layers directly at the surface followed by less conductive material below can focus the current in this topmost layer and reduce the current density and therefore sensitivity in lower layers. A common problem specifically in small-scale ERT is the influence of the rather large electrodes in comparison with the electrode spacing. There are different approaches for minimizing their effect. Verdet et al. (2018) approximate the finite electrodes by equivalent node electrodes with an optimised placement.
Alternatively, the electrodes can be very accurately represented by including them as finite bodies in the model, e.g., with the Complete Electrode Model (CEM), as described in Rücker & Günther (2011). This requires 3D modelling. In our study, we incorporate CEM electrodes in all of our models. With our ERT borehole-to-surface (abbreviated as "b2s") field setup, we can detect the dimensions, depth and resistivity of very thin soil layers, i.e., down to about 10 cm thickness, as described in Ochs & Klitzsch (2020). For synthetic data and well-controlled measurements with the b2s setup in a backfilled trench, we achieved a high resolution over the depth of the borehole tool and accurate resistivity values. However, a demonstration of the setup for a field application is still missing, which we provide with this publication. We show the specific challenges arising from the field application and how we mitigate their effects. Our study site is an agricultural field with a generally sandy and gravelly soil, intersected by a network of old river channels at shallow depths (palaeochannels) filled with clay and silt. It was the subject of several other studies, especially in the context of the Transregional Collaborative Research Center 32. von Hebel et al. (2014) show the soil variability with Electromagnetic Induction (EMI) measurements on a nearby field in the same area. Former studies also investigated the distribution of palaeochannels with geophysical methods from the surface (Rudolph et al., 2015; Brogi et al., 2020) and combined soil classification maps with a crop health assessment. We aim at resolving the substructures of a palaeochannel. For this, we first imaged the lateral boundaries and shape of the channel by measuring a surface ERT profile across it, then chose eight locations along this profile where we measured b2s profiles. The surface ERT profile is 46 m long, with an electrode spacing of 50 cm, compared to the b2s array, comprising a 5 m long surface line, with 25 cm spacing. Based on the measured field data, we further develop the small-scale b2s ERT setup. First, because of unreliable measured stacking errors, we devise an error model based on the stacking error information. Then we focus on the mitigation of two main installation-related effects: 1. Due to the small electrode distances and the overall small-scale nature of the array, i.e., only 5.2 cm electrode spacing along the borehole tool, the electrodes have a big impact on the measurements. Therefore, the depth of installation of the borehole electrode tool must be known accurately in the inversion model. However, it is not easy to measure the tool depth in the field with the required accuracy, due to small-scale surface roughness, e.g., from a weathered loose soil layer at the surface or from vegetation. Additionally, we investigate the influence of a tilted tool installation. 2. In the field, we electrically coupled the borehole electrodes to the ground by filling the cavities around the tool with a soil mud, i.e., we need to account for the unknown conductive borehole filling in the inversion. For addressing these points, we conduct a synthetic study, as described in section 3, after introducing the methodology in section 2. Methodology. For simulating and inverting data from the b2s electrode array, we apply the 3D modelling routine depicted in Ochs & Klitzsch (2020) using the CEM.
The geometry of the borehole electrode rod with 20 ring electrodes and the 20 surface electrodes are included as 3D bodies in the forward and in the inversion mesh. This is necessary due to the small electrode spacing, 25 cm for the surface electrodes and 5.2 cm for the borehole electrodes (midpoint to midpoint), with electrode lengths of 10 cm and 4 cm (tool diameter), respectively. This violates the threshold of 0.2 for the electrode length to spacing ratio for point electrodes, as introduced by Rücker & Günther (2011) and demonstrated by Ochs & Klitzsch (2020). The geometric factors are calculated according to the positions, shape, and size of the electrodes, represented via the CEM. Figure 1 shows the geometry of the electrode array, depicting the surface electrodes as rods and the borehole electrodes as rings. The electrode tool is situated in the middle of the surface array and the surface electrodes have an active length of 10 cm, i.e., they are pushed 10 cm into the ground. Each of the 20 borehole electrodes is accessible via the console at the top and can be connected to an electrode unit of our measurement device. Figure 2: a) 2D slice of the coverage (logarithm of summed absolute sensitivities), and b) pseudosection of the crossed dipole surface-borehole configurations used in this study. The green point in a) marks the position of the data point in the pseudosections for the depicted four-point configuration as an example. The three layers of the pseudosection, defined by the skip size within the crossed dipoles (0, 1, and 2), and their combination into one pseudosection containing all data are depicted. The zeroth skip data partially overlap with skip 2. For visibility, the overlapping points are shifted slightly to the right, as shown in the figure. Notice also the different markers and colours used for each skip. Due to the geometry of the b2s array in combination with the measured configurations we do not calculate a classical pseudosection for the data, i.e., using the geometric factors for an estimation of pseudo-depth. Instead, we construct the pseudosections based on the midpoints of the dipoles. We demonstrate this for the b2s data in figure 2. In a), we show the coverage, i.e., the sum of absolute sensitivities, of a b2s array. The depicted slice, as well as subsequent inversion results from the b2s array, is an interpolated 2D slice of the 3D model. The 2D mesh is solely used for visualisation of the results (for more details we refer the reader to Ochs & Klitzsch (2020)). The coverage develops in a T-shape around the borehole and the surface electrodes for the crossed dipole configuration in use. One quadrupole of electrodes is marked in green, showing the position where the data point is plotted in part b), i.e., at the x-and z-midpoint of the quadrupole. In b), we can clearly distinguish between the three skip sizes used in our list of configurations, i.e., from using neighbouring electrodes to a separation of three electrodes. All three skip classes are combined in the complete pseudosection shown on the right side of figure 2 b). When inverting b2s field data, we need to consider the following aforementioned aspects of the field measurement procedure: 1. The electrode tool position in the subsurface is unknown to a certain degree, and 2. the borehole electrode tool is surrounded by a conductive mud with an unknown resistivity for better electrical coupling to the subsurface. 
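Before addressing these two points, a small aside on the midpoint-based pseudosection construction described above; the sketch below (with assumed array shapes) is not the plotting code used for figure 2, but illustrates the idea:

```python
import numpy as np

def pseudo_positions(quadrupoles, electrode_xyz):
    # quadrupoles: (n, 4) integer indices (A, B, M, N) into electrode_xyz,
    # electrode_xyz: (n_elec, 3) coordinates of surface and borehole electrodes.
    # Each data point is plotted at the x- and z-midpoint of its four electrodes,
    # independent of the geometric factor.
    pos = electrode_xyz[np.asarray(quadrupoles)]   # (n, 4, 3)
    mid = pos.mean(axis=1)                         # (n, 3)
    return mid[:, 0], mid[:, 2]
```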
For the first point, we first have to distinguish between a placement error and the effect of a heterogeneous resistivity distribution. For this investigation, we consider different layered models in a synthetic study and observe the effect of placement errors on the synthetic data and on inversion results. We do not consider the measurement of the borehole tool angle in the field, since it is practically infeasible. However, we can estimate the tool depth in the field by measurement. The second point also requires forward and inverse modelling. The mud was mixed on site with tap water and its resistivity was not measured. Additionally, after filling the cavities around the electrode tool, the mud exchanges water and ions with the surrounding soil, which might change its resistivity over time. We simulate a measurement in a model with a conductive borehole filling around the electrode tool and compare different approaches in the inversion, i.e., decoupling the mud region and finding its single resistivity in the inversion, compared to freely inverting on the mud resistivity without an extra region. In the end, we apply the best solution based on our findings to the field data. Characterisation of borehole electrode positions and a borehole filling in small-scale b2s ERT In our synthetic study, we consider the uncertainty in the placement of the borehole tool in the subsurface. We study both the influence of tilting and of the tool depth, i.e., vertical placement uncertainty. Figure 3: The two angles defining the deviation of the tool from a) the vertical (θ) and from b) the plane containing the surface electrode array (ϕ), starting with 0°in positive x-direction and increasing counter-clockwise. c) Influence of tilting or a depth shift on noise-free data. The mean deviation from the homogeneous value of 100 Ω m is plotted for different input angles and depth shifts. The tilting angle is plotted in black (lower x-axis) and the depth shift in blue (upper x-axis). Both tilting angles are tested for. With regard to tilting, we test the sensitivity of the ERT data to two tilt angles, θ and ϕ (see figure 3 a and b). With these angles and the depth of the topmost electrode, which determines the depth of the whole tool, we can fully describe the position of the electrode tool in the subsurface. The depth of the electrode tool must be defined in relation to the positions of the surface electrodes. We refer to a tilting in the x-z plane as in-plane tilting in the following, i.e., within the plane containing the surface electrode spread as well as the borehole tool (a). On the other hand, a tilting in the y-z plane is referred to as out-of-plane tilting (b). We set up a homogeneous model with a vertical tool to generate synthetic resistance measurements. We then calculate apparent resistivities from the simulated resistance values by numerically determining the geometric factors in an additional forward calculation, assuming different θ, from 1°to 10°. This range is not very large, but we deem it realistic, since in the field a certain effort is taken to ensure a nearly vertical installation. We set ϕ = 0°and ϕ = 90°, respectively, and compute the in-plane and out-of-plane deviation over the whole range of θ for both angles of ϕ separately. The tool tilts within the sensitive plane (ϕ = 0°) from 1°to 10°, and at ϕ = 90°(out of the sensitive plane), also from 1°to 10°. 
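For reference, the borehole electrode positions scanned in such a study follow directly from the depth of the topmost electrode and the two angles; a sketch with our own parameterisation (z positive downwards; the actual meshes place full 3D CEM electrodes at these positions):

```python
import numpy as np

def borehole_electrode_positions(z_top, theta_deg, phi_deg, n_elec=20, spacing=0.052):
    # Centre positions of the ring electrodes for a tool whose topmost electrode sits
    # at depth z_top below the surface electrode level, tilted by theta from the
    # vertical in the azimuthal direction phi; spacing is the 5.2 cm midpoint distance.
    t, p = np.radians(theta_deg), np.radians(phi_deg)
    d = np.arange(n_elec) * spacing                # distance along the tool axis
    x = d * np.sin(t) * np.cos(p)
    y = d * np.sin(t) * np.sin(p)
    z = z_top + d * np.cos(t)                      # tilting slightly reduces electrode depths
    return np.column_stack([x, y, z])
```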
Additionally, we shift the tool in depth by 5 cm in each direction (a negative shift corresponds to a placement that is too deep and a positive to a placement that is too shallow, compared to the correct placement), also separately. A depth misplacement of 5 cm corresponds to the borehole electrode spacing, but due to loose topsoil and uneven ground, combined with a tilted tool installation, the depth placement error could become even higher in extreme situations. To a certain degree, this assumption is constrained in the positive direction by the tool electrodes being completely submerged in the ground or sticking out, which can be checked visually. However, even this visual inspection can be misleading, i.e., it can seem that all electrodes are in the subsurface, but loose soil or vegetation around the topmost electrode can change the actual depth reference to a deeper level, with regard to the surface electrodes. Figure 3 c) summarises the effects as a mean overall deviation from the true model value of 100 Ω m. Due to the numerically calculated geometric factors for the CEM electrodes we have a numerical effect on the calculated apparent resistivities. Therefore, they slightly vary from the true model value of 100 Ω m, even for the correct position of the borehole tool, i.e., we do not reach zero for the mean deviation, although the data are noiseless. A considerable influence on the model comes from a depth shift along the tool axis. On the other hand, the out-of-plane portion of the tilting has almost no influence on the resulting resistivity distribution. However, an in-plane tilting has a recognisable influence on the result and must be addressed. For point electrodes, i.e., when the electrodes are represented by mesh nodes in the model, we could jointly invert for the subsurface resistivities and the electrode positions as demonstrated by Wagner et al. (2015). However, this approach uses a common model vector for resistivity values and electrode positions, and relies on using the same mesh throughout the optimisation. With CEM, the electrodes cannot be placed freely in a given mesh as nodes but need their correct position and size. They require a new mesh for every shift of the electrode tool. This complicates a direct inversion on electrode positions alongside the resistivity inversion, since the mesh cells, and therefore the length and cell attribution of the model vector, change with every shift of the electrode tool. Instead, we utilise the data vector rather than the model vector for the optimisation and optimise the tool position independently from the inversion. The data vector has the benefit of being independent from the mesh. With this approach, we do not need inversions, avoiding the influence of smoothing and other user input on the result. The model vector also includes the information contained in the data, i.e., the sensitivity pattern and geometric factors, which propagate into the model, but it is superimposed by the inversion process. Therefore, we decided to only show the results for the data. The propagation of errors arising from uncertainty in electrode positioning was shown by Oldenborger et al. (2005) and Zhou & Dahlin (2003) to lead to significant perturbations in the near-electrode regions in inverted 2D models, sometimes exceeding 20 % in magnitude. In the field, we measure data with an unknown deviation of the electrode tool from the vertical direction and an uncertain depth. 
Measured resistances are affected by the position of the borehole electrode tool, due to the current paths influencing the data. The sensitivities of the configurations change as well as the geometric factors, which are used to calculate apparent resistivities. The change in positions of the borehole electrodes caused by a tilting of the tool reflects in the forward data mainly through the geometric factor. Figure 4 shows the deviation to synthetic homogeneous data over a half-space with 100 Ω m generated with a vertical electrode tool. We see slight deviations from the homogeneous model value in the noiseless data, due to numerical effects caused by working with 3D electrodes, which also propagate into the deviation plots shown here. Small variations emerging close to the electrodes influence the other data points at the same depth. When working with the pseudosections, the viewer needs to keep in mind that the positioning of the measured data points is not necessarily representative of the physical volume influencing certain quadrupoles, but is merely a projection based on the quadrupole midpoints. A more physical representation of current paths would be rather triangular, but harder to review due to overlapping points. We then compute apparent resistivities from the simulated resistance data on 3D meshes, i.e., we recompute the geometric factors, assuming different angles θ from 0°to 10°in the direction of ϕ = 0°(a-c). The effect on the geometric factor becomes more pronounced for greater tilting angles. It is very systematic for the surface-borehole configurations used, resulting in a decrease in apparent resistivity on the left side (ϕ = 180°) of the borehole electrodes and an increase on the right side (ϕ = 0°). We observe the strongest influence at the tool top, close to the surface electrodes, where the coverage is largest, but the effect is visible at the bottom as well. Likewise, a depth misplacement reflects in the data, again strongest at the top where the coverage is high (d-e). If we assume the tool too deep, we see a decrease in apparent resistivity close to the tool and an increase in areas further away from it. If we set the tool too shallow compared to the forward model, we observe the opposite behaviour. From this pattern we conclude that assuming a wrong depth of the electrode tool results in an increase in the standard deviation of the data. Figure 5: a)-b) Representation of a one-sided anomaly at 30 cm and 100 cm depth, respectively, and its influence on the apparent resistivities (the anomaly is represented by the black rectangle with dashed boundaries.) The anomaly has a resistivity of 200 Ω m, in a background of 100 Ω m, and it is infinitely extended perpendicular to the sensitive plane, i.e., in ϕ = 90°and ϕ = 270°; c) Black points: All data point positions sorted by their quadrupole midpoint. d) Influence of a one-sided anomaly, situated in different depths (shifted in steps of 10 cm), on the standard deviation of the apparent resistivities (upper, calculated for determining the depth shift of the tool), and on the symmetry (SC, see equation 1) in apparent resistivity between both sides of the borehole tool (lower, used for determining the installation angle of the tool). The standard deviation and symmetry are calculated and plotted for different midpoint depths of the anomaly, considering all data points (black points), and only considering the data points marked in blue (topmost part) or red (bottom part) in c). 
We use the systematic influence of a tilting on the geometric factors to define a criterion for our optimisation on the tilting angle. We calculate the difference between the mean values of apparent resistivities on the left side (i.e. ϕ = 180°) and the right side (i.e. ϕ = 0°) of the borehole electrodes, as shown in the pseudosections in figure 5 a) and b). A wrongly estimated tilting angle of the tool within the sensitive plane (x-z plane) results in a systematic difference in apparent resistivity between both sides (see figure 4). This difference is translated into a symmetry criterion, which becomes minimal for the correct angle. However, in reality, a difference between resistivities in the ϕ = 0° and ϕ = 180° directions of the tool can also be attributed to asymmetric anomalies. In layered soils, these can be stones or small clay lenses on one side of the electrode tool within the sensitive plane. This effect would overlap with the effect of the tilting, making it impossible to separate both influences. For minimizing the influence of such anomalies on our symmetry criterion, we only use data points at the bottom of the borehole tool. Figure 5 a)-b) demonstrates how for deeper anomalies, i.e., further away from the surface electrodes, the apparent resistivities on both sides become more symmetric. The borehole tool itself has a circular sensitivity, and in the lower model parts the influence of the directional resolution of the surface array becomes smaller. An anomaly on one side of the bottom of the tool is mapped almost equally to the other side, resulting in a more symmetrical image. A tilting of the electrode tool disturbs this symmetry, so we can use it to detect the tilting angle within the sensitive x-z plane. Although the coverage is lower at the bottom, we can still detect a difference between both sides, resulting from a tilting of the tool, as shown in figure 4 a-c. For this, we introduce a metric, the symmetry criterion SC (equation 1). Our symmetry criterion SC includes a fixed subset of data points, with an equal number of points on the left side and on the right side of the surface electrode array. Figure 5 d) demonstrates that, when all data points (black points) are considered in the calculation of the standard deviation of the data and of SC, the anomaly has a much stronger influence on the symmetry within the sensitive plane than when only the data points marked in blue (topmost part) or red (bottom part) in figure 5 c) are considered. The anomaly is shifted to the discrete depths represented by the plotted points. We calculate the mean value of the apparent resistivities for the chosen 4-point configurations on the left side and the right side, respectively, i.e., the points within the red square in figure 5 c), and compute the difference (in percent) between the two means, as shown in equation 1. The calculation of SC allows us to evaluate our data with one simple value per tilting angle of θ, describing the symmetry within the sensitive plane. We recalculate the apparent resistivities from the measured resistances for different tilting scenarios. By comparing SC for many different tilting angles θ and ϕ, we can determine a minimum, which occurs close to the tilting angle that was used as input, in case of synthetic data, or, for field data, to the true installation angle.
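A minimal sketch of how SC can be evaluated for one candidate angle, once the apparent resistivities have been recomputed for that angle. Only "the difference (in percent) between the two means" is specified above, so the normalisation by the mean of both sides is our assumption:

```python
import numpy as np

def symmetry_criterion(rhoa_left, rhoa_right):
    # rhoa_left / rhoa_right: apparent resistivities of the chosen bottom-part
    # configurations on the phi = 180 deg and phi = 0 deg side of the tool.
    m_l, m_r = np.mean(rhoa_left), np.mean(rhoa_right)
    return 100.0 * abs(m_l - m_r) / (0.5 * (m_l + m_r))  # percent difference
```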
The next point to consider is measurement noise. In the calculation of SC for noisy field data, the uncertainty of the mean over n points is decreased by averaging to p_mean = p/√n compared to the noise of p % of each single measurement point. We include n = 89 points on each side of the borehole for this calculation. If we assume a noise level of p = 1 %, we achieve an accuracy for SC of p_mean = 1/√89 ≈ 0.1 %. This constitutes the limit of resolvability for SC, which increases with the overall noise level of the data, since it directly depends on it. For example, if the overall data noise level were p = 3 %, the achievable accuracy of SC would be p_mean = 3/√89 ≈ 0.3 %. Figure 6: The symmetry criterion SC, calculated for different combinations of θ and ϕ in a homogeneous half-space. The circle plots can be read as a top view of the electrode tool, with the tilting angle θ being plotted as the points spreading out from the centre, i.e., small tilting angles are plotted close to the centre, and larger angles further out. The direction of the tilting is described by the discrete angles of ϕ, with 0° and 180° denoting a tilting in the sensitive x-z plane. The points are scaled and coloured according to the value of SC for the corresponding combination of ϕ and θ. All points with values below the achievable uncertainty of SC, based on the applied noise level of 1 %, are framed with black squares. The true values, i.e., input angles ϕ and θ, are marked with green circles. Figure 6 demonstrates the optimisation process. We have data with an input combination of angles ϕ and θ, describing the tilting of the electrode tool. Then we scan a range of discrete values of ϕ and θ and calculate our symmetry criterion SC for every tested combination of both angles. Usually we get a couple of comparably small values around the true angle (which is given in the titles of the subplots of figure 6 and marked with green hollow circles) fitting the original data below the noise-dependent uncertainty threshold of SC (black hollow squares). We also already see that the criterion is not very sensitive to the out-of-plane component of the tilting, which is expected. But since an out-of-plane tilting has a negligible influence on the data, we only need to optimise for the in-plane tilting angle θ and the direction of the tilting (either ϕ = 0° or ϕ = 180°). We also need a parameter that allows us to find the correct tool depth, since the depth has a big influence on the data and consequently the inverse model. However, the effect of a depth shift on the data is not as easily translatable into a data-based criterion as an angled installation. The effect of tilting can be isolated to a high degree from the resistivity distribution of the subsurface, since a difference between left and right side within the sensitive plane is mainly caused by a wrong angle, if we only consider the points around the tool bottom. A depth shift on the other hand causes a more gradual shift in apparent resistivities and the effect of a pure shift is symmetric about the central axis defined by the tool. Therefore, it is influenced by the subsurface resistivities less predictably than the tilting. A wrong depth creates anomalies at the top of the electrode tool, as can be reviewed in figure 4. An underestimation of the tool depth, i.e., setting the tool too shallow in the model, has a bigger influence on the data than an overestimation, due to the borehole electrodes being closer to the surface electrodes (see figure 4 d and e).
The artefacts introduced by a depth shift lead to an overall increased range of data values especially in the upper part of the model, which is marked by the blue frame in figure 4 d). We can use this by looking at the standard deviation. If we only use the points in the topmost part, the effect becomes more distinct and identifiable, compared to using data from all depths. We expect a minimum in the standard deviation of apparent resistivities around the input depth (in case of synthetic data), or the true depth of the topmost electrode (in case of field data). However, in an unknown subsurface, an increased standard deviation of the data spread cannot be exclusively attributed to a depth shift of the borehole tool, but also to an inhomogeneous resistivity distribution. A layered subsurface, as our main application, can produce very similar symmetric patterns in the data, i.e., an increase or decrease of resistivity with depth in regions close to the borehole tool. In contrast, a tilting of the tool can be identified more systematically in a layered subsurface, and even to some extent in the case of small asymmetries, as described above. The optimisation on tilting angle and installation depth comprises two separate steps, i.e., we optimise on the installation angle first and then use a model with the optimised angle as the input for the depth optimisation (as demonstrated in figure 10). Consequently, we optimise the depth along the (possibly tilted) tool axis, not the vertical. This naturally introduces more uncertainty into the depth optimisation, since we already have an uncertainty on the optimised angle, but we observe very distinct minima in SC at the input angle in our synthetic study. Therefore, we do not expect a significant increase in uncertainty in the final result. So far we have only considered a homogeneous model. We mostly expect horizontally layered soils as use cases for our ERT setup. Therefore, we test our criteria for the tilting angle and a depth shift in different scenarios of layered models with various numbers of layers and increasing or decreasing resistivities with depth, respectively (figure 7). Obviously, we often do not know the exact structure of the investigated soil, but in many natural and agriculturally worked soils predominantly horizontal horizons are observed within the first metre (typically A, B, and C horizon), with changes in the soil characteristics mainly occurring vertically. Optimisation of layered models. We plot the SC criterion as a proxy for the tilting angle in the sensitive plane (figure 7 a). We only show the minimum of each optimisation with the tool position being defined by the respective combination of the two angles θ and ϕ, while the depth of the electrodes is slightly influenced by the angle θ. This implies that stronger tilting reduces the electrode depths, compared to a vertically installed tool. The two different input tilting angles that are tested within each model are marked with red crosses; the corresponding results from the optimisation are colour-coded. We get an estimate of the tilting angle within the sensitive plane, as can be deduced by projection of the resulting combination of angles onto the line defined by ϕ = 0° and ϕ = 180°. This projection is symbolised by the red arrows in the figure, which also allow an estimate of the maximum variation of the recovered angle θ, which is about one degree between the tested models.
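The two-step procedure described above can be summarised as a small driver loop; recompute_rhoa, split_sides, and top_mask stand for the forward recalculation of apparent resistivities and the data subsets used in this study and are placeholders here:

```python
import numpy as np

def optimise_tool_position(recompute_rhoa, split_sides, top_mask,
                           thetas=range(0, 11), phis=(0, 180),
                           depth_shifts=np.arange(-0.10, 0.11, 0.01)):
    # Step 1: scan the in-plane tilting angle and direction, minimising the
    # symmetry criterion SC computed from the bottom-part data points.
    def sc(rhoa):
        left, right = split_sides(rhoa)
        m_l, m_r = np.mean(left), np.mean(right)
        return 100.0 * abs(m_l - m_r) / (0.5 * (m_l + m_r))

    best_theta, best_phi = min(((t, p) for t in thetas for p in phis),
                               key=lambda a: sc(recompute_rhoa(theta=a[0], phi=a[1], dz=0.0)))

    # Step 2: with the optimised angle fixed, scan depth shifts along the tool axis
    # and minimise the standard deviation of the topmost data points (the
    # objective-function criterion described below can replace this proxy).
    best_dz = min(depth_shifts, key=lambda dz: np.std(
        recompute_rhoa(theta=best_theta, phi=best_phi, dz=dz)[top_mask]))
    return best_theta, best_phi, best_dz
```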
In figure 7 b, we observe the standard deviation of our data in the topmost part (the first 20 cm) of the pseudosection as a proxy for a depth shift. The true depth is the input depth for the synthetic model (here depicted as a shift of 0 m), assuming a vertical tool without a tilt. The standard deviation is then plotted for the recalculated apparent resistivities for a range of depth shifts. An increasing resistivity with depth (blue shades with square markers in the figure) leads to an underestimation of the tool depth by up to 3 cm, while a decreasing resistivity with depth (green lines with diamond markers) has an opposite, but slightly less strong effect. We can conclude that we are able to resolve the tool depth within a range of 2-3 cm for the subsurface structures we expect, i.e., layered soils. For this magnitude of uncertainty in depth shift we will already see notable artefacts in our final model. In an effort to reduce this uncertainty, we test another criterion for detecting a wrong tool depth, using the value of the objective function of the inversion result. The complete objective function is defined as Φ = Φ_D + λΦ_M (Günther et al., 2006). It consists of the data functional Φ_D describing the data fit, and the model functional Φ_M which contains the model constraints. The regularisation parameter λ influences the smoothness of the resulting model. We set up the same layered models as in figure 7 and invert the synthetic data with different tool depths, as can be reviewed in figure 8. We keep the regularisation parameter λ constant and compare the data functional (a), the model functional (b), and the complete objective function values (c). In figure 8 c) we can see a similar behaviour for the different layered models to that in figure 7, but the corresponding minimum in the objective function gives an estimation of the tool depth which is closer to the input depth, i.e., for all tested models but one it is within 1 cm from the true value. The model functional develops in a similar way, whereas the data functional follows the overall final fit, expressed as the parameter χ² (which indicates whether the model is fitted within the data noise level), very closely. In general, all three parameters are slightly influenced by the inversion parameters and the resistivity contrasts in the subsurface. Nonetheless, we use the objective function as the proxy for the tool depth for our field data, since we found that it gives more accurate results in terms of recovering the input depth than the standard deviation of the data for the tested layered models. Influence of a conductive borehole mud. In our field experiment, we embedded the borehole tool in a conductive mud for optimal electrical coupling of the electrodes. For this, we augered with a slightly bigger diameter than the tool diameter, lowered the tool down in the middle and filled the surrounding voids with the mud. The mud itself was made from the soil by mixing it with water to make a pourable mud. The hole around the borehole tool was filled from the top and the tool was moved around in the mud to let air bubbles escape. The tool has a diameter of 4 cm, and we estimate a cylinder shell of approximately 0.8-1 cm around the tool, filled with the mud. If we do not explicitly include this additional conductive material around the tool in our 3D inversion, we get an influence on the inversion result, e.g., different sensitivity patterns due to current channelling in the borehole mud, as demonstrated by Doetsch et al. (2010).
The borehole tool itself is modelled as a hollow cylinder. The surface is a no-flow boundary, apart from the electrode surfaces that act as current sources and sinks. We introduce the borehole as a separate region with a single resistivity value as the starting model. Since we do not know the resistivity of the mud from the field measurements, we provide a first guess for the mud resistivity as a starting point, as well as a lower boundary value of 5 Ω m and an upper boundary of 200 Ω m. Then we let the algorithm find the best-fitting value during the inversion. For testing the approach, we create synthetic data in a forward calculation with a mud region with a thickness of 1 cm surrounding the electrode tool all around. We assign it a resistivity of 20 Ω m and assign 100 Ω m to the rest of the model. We invert the data on different inversion meshes applying the following assumptions: 1. no borehole region in the mesh, 2. borehole region incorporated and with the correct start value of 20 Ω m, 3.-4. borehole region incorporated with a start value under- and overestimating the true value. As seen in figure 9, we need to account for the borehole mud in a structurally decoupled region in the inversion mesh, allowing for a sharp resistivity contrast. If we invert freely on the borehole mud resistivity without a decoupling, it influences the model outside the borehole too, leading to artefacts (b.1). If we include a distinct region in the inverse mesh and invert on a single resistivity value in this region, the model is negligibly sensitive to the input resistivity for this region. To test the combined influence of a mud region, a tilting of the tool and anomalies surrounding the tool, we apply our optimisation approach to synthetic data from the model shown in figure 10 a). The half-space with a resistivity of 100 Ω m is disrupted by two anomalies with 50 Ω m resistivity, on the left and right side of the borehole tool, which is tilted by θ = 5° and ϕ = 0° and surrounded by a mud with 20 Ω m resistivity. The optimisation results for this model are shown in figure 10 b) and c). The angle optimisation recovers the correct direction of the tilting, but underestimates the tilting angle by about 2°, although the two angles surrounding the minimum are also below the maximum achievable precision level, accounting for measurement noise. The depth optimisation results in a minimum at 21 cm depth, which is 1 cm too deep. While we see an influence of the resistivity distribution on our two-step optimisation, we do not see an influence of the inclusion of the mud region during the optimisation. We can choose to include the mud region in the mesh which is used to calculate the geometric factors as well as in the inversion mesh, but it does not significantly change the result of the optimisation. Therefore, we will not include the mud region in the meshes for the optimisation on the position of the electrode tool in the field, since it increases the number of mesh cells. The mud region will only be included in the final inversion results.
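Conceptually, the decoupled mud region means that all mud cells share one bounded resistivity value and exchange no smoothness constraints with the soil cells; a purely illustrative parameter mapping (the actual inversion operates on the 3D CEM meshes):

```python
import numpy as np

def expand_model(soil_res, mud_res, cell_markers, bounds=(5.0, 200.0)):
    # soil_res: one resistivity per cell with marker 1 (free soil parameters),
    # mud_res: the single mud value, clipped to the allowed 5-200 ohm-m range,
    # cell_markers: per-cell region marker of the inversion mesh (1 = soil, 2 = mud).
    rho = np.empty(cell_markers.shape, dtype=float)
    rho[cell_markers == 1] = soil_res
    rho[cell_markers == 2] = np.clip(mud_res, *bounds)
    return rho
```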
Next, we analyse our field data and show the results of our optimisation and the inversion. Field application. We invert field data acquired on an agricultural field situated in the Rur catchment near Selhausen in Germany. It lies in the region of the eastern upper terrace of the ancient Rur river (Weihermüller et al., 2007). The near-surface geology is characterised by Pleistocene sand and gravel (Brogi et al., 2020). This landscape was disrupted by channels formed by melting water after Weichselian glaciation, depositing finer material in this channel system (as described in von Hebel et al. (2014) and references therein). Figure 11: The data acquisition site in Western Germany close to Selhausen. a) The agriculturally used area is characterised by a system of palaeochannels that influences the crop performance and is visible as a pattern on satellite images. b) We show a depth slice of apparent conductivity at about 75 cm depth from electromagnetic induction measurements, measured on a regular grid on the field considered in this publication and superimposed on the satellite image (blue boundary, EMI data are taken from Rudolph et al. (2015)). Our ERT profile is marked in red and a picture from the field shows the surface electrode array (c). The crop production on the field is influenced by the subsurface distribution of the palaeoriver channels, beginning at a depth of approximately 50 cm. On top of the channels the plants grow well, while outside of the channel areas the plants experience increased and accelerated wilting during dry periods. The lateral positions of the channels are approximately known from EMI data and aerial photos, and the vertical onset of the channels was further investigated by EMI and ERT (Rudolph et al., 2015). The EMI data shown in figure 11 are also published in Rudolph et al. (2015). Our ERT data comprise a long surface ERT profile perpendicularly crossing one of the palaeochannels (see right side of figure 11, red line) as well as eight b2s measurements along this profile. Field data. In fall 2017, we initially measured a 46 m long ERT surface profile over one of the palaeoriver channels with an electrode spacing of 50 cm as a combined Schlumberger sounding and profiling with 93 electrodes. We then chose several spots along the profile based on the surface data, where we measured with the small-scale borehole-surface array, with a surface electrode spacing of 25 cm. Our goal is an accurate, highly resolved image of the channel geometry and its sub-structures, i.e., thin layering. We concentrated our measurements around the edges and the centre of the channel. For the measurements, we drilled boreholes for the electrode rod and coupled the borehole electrodes to the subsurface by inserting a conductive mud, whose resistivity was not measured in the field. For the first b2s measurement there was a problem with the topmost borehole electrode, which was then removed from the measurement scheme. Unfortunately, the same reduced measurement scheme was then used for all subsequent measurements of b2s data. Therefore, we only used 19 instead of 20 borehole electrodes in each b2s measurement. We noted the depth of the borehole tool in the field for each measurement location. However, the exact depth of the borehole tool in reference to the surface electrodes depends on the small-scale topography of the soil at the site, as well as on the state of the topmost ploughing layer, which is often loose and not compacted. Therefore, the measured tool depth in the field notes must be considered with caution. Although there was a certain effort taken to drill vertical holes, we did not measure the tool tilt. We conclude that the measured depths are probably not always accurate down to the cm, and that the tilting angles of the tool are unknown.
Since this can be a common situation, especially for older data, we consider this a valuable use case for our optimisation procedure. Figure 12: Apparent resistivity pseudosections for the field data. The surface profile is 46 m long, the positions of the detailed borehole-surface measurements are marked with black arrows. Note the different colour scales due to the overall narrower range of apparent resistivities measured over the channel. We present the measured data in figure 12 with the tool depth as recorded in the field. The b2s data are measured with a crossed dipole measurement scheme, as described in Ochs & Klitzsch (2020). It has 820 data points for 19 borehole electrodes and 20 surface electrodes, with the A-M dipole always at the surface and the B-N dipole in the borehole. We increase the electrode skip within the crossed dipoles from zero to two, i.e., going from neighbouring electrodes to a separation of three electrodes (compare figure 2). Figure 13: a.1: Double-logarithmic plot of the measured stacking error vs. measured resistance data. The line includes all data without outliers and represents an estimation of the maximum error increase with resistance; a.2: Double-logarithmic cross plot of reciprocal errors and stacking errors for three combined normal-reciprocal dipole-dipole data sets measured at a different site with the borehole tool and the same measurement device. The red line (linear fit encompassing the bulk of the data underneath without some outliers) represents the maximum reciprocal error as function of the stacking error; b: Combined error model (absolute (1) and percentage (2) error) depending on the resistance data. The stacking error model is scaled with the relationship of reciprocal and stacking error. It gives the upper limit of data errors. Estimating the noise level of field data accurately is important for a correct data weighting during the inversion. For our field data, we only have a stacking error information measured by the field instrument. The stacking error cannot account for systematic errors in a fourpoint array, e.g., badly coupled single electrodes, but rather gives an estimate of the precision of single measurements, which is usually very high (LaBrecque & Yang, 2001). We did not measure reciprocal data configurations, which would allow the calculation of reciprocal errors. Those rely on one reciprocal measurement by swapping the current and potential electrodes in an independent measurement, whereas we base our error estimation on several repeated current injections and transfer resistance measurements with the same quadrupole. This procedure was chosen because of our single-channel ERT field device, which is the Lippmann 4-Point Light (by Lippmann Geophysikalische Messgeräte, Germany). We did not measure additional reciprocal data to reduce the time needed for the measurements. Our standard measurement configuration set comprises 820 data points for the b2s measurements and more than 1500 data points for the surface measurement presented. Instead, we work with the stacking errors as an indicator of data quality. We observe generally very low errors, typically with a few outliers of higher percentage. Over 95 % of the measured stacking errors are smaller than 0.1 %, which does not reflect the true data noise and leads to convergence problems in the inversion. This low error estimation is typical for a statistical error based on stacking (Tso et al., 2017;Parsekian et al., 2017). 
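A minimal sketch of how such a combined error model can be assembled from the two relationships in figure 13 (the envelope quantile, the fitting in log-log space, and all names are our own choices; the actual procedure is detailed below):

```python
import numpy as np

def loglog_envelope(x, y, quantile=0.95):
    # Linear fit in log-log space, shifted upwards so that the chosen quantile of
    # the points lies below the line, i.e. an upper envelope of the bulk of the data.
    c = np.polyfit(np.log10(x), np.log10(y), 1)
    c[1] += np.quantile(np.log10(y) - np.polyval(c, np.log10(x)), quantile)
    return c

def build_error_model(resistance, stack_err, stack_err_dd, rec_err_dd):
    # Envelope of stacking error vs. resistance from the b2s field data ...
    c_stack = loglog_envelope(resistance, stack_err)
    # ... scaled by the envelope of reciprocal vs. stacking error from the auxiliary
    # dipole-dipole normal/reciprocal data measured with the same device.
    c_scale = loglog_envelope(stack_err_dd, rec_err_dd)
    def error_of(R):
        stack = 10.0 ** np.polyval(c_stack, np.log10(R))
        return 10.0 ** np.polyval(c_scale, np.log10(stack))  # absolute upper-limit error
    return error_of
```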
For unknown data errors, the pyGIMLi package (Rücker et al., 2017) has the option to estimate the data errors, following the error model by Friedel (2003). However, the user still needs to give a base error level and a voltage-dependent error as an input. We use the characteristics of the stacking error, i.e., the resistance-dependent trend we observe (figure 13 a.1), as a basis for deriving our own error model. We plot the stacking error against the corresponding resistances and fit a linear model to the observed trend, which is tweaked in order to encompass the bulk of the data, while excluding some outliers (figure 13 a.1). This linear model represents the stacking error, characterised by a very low noise level. For arriving at a more realistic estimate of the overall noise level and the resistance-dependent noise, we incorporate reciprocal error information from additional data measured with the borehole tool at a close-by field site. We combine several data sets from this site, which are dipole-dipole data (normal and reciprocal), measured only with the 20 borehole tool electrodes and the same measurement device as our b2s data. Consequently, we assume the same relationship between stacking and reciprocal error. We plot absolute reciprocal versus stacking errors and fit a linear trend to the data, similar to before, which encompasses the bulk of the data points (red dashed line in figure 13 a.2). Before optimising on the tool electrode position, we conduct a first inversion of the field data with the new error model, a vertical tool, and the depth information from the field notes (see table 1). We include the borehole mud region in the inversion mesh and limit the possible resistivity for the mud between 5 Ω m and 200 Ω m, since this range is realistic for the humid soil we filled in. The results are shown in figure 14. The surface data, like the b2s data, are inverted on a 3D mesh with 3D CEM electrodes. We see a lot more detail in the b2s models (b-d) than in the surface result (a), both outside and over the channel structure. Next, we optimise the borehole tool positions for the b2s field data. Optimisation on borehole tool position. We search for the projected in-plane tilting angle, i.e., either at ϕ = 0° or at ϕ = 180°, and the correct depth placement of the topmost electrode for all b2s field data sets. The parameter space is bounded by a tilting angle θ of up to 10° and a depth shift from about minus 10 cm to plus 10 cm around the depth from the field notes. In some cases, the minimum lies outside of this range, so we extend the search space there. We can recover the correct tool angle and depth placement without considering the mud region around the tool, as can be reviewed in figure 10. Figure 15 presents the results of optimisation on the in-plane tilting angle and the tool depth for all field data sets. We observe that over the channel itself and away from its edges, where we dominantly expect horizontal layering, the recovered angles are small. On the other hand, we observe a very large optimal angle at the left edge of the channel, i.e., at 18 m. We attribute that to structural reasons due to the location of the measurement and the lateral contrast from the surrounding soil to the channel material. The measurement at 18 m is situated directly over the left boundary of the palaeochannel, which we interpret as the cut bank of the stream, with a clear and abrupt change in resistivity.
This results in a rather clear division between ϕ = 0°and ϕ = 180°here, since on the left side we mainly see the coarser sediments outside of the channel, while on the right side we mainly encounter channel sediments. At 38 m, on the right edge of the channel, the picture is not as clear as that, but the recovered tilting angle is still larger than for most other locations. It is interpreted as the slip-off slope, where we see a more gradual change from channel sediments to river bank sediments in the surface profile. It could also be interpreted as a second channel structure overlaying the main channel. Still, we expect a lateral change in resistivity in the data. From the optimisation result at 18 m, we conclude that if the resistivity is different over an extended area on one side of the tool, our optimisation parameter fails to recover the true tilting angle. The prerequisite for the optimisation is a horizontal layering. The coarse structure can be checked in advance by a surface ERT measurement, for making sure that the soil is horizontally layered. Since the result for the b2s data at 18 m profile length is questionable, we do not use an optimised tilting angle for this data set, but assume a straight tool in this case and only use the optimised depth in the inversion. We determine the depth in the next step for all b2s data. For this optimisation, we need to invert the data, scanning a range of tool depths around the depth measured in the field. All relevant inversion settings are listed in table 1, i.e., the depth, tilting θ (positive values mean ϕ = 0°and negative values equal ϕ = 180°), regularisation strength λ, and the fit of the final model, represented by the relative RMS error and the goodness of fit (χ 2 , equals 1 for a perfect fit) before and after the optimisation. We observe on the right side of figure 15 that the final value of the objective function develops in a systematic way for most of the data, resulting in a minimum around a specific tool depth. We show the b2s models after the optimisation in figure 16. The b2s models are able to recover very sharp layer boundaries and the recovered resistivity contrasts are partially very small. Following our findings for synthetic data, we conclude that the recovered layer depths and resistivities are more accurate after the optimisation. The b2s results are consistent in terms of recovered layers over the whole surface profile. Outside the channel (figure 16 a), we have low resistivities in the topmost layer, which is between 30 cm and 50 cm thick. The high resistivity around the top of the borehole tool at 43.5 m seems to be either an artefact, possibly from air around the tool, or a small-scale anomaly like a stone. Below the top layer, the resistivity increases, which is best visible at 2.5 m, where we can clearly identify four layers. At the edge of the palaeochannel (figure 16 b), we can also discriminate several layers. A topmost layer of about 20 cm thickness with a resistivity of about 80 Ω m is followed by a lower resistivity layer with approximately 30 Ω m resistivity and a thickness of 20 cm. Below, the resistivity increases to about 150 Ω m to 180 Ω m, with some sub-layers. For the central part ( figure 16 c), we see a top layer of about 15 cm to 20 cm thickness, followed by a less resistive layer below and then a thin layer with increased resistivity, below which the resistivity drops again. 
The high resistivity layer with approximately 100 Ω m from about 50 cm to 70 cm depth develops around the 26 m mark and is clearly visible at 29 m profile length, but has faded again at 33 m. Comparing the final results with the models before the optimisation (figure 14), we see subtle differences outside and over the edges of the palaeochannel. Mainly, the layer boundaries are slightly shifted due to the depth optimisation, and the layer boundaries become sharper after the optimisation. The same is true for the measurements above the central palaeochannel, but additionally we see fewer artefacts after the optimisation. The depth optimisation has a big influence on the circular artefacts, which resulted from the wrong depth assumption, like the ones visible at 29 m profile length before the optimisation (figure 14 d.3), which are gone after the optimisation (figure 16 c.3). We appreciate the resolution capabilities of the b2s array, which go down to a vertical spatial resolution of about 10 cm at the tool surface and to resistivity contrasts of about 5 Ω m for our data. Summary and discussion We study the influence of borehole-related effects on small-scale b2s ERT and develop approaches to mitigate them. Specifically, we search for the correct placement of the borehole tool regarding its depth and tilt. A tool tilting within the sensitive plane as well as a wrong depth assumption have a considerable influence on the data and inverted models. The effect is already visible at small tilting angles θ and small depth shifts. On the other hand, the portion of the tilting out of the sensitive plane, i.e., at angles ϕ between 0° and 180° or between 180° and 360°, has a negligible influence on the data and can be ignored by only searching for the angle θ, projected onto the sensitive plane. We suggest mitigation approaches that correct the tool tilt and depth using the measured data and their inversion, respectively. We do not need to keep the same mesh, as would be necessary for a combined inversion on resistivity and positions, making our approach easily applicable to models with 3D electrodes, as shown in this study. Our metric for deriving the angle of installation relies on the directional sensitivity of the surface electrode array surrounding the borehole tool, and does not allow an optimisation of the tilting out of the sensitive plane, which however is negligible. The metric is not completely independent of the surrounding resistivities, since the sensitivity around the tool bottom is still slightly influenced by the surface electrodes and therefore does not perfectly map one-sided anomalies to the other sides of the tool. Furthermore, for larger lateral contrasts in resistivity on the left and right side of the tool, e.g., for lateral boundaries in the soil, the criterion fails to recover the tool angle. In predominantly layered soils the criterion is easy to apply to any kind of b2s ERT data, regardless of the used electrode model. Our depth optimisation is also not perfectly independent from the subsurface resistivity distribution; its accuracy decreases especially in the case of large resistivity contrasts in adjacent layers. However, we consider it a valid and easy to use, although not particularly computationally efficient, approach for determining the unknown depth of a borehole electrode array.
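As a brief illustration of the depth determination discussed above, the outer loop of such an optimisation can be sketched as follows. The inversion itself is abstracted into a user-supplied function returning a misfit value; the quadratic toy objective in the example merely stands in for a real ERT inversion and is not part of the method described here.

import numpy as np

def optimise_tool_depth(invert_fn, field_depth, shift_range=0.10, step=0.02):
    """Scan depth shifts of +/- shift_range (in m) around the depth noted in
    the field, calling invert_fn(depth) -> misfit (e.g. chi^2) for each
    candidate, and return the depth with the smallest misfit."""
    shifts = np.arange(-shift_range, shift_range + 1e-9, step)
    depths = field_depth + shifts
    misfits = np.array([invert_fn(d) for d in depths])
    best = int(np.argmin(misfits))
    return depths[best], depths, misfits

# Toy stand-in for an inversion: pretend the data fit best with the tool top
# at 0.47 m depth, while the field notes say 0.50 m.
toy_invert = lambda depth: 1.0 + 50.0 * (depth - 0.47) ** 2

best_depth, tested_depths, misfit_values = optimise_tool_depth(toy_invert, field_depth=0.50)
print(f"best depth: {best_depth:.2f} m")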
Further research should go in the direction of a joint inversion on resistivity and electrode positions, possibly utilising fixed points in the model, where the resistivity is calculated consistently, independent from the changing mesh with CEM electrodes. We additionally account for a borehole filling surrounding the borehole tool for electrical coupling to the ground. When considering the borehole mud accurately by including it in a decoupled borehole region and inverting on its single resistivity, we can eliminate the influence of the mud on the model surrounding the borehole. Subsequently, we combine all mitigations and demonstrate our correction approaches for 8 b2s ERT field measurements, whose positions we chose based on a standard surface ERT result. As no reciprocal data were acquired with the single-channel instrument Lippmann 4-Point light, we developed an error model based on the stacking errors and applied it in the inversion. For this, we used reciprocal data errors from a measurement with the borehole tool alone for determining a base error level for our data. The resulting b2s data errors are reasonable and result in well-fitted, smooth models, which should work well for most layered soils, where the soil type and texture does not change abruptly. With our b2s measurements, we are able to resolve fine layering, absent from the surface result over the same structure. After correcting for the borehole effects, we get accurate depths for the layers and avoid artefacts. The presented methodology of b2s ERT measurements can be a valuable tool for characterising layered soils. Soil core analysis is a great addition for verifying the models with a "ground truth". However, fine soil layering can be difficult to identify visually in the field, since a discrimination between layers in terms of colour, texture, and saturation might be hard to recognise, and the soil can also be altered by the augering. Using b2s ERT data ensures an identification of all present layers with a resistivity contrast. The method becomes even more promising if extended to time-lapse measurements, where the saturation of different layers can be monitored over a period of time at the same spot. This can be a next step in expanding the b2s methodology. Systems: Monitoring, Modelling and Data Assimilation"), funded by the German Research Foundation (DFG).
14,225.4
2022-01-01T00:00:00.000
[ "Environmental Science", "Engineering", "Geology" ]
Osteochondral Allograft Reconstruction of the Tibia Plateau for Posttraumatic Defects—A Novel Computer-Assisted Method Using 3D Preoperative Planning and Patient-Specific Instrumentation Background  Surgical treatment of posttraumatic defects of the knee joint is challenging. Osteochondral allograft reconstruction (OCAR) is an accepted procedure to restore the joint congruity and for pain relief, particularly in the younger population. Preoperative three-dimensional (3D) planning and patient-specific instrumentation (PSI) are well accepted for the treatment of posttraumatic deformities for several pathologies. The aim of this case report was to provide a guideline and detailed description of the preoperative 3D planning and the intraoperative navigation using PSI in OCAR for posttraumatic defects of the tibia plateau. We present the clinical radiographic results of a patient who was operated with this new technique with a 3.5-year follow-up. Materials and Methods  3D-triangular surface models are created based on preoperative computer tomography (CT) of the injured side and the contralateral side. We describe the preoperative 3D-analysis and planning for the reconstruction with an osteochondral allograft (OCA) of the tibia plateau. We describe the PSI as well as cutting and reduction techniques to show the intraoperative possibilities in posttraumatic knee reconstructions with OCA. Results  Our clinical results indicate that 3D-assisted osteotomy and OCAR for posttraumatic defects of the knee may be beneficial and feasible. We illustrate the planning and execution of the osteotomy for the tibia and the allograft using PSI, allowing an accurate anatomical restoration of the joint congruency. Discussion  With 3D-planning and PSI the OCAR might be more precise compared with conventional methods. It could improve the reproducibility and might allow less experienced surgeons to perform the precise and technically challenging osteotomy cuts of the tibia and the allograft. Further, this technique might shorten operating time because time consuming intraoperative steps such as defining the osteotomy cuts of the tibia and the allograft during surgery are not necessary. Conclusion  OCAR of the tibia plateau for posttraumatic defects with 3D preoperative planning and PSI might allow for the accurate restoration of anatomical joint congruency, improve the reproducibility of surgical technique, and shorten the surgery time. Fractures of the proximal tibia occur in 27 per 100.000 per year and are associated with high-energy trauma, especially in younger patients. 1 The disease burden of posttraumatic osteoarthritis (PTOA) is estimated to be 12% of all symptomatic osteoarthritis (OA) of the hip, knee, and ankle. 2 PTOA of the knee occurs at high rates after intra-articular and extraarticular fracture of either the distal femur or the proximal tibia, with the incidence in the literature ranging from 21 to 44%, and can also be seen after ligamentous, meniscal, and high-impact injuries. [3][4][5] It has been shown that total knee arthroplasty (TKA) for PTOA is associated with a lower function, a lower quality of life, and a lower survival rate than for primary OA. 6 Joint-preserving procedures remain the treatment of choice for young patients, because of critical results for UKA (unicondylar knee arthroplasty) and TKA in the long-term with high revision rates. 7 Several options exist to address posttraumatic defects of the knee directly after trauma. 
Fractures can be treated conservatively for minimally displaced fragments; otherwise surgical management is recommended. 8,9 Surgical possibilities are open reduction and internal fixation (ORIF), external fixation, or arthroscopically assisted osteosynthesis. [10][11][12][13][14][15][16] However, operative reconstruction of larger posttraumatic damage to the knee joint is challenging and may be associated with malunion if reduction is imprecise, which can lead to subsequent operations and progressive arthrosis. 8,17 Joint-preserving options such as osteochondral autograft transplantation surgery, autologous chondrocyte implantation, autologous matrix-induced chondrogenesis, and bulk allograft are considered an alternative, depending on the size and location of the defect. 18 Although these procedures have been implemented with varying degrees of success, no consensus exists on the gold-standard treatment. Additionally, reconstruction of traumatic damage around the knee using an osteochondral allograft is an established procedure. Previous studies demonstrated the benefit of fresh OCA in the reconstruction of posttraumatic defects of the knee, with satisfying long-term results for the tibial plateau. 19,20 The superior in vivo survival and higher chondrocyte viability of fresh OCA compared with frozen OCA have been shown. 19,20 However, large posttraumatic defects of the knee are a complex three-dimensional (3D) problem, and the exact restoration of the joint line, anatomical axis, and tibial slope is challenging for the treating surgeon. The benefit of computer-assisted corrective osteotomies or allograft reconstruction in tumor surgery around the knee has already been emphasized. 21,22 The main advantage of computer-assisted surgery is the precise 3D analysis of the deformity. Therefore, facilitated surgical planning of the osteochondral allograft reconstruction (OCAR) with accurate reconstruction results can be expected when 3D computer-assisted planning is used. To our knowledge, the use of patient-specific instrumentation (PSI) to treat posttraumatic defects of the tibia plateau with an OCA has not been reported so far. The aim of this case report is to present a step-by-step guideline for posttraumatic OCAR of the tibia plateau using a novel computer-assisted method with 3D preoperative planning and PSI. Patients Informed consent for the publication of this case report and the use of the photographs was obtained from the patient. The local ethical committee approved this study (Zurich Cantonal Ethics Commission, KEK-ZH 2015-0186). The case (►Fig. 1) is a 31-year-old female office employee who, at the age of 28, suffered a lateral tibial plateau fracture (Schatzker classification Type II) of the left knee after high-energy trauma. Initially she received an ORIF with a lateral and a posterior plate via a lateral approach in an external hospital. Nine months later she was referred to us for a second opinion because of persistent knee pain. We diagnosed a malunion on computed tomography (CT) and removed the osteosynthesis material 3 months later. An infection was ruled out after taking intraoperative samples. Four months later, 3D computer-assisted planning was performed, and 7 months later OCAR was accomplished with the technique described below. Preoperative 3D Deformity Analysis and Planning The analysis of the deformity and the planned reconstruction were performed based on a reconstruction template. This approach has two main advantages.
Namely, additional information is available about the ideal size of the allograft needed. Furthermore, the time to complete the preoperative planning is reduced when the OCA is delivered (Neutromedics AG Ortho-Biologics & Implants, Cham, Switzerland), as only the allograft adjustment has to be performed. Ideally, the delivery of the allograft and the production of the guides should be performed quickly. A 3D triangular surface model of the pathological and the contralateral side is generated based on CT scans (slice thickness, 1 mm, 120 kV: Philips Brilliance 40 CT, Philips Healthcare, Eindhoven, the Netherlands) using thresholding, region growing, and the marching cubes algorithm to identify the cortical bone layer and to separate the tibia from the surrounding bone anatomy, as previously described. 21,[23][24][25] The contralateral tibia is considered an accurate three-dimensional reconstruction template 26 in patients without a history of trauma or pathological condition. Therefore, it is currently our preferred template. The 3D models are imported into the planning software Computer Assisted Surgery Planning Application (CASPA) (Balgrist CARD AG, Zürich, Switzerland). The model of the contralateral tibia is mirrored and subsequently aligned to the pathologic model using a surface registration algorithm. As in similar approaches, the iterative closest point surface registration algorithm is used for bone alignment. 27,28 This method superimposes the undeformed regions of the bone surfaces in an automatic fashion by minimizing the sum of quadratic distances between surface points. 29,30 Defining the resection margins is possible by visualizing the exact 3D relation of the bone defect. After defining the resection planes and the fixation device, patient-specific guides can be designed to transfer the preoperative plan to the surgery. Reference and Osteotomy Guides The reference guides are used to allow the later positioning of the osteotomy guides as accurately as possible (►Fig. 2). They serve as a registration tool between the 3D planning and the intraoperative situation. The reference guides correspond to the negative contour of the bony anatomy of the tibia. To control the fitting accuracy, a 3D printout of the native bone is regularly used. Meticulous intraoperative positioning of these guides is critical to define the correct osteotomy position that has been preplanned. To avoid interference from soft tissue, the whole periosteum has to be removed. This is possible particularly in the area of bone that has to be resected, without influencing the vascularization of the remaining proximal tibia. The reference guide has an additional wing to allow maximum contact with prominent bony anatomy and to give additional rotational and translational stability. In earlier approaches it was shown that these wings lead to more precise osteotomies. 31 Two K-wires (2.5 or 3.0 mm) are drilled through the predefined drill sleeves attached to the reference guide.
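As an aside, the surface registration step described above (mirroring the contralateral model and aligning it to the pathological bone with an iterative closest point algorithm) can be illustrated with a minimal, self-contained sketch. This is not the CASPA implementation; the random point clouds and fixed iteration count are illustrative assumptions, and the sketch only demonstrates the principle of minimizing the sum of squared distances between corresponding surface points.

import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, c_dst - R @ c_src

def icp(moving, fixed, iterations=30):
    """Align 'moving' (e.g. mirrored contralateral surface points) to 'fixed'
    (pathological bone surface points) by iterating closest-point matching
    and rigid-transform estimation."""
    tree = cKDTree(fixed)
    current = moving.copy()
    for _ in range(iterations):
        _, idx = tree.query(current)        # closest-point correspondences
        R, t = best_rigid_transform(current, fixed[idx])
        current = current @ R.T + t
    return current

# Toy usage: recover an 8 degree rotation and a translation of a point cloud.
rng = np.random.default_rng(1)
fixed = rng.normal(size=(500, 3))
angle = np.deg2rad(8.0)
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0, 0.0, 1.0]])
moving = fixed @ Rz.T + np.array([2.0, -1.0, 0.5])
aligned = icp(moving, fixed)                # aligned should closely match fixed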
A guide design which constrains the saw blade is normally used as described in earlier approaches. 24 It is important for the osteotomy to calculate the corresponding cuts with an offset to consider the offcut. The reference guide is removed while the 2 K-wires are left in place and the osteotomy guide is positioned over the 2 K-wires. After confirmation of the adequate fit of the osteotomy guide using the 3D-printed models of the preoperative planning, definitive osteotomy is performed (►Figs. 3, 4). Preoperative Planning of the Osteochondral Allograft The size of the OCA needs to be considered for the substitute of the bone defect. In an ideal situation, an equivalent bone (i.e., same bone in the same dimensions) should be used. The advantage of using an equivalent allograft bone is that the allograft needed for insertion can be used in one single piece and complex constructs, which reduce stability, can be avoided. A 3D surface model is created from CT scans of the fresh OCA, similar to what was previously described (►Fig. 5). Allograft adjustment guides have to be prepared for the preparation of the allograft to customize it to the required shape. Therefore, the guides need to be applied on the allograft, as shown (►Figs. 6, 7). They have to be fixed by K-wires through the predefined drill sleeves attached. With the cutting slits and the drill sleeves, the saw blade is constrained to the planned osteotomy planes, and proper customizing of the allograft will be achieved. The consideration of the offcut in the preparation of the allograft is important as well. Insertion and Surgical Technique The intraoperative insertion can be performed either by manipulating the fragments directly, or indirectly using the final implant as a reduction guide. A K-wire-basedreduction guide or a fragment reduction guide as previously described might be used for the direct reduction of the prepared allograft. 24 For this purpose, one part of the undersurface of the guide body matches to the fragment surface in its reduced position and the other part to the reference fragment. Alternatively, the planned screw holes of the definitive plate fixation can be predrilled in the allograft. This allows an intraoperative reduction through the plate holes. In addition, a combination of these reduction guides might be applicable depending on the individual case. Using an anterior midline approach allows adequate visualization of the joint and the proximal tibia. The meniscus was preserved. An osteotomy of the medial and lateral epicondyle should be considered to allow better visualization of the joint for allograft insertion. After completion of the cuts in the tibia and the allograft, the allograft is fitted in the defect. Adequate plate fit and screw orientation for fixation of the allograft should be considered, when choosing the appropriate fixation plate. Fixation of the allograft is then performed using the preoperative planned plate (►Fig. 8). The soft tissue was fixed. Results The postoperative aftercare for the presented patient after OCAR was mobilization on walking sticks with partial load of 5 kg for 8 weeks. The patient initially wore a blocked brace for 2 weeks but with free flexion and extension out of the brace. After 8 weeks the load was subsequently increased over 6 weeks to full load. The patient had a low stress pain and a stable joint in the clinical examination 15 months after operation. She was able to work full time as office employee. Flexion/extension was 135/0/0 degrees. 
The CT 15 months after the operation showed progressive consolidation of the tibial osteotomy gap, and in the conventional radiographs 3.5 years postoperatively a regular position of the osteosynthesis material was visible (►Fig. 9). Fig. 9 (a) Postoperative X-rays after 2 months, (b) after 15 months, (c) CT scans after 15 months, (d) X-rays 3.5 years after posttraumatic OCAR using preoperative 3D planning and PSI. 3D, three-dimensional; CT, computer tomography; OCAR, osteochondral allograft reconstruction; PSI, patient-specific instrumentation. In the clinical examination 2 and 3.5 years after the operation the patient was still satisfied, had no pain, and was not limited in daily life. Discussion We present in this study, as far as we know, the first OCAR of the tibia plateau for a posttraumatic defect using preoperative 3D planning and PSI. In the younger population a joint-preserving technique is necessary to restore function and to avoid an early onset of PTOA, which is associated with severe functional impairment. [3][4][5][6] A simple 2D analysis based on conventional X-rays might not be sufficient to restore the anatomy of complex posttraumatic deformities adequately. OCAR of extensive posttraumatic defects of the knee joint is a complex 3D technical challenge, which requires considerable experience of the attending surgeon to reconstruct precise anatomical joint congruency. Excellent accuracy of 3D planning has been proven for several anatomical locations. 24,29,32,33 In knee surgery, it has been shown useful for tumor surgery or secondary interventions for malunion after osteotomies. 21,22 This novel computer-assisted method using 3D preoperative planning and PSI in OCAR of the tibia plateau for posttraumatic defects might have several advantages compared with conventional surgical methods. So far, one of the main factors related to failure after OCAR is malalignment, which can lead to higher weight-bearing stress on the graft, chondral destruction, collapse of the allograft, residual pain, and dysfunction of the knee joint. 34 With 3D preoperative planning and PSI, the reconstruction of the anatomical alignment, including the joint line, leg axis, and tibial slope, might be more precise. In addition, this novel technology could improve reproducibility, because with preoperative 3D planning and PSI even less experienced surgeons might precisely perform the technically challenging osteotomy cuts of the tibia and the allograft. Another advantage might be a shortened operating time, because with the preoperative 3D planning and PSI already performed, time-consuming intraoperative steps such as defining the osteotomy cuts of the tibia and the allograft are not necessary. The availability of the allograft is a limiting factor, because it has to be ordered in advance before surgery can be performed. With this new technique, the establishment of an international 3D database with an associated allograft storage could be a future option for improving graft availability. After CT scans of a patient with a posttraumatic defect have been acquired at an external hospital, the 3D planning could be performed within the 3D database, and an allograft of the exactly calculated size could be prepared and delivered. Optionally, the PSI for the tibia osteotomy could be prepared as well.
This might enable younger patients with posttraumatic defects of the tibia to receive such a challenging reconstruction outside of specialized centers that keep a stock of allografts. However, this new operative approach should be the focus of further studies with larger numbers of patients, and the clinical results of this new technology have to be compared with those of conventional surgical methods. Conclusion OCAR of the tibia plateau for posttraumatic defects with 3D preoperative planning and PSI might restore the anatomical joint congruency accurately, improve the reproducibility, and shorten the surgery time. Funding None.
3,803.8
2021-10-01T00:00:00.000
[ "Medicine", "Engineering" ]
Neural network model as a way of processing complex systems of econometric equations characterizing the interaction of the Russian Arctic This article analyses the possibility of applying neural network modeling for the purpose of automating the large number of calculations of econometric equation coefficients in order to obtain adequate predictive results. Introduction Any systematic development is possible only if a competent strategy has been worked out. The strategy should include many predictions that take into account not only the different features of the economic object under study, but also the maximum number of external factors. This reasoning applies to the formation of an Arctic development strategy. Obviously, taking all random effects into account is impossible. Literature review One should have accurate and reliable models for making a qualitative prediction of the economic development of the Arctic and for profitability analysis. The models should be capable of automated assessment of the current economic situation provided that the maximum number of influencing factors is considered. [1,4] During the initial assessment and the subsequent analysis one can use either linear or nonlinear equations. In terms of increased accuracy, the application of an ADL (autoregressive distributed lag) model may be optimal. ADL models are capable not only of describing the direct relationship between exogenous and endogenous variables, but also of reflecting the lagged shifts that are very important in the study of econometric variables. The methodology In general, the equations of such a model may be written as y_t = a_0 + a_1 y_{t-1} + ... + a_n y_{t-n} + b_0 x_t + b_1 x_{t-1} + ... + b_m x_{t-m} + e_t, (1) where n, m are ordinal indices (the lag orders of the endogenous and exogenous variables). From the modeling point of view the inductive approach can be considered optimal. This approach means that modeling starts at the lowest levels, which include private parameters for particular sectors of research; thereafter the second-level models are created. The models of the second level reflect the relationships between the first-level models. The process continues until the ultimate top-level model is built. Under this approach the resulting model directly describes the key aspects of economic development, while at the same time it indirectly reflects all of the factors included in the lower-level models. [3,[5][6][7] Fig. 1. The block diagram of interaction of arctic and subarctic regions. In the proposed model the two main layers are the Arctic and the subarctic zones. Each layer is divided into sub-layers containing the information on the regions and the entities of the specified zones. The arrows represent the interaction flows (information, material, natural, etc.). Both layers are equal with respect to the final principal object, which is the State as a whole. The model allows the evaluation of all possible interactions in various combinations and permutations. [8,9,10] While talking about a development strategy, we always have in mind the following: given a set of input data, such as geopolitical status, geographical location, economic situation, development of infrastructure and the social sphere, etc., one needs to obtain as the final result a set of instructions for further action that could improve the current situation. Initially one can only presume the resulting variable, with no chance of defining its parameters and resulting value beforehand. In order to define the parameters, an ADL equation is used. [11][12][13] Model In forecasting the economic development of the Russian Arctic it is most convenient to use a three-tier model (Fig. 1).
From Figure 1 it can be seen that the general model should track the multilevel interaction. It is necessary to assess both the interactions between specific areas and those between the separate regions and actors. Along with this, it is important to have an idea of the contribution of the represented regions to the development of the national economy; this also needs to be reflected in the model. [2] Development of the ADL model can be divided into 4 main stages: Stage 1. Determining the key factors that have the maximum impact on the regional economy. At this stage, the economic indicators are selected and statistical data on their dynamics are collected. Stage 2. Identifying the relationships between the selected factors and determining the exogenous and the endogenous variables. The selected indicators are checked for linkages with each other. In the simplest scenario, with a system of linear equations, the correlation matrix is constructed and the dependent and influencing factors are determined on the basis of linear estimates. Stage 3. Building a system of independent ADL equations. Based on the relationships obtained at the second stage, a system of equations that algorithmically reflects these relationships is built for each region. A key feature of the ADL model is that it reflects not just the relationship between the variables at certain time points, but also the relationship with a time lag. Stage 4. Determination of interregional interaction, i.e., obtaining the final system of equations that reflects the relationships between the regions in terms of the dependent variables. Ultimately, on each tier there will be a system of interrelated equations of the form (2), i.e., a set of ADL equations of type (1), one for each region, linked through their dependent variables. If we consider the models of the lower levels, there will be plenty of systems of equations, given that a detailed description of each area of economic activity in each region is expected. [3] The solution of all the systems will lead to an extremely accurate understanding of the economic activities of the regions, their interactions, their foreign trade activities, as well as their impact on the Russian economy as a whole. However, the calculation of such a great number of parameters will be too time consuming and will not always give up-to-date analytical data. In this regard, there arises a problem of optimizing the calculations concerning the systems of equations. For these purposes, neural networks come into use. The overall essence of neural network modeling is quite simple and logically understandable. The idea is that there are a number of input parameters and the task is to identify some output parameters. To implement this correlation, the neural model defines the notion of a weight - a coefficient indicating the impact of a given input parameter on the resulting output variable. Fig. 2. General view of the neural network model. In Figure 2 the input adder receives the input data, thereafter the weights of these parameters are calculated by nonlinear transformations, and after the bifurcation point the results are obtained as the output data. Thus, all the relationships between the input and the output values are fully described. By changing the sets of model parameters one can change the model and track the development of various patterns.
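As an illustration of the general scheme in Figure 2, the short sketch below trains a one-hidden-layer network with a nonlinear (tanh) transformation to map a set of input indicators to output parameters. The synthetic data, layer sizes, learning rate, and number of iterations are illustrative assumptions and not part of the model proposed in the article.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 6, 16, 2            # e.g. 6 indicators -> 2 output parameters

# Synthetic observations: outputs are a noisy nonlinear function of the inputs.
X = rng.normal(size=(400, n_in))
true_W = rng.normal(size=(n_in, n_out))
Y = np.tanh(X @ true_W) + 0.05 * rng.normal(size=(400, n_out))

# Network weights (the "weights" mentioned in the text), initialised at random.
W1 = rng.normal(scale=0.5, size=(n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, n_out)); b2 = np.zeros(n_out)

lr = 0.02
for epoch in range(2000):                   # plain full-batch gradient descent
    H = np.tanh(X @ W1 + b1)                # hidden layer: nonlinear transformation
    pred = H @ W2 + b2                      # output layer
    err = pred - Y
    loss = float(np.mean(err ** 2))
    g_pred = 2.0 * err / len(X)             # gradient of the per-sample squared error
    gW2 = H.T @ g_pred; gb2 = g_pred.sum(axis=0)
    gH = (g_pred @ W2.T) * (1.0 - H ** 2)   # backpropagate through tanh
    gW1 = X.T @ gH; gb1 = gH.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

print(f"final training loss: {loss:.4f}")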
The main feature of neural networks is that the relationship between input and output is a matter of learning by the network. This means that with new input data the network itself will change on the basis of the values of its elements and, therefore, the result will also change. The more input parameters we have, the more precise the calculations of the transfer functions and the more accurate the final result. Models based on neural networks are among the few for which a great number of variables and input values is not a problem, but rather a serious advantage. Returning to the task of determining the parameters of interconnected systems of equations, the input data will be the economic indicators selected by the researcher. The nonlinear transformer will be the computer system that determines the weights (coefficients) of the equations as the output result obtained after the bifurcation point. Considering the functioning of the proposed neural model, one can split it into several stages. At the first stage, after the input indicators have been fed into the system, the algorithm analyses the data and evaluates the extent of their interdependence. The endogenous and exogenous variables are identified. At the second stage, the form of the relationship between the endogenous and exogenous variables is determined so that the optimal type of equation can be selected. At the third stage, the coefficients are calculated for the selected equation type. The final stage completes the calculation algorithm of the neural network by gathering all the coefficients; at this stage the final quantitative model is shaped. The proposed method also provides feedback that allows the calculations to be customized at each stage in order to enable the learning process within the network, as well as to improve the final result by correcting either the input parameters or the computational techniques. [1] In this method the neural model is used as an auxiliary element both for the selection of interdependent variables and for the calculation of the coefficients of the equations. This complex approach, i.e., the synthesis of the ADL model and neural network modeling, will simplify computing tasks for the researcher and will always reflect the most current information by accelerating the calculation process. It will also make it possible to observe changes at all levels of the model in real time. On the basis of such a complex model one can obtain not only estimated values of the output functions, but also quite accurate results for the output parameters, which make further conclusions possible. Having interpreted and analyzed the results correctly, one can get precise and clear instructions for the formation of either an economic or a political development strategy. While creating any model one should always bear in mind that all decisions on the formation of development strategies will depend entirely on the person, since even the most advanced computing engine will not give the correct answer to the question "What to do?". Essentially, the results are no more than guidelines and vectors for the research, starting points for decision-making. But the more precise and accurate these starting points are, the more likely it is that the right choice will be made.
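For comparison with the coefficient calculation in the third stage, the conventional way of estimating a single ADL equation, once the lag orders are fixed, is ordinary least squares. The sketch below does this for an ADL(1,1) equation with synthetic data; the variable names and the generated series are illustrative assumptions, and in the described model they would be the selected regional indicators, with the network automating the choice of equation form and the calculation across many such equations.

import numpy as np

def fit_adl11(y, x):
    """Estimate y_t = c + a1*y_{t-1} + b0*x_t + b1*x_{t-1} + e_t by least squares."""
    Y = y[1:]
    X = np.column_stack([np.ones(len(Y)), y[:-1], x[1:], x[:-1]])
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return coef                                 # (c, a1, b0, b1)

# Synthetic example with known coefficients c=0.5, a1=0.6, b0=1.2, b1=-0.4.
rng = np.random.default_rng(42)
T = 300
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 + 0.6 * y[t - 1] + 1.2 * x[t] - 0.4 * x[t - 1] + 0.1 * rng.normal()
print(fit_adl11(y, x))                          # should be close to (0.5, 0.6, 1.2, -0.4)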
Conclusions In practice it is almost impossible to create a long-term strategy in an unstable economic environment. Moreover, the model will need adjustments due to changing external factors and various influences. Therefore, when designing a model, one should envisage the possibility of data correction within the model. It is important to understand that not only the sets of input data may be adjusted, but also the number of input parameters, the input parameters themselves, and the forms of the identified patterns. Based on the above, the creation of a recurrent neural network model seems the most useful. The model is based on the scheme of Figure 1, but with some corrective additions (Fig. 3). Fig. 3. A variation of the recurrent neural network model. The advantage of this variation is that the model is able to take into account any changes made by the researcher.
2,373
2018-01-01T00:00:00.000
[ "Economics", "Computer Science" ]
Elimination of Epidemic Methicillin-Resistant Staphylococcus aureus from a University Hospital and District Institutions, Finland From August 1991 to October 1992, two successive outbreaks of methicillin-resistant Staphylococcus aureus (MRSA) occurred at a hospital in Finland. During and after these outbreaks, MRSA was diagnosed in 202 persons in our medical district; >100 cases involved epidemic MRSA. When control policies failed to stop the epidemic, more aggressive measures were taken, including continuous staff education, contact isolation for MRSA-positive patients, systematic screening for persons exposed to MRSA, cohort nursing of MRSA-positive and MRSA-exposed patients in epidemic situations, and perception of the 30 medical institutions in that district as one epidemiologic entity brought under surveillance and control of the infection control team of Turku University Hospital. Two major epidemic strains, as well as eight additional strains, were eliminated; we were also able to prevent nosocomial spread of other MRSA strains. Our data show that controlling MRSA is possible if strict measures are taken before the organism becomes endemic. Similar control policies may be successful for dealing with new strains of multiresistant bacteria, such as vancomycin-resistant strains of S. aureus. Methicillin-resistant Staphylococcus aureus (MRSA) has emerged worldwide as an important nosocomial pathogen. In some U.S. hospitals, MRSA already accounts for 30% to 50% of all nosocomial S. aureus isolates. The situation is comparable in many European centers: according to a recent survey (1), the proportion of MRSA compared to all nosocomial S. aureus isolates studied was >50% in Portugal and Italy and >30% in Turkey and Greece. The methicillin-resistance rate was low (2.0%) in the Netherlands, calling attention to the distinguished Dutch MRSA strategy (2). Switzerland, which had the lowest MRSA prevalence (1.8%) in the European survey (1), is noted for innovative interventions to improve hand hygiene in hospitals and, thereby, to reduce MRSA transmission (3). In the Scandinavian countries, methicillin-resistant strains still account for <1% of all nosocomial S. aureus isolates (4). MRSA has remained uncommon also in Finland (5,6), and until the 1990s, mostly sporadic cases of MRSA were identified in hospitalized patients.
In recent years, however, several nosocomial outbreaks caused by different epidemic strains have occurred (6). Two successive MRSA outbreaks at the Turku University Hospital, Finland, and in nearby institutions were the first and, so far, the second largest. We describe the Turku outbreaks and the subsequent yearly numbers of new MRSA cases identified in our district. We also discuss the control measures taken, which have been followed since then, to confine the spread of epidemic MRSA at the university hospital and in the whole Southwest Finland Medical District. Background The Turku University Hospital is a teaching facility that serves as a tertiary referral center for southwestern Finland. Approximately 500,000 inhabitants live in the Southwest Finland Medical District; the density of the population varies from 20 to 100 inhabitants per square kilometer. The institutions include 1 university hospital, 1 central hospital, 7 regional hospitals, and 22 health-care centers. From August 1991 to October 1992, two successive outbreaks caused by different MRSA strains occurred in the departments of surgery and medicine at University Hospital. During and after these outbreaks, these two MRSA strains were isolated from patients and staff members in five additional institutions in our district. Screening Policy Our policy of screening contact patients of the MRSA-positive patients and the hospital staff for MRSA varied during the different phases of the outbreaks. Unless otherwise indicated, the term contact patient refers to a patient hospitalized on the same ward at the same time with an MRSA-positive patient. Surgical Unit Outbreak During the outbreak in the surgical unit, in most cases MRSA was isolated from a clinical specimen. Initially, a policy decision was made not to screen either the contact patients of MRSA-positive patients or the staff on outbreak wards for MRSA. When the number of MRSA cases increased, we performed one cross-sectional study to screen all patients cared for in the department of surgery during that particular day for nasal and wound colonization by MRSA. Medical Unit Outbreak After the first two cases were diagnosed in the medical unit, we began screening other patients treated in the medical intensive-care unit (ICU). During weeks 6-8, all contact patients connected with MRSA-positive patients (treated in the ICU after admission of the index case) were screened once by using nasal swabs if they were still hospitalized. If a new case was identified on a ward, the roommates of the patient were screened. If transmission of MRSA was observed on a ward, all patients were screened. Initially, screening involved only nasal swabbing, but from the first week of June 1992 on, cultures were taken also from the perineum, groin, and axillae, as well as from all open wounds, skin lesions, and, later, the throat. After 10 identified cases of MRSA, we began to label case records of the colonized patients and contact patients with tags showing MRSA information. The contact patients were screened on the next visit; previous roommates of MRSA-positive patients were isolated while waiting for culture results. After providing two sets of negative MRSA cultures, contact patients were no longer screened on subsequent admissions.
However, patients once found to be MRSA positive were screened on subsequent admissions and placed in single rooms to be cared for in contact isolation. All patients previously treated at hospitals abroad or with a known MRSA problem were screened at the time of admission and nursed in contact isolation until results from colonization cultures were negative. We screened the staffs of the medical ICU and the hematology and infectious diseases units by nasal swab at varying intervals during the medical outbreak. Screening cultures were done, as described (7,8). Identification of MRSA The isolates grown on culture plates were identified as MRSA following the National Committee for Clinical Laboratory Standards guidelines (9). Genetic resistance to methicillin was verified by the presence of the mecA gene (10). All MRSA isolates were submitted to the Staphylococcal Reference Laboratory at the National Public Health Institute, where they were typed with the international phage set and ribotyping and pulsed-field gel electrophoresis (PFGE), as described (6). Two isolates were defined as different strains if they had different phage types and/or PFGE and ribotypes. We considered phage types different if two or more strong phage reaction differences occurred. Ribotypes were considered different if any band difference occurred. Before 1995, PFGE types were considered different if any band difference occurred. After 1995, PFGE types were considered different if four or more band differences occurred. Elimination Treatment Elimination treatment with topical or combined topical and systemic antimicrobial therapy was given to selected patients (e.g., long-term care patients of health-center wards and nursing homes) and those with severe underlying diseases who were frequently admitted to any hospital in the district (7,8,11). Long-term carriers among the staff were also given elimination treatment. Detailed data on drug regimens will be reported separately. MRSA Strains From 1991 through 2000, a total of 202 persons in the Southwest Finland Medical District were infected or colonized by MRSA (Table). On the basis of phage typing and molecular typing, we identified 15 different MRSA strains isolated from two or more persons. These strains included 10 isolated from hospitalized patients (outbreak strains) and 5 causing intrafamilial clusters in the community (familial strains). The strain causing the surgical outbreak (referred to as the surgical strain) belonged to phage type 75,77,84,85III and had a characteristic ribotype and PFGE pattern. The strain causing the medical outbreak (referred to as the medical strain) was nontypable with phages (NT), but the strain relatedness between different isolates could be ascertained by ribotyping and PFGE. A third MRSA strain typed 54,84,85III/96V/95 caused the Mynamaki Health Center outbreak described previously (7). A detailed typing analysis, including a picture of the PFGE profiles of these major epidemic strains, has been published (6) and describes the corresponding strain identification code as E6 for the surgical strain, E7 for the medical strain, and O9 for the Mynamaki strain. The cases involved in these three outbreaks, as well as the clusters caused by 12 additional MRSA strains, are summarized in the Table. The remaining 63 strains were isolated from one person each. Three (30%) of 10 outbreak strains and 22 (35%) of 63 unique strains were designated as of foreign origin. None of the five familial strains were of foreign origin. 
Surgical Unit Outbreak The hospitalization periods of the patients during the surgical outbreak and the times when MRSA was first isolated in each case are shown in Figure 1. In August 1991, the surgical strain was isolated from a bone sample of patient 1, who was cared for on an orthopedic ward for posttraumatic osteomyelitis. The patient was referred to the infectious diseases unit to be cared for in contact isolation, but she was readmitted to the orthopedic ward three times during the following 4 months for treatment of osteomyelitis. Each time, the isolation precautions followed by hospital personnel did not comply with the standard adopted later. MRSA was next isolated from the head wound of a colonized male patient on the same ward. He was placed in a single room to be cared for in contact isolation, but when the wound healed, the patient was transferred to a three-bed room. Subsequently, three of his roommates (patients 3, 4, and 5) acquired MRSA. By the 3rd week of December 1991, the combined number of patients colonized by epidemic MRSA had increased to eight cases on two wards and in the surgical ICU. A shortage of single rooms and the threat of an expanding outbreak led to implementation of the following control measures: 1) intensive education of the staff on hospital hygiene, 2) nursing of all MRSA-positive patients in single rooms in contact isolation, preferably in the infectious diseases unit, 3) strict adherence to contact isolation precautions and minimal duration of hospitalization whenever an MRSA-positive patient was treated at the department of surgery (e.g., operative treatment required), and 4) cross-sectional screening of all patients nursed on surgical wards and in the surgical ICU on December 19, 1991, for nasal and wound colonization. The screening uncovered three new cases of MRSA on epidemic wards. By year end, all patients identified as MRSA positive had been either discharged or transferred to the infectious diseases unit. Thereafter, no new transmission of MRSA was observed on surgical wards, although by the end of August 1993, the surgical strain was isolated from clinical specimens of eight additional patients who had been cared for on the epidemic wards during 1991-1992. These patients had evidently acquired the surgical strain while hospitalized during the outbreak, but the MRSA colonization was not recognized then because screening was not done routinely. In November 1995, the surgical strain was unexpectedly isolated from an endotracheal aspirate of a patient in the surgical ICU. This patient had also been cared for on the orthopedic ward during the 1991 outbreak. Subsequent screening of contact patients in the ICU showed MRSA colonization in three other patients who had ventilatory support at the same time. No new transmission of MRSA was observed after these patients were transferred to the infectious diseases unit. The total number of University Hospital patients infected or colonized by the surgical strain was 24. a Strain was recovered from 24 patients at the university hospital, 10 patients at a regional hospital, and 3 staff members in these hospitals. b Strain was recovered from 30 patients and 18 staff members at the university hospital and from 9 patients in other district institutions. c Strain was recovered from 12 patients and 1 staff member in a health-center ward and associated nursing home (7). d Strain caused a cluster of four cases in an intensive-care unit of a central hospital.
e Strain caused a cluster of four infected patients and one infected staff member in a health-center ward at the beginning of 2000, but was subsequently eliminated. f Intrafamilial clusters of two to four MRSA cases in the community. g MRSA strain was transmitted from one patient to another at the university hospital. h MRSA strain was transmitted from one patient to another at a regional hospital. Medical Unit Outbreak The index patient was treated for cerebral hemorrhage in an ICU in Rome, Italy. After his referral to the department of neurology of University Hospital in December 1991, the medical strain was isolated from his endotracheal aspirate. For the next 3 months, the patient was cared for in contact isolation in a single room on a neurologic ward; we found no evidence of MRSA transmission to other patients on that ward. Medical ICU In March 1992, the index patient became ill with septic shock caused by MRSA and was admitted to the medical ICU for respiratory support. For the first 24 hours, he was not isolated because of a misunderstanding but treated in the same room with three other patients who had ventilatory support. His contact patients were neither screened nor isolated. Two weeks later (Figure 2), the medical strain was cultured from an endotracheal aspirate of a patient who had died in the ICU a few days earlier. The devised screening program was delayed, and subsequent screening on weeks 6-8 found six new patient carriers and two staff carriers of MRSA. The medical ICU was closed to new admissions, and an auxiliary ICU was established for those patients who had not been exposed to MRSA. In the auxiliary ICU, a new staff carrier of MRSA was identified on week 12 and a new patient carrier and a staff carrier on week 13. The MRSA-positive patient was immediately referred to the infectious diseases unit. The six other patients who had shared the ICU room with him were simultaneously transferred to that same unit. Screening cultures later revealed MRSA colonization in five of them, indicating that early cohorting of these contact patients may have been critical in preventing further spread. Hematology Unit In May 1992, we identified MRSA colonization in four patients cared for on the hematology ward. Two of them became colonized while being treated in the medical ICU in April and transmitted MRSA to their two roommates on the ward before carriage became manifest. Using nasal swabs, we screened a number of patients treated at that time on the same ward. Many other contact patients already discharged were not screened when they were readmitted, rendering further spread of MRSA possible. At the beginning of July 1992, MRSA was isolated from an endotracheal aspirate of a bone marrow transplant patient cared for on the hematology ward ( Figure 2). Subsequent screening showed colonization in 11 additional hematologic patients and 12 staff members. We prevented nosocomial transmission by immediately closing the hematology ward. For the next 3 months, hematologic patients were cared for in three separate cohorts: 1) those not exposed to MRSA were admitted to the hematology unit when it was reopened, 2) those potentially exposed to MRSA during the previous 4 months were cared for in a separate cohort in the infectious diseases unit until three sets of coloni-zation cultures had proved negative, and 3) those colonized by MRSA were cared for in the infectious diseases unit. 
The total number of University Hospital patients colonized by the medical strain was 30, and the last case was identified in February 1993. This patient had evidently become colonized in April 1992 while being treated in the ICU at the same time as the index case. His MRSA colonization had remained unknown, since contact patients were not screened at that time. Staff Carriage A total of 20 staff members were colonized with MRSA during these two outbreaks. All five long-term carriers received elimination treatment with a successful outcome. The staff members who were colonized were sent home but could return to work after they had provided three successive negative MRSA cultures. MRSA in Other District Institutions In August 1992, the first case of the surgical strain was identified at Turku City Hospital. Subsequent screening found colonization in seven additional patients on three different wards. After two more cases were identified in 1996, the total number of city hospital patients colonized by the surgical strain was 10. During August and September 1992, we found nine patients in two local hospitals and two health-center wards colonized by the medical strain. In each institution, MRSA was first isolated from a clinical sample, and screening of contact patients on the ward found a few additional cases. The infection control team of University Hospital visited each facility to delineate appropriate control measures for MRSA. Colonized patients were referred to the infectious diseases unit for elimination treatment. Other patients were screened, and those found to be colonized were cared for in contact isolation until they could be admitted to the infectious diseases unit for decolonization. By following this strict control policy, we were able to eliminate MRSA from these five institutions. In 1993, the Mynamaki Health Center outbreak was controlled as previously described after 13 cases (7), and a central hospital ICU outbreak was controlled after four cases (outbreak strain IV). In 2000, MRSA outbreak strain V was eliminated from a long-term care facility after five cases occurred. Of the five additional epidemic MRSA strains, one was eliminated after two cases in a regional hospital, and the other four strains were eliminated after causing two cases each at University Hospital (Table). Long-Term Follow-Up of Patients Of the 37 patients who acquired the surgical strain, 6 died within 1 month after MRSA was identified, 20 died during the following years, and 3 were not part of follow-up. Eight patients remain residents in our district, two of them still carrying the surgical strain. The majority of the 39 patients who acquired the medical strain had severe underlying diseases. Of all 39 patients, 21 died within 3 months of colonization, 12 died during the following years, and 1 was not part of follow-up. Five patients who still live in Turku, three of whom were treated to eliminate carriage, no longer carry MRSA. Thus, the medical strain has been eliminated from our district. Discussion During the past few years, news on MRSA has usually been discouraging. Clinicians and infection control practitioners appear to have lost confidence in their capability to control the nosocomial spread of this pathogen. The number of papers focusing on the overwhelming spread of MRSA is increasing (1,(12)(13)(14)(15)(16)(17), whereas those addressing successful efforts of control or stating that nosocomial spread of MRSA can and should be controlled are few (18)(19)(20)(21)(22).
A number of researchers debating the control of MRSA have questioned whether controlling this microorganism is reasonable, feasible, or justified (23)(24)(25) and whether the tracing of colonized people is justified (26). We describe the elimination of MRSA from a university hospital and a medical-district-wide control policy for MRSA after the outbreak. Our results show that controlling or even eliminating MRSA is possible, if strict measures are systematically taken before the organism becomes endemic. Our experience should encourage other countries with a low incidence of MRSA to continue efforts to prevent the spread of this microorganism in hospitals and long-term care facilities. According to 46 published reports on outbreaks, 10% of the hospitals with >40 cases have achieved definite or probable elimination of MRSA (27). Although >100 patients and staff members in our district initially became colonized by epidemic MRSA, this microbe is being controlled almost 10 years after these first outbreaks. We eliminated the medical strain from the whole district, and only a few outpatients presently carry MRSA in the community. Moreover, we were able to prevent nosocomial spread of the almost 100 additional MRSA strains encountered in our area. Even the 22 MRSA strains introduced by patients transferred from hospitals abroad have remained solitary cases, despite their epidemic potential. In fact, after the small university hospital ICU outbreak in 1995, nosocomial transmission of MRSA has been detected in our district hospitals only three times; on each occasion, MRSA colonization of the index case was not known or suspected on admission. Containment of the Turku outbreaks in 1991-1992 was greatly impeded by the fact that we had no national guidelines on how to control MRSA in Finland at that time and very little previous experience with these microorganisms. Detailed guidelines published by authorities from abroad advised an active control policy (28), but stringent measures were perceived by our colleagues as too disruptive for the patient care in our institution. One major argument against adopting an aggressive line of control was the lack of severe MRSA infections because many of our patients were colonized without clinical infection. During the early phase of the medical outbreak, the infection control team adjusted to a lenient control policy because of our previous experience with the surgical strain, which was easily contained. The behavior of the medical strain, however, was quite different from the surgical strain, and the inadequacy of the control measures at the beginning of the medical outbreak is now evident. We may have been able to restrict this outbreak to only a few cases if all ICU patients had been screened for MRSA and MRSA-positive patients had been isolated as soon as we discerned that appropriate control measures had not been taken when caring for the index patient or if the medical ICU had been closed to new admissions after the second or third MRSA case. Similarly, screening of only some of the MRSA contact patients in the hematology unit in May 1992 was clearly insufficient. Had our efforts initially been more aggressive and the outbreaks quickly controlled, we may have saved many persons from becoming colonized with MRSA and considerably reduced the costs of infection control measures required. The most important lesson from these first epidemics was that an ambivalent and permissive control policy for MRSA easily fails. 
We have subsequently made every effort to avoid making the same mistake. Whenever MRSA has been introduced into our hospitals, rapid steps have been taken to adopt appropriate control measures. The mainstays of our present policy involve continuous staff education, caring for MRSApositive patients in single rooms in contact isolation, systematic screening of patients exposed to MRSA, including all patients transferred from hospitals abroad or with a known MRSA problem, and cohort nursing of MRSA-positive and exposed patients, at least in epidemic situations. National guidelines have proved most beneficial in preventing the spread of MRSA in a few low-incidence countries (2,6). Medical-district-wide guidelines may be equally important when an individual hospital is struggling with MRSA and needs practical or moral support. The Turku MRSA policy involves perceiving our medical district with its approximately 30 institutions as one epidemiologic entity; the infection control team of University Hospital is responsible for the control of MRSA (and also of other multiresistant bacteria) in the whole entity. This overall responsibility ensures that the same control policy for MRSA is followed in all district institutions. If MRSA is encountered in any local hospital or health-center ward, consultation is given; if nosocomial transmission of the microbe is observed, the infection control team visits the institute. We continuously strive to prevent the development of MRSA reservoirs in our extended-care facilities. In so doing, treatment to eliminate MRSA carriage in long-term patients has been favored, while in contrast, efforts to eliminate MRSA in outpatients have not had the same focus. Many of our experiences were taken into account when the National Guidelines for the Control of MRSA in Finland were prepared in 1995 (6). To a great extent, the MRSA control policy finally adopted is in line with that currently followed in the Netherlands (2) and initially recommended by the British authorities in 1990 (28). Because of the increasing prevalence of MRSA, those guidelines were replaced in the U.K. by more lenient instructions in 1998 because the situation in many parts of the country was such that a more flexible approach was considered appropriate (29,30). With the dramatic increase of MRSA, other countries (including the United States) where these microbes are already endemic in hospitals have adopted more flexible control policies (31,32). However, now that vancomycin-resistant S. aureus (VRSA) exists (33), controlling MRSA is even more imperative. A lenient or ambivalent policy is especially inappropriate in those countries where MRSA remains uncommon, since they may still have a fair chance of eliminating this pathogen. In southwestern Finland, the factors possibly contributing to our success include active education and excellent compliance of health-care personnel, a uniform health-care system, and low population density. Anticipating the emergence of new and even more serious strains of multiresistant staphylococci poses a demanding challenge to clinicians and infection-control practitioners worldwide to seek novel methods, which could effectively prevent the spread of these microorganisms. Despite an inability to control MRSA in many countries, we believe that confining these newly emerging multiresistant strains may be possible, provided that vigorous efforts are taken early while the microbe still remains rare. 
If rapidly begun, aggressive measures may not be needed for long and thereby be cost-effective. To meet future challenges successfully, a stringent and consistent international control policy should be issued and universally obeyed. We have shown that controlling or even eliminating MRSA is possible, if strict measures are taken before the organism becomes endemic. A similar policy may be successful when combating new and even more serious strains of multiresistant bacteria (e.g., VRSA). The recent emergence of VRSA emphasizes the need for unremitting and vigorous control of MRSA. National guidelines for MRSA control policy have proven beneficial in a few low-incidence countries. Our results suggest that firm international guidelines will aid countries in preventing the global spread of any newly emerging multiresistant bacterial pathogen. An ultimate prerequisite for success is the commitment of the health-care personnel worldwide to struggle for that important goal. Dr. Kotilainen is professor of infectious diseases at the Medical School of the University of Turku, Finland. During the outbreaks, she was the infection control physician of Turku University Hospital with responsibility for the overall district. Her recent research interests focus on the epidemiology and control of multidrug-resistant Staphylococcus aureus and S. epidermidis; the application of molecular methods in establishing etiologic diagnoses of bacterial diseases in a clinical setting; and the increasing antimicrobial resistance problems of Salmonella enterica and Campylobacter jejuni.
6,365.8
2003-02-01T00:00:00.000
[ "Medicine", "Biology" ]
Enabling data-driven anomaly detection by design in cyber-physical production systems Designing and developing distributed cyber-physical production systems (CPPS) is a time-consuming, complex, and error-prone process. These systems are typically heterogeneous, i.e., they consist of multiple components implemented with different languages and development tools. One of the main problems nowadays in CPPS implementation is enabling security mechanisms by design while reducing the complexity and increasing the system’s maintainability. Adopting the IEC 61499 standard is an excellent approach to tackle these challenges by enabling the design, deployment, and management of CPPS in a model-based engineering methodology. We propose a method for CPPS design based on the IEC 61499 standard. The method allows designers to embed a bio-inspired anomaly-based host intrusion detection system (A-HIDS) in Edge devices. This A-HIDS is based on the incremental Dendritic Cell Algorithm (iDCA) and can analyze OPC UA network data exchanged between the Edge devices and detect attacks that target the CPPS’ Edge layer. This study’s findings have practical implications on the industrial security community by making novel contributions to the intrusion detection problem in CPPS considering immune-inspired solutions, and cost-effective security by design system implementation. According to the experimental data, the proposed solution can dramatically reduce design and code complexity while improving application maintainability and successfully detecting network attacks without negatively impacting the performance of the CPPS Edge devices. Introduction The journey to the 4th Industrial Revolution, or also known as Industry 4.0 (I4.0), is marked by significant and high-impact technological advances in the industrial context. I4.0 (Lasi et al. 2014) corresponds to the new industrial paradigm where traditional methods of control are being transformed into new mechanisms that allow remote and distributed digital control. I4.0 makes networking and distributed computing an essential characteristic of modern automation systems. Besides, Edge devices or Internet of Things (IoT) enabled devices , i.e., lightweight controllers with limited memory and CPU resources, are becoming increasingly common within industrial settings since they enhance overall efficiency in task performance. The application of Industrial Internet of Things (IIoT) principles is possible with the development of cyber-physical production systems (CPPS), where the physical and virtual worlds are related and mutually dependent. On the one hand, actuation decisions in the virtual world would impact the physical world. On the other hand, physical process data will influence the virtual world for an updated decision-making process. As industrial systems continue to grow in complexity, their attack surface also increases, making it more challenging to guarantee protection and correct operation at all times. CPPS are based on distributed industrial control systems (ICS) that use IIoT technologies, and there is usually an intrinsic integration with legacy components. Thus, enabling security features in such systems is a complex problem and a hot research topic. Unfortunately, the Information and Communication Technology (ICT) security community has ignored the industrial domain in the past years (Loukas 2015). ICS design focuses mainly on efficiency and safety rather than on security. 
Like many other novel technologies that emerged in the past, developers think about the inherent security properties after the technology is developed and ready to be released. Security in CPPS is a significant research challenge to speed IIoT development towards a robust and trustworthy technology, resulting in further acceptance by the community side. Naturally , efforts have been made to keep ICS secure, for instance, by using classical defense strategies and monitoring systems, such as firewalls, cryptography, access control, intrusion detection systems (IDS) (Sekar and Bowen 1999), among others. Regarding CPPS and IIoT security, more recent efforts focus on mapping solutions from existing related domains, such as supervisory control and data acquisition (SCADA) systems and wireless sensor and actuator networks (WSAN). Although detection and prevention techniques were studied to be applied to SCADA systems and WSAN, these solutions were usually not designed for CPPS and may not cover all the security requirements of IIoT. Even if they were suitable, no system could be full-prove to all attack vectors, especially considering zero-day attacks (Costin et al. 2014;Cui and Stolfo 2010). In the literature, there are mentioned several successful manufacturing-related cyber-physical attacks driven by monetary and political interests (Loukas 2015). Wellknown examples are: (1) the Maroochy Shire Water Services attack in 2000, which resulted in dumping around waterways in Queensland, Australia, large amounts of raw sewerage (Slay and Miller 2008; (2) the Stuxnet worm, which damaged an Iranian nuclear facility of Natanz back in 2010 (Baezner and Robin 2017); and (3) the German Steel Mill attack at the end of 2014, which blast a furnace by preventing it from correctly shutting down (Lee et al. 2014). So, with the increasing adoption of intelligent and autonomous components in CPPS, the implementation of security features can be very complex. Security features should be included in the CPPS already in the design stage. Also, considering system components' heterogeneity and networking capabilities, CPPS development requires flexible automation that provides modifications with less engineering effort. This raises a need to support the design and verification of the CPPS by including security components as default. A solution can be achieved using model-based engineering (MBE) development approaches, which uses models to design software and perform component testing, accommodating the complexity and the dynamics of such systems. The MBE approach enables component-oriented design, where components of the CPPS are modeled separately, clearly defining their roles, functionality, and purpose. These components can be combined and reused on-demand in different cost-effective implementations. This process should automate the demanding and error-prone tasks in the design phase so that no defects are introduced, and the models' final implementation behaves as intended. In manufacturing scenarios, the IEC 61499 standard, compliant with function blocks (FBs), is already being used as a general framework to develop distributed ICS by following the MBE approach. This work aims to research and contribute in the creation of an IDS architecture for industrial Edge devices that adhere to the MBE approach for system design, such as the industrial standard IEC 61499. This IDS should employ a bio-inspired approach , making use of techniques in the artificial immune systems (AIS) family, for online intrusion detection . 
The main objective is to study the IEC 61499 standard and understand the feasibility of designing a FB-based immune IDS for overall CPPS security by design. Thus, based on this, the research questions addressed are: RQ T.1: How effective is IEC 61499 in supporting the implementation of a bio-inspired intrusion detection solution for CPPS security by design, in terms of low complexity and high maintainability? RQ T.2: What impact does the iDCA algorithm have on the computational resources of Edge devices while enabling intrusion detection at the host level? The remainder of this paper is organized as follows. "Related work" section provides a review of the IEC 61499 standard, while providing a state of the art about MBE architectures and design approaches for embedded security capabilities in CPPS. "Proposed approach" section provides a detailed characterization of the proposed approach, an A-HIDS Pipeline for enabling the iDCA as CPPS network intrusion detection already in the design phase of the system. "Test and validation" section describes the experimental setup and methodology for testing and validating the proposed solution and discusses the results achieved. Finally, the "Conclusion" section concludes the paper, stating final remarks about the work presented and providing orientations for future work. Related work Until recently, considering the industrial context, automation systems development relied on the legacy IEC 61131-3 family of languages, such as ladder logic, structured text, and sequential function charts (Otto and Hellmann 2009). These languages are no longer suitable for the current development requirements of such complex systems since they rely on primitive abstractions for hardware and control flow. Within the I4.0 context, the IEC 61499 standard is emerging as a popular design standard (Vyatkin 2011, 2013), since it allows for MBE, where components and their behaviors can be defined and encapsulated into FBs (Commission et al. 2005). The IEC 61499 standard, compliant with distributed FBs, can facilitate the development of distributed ICS. This standard provides a component-oriented design methodology for developing CPPS, supporting several object-oriented programming techniques and allowing reusable components that encode state and behavior. With IEC 61499, the encapsulation of software components is improved for increased re-usability, providing a vendor-independent format, and simplifying support for Machine-to-Machine (M2M) communication. Its distributed functionality and the inherent support for dynamic reconfiguration offer the required infrastructure for I4.0, enabling the design and management of large automation systems or IIoT applications (Jazdi 2014). FBs compliant with the IEC 61499 standard are functional software units that execute software code when triggered by events. After execution, they can generate and pass on new events. The software code typically runs algorithms that process input data to update internal variables and output results. These results are associated with output events when the algorithm finishes execution, which will trigger another FB for execution (Querol et al. 2016). FBs are linked by events and data connections into FB Pipelines, which can be executed using a runtime environment (RTE) (Prenzel et al. 2020). An RTE enables the design and development of distributed architectures using the IEC 61499 by implementing the execution model defined by the standard. Also, along with the IEC 61499 standard, several integrated development environments (IDE) emerged for orchestration, mapping, and deployment of IEC 61499 enabled applications based on different RTEs. Lindgren et al. (2014) addressed the outsets for real-time execution of FB-based designs onto IoT devices. On the other hand, Muthukumar et al. 
(2019) proposed an MBE approach for an IIoT architecture design, verification, and auto-code generation of control applications in process industries. In this case, the MBE approach was based on multiple system views and used to perform design and verification of an IIoT enabled control within the benchmark problem 'quadruple tank process' . Focusing on security solutions deployed in manufacturing systems that follow the IEC 61499 standard, Sierla et al. (2014) proposed a security risk analysis methodology consisting of vulnerability and impact analysis. The analysis is applied to the communications network topology of an IEC 61499-based electric grid automation system, simulated under a co-simulation environment. In this case, the authors demonstrated the methodology with a case study of fault location, isolation and service restoration (FLISR) smart grid automation. In work related to design-level support for security in CPPS, and with a focus on the communications between programmable logic controllers (PLC), Tanveer et al. (2018) propose embedded encryption in the communication between PLCs, using the IEC 61499 standard. The proposed solution is designated confidentiality layer for function blocks (CL4FB). It consists of a security layer that enables encrypted data communications using advanced encryption standard (AES) by encoding IEC 61499 compatible FBs. The applicability of the CL4FB was validated using an IEC 61499 based solution for protection and control functions in electric power distribution within a Smart Grid test case scenario. Feasibility analysis consisted of assessing the solution's impact on the overall system's performance, characterized mainly by the introduced latency. Results show that, although the CL4FB does introduce latency, most real-time constraints of the PLC can be met. On a different work, Tanveer et al. (2019) investigate the case for providing security protection at the application level of PLCs. The approach consists of adding an IDS at the PLC application level, using IEC 61499 FBs. The IDS is added to the application at compilation time and is used for preventing typical PLC attacks on the application and device levels by analyzing network data between the network interface and the logic components of the system. This analysis is based on the Snort tool (Koziol 2003), and the authors assessed the approach in terms of Snort packets dropped over a time interval, running in an IEC 61499 environment and deployed in Wago PFC200 PLCs. Experiments show that the IDS-like functionality introduced by the FB can successfully log and prevent attacks at the application level, by providing active security protection from unknown (zero-day) attacks. By measuring the performance analysis of Snort when increasing the intensity of attacks, results show that the IDS drops more packets and loses essential data for analysis purposes. The PLC device breaks down at higher attack intensities, resulting in total denial of service (DoS). Dowdeswell et al. (2020aDowdeswell et al. ( , 2020b) also used the MBE approach for integrating the design and creation of fault identification and diagnostic capabilities. The proposed solution, designated fault diagnostic engine (FDE), was designed to recognize and diagnose faults in IEC 61499 FB Pipelines, not detect attacks nor protect communication channels like in previous work. 
The solution can monitor the system's behavior using appropriate fault detection strategies based on a diagnostic multi-agent system (MAS) that interact with the IEC 61499 FBs. The feasibility of the FDE was assessed when operating with several agent instances in a heating, ventilation, and airconditioning (HVAC) test case scenario. Results show no actual performance issues in the HVAC. More recently, Tanveer et al. (2021) proposed abstract design extensions to the IEC 61499 development standard, designated Secure Links, which implement both lightweight and traditional security mechanisms into ICS applications. Secure Links help include different communication security mechanisms into IEC 61499 applications in a consistent and reusable way, depending on specific security requirements. These requirements are defined in IEC 62443-4-2 and can be managed using the Traceability of Requirements using Splices (TORUS) framework. TORUS enables application development with security by design features, avoiding manual security mechanism coding. These security mechanisms can be transport layer security (TLS)-based and authenticated encryption with associated data (AEAD) security. Later, the developed applications can be deployed into target Edge devices, such as PLCs, since they are fully compliant with the IEC 61499 standard. Experimental results show that Secure Links significantly reduce design and code complexity and improve application maintainability and requirements traceability. On the one hand, the latency of the encryption process and the key exchange are used as metrics to compare lightweight and TLS security mechanisms. On the other hand, the system design complexity ( C * ) is measured regarding structural ( S * ) and data ( DC * ) complexity , as well as the maintainability (MI) of the proposed solution. A higher MI value indicates a more maintainable program, while a higher C * means more overall system complexity. Table 1 presents a summarized overview, for comparison purposes, of the proposed solution main features (more detail in "Proposed approach" section) and similar approaches found in the literature. These features are related with: (1) Methods -the security methods implemented; (2) Application Scenario -the context of the application of the methods implemented; (3) Data Analyzed -type and origin of the data collected and analyzed by the security method; (4) Tools -technologies used to develop and deploy the proposed solution; (5) Attack/ Fault Spectrum -types of cyber-attacks or anomalies supported by the security method methodology; (6) Assessment -evaluation methodology and main results achieved while validating the proposed approach. Proposed approach A-HIDS is a software application that enables malicious activity monitoring at the host level, based on the collection and analysis of system data (Butun et al. 2014). The main goal is to infer intrusions/attacks in the system by identifying deviant behavior compared to a system baseline/normal behavior. An A-HIDS can achieve this while the attack is in progress or afterward. According to (Butun et al. 2014), and taking into account the nature of the processing involved in the behavioral model, an A-HIDS may be further characterized as statistical-based, knowledge-based, machine learning (ML)-based and Soft Computing-based. We are interested in studying Soft Computing-based approaches, such as AIS (Misra et al. 2014;Kim et al. 2007). 
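Conceptually, any A-HIDS in these families reduces to the same loop: learn a baseline of normal host observations and score how strongly new observations deviate from it. The sketch below illustrates only that general idea and is not the authors' implementation; the monitored feature, window size, and z-score threshold are hypothetical choices.

```python
# Minimal sketch of the generic anomaly-based detection loop behind an A-HIDS.
# Illustrative only: feature choice, window size, and threshold are assumptions.
from collections import deque
import math

class BaselineAnomalyDetector:
    def __init__(self, window=500, z_threshold=3.0):
        self.window = deque(maxlen=window)   # recent observations considered "normal"
        self.z_threshold = z_threshold       # tolerated deviation from the baseline

    def score(self, value):
        """Z-score of a new observation against the current baseline."""
        if len(self.window) < 10:            # not enough history to judge yet
            return 0.0
        mean = sum(self.window) / len(self.window)
        var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
        std = math.sqrt(var) or 1e-9
        return abs(value - mean) / std

    def observe(self, value):
        """Classify one observation (0 = normal, 1 = anomalous) and update the baseline."""
        anomalous = int(self.score(value) > self.z_threshold)
        if not anomalous:                    # only normal samples extend the baseline
            self.window.append(value)
        return anomalous
```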
This work focuses on developing and deploying an A-HIDS, based on the iDCA technique, in the Edge layer of a CPPS. This IDS analyses OPC UA network data to detect cyber attacks that exploit this communication protocol when deployed against the system. Considering the MBE paradigm, specific FBs compliant with the IEC 61499 standard are created by using the DINASORE open-source technology (DIGI2-FEUP 2021; Pereira et al. 2020). These FBs can be reused for embedded intrusion detection in Edge devices in different application scenarios. iDCA Relevant AIS algorithms abstract biological immune processes related to Negative Selection, Immune Network Models, and Clonal Selection theories (Dasgupta et al. 2011). More recently, the Danger Theory (Aickelin et al. 2002) became the immune model that inspired techniques such as the Dendritic Cell Algorithm (DCA) (Greensmith 2007) and related variants. The DCA is a binary classification algorithm based on the natural functionality of dendritic cells (dc) present in the human immune system (HIS). Considering the network intrusion detection problem, Pinto et al. (2020) applied the DCA to detect network attacks in an OPC UA dataset. More recently, the authors proposed the iDCA, a modified version of the original DCA, suitable for real-time network intrusion detection, coping with the online nature of the anomaly detection problem (Pinto et al. 2021). Figure 2 illustrates the activity diagram of an iDCA-based network IDS. Considering the theoretical foundations of the DCA and iDCA, these algorithms explore the functionalities of dc, which are a type of antigen-presenting cell in the HIS. These cells have the job of capturing, processing, and presenting antigens (Ag) to T-cells (to suppress or activate immune responses). An Ag is a foreign molecule representing a pathogen invading the body, such as a bacterium or virus. Its presence typically triggers an immune response, which starts with the scouting task of dc. The state of each dc in the system greatly depends on the signals sensed from the surrounding environment. These signals can be pathogen-associated molecular patterns (PAMP), safe signals (Ss), and danger signals (Ds). Thus, if a dc receives a larger quantity of Ss, this indicates that the Ag collected by that dc was found in a normal context. On the other hand, if Ds and PAMP are produced in larger quantities, this indicates abnormality and the need for immune activation. The reader can find more detail about the DCA functionality in previous work (Pinto et al. 2020, 2021). Back in an engineering scenario, the iDCA is characterized by four main stages, namely Pre-processing, Detection, Context Assessment and Classification. The Pre-processing stage focuses on mapping selected attributes/features to Ss, Ds and/or PAMP signals, while generating Ag, or patterns to be classified, according to the available input data. The input data are available in a streaming fashion instead of a batch format, and are collected by the Network Traffic Capture component. In this case, the data represent OPC UA network flows captured in real time within a given Edge device and made available in a streaming fashion for anomaly detection. The Detection stage focuses on the fusion of the input signals (Ss, Ds and/or PAMP) with the Ag collected. Detection is possible using a population of artificial dc, which sample different Ag multiple times and combine those with the input signals. This process enables the dc profile to be updated by calculating the cumulative output signals; a minimal sketch of this signal-fusion step is given below. 
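The following sketch illustrates the signal-fusion step just described, under simplifying assumptions: the weight values, the lifespan handling, and the data structures are illustrative and do not reproduce the authors' exact iDCA implementation or parameter values.

```python
# Illustrative sketch of DCA/iDCA signal fusion (not the authors' exact code).
import random

# Hypothetical weights for combining (PAMP, Ds, Ss) into the cumulative outputs:
# csm (co-stimulation, drives migration) and k (context, drives maturity).
W_CSM = (2.0, 1.0, 1.0)    # all signals add co-stimulation
W_K   = (2.0, 1.0, -2.0)   # safe signals push the context towards "normal"

class DendriticCell:
    def __init__(self, lifespan):
        self.lifespan = lifespan   # migration threshold (cf. thresh_down / thresh_up)
        self.csm = 0.0             # cumulative co-stimulatory signal
        self.k = 0.0               # cumulative context value
        self.antigens = []         # Ag (e.g., OPC UA flow identifiers) sampled so far

    def expose(self, antigen, pamp, ds, ss):
        """Sample one Ag together with the current input signals."""
        self.antigens.append(antigen)
        self.csm += W_CSM[0] * pamp + W_CSM[1] * ds + W_CSM[2] * ss
        self.k   += W_K[0] * pamp + W_K[1] * ds + W_K[2] * ss

    def migrated(self):
        """The cell migrates once enough co-stimulation has accumulated."""
        return self.csm >= self.lifespan

    def context(self):
        """Mature (anomalous context) if k > 0, semi-mature (normal) otherwise."""
        return "mature" if self.k > 0 else "semi-mature"

# A small population with randomized lifespans samples every incoming flow.
population = [DendriticCell(lifespan=random.uniform(5.0, 15.0)) for _ in range(10)]
```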
These cumulative output signals are represented by the co-stimulatory signal (csm) and the context output value (k), which are calculated to assess the dc migration state. Building on the Detection stage, the Context Assessment stage focuses on the assessment of the Ag sampling context, by differentiating migrated dc into mature or semi-mature. If the dc migrates in a mature state, then the response will react to the Ag processed by that dc. Otherwise, if the dc migrates in a semi-mature state, the respective Ag are tolerated. Finally, the Classification stage uses the dc context resulting from the previous step to derive the nature of the response, by measuring the number of Ag sampled by dc that are fully matured. The mature context antigen value (mcav) is used to assess the degree of anomaly of a given Ag, with the anomaly threshold (At) applied for binary classification. Function block-based implementation DINASORE adopts the 4DIAC-IDE (Foundation 2021b) as the IDE for FB Pipeline creation and distributed deployment in the Edge devices. DINASORE executes in each Edge device, based on the FORTE RTE (Foundation 2021a). The main advantage is the support for FB development in the Python programming language. Python enables the latest advances in ML to be integrated at the Edge level. Moreover, M2M communication at the Edge level is enabled by default thanks to the OPC UA communication protocol for 3rd-party integration. In this section, detail is provided on the development and deployment process of the iDCA technique in an actual Edge layer of an IIoT application, following the IEC 61499 standard. By implementing the iDCA in an FB format compliant with the IEC 61499 standard, the authors explore a new approach to enable data-driven anomaly detection by design, according to the MBE approach, for system design and implementation. The main outcome of this work is the A-HIDS FB Pipeline, which is represented in Fig. 3. Based on this architecture, there are two main FBs, the OPCUA_Flow_Sniffer and the iDCA. The first FB sniffs the network for OPC UA packets. On the other hand, the second FB performs the data analysis and intrusion detection, based on the iDCA technique. Building FBs using DINASORE requires Python version 3.6 or higher, along with several Python packages, such as psutil, cryptography, opcua, numpy, scikit-learn and pandas. Also, to operate specifically with the OPCUA_Flow_Sniffer and the iDCA FBs, there are extra Python dependencies, besides the ones already mentioned, namely pyshark (KimiNewt 2021) and river (Montiel et al. 2021). pyshark is a Python wrapper for the Tshark library, required for the network packet sniffing in the OPCUA_Flow_Sniffer FB. On the other hand, river is a Python library for ML applications on streaming data, or online ML, and is used in the iDCA FB. Figure 4 represents the execution control chart (ECC) of the proposed FB Pipeline. OPCUA flow sniffer The OPCUA_Flow_Sniffer FB can operate in three different modes, which can be defined in the FB input op_mode: • Live: The live packet capture mode corresponds to the Edge device's real-time OPC UA packet collection. All the OPC UA traffic is sniffed using the pyshark package from a live network interface. • File: The file packet capture mode corresponds to the local OPC UA packet collection from a .pcap file, containing historical OPC UA traffic from an Edge device. So, all the OPC UA traffic is read from the file using, again, pyshark. 
• Dataset: The dataset mode is a backdoor mode. The user can test the implementation using an existing dataset in a .csv file format, containing OPC UA traffic data and network features defined. In this specific mode, the FB was optimized to receive as input the M2M using OPC UA dataset (Pinto 2020). This dataset was already created by capturing OPC UA traffic in a distributed control system, as shown by Pinto et al. (2020). The OPCUA_Flow_Sniffer FB has 2 different data outputs, namely the data_O and the data_model_O. Both these outputs correspond to data inputs in the iDCA FB, which are associated with the input event RUN, when it triggers the iDCA FB execution. The data_O corresponds to the OPC UA bidirectional flow to be analyzed by the iDCA technique. The OPCUA_Flow_Sniffer FB sends OPC UA flows as soon as they are available in a sequential order, to be analyzed by the iDCA. On the other hand, the data_model_O corresponds to the network features extracted from this respective setup, which will be used in the iDCA technique for the processing of the (Ds and Ss input signals). iDCA Regarding the iDCA FB, besides the dynamic data input from the OPCUA_Flow_Sniffer FB, such as the data and the data_model, there are other important data inputs, related to the iDCA initial parametrization. In this case they are: • iterations: Number of iterations of the iDCA. • At : Anomaly threshold used in the Classification phase of the iDCA. • thresh_down: Lower limit of the dc lifespan (expected time to live of the dc). • thresh_up: Higher limit of the dc lifespan. The iDCA can analyze incrementally over the data since the input data (network flows) to be analyzed become available gradually over time (stream data). Intrusion detection is possible using the iDCA technique. Finally, as data output, the iDCA FB makes available the classification result of each OPC UA flow (received in the data input) in the output classification. In this case, a classification of 0 means that the OPC UA flow analyzed is a normal network instance. On the other hand, if it is 1, the OPC UA flow is an abnormal network instance, which is evidence of a possible attack on the Edge device deployed in the network. Finally, suppose classification contains any other value other than 0 or 1. In that case, this means that the iDCA couldn't classify all OPC UA flows due to insufficient information to be used by the iDCA, such as no migrated dc or no Ag sampled by dc. Test and validation Vargas Martinez and Vogel-Heuser (2021) defend that, considering limited computational resources, the integration of an A-HIDS in embedded industrial devices can be pretty challenging, especially under strict operational requirements. So, the authors abstracted a set of requirements while considering both capabilities from ICT A-HIDS and considerations regarding industrial devices. Industrial devices should follow industry environments and security standards to assess the suitability of an IDS approach applied in such context. According to the authors, essential requirements that a general A-HIDS should possess are: • Configurability: It should be possible to configure the A-HIDS according to the requirements of the system to be protected. • Configuration and knowledge security: The A-HIDS configuration and collected information for analysis should be protected. • Resiliency: The A-HIDS should be available all the time, and so itself should be protected against attacks. 
• Low-performance overhead: While executing in the host being protected, the A-HIDS should not negatively impact the system's performance. • Low detection time: Quick detection and response to intrusions to avoid extra damage in the system being attacked. • Interoperability with other IT: Interaction with other security tools in the system for an overall security evaluation. • Centralized A-HIDS configuration and management: This approach can be used to manage distributed systems, such as CPPS. • Log collection: Enable log data for further processing and analysis. • System audit: Collect host-related performance metrics. • Host network analysis: Analyze network traffic in the device (input and output network packets). • Event-triggered or monitoring analysis: Intrusion detection by monitoring host-related data. • Passive intrusion detection actions: Logging and trigger alarms when facing important IDS events. Considering the A-HIDS FB Pipeline, the proposed solution enables an immune-based A-HIDS to detect realtime network attacks in CPPS. Analysis of network data consists of incoming/outgoing OPC UA network packets in the Edge device. So, it is intended to validate the suitability of the proposed approach in resource-constrained Edge devices. Despite analyzing network packages, the solution can be classified as an A-HIDS since it is executed within Edge devices while analyzing only local network packets. This means that integrating the A-HIDS into the IIoT application can be challenging. Also, while running, certain Edge device operation constraints may not hold. In this scenario, the assessment of the proposed solution may consider multiple metrics in different system properties, such as (1) Detection rate , (2) Resource consumption , and (3) Performance overhead. Regarding the Detection rate, since the A-HIDS analyzes network-related data, there is the advantage of carrying offline assessment in datasets. Typical system properties assessed are the detection accuracy, detection time, and network packets processing capabilities. In this case, the detection effectiveness of the iDCA as an anomaly detection algorithm for network data has been studied in previous work (Pinto et al. 2021). Hence, using the Detection rate system property for the evaluation of the A-HIDS is out of the scope of this contribution. On the other hand, both Resource consumption and Performance overhead properties are suitable for assessment purposes. Resource consumption refers to the evaluation of resource utilization of the A-HIDS itself. In contrast, the Performance overhead refers to assessing the overhead added by the A-HIDS to the Edge device where it is deployed. This means that, for testing purposes, the A-HIDS Pipeline is deployed in a physical test case scenario to be evaluated. This test case scenario includes Edge devices in a M2M scenario (more detail in Experimental Setup). However, to further compare achieved results with related work in the literature, this approach presents several challenges. First, the physical test case isn't a benchmark, so it is impossible to repeatably assess and compare the proposed solution with other similar solutions. Even if one replicates exactly a physical test case similar to an existing one used previously in the literature, different Edge devices might not have the same baseline conditions. Consequently, the evaluation results of the same solution may be completely different when assessed in different host devices. 
This means that direct comparison of results is not possible using this testing methodology. So, despite being considered for assessing the feasibility of deploying the proposed solution in an industrial Edge layer (more detail is provided in "Evaluation methodology and results" section), a more appropriate approach is used for result comparison with similar previous work. In this case, we consider the impact of a given IEC 61499 FB Pipeline on the design complexity and maintainability of the system, i.e., the effort required to develop, deploy, and refine secure CPPS/ICS applications. Given the advantages that the MBE and IEC 61499 paradigms claim for building and maintaining CPPS/ICS systems, design complexity and maintainability are critical measures to quantify the actual impact on the overall industrial system (Zhabelova and Vyatkin 2015). More detail can be found in "FB pipeline measures" section. Next, the CPPS testbed used as the experimental setup to assess the proposed approach is described in detail. Then, more detail is provided regarding the actual evaluation methodology, such as performance overhead and FB Pipeline metrics (complexity and maintainability). Finally, the results of the tests performed are described and compared with similar state-of-the-art work, along with a critical discussion of the results achieved. Experimental setup We deployed the solution in a lab CPPS testbed to assess the feasibility of this approach. The testbed consists of a typical M2M scenario between multiple Edge devices in a CPPS, as represented in Fig. 5. Each Edge device consists of a Raspberry Pi 4 Model B, with 2GB RAM, connected to a cabled Local Area Network (LAN) created by a typical network switch. Each Edge device executes a DINASORE instance with all its respective dependencies. Finally, there is a malicious node (pi@192.168.0.10), which gets access to the network to deploy several attacks, such as DoS, Message Spoofing, and Man-in-the-Middle (MITM). Each Edge device implements an OPC UA server (the default of DINASORE) to publish updates of different sensor readings in OPC UA variables. In this case, each device is simulating dummy temperature sensor readings. Also, one device (pi@192.168.0.20) implements an OPC UA client and subscribes to all OPC UA variables from all other devices in the same network. For testing purposes, the A-HIDS is deployed and executed only in this device (pi@192.168.0.20), since it has the highest processing overhead and resource consumption. Most of the OPC UA traffic generated in the network can be found in this node. Figure 6 represents the FB Pipeline used to design and deploy the described functionalities among the Edge devices in the network. Besides the already mentioned OPCUA_Flow_Sniffer and iDCA FBs, responsible for the A-HIDS functionalities, there are other FBs to be considered in this experimental setup: • SENSOR_SIMULATOR-This FB introduces the functionality of simulating sensor readings. There is one SENSOR_SIMULATOR for each Edge device, which contains 3 main data inputs: (1) UA_NAME-Name of the OPC UA variable that will contain the sensor reading updates; (2) RANGE-Range of possible values to be simulated, according to a Gaussian distribution; (3) RATE-Frequency of the sensor reading update/generation. There is also a data output (VALUE), which makes the sensor readings available. • CONNECT_OPCUA-This FB is responsible for creating an OPC UA client. 
This client connects to an existing OPC UA server, so the data inputs are the IP address (ADDRESS) and port (PORT) used to create and connect to the OPC UA server device. As data output, the CONNECT_OPCUA makes available the connection object (CLIENT), which will be passed as input to the OPCUA_SUBS. • OPCUA_SUBS-This FB enables the subscription of one OPC UA variable. As data inputs, Node_ID and the CLIENT are used to specify the path of the OPC UA variable to be subscribed, considering the client object. DATA_OUT makes the subscribed sensor readings available. There are 7 Edge devices in this experimental testbed, all of them simulating temperature sensor readings. One of them subscribes to all the sensor readings. The FBs mapped to the subscribing device (pi@192.168.0.20) are represented at the top of the figure in yellow, as shown in Fig. 6. In contrast, the FBs that execute in the other devices are represented at the bottom of the figure with a different color for each device mapping. So, a SENSOR_SIMULATOR FB is deployed in each Edge device, with the FB instances designated TEMP20 to TEMP80, according to the respective device (pi@192.168.0.20 to pi@192.168.0.80). In the subscribing device, six pairs of CONNECT_OPCUA and OPCUA_SUBS FBs are deployed, each pair handling the connection and subscription to one specific publishing device. Finally, as mentioned before, the respective FBs for the A-HIDS functionalities (OPCUA_Flow_Sniffer and iDCA) are also deployed in the subscribing device. Evaluation methodology and results The evaluation of the proposed solution is twofold. On the one hand, it considers the performance overhead of the iDCA technique when deployed in the industrial Edge layer. On the other hand, critical FB Pipeline measures, such as design complexity and maintainability, are assessed. Performance overhead The evaluation of the proposed solution considers multiple system properties, according to the HIDS requirements mentioned previously. In this case, the assessment considers host- and network-related metrics. On the one hand, host-related properties are used to evaluate the performance overhead of executing the A-HIDS in the Edge device by assessing CPU and RAM consumption. On the other hand, network-based properties assess the solution's suitability to process network data by evaluating network overhead and missing or non-classified network flows. As mentioned before, the iDCA classification performance is out of the scope of this study. The intention in this work is to test the feasibility of embedding intrusion detection capabilities in CPPS Edge devices by design, not the actual classification performance of the immune technique. One can find more information about the classification performance of the DCA and iDCA as intrusion detection techniques in previous work (Pinto et al. 2021). This evaluation methodology considers the execution of the A-HIDS (see "Proposed approach" section) in one Edge device (pi@192.168.0.20), tackling the experimental setup described in the "Experimental setup" section. Based on this methodology, the goal is to evaluate the performance overhead in terms of CPU, memory, and network usage, as well as the rate of missing classifications of processed network flows. The overhead performance metrics are collected using the nmon tool (Griffiths 2020), a well-known performance monitoring tool for Linux-based systems. 
On the other hand, the A-HIDS is self-aware of the number of non-classifications while executing in the Edge device under normal operation. So, the test scenarios consider three different monitoring moments: (I) At system ramp-up and while no operation is being performed, which means little workload (processes, network connections, etc.); (II) System in operation, by executing the underlying tasks, such as sensor simulation, publishing and subscribing sensor data, as mentioned in "Experimental setup" section, but without executing the A-HIDS; (III) Executing the A-HIDS while the Edge device is under regular operation, including attack injection. The monitoring process in all test scenarios is performed during 60min of system operation. Table 2 summarizes the monitoring results achieved, considering the evaluation methodology described previously. By inspection of the results, one can extrapolate some conclusions on the suitability of the M2M scenario designed for this specific experimental setup-test scenario (II), and the application of the A-HIDS in this distributed CPPS-test scenario (III). First, in test scenario (II), it is evident that deploying the M2M scenario would increase CPU consumption and network traffic while reducing the free system's memory. However, the impact is residual, i.e., only 3.2% of CPU is allocated to the underlying tasks, leaving almost 1GB of free memory. Considering the 2GB of memory of the Raspberry Pi 4 Model B, this represents nearly 77% of free memory. Also, the overall network traffic (read and write) in all the interfaces (local, wireless and wired) is under 14 KB/s. On the other side, considering test scenario (III), the impact is more evident when the A-HIDS is used to process the network traffic in the M2M scenario considered. First, the CPU consumption increases by 10× to 31% CPU allocation. Despite this significant CPU consumption increase, there is no apparent negative impact on this system, considering almost 70% CPU free to be allocated to other processes. Also, a small impact is verified in the RAM since the free memory is reduced to 51%. Secondly, the overall network traffic increases significantly, to almost 300 KB/s, mainly due to the increase in traffic in the local interface. This may be explained by the actual data exchange between FBs, since the network traffic on other external interfaces remains stable. Finally, the rate of missing packet classification is relatively low. Less than 0.1% of the overall packets weren't classified. FB pipeline measures In this work, we perform validation and analysis of both the FB Pipeline of the A-HIDS and the CPPS testbed. For this, we used adapted measures from software, presented earlier by Zhabelova and Vyatkin (2015) and already used by Tanveer et al. (2021) when assessing an industrial security solution proposal. These measures are: • High level design complexity metrics: • Structural Complexity (S(i)): Reflects coupling of the FB i to the rest of the system. The metric con- siders the number of outgoing connections or output events ( f out ) in a FB i, and is given by Eq. (1). • Data Complexity (DC(i)): Shows efficiency of data utilization and information flow. The metric considers the ratio between the number of data inputs NI and outputs NO in a FB i with the number of output events f out , and is given by Eq. (2). • System Complexity (C(i)): As data and structural complexity increases, the overall system complexity grows and modularity decreases. 
This metric combines the structural and data complexity in a single measure, and is given by Eq. (3). Note that S*, DC*, and C* represent the aggregated values for the entire FB Pipeline, defined respectively as the sums of all S(i), DC(i), and C(i) for each FB instance in the Pipeline. • Maintainability index (MI): Represents the degree to which a FB Pipeline is open to change after being deployed in the Edge layer. One of the most popular metrics to measure maintainability takes into account the lines of code (LOC), Halstead's measure of program volume (V), and McCabe's complexity metric M m (explained next), and is given by Eq. (4). All these metrics are computed as the sum of the analyses of the FB's internal ECC and of the FB algorithms' source code. • Halstead's Metrics (M H): An indicator of program volume and entropy that takes into account the number of operators and operands of both the FB's ECC and its respective source code. Halstead's Metrics are: program length, program vocabulary, estimated length, purity ratio, program volume, difficulty and program effort. The program volume V is needed for the MI calculation, and is given by Eq. (5), where N is the program length, i.e., the total number of operators and operands, while n is the program vocabulary, i.e., the total number of distinct operators and operands. • McCabe's Cyclomatic Complexity (M m): Derived from the program's control flow graph, it is a different way of representing control and data flow complexity using graph-based metrics. It is given by Eq. (6). M m (Alg i) represents the cyclomatic complexity of the FB i algorithm source code, counting the number of if/else statements and loops, such as for and while. On the other hand, M m (ECC) represents the cyclomatic complexity of the FB i internal ECC, counting both the number of edges (transitions in the ECC) and nodes (states in the ECC). They are given by Eq. (7). 
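The display equations (1)-(7) referenced above are not reproduced in the text. For reference, the classical forms of these software measures, which the cited works adapt to IEC 61499 FB Pipelines, are sketched below; the exact adapted definitions follow Zhabelova and Vyatkin (2015) and may differ in detail.

```latex
% Classical forms only (a sketch); the IEC 61499 adaptations may differ in detail.
\begin{align*}
  S(i)     &= f_{out}(i)^{2}                       && \text{structural complexity (fan-out based)}\\
  DC(i)    &= \frac{NI(i) + NO(i)}{f_{out}(i) + 1} && \text{data complexity}\\
  C(i)     &= S(i) + DC(i)                         && \text{system complexity of FB } i\\
  MI       &= 171 - 5.2\ln V - 0.23\,M_{m} - 16.2\ln(\mathit{LOC}) && \text{maintainability index}\\
  V        &= N \log_{2} n                         && \text{Halstead program volume}\\
  M_{m}    &= M_{m}(\mathrm{ECC}) + \textstyle\sum_{i} M_{m}(\mathrm{Alg}_{i}) && \text{combined cyclomatic complexity}\\
  M_{m}(G) &= E(G) - N(G) + 2                      && \text{cyclomatic complexity of a flow graph } G
\end{align*}
```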
This evaluation methodology considers the deployment of the A-HIDS FB Pipeline (see the "Proposed approach" section) in the CPPS testbed described in the "Experimental setup" section. Based on this methodology, the main goal is to consider the design complexity and maintainability of the standalone A-HIDS FB Pipeline, and of the FB Pipeline of the experimental setup (CPPS testbed) with and without the A-HIDS FBs, i.e., executing or not the iDCA. Also, it is intended to compare the design complexity and maintainability metrics achieved in this study with similar work in the literature. In this case, the only work that targets IEC 61499-based security solutions and uses these metrics for assessment is presented by Tanveer et al. (2021). The authors propose several security solutions based on TLS and AEAD mechanisms while deploying those in two different case studies: an industrial mixer control system (IMCS) case study and a large baggage handling system (BHS) application. The test scenarios considered are: (I) Security Approaches Design: Assess the structural, data and system complexity, and maintainability of three different security approaches, namely: (a) the A-HIDS (proposed approach in this study); (b) the average results of the Secure Links approach proposed by Tanveer et al. (2021), which is a set of security mechanisms; and (c) the best-result Secure Links mechanism proposed in the same work. (II) CPPS Testbed Impact: Assess the impact (added complexity and reduced maintainability) that the application of the A-HIDS has when deployed in the CPPS Testbed. This is done by comparing the structural, data and system complexity, as well as the maintainability, of the CPPS Testbed FB Pipeline with and without the A-HIDS FBs. (III) IMCS Impact and (IV) BHS Impact: Assess the corresponding impact of deploying the Secure Links mechanisms of Tanveer et al. (2021) in the IMCS case study and in the BHS application, respectively. Based on these test scenarios, Table 3 summarizes and compares the results achieved in this study and in the work of Tanveer et al. (2021), considering the evaluation methodology described previously. Considering the results of test scenario (I), the A-HIDS presents very good results regarding data complexity (0.65). When compared with the average results of the Secure Links approach, it is clear that the A-HIDS is a better solution in terms of complexity, with very competitive results when compared with the best result of the Secure Links approach. Also, when considering the maintainability metric, the A-HIDS presents very good results (100.12), surpassing both the average and best results of Secure Links. Overall, despite not being the same type of security solution, the A-HIDS is a more maintainable FB application with low complexity compared with the state-of-the-art work. When the A-HIDS is deployed in the CPPS Testbed (II), the impact that the IDS has on the testbed is undeniable. For system complexity and maintainability, one finds a little more than a 20% increase in complexity (22.16%) and a decrease in maintainability (23.57%). However, using the A-HIDS has a negligible effect on data complexity. When compared to test scenarios (III) and (IV), the 22.16% increase in system complexity caused by deploying the A-HIDS in the CPPS Testbed is smaller. This means that the negative impact in terms of complexity when deploying the proposed solution is better than that of deploying the Secure Links in both IMCS and BHS. However, the impact on system maintainability is worse (23.57%, whereas the Secure Links deployments stay under 5%). In summary, when considering the standalone security solutions, the A-HIDS is overall a more maintainable solution with low complexity when compared with Secure Links. Regarding the impact of solution deployment in testbeds, the A-HIDS presents good complexity results but a poorer performance when considering system maintainability. This is mainly due to the increased program volume and control and data flow complexity of the A-HIDS. Discussion Considering the requirements described at the beginning of the "Test and validation" section, we took most of them into consideration in the design, implementation, and evaluation of the proposed solution. Regarding the Configurability of the solution, the parametrization of the proposed A-HIDS is possible by updating the data-input fields of the iDCA FB. However, the data model, i.e., the network features considered, is derived internally in the OPCUA_Flow_Sniffer. This means that the user can't specify or add new features to be used without updating the FB source code. On the other hand, the A-HIDS Configuration and knowledge security, i.e., the protection of the data model and of the collected information, is not addressed in this work. Regarding Low-performance overhead and Low detection time, according to the results in Table 2, there is an expected increase in CPU consumption, RAM usage, and network traffic during the execution of the A-HIDS. However, no evident negative impact is noticeable since there is no evidence of dropped network packets, performance delays, or relevant missing packet classification. The A-HIDS doesn't jeopardize the performance of the Edge device's underlying tasks. Also, the A-HIDS implements the iDCA as a detection technique, which processes input data incrementally and provides online detection in typical stream data scenarios. 
This way, quick detection and response to intrusions are guaranteed. Centralized A-HIDS configuration and management can be done using the 4DIAC-IDE, which supports the IEC 61499 standard. By designing and deploying the A-HIDS using DINASORE, the distributed system can be monitored and managed easily. According to the results in Table 3, the A-HIDS FB Pipeline is an easily maintainable solution with overall low complexity for configuration and management purposes. Regarding System audit, although the A-HIDS analyzes network traffic (input and output OPC UA network traffic on the Edge device), host-related data such as CPU, RAM, and network traffic overhead are also collected to assess the performance of the approach on the host device. However, with respect to the monitoring analysis requirement, the host-related data collected for performance purposes are not used as input to the A-HIDS for detection; this is out of the scope of this work. Regarding Passive intrusion detection actions, the A-HIDS supports logging and alerting: whenever an intrusion is detected, the A-HIDS logs the respective network flow as a possible intrusion, and the IP address of the source machine is added to a timestamped blocklist. Regarding Log collection, besides being processed for real-time detection, the collected network data are written to a log file for future offline analysis. A .csv file is used to record the network data as a typical tabular ML dataset, with the features defined by the A-HIDS data model. This can be very handy, since the resulting dataset can easily be reused to validate other IDSs that rely on different ML detection techniques. Finally, Interoperability with other security tools in the system, for an overall security evaluation, is out of the scope of this work.
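A loose illustration of the logging and blocklisting behaviour described above; the field names, file paths, and record format are hypothetical and not taken from the actual DINASORE FBs:

import csv
import datetime
import os

# Illustrative feature set; the real A-HIDS data model defines its own features.
FEATURES = ["timestamp", "src_ip", "dst_ip", "opcua_service", "bytes", "label"]

def log_flow(record, dataset_path="opcua_flows.csv"):
    # Append one classified network flow to a tabular, ML-style .csv dataset.
    new_file = not os.path.exists(dataset_path)
    with open(dataset_path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FEATURES)
        if new_file:
            writer.writeheader()
        writer.writerow(record)

def blocklist_source(src_ip, blocklist_path="blocklist.txt"):
    # Record the offending source IP with a timestamp, as a passive (non-blocking) action.
    with open(blocklist_path, "a") as f:
        f.write(f"{datetime.datetime.now().isoformat()} {src_ip}\n")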
Conclusion IIoT is a critical enabling technology that is transforming how industrial automation systems are implemented and controlled. Open connectivity and new computing environments, such as Edge/Fog Computing, are reshaping how CPPS are designed, deployed, and managed. This brings a new level of complexity to these systems, for which traditional development approaches, in which all system components are developed independently and then integrated into the final system, are rapidly becoming unfit (Thramboulidis 2005). As an alternative, the MBE approach is well suited for developing CPPS, since it uses models to design software and perform component testing, accommodating the complexity and dynamics of CPPS. With the emergence of the IIoT and the increasing interconnection of CPPS components with the digital world, cyber-attacks increasingly affect real-world processes and devices. This means that CPPS security cannot be treated as a classic ICT security problem. This work addresses the security of CPPS Edge-layer devices by introducing an A-HIDS that analyses OPC UA network data in typical M2M communication. It also addresses the challenge of providing security by design, i.e., enabling intrusion detection capabilities during the CPPS design phase. This is achieved by exploring the IEC 61499 standard in an MBE approach, developing FBs specifically for introducing the A-HIDS functionalities in Edge devices. The FBs are created and deployed using the DINASORE technology. By analyzing the work in the literature, it is possible to conclude that the proposed approach has several advantages. First, it presents a feasible IDS architecture for embedded Edge devices that considers both IDS features from the ICT domain and the specific properties of the industrial domain. Second, the A-HIDS provides a protection scope that may not be covered by classic security solutions for typical information systems. Third, it enables complete anomaly detection by performing complex analysis with bio-inspired techniques, coping with the system's dynamic behavior over time. Fourth, it can act on real-time data by collecting, integrating, and analyzing data and detecting attacks while the data are being generated. Data stream processing is a typical scenario in IIoT applications; thus, the proposed solution can continuously process data streams from Edge devices to obtain insights without negatively impacting overall system performance. This is a significant advantage, since it tackles essential requirements of IIoT applications, according to Vikash et al. (2020). To answer RQ T.1 (How effective is IEC 61499 in supporting the implementation of a bio-inspired intrusion detection solution for CPPS security by design, in terms of low complexity and high maintainability?), the results show that the IEC 61499 standard is a good basis for implementing security-by-design CPPS, given the low-complexity and high-maintainability results of the different security solutions. When compared with state-of-the-art work (see Table 3), the A-HIDS Pipeline achieves better maintainability and overall lower complexity. To answer RQ T.2 (What impact does the iDCA algorithm have on the computational resources of Edge devices while enabling intrusion detection at host level?), there is no evidence of a negative effect of the iDCA when executing at host level on constrained Edge devices. Results show that the iDCA can detect intrusions in an online fashion, with an insignificant rate of missed packet classifications and a reduced overhead in CPU consumption and RAM usage (see Table 2). As for downsides, testing the approach on resource-constrained Edge devices instead of proper industrial PCs, such as PLCs, may neglect important performance characteristics related to resource consumption and process overhead in industrial tasks. Also, the proposed solution does not consider detection performance in a typical distributed ICS, since it is only tested on a single Edge device. Finally, the proposed A-HIDS does not cope with all necessary A-HIDS requirements, such as full configurability, knowledge security, resiliency to attack formats that are not OPC UA-driven, and interoperability with other ICT security tools, and it does not consider host-related data for detection purposes. Future work will consider the distributed nature of the CPPS to enhance detection performance while validating the approach in a real industrial scenario, where Edge devices may have different constraints related to the production process. This will include a truly distributed detection scenario. In future work, host-related data should also be considered along with the network data for enhanced detection, given the close relationship between the physical and cyber worlds in a CPPS.
Finally, in future work the proposed solution may also include a more active reaction to a detected attack/intrusion, which may consist of a dynamic reconfiguration and deployment of the CPPS, following the IEC 61499 standard. Abbreviations AES: Advanced encryption standard; AEAD: Authenticated encryption with associated data; AIS: Artificial immune system; A-HIDS: Anomaly-based host intrusion detection system; BHS: Baggage handling system; CL4FB: Confidentiality layer for function blocks; CPPS: Cyber-physical production system; CPU: Central processing unit;
11,991
2022-05-01T00:00:00.000
[ "Computer Science", "Engineering" ]
Characterization and Bone Response of Carbonate-Containing Apatite-Coated Titanium Implants Using an Aqueous Spray Coating We performed thin carbonate-containing apatite (CA) coating on titanium (Ti) by an aqueous spray coating (ASC) method that consisted of a Ca-CO3-PO4 complex. Two different CA coatings were produced using two different spray amounts and were heat-treated after spraying. We evaluated the three-dimensional structure, adhesiveness to Ti, and durability of the CA film. In addition, we performed immersion experiments in simulated body fluid (SBF), and bone responses were evaluated after implantation into a femoral bone defect in rats. The bonding ability of the ASC-coated implant to bone was examined by push-in tests. Unique network structures with small particles were identified on the CA coatings. Although heat treatment produced no significant difference in surface morphology, scratch tests revealed that heat treatment improved the adhesion of the CA coatings to Ti. Crystal formation progressed on CA-coated specimens, and the sample placement direction influenced crystal formation and growth during SBF immersion. Animal implantation experiments revealed a significantly greater bone-to-implant contact ratio and bone mass in the cortical bone and bone marrow, respectively, four weeks after implantation. Push-in tests suggested that the bonding of the CA coating to Ti is clinically acceptable. Therefore, we conclude that CA coating of Ti by the ASC method would be possible for clinical applications, including dentistry. Morphology and Surface Roughness SEM (scanning electron microscope) images of the surfaces of the Ti substrate and the four types of CA coating films prepared by the ASC method revealed CA coatings covering the entire Ti surface (Figure 1). The network structure was approximately 10-15 µm in size and was observed across all four coating surfaces. Spherical particles with diameters of approximately 0.8-1.4 µm were present on the CA coating surface, with more spherical particles identified in ASC-25 and ASC-25/H than in ASC-5 and ASC-5/H. Borders of the network structures were thicker in ASC-5 and ASC-5/H than in ASC-25 and ASC-25/H. The heat treatment produced no distinct differences in surface morphology. Laser microscopy of the CA coatings revealed the three-dimensional surfaces of the heat-treated specimens (ASC-5/H and ASC-25/H), with ASC-25/H showing a clearer network structure (Figure 2). In addition, there were significant differences in both surface roughness parameters, Sa (three-dimensional arithmetic height) and Sdr (surface deployment area rate), among the three specimens (p < 0.05; Table 1). ASC-25/H showed the greatest roughness (p < 0.05), and the Sa and Sdr values for ASC-25/H were approximately 1.5 and 4 times greater than those for ASC-5/H, respectively.
Adhesiveness of CA Coating Heat treatment produced significantly higher Lc (critical load) values (p < 0.05) in the scratch test (Table 2). No significant differences in Lc values existed between ASC-5 and ASC-25, nor between ASC-5/H and ASC-25/H (p > 0.05). Moreover, there was a clearer breakdown of the coating film at the position of the first crack for ASC-5 and ASC-25 in the panoramic images (Figure 3). Values in brackets are SD. Different superscript letters indicate a significant difference (p < 0.05). Durability After immersion in PBS (phosphate-buffered saline), there were no distinct differences in surface morphology for ASC-5/H and ASC-25/H, as detected by SEM (Figure 4).
However, there was slight dissolution of both ASC-5/H and ASC-25/H after immersion in citrate-buffered solution (Figure 4). Specifically, the dissolution of the protruding portions was distinct, and the borders of the network structures became unclear. Greater dissolution was observed in ASC-5/H than in ASC-25/H. SBF Immersion Clear differences were recognized in crystal formation between Ti and ASC-25/H (Figures 5 and 6). For horizontal placement, there were no crystals on Ti one day after immersion in SBF, whereas on ASC-25/H the network structures were completely covered by crystals. Three days after immersion, crystals had formed on both Ti and ASC-25/H. Fourteen days after immersion, increased growth of spherical crystals was observed on ASC-25/H. Vertical placement showed the differences in crystal formation between Ti and ASC-25/H even more clearly. For Ti, no crystals were identified even three days after immersion (Figure 6). Conversely, crystal formation was observed in the network structure of ASC-25/H one day after immersion. Three days after immersion, the whole surface was nearly covered with crystals, and the network structure was barely recognizable. Fourteen days after immersion, both Ti and ASC-25/H showed crystal formation on their surfaces; however, whereas the Ti surface was not completely covered by crystals, the surface of ASC-25/H was. Therefore, a substantial difference in crystal growth behavior was detected between horizontal and vertical placements.
XRD (X-ray diffraction) patterns of the precipitated crystals 14 days after immersion revealed peaks derived from apatite structures at around 26.0°, 32.0°, 46.5°, and 49.5° (Figure 7). The FT-IR (Fourier transform infrared) spectra of the precipitated crystals 14 days after immersion revealed peaks derived from phosphate groups at approximately 550-600 cm−1 and 900-1200 cm−1, as well as peaks from carbonate groups at approximately 800-900 cm−1 and 1300-1600 cm−1 (Figure 8). Therefore, the precipitated crystals were identified as CA.
Histological and Histomorphometrical Evaluation Experimental animals remained in good health, and no failure of the implants was observed during the test period. No clinical signs of inflammation or adverse tissue reactions were seen when the animals were sacrificed. Histopathological examination of cortical bone formation around the implants two weeks after implantation revealed new bone formation for all three implant types (Figure 9). Frequently, no callus formation or other signs of wound healing were observed, and the original bone defects could no longer be identified. Haversian canals were observed in ASC-5/H and ASC-25/H, but not in Ti. The bone marrow demonstrated more distinct differences in new bone formation between Ti and the CA-coated specimens: greater amounts of new bone formation were observed for ASC-5/H and ASC-25/H than for Ti inside the bone marrow. The newly formed bone in the bone marrow was trabecular bone, and part of the new bone formed close to ASC-5/H and ASC-25/H. Four weeks after implantation, bone healing had proceeded and more mature bone formation was identified with continued bone remodeling (Figure 10). In the cortical bone, the newly formed bone was in closer contact with ASC-5/H and ASC-25/H than with Ti. Haversian canals were again observed in ASC-5/H and ASC-25/H, but not in Ti. In the bone marrow, ASC-5/H and ASC-25/H showed closer contact and more newly formed bone than Ti, and more bone formation close to ASC-5/H and ASC-25/H was recognized at four weeks than at two weeks. The newly formed bone in the bone marrow remained trabecular bone.
There were no significant differences in BIC (bone-to-implant contact) in the cortical bone among the three different implants two weeks after implantation (p > 0.05; Table 3). However, four weeks after implantation, ASC-5/H and ASC-25/H showed significantly higher BIC than Ti, with the highest BIC in ASC-5/H (p < 0.05). Indeed, BIC was significantly higher for ASC-5/H and ASC-25/H at four weeks post-implantation than at two weeks (p < 0.05). In the bone marrow, ASC-25/H showed significantly higher BIC than Ti and ASC-5/H at two weeks after implantation. At four weeks after implantation, the BIC of ASC-5/H and ASC-25/H was significantly higher than that of Ti (p < 0.05), with no significant difference between ASC-5/H and ASC-25/H (p > 0.05). BICs four weeks post-implantation were significantly higher than those at two weeks for ASC-5/H and ASC-25/H (p < 0.05). Two weeks after implantation, ASC-5/H and ASC-25/H showed significantly higher BM (bone mass) than Ti (p < 0.05), with no significant difference between ASC-5/H and ASC-25/H (p > 0.05; Table 4). At four weeks after implantation, significant differences were detected among the three different implants, with ASC-25/H showing the highest BM (p < 0.05). The BM of ASC-25/H increased significantly from two to four weeks (p < 0.05), whereas no significant differences in BM were observed between two and four weeks for either Ti or ASC-5/H. ASC-5/H and ASC-25/H showed significantly higher push-in loads than Ti (p < 0.05), with no significant difference between ASC-5/H and ASC-25/H (p > 0.05; Table 5). Values in brackets are SD. Different superscript letters indicate a significant difference (p < 0.05).
Discussion In the present study, we characterized thin CA coatings produced using the ASC method, and their biocompatibility was evaluated using both in vitro SBF immersion experiments and in vivo animal implantation experiments. The results revealed that the characteristics of the ASC coating were influenced by the conditions of the ASC method, such as the spraying amount and heat treatment. Moreover, the ASC coating can enhance bone formation. Therefore, the null hypothesis is accepted. The ASC method is a new technique for depositing thin CA coatings onto Ti. The novelty of the ASC method is that it produces thin CA films with network structures. The Ti substrates used here are smaller than those previously employed for animal implantation experiments, and the current ASC method can produce network structures similar to those of a former report even on these smaller Ti substrates [28,29]. Briefly, the sprayed solution generates concentrated droplets in the mist while traveling from the nozzle to the plate, and a CA film deposits due to the collision of the concentrated droplets, followed by water evaporation on the sheathed heater. Although the ESD method can also produce a similar network structure, the network on the CA coating surface is 5-8 µm thinner with the ASC method [14,15]. In addition, the ASC method is unique in that it can produce rounded CA particles inside the network structure. The presence of small particles made for a rougher surface. Indeed, ASC-25/H had a rougher surface than ASC-5/H due to the greater amount of spray solution. The adhesive nature of the CA coating was evaluated using scratch tests. The adhesiveness of the thin CA coating produced by the molecular precursor method was also evaluated using scratch tests [23,30]. A previous study of ASC coating evaluated the adhesiveness of only heat-treated CA coating specimens [28]. The present study revealed that heat treatment improved the adhesion of the CA coating to Ti. It is speculated that the improvement in the density of the CA coating film caused by heat treatment resulted in stronger bonding of the CA coating. The more apparent breakdown of the coating film observed in ASC-5 and ASC-25 was due to the brittleness of the CA coating film; heat treatment reduced this brittleness through the improvement in density. Non-heat-treated specimens carry a risk of the CA coating peeling from the Ti substrate during storage or the surgical operation. Therefore, we conclude that heat treatment of the CA coating is necessary for clinical application, and heat-treated specimens were used for the durability, SBF immersion, and animal experiments in this study. The durability of the CA coatings was evaluated by immersion in two different pH buffer solutions. It is well known that the inflammation caused by drilling or other surgical procedures can produce acidic conditions in tissues that receive implanted materials. A citrate-buffered solution with a pH of 5.4 was therefore used to simulate inflammatory conditions, and it revealed that acidic conditions accelerated the dissolution of the CA coatings but did not dissolve them completely. Due to the greater amount of spraying on ASC-25/H, more CA coating remained than on ASC-5/H. It is therefore expected that the CA coating of implants inserted into a drilled hole will partially dissolve in living tissue.
Although the mechanism that provides better bone formation remains unclear, it is possible that the elution of calcium ions from the dissolved CA film increases the local calcium concentration and helps to activate osteoblast formation. For the in vitro evaluation of biocompatibility, SBF immersion experiments were performed. In the present study, we used HBSS (Hanks' balanced salt solution) as an SBF [31]. Hanawa et al. [31] reported that an apatite layer formed on a titanium surface after immersion in HBSS. A previous study reported that the difference in spray amounts had little influence on crystal precipitation on the CA-coated surface [29]. Therefore, we only used one heat-treated sample, ASC-25/H, for the immersion study. In addition to the CA coating, we investigated whether horizontal or vertical placement of samples in HBSS influenced crystal formation. Although previous studies have generally used horizontal placement in HBSS solution, Suzuki et al. [32] reported that apatite formation and growth on Ti could be influenced by the placement and orientation of samples in SBF immersion experiments. The present results revealed that the CA coating promoted crystal formation after immersion in SBF for both vertical and horizontal placements. Therefore, a better bone-growth response for CA-coated implants would be expected in vivo. However, it appeared that vertical placement led to greater crystal formation and growth than horizontal placement. It is presumed that the mineral formation system in vivo would be more substantial, and the influence of the sample placement direction on mineralization behavior should be studied further. Many previous studies have revealed the ability for apatite formation in vitro, which has led to the common prediction that these materials also have in vivo bioactivity [33]. We used animal experiments to reveal that the CA-coated specimens ASC-5/H and ASC-25/H provided significantly greater BIC in cortical bone and bone marrow than Ti alone. ASC-5/H was more effective at increasing BIC than ASC-25/H. It is well known that rougher surfaces provide faster and greater bone formation [34]. However, surface roughness did not contribute to an increase of BIC in the present study. Mochizuki et al. [29] previously evaluated the attachment, proliferation, and differentiation of osteoblast-like cells on CA-coated Ti. They found that initial attachment of osteoblast-like cells increased due to the CA coating, with no difference between ASC-5/H and ASC-25/H. In contrast, cell differentiation was enhanced more on ASC-5/H than on ASC-25/H. They speculated that the lower border height of the network structure in the ASC-5/H coating favored spreading of the osteoblast-like cells and, as a result, accelerated mineralization. This may explain why higher BIC in the cortical part was obtained for ASC-5/H in the present animal experiments. Albrektsson et al. [35] suggested that osseointegration corresponded to approximately 60% bone contact for titanium implants. The present BICs of ASC-5/H and ASC-25/H were above the limit proposed by Albrektsson et al. [35], not only for cortical bone but also for bone marrow. We inserted the implants in the cortical part of the rat femur. Cortical bone is known to be denser and stiffer than trabecular bone, and the elastic modulus of cortical bone was previously shown to be higher than that of trabecular bone [36].
The implant site, whether in cortical or trabecular bone, has been shown to influence the bone response to implants [37]. Hayakawa et al. [38] reported that cortical and trabecular bone exhibited different bone responses towards apatite-coated titanium implants. Siebers et al. [39] reported that the application of an ESD CA coating resulted in more bone contact compared with Ti implants after implantation into the trabecular bone of the goat femoral condyle. The bone response after implantation into trabecular bone should be further evaluated. New bone formation in the bone marrow provided novel insights into the CA coating. In this case, ASC-25/H, which has a higher border height in the network structure than ASC-5/H, enhanced new bone formation in the bone marrow more strongly. Bone marrow is known to be rich in hematopoietic stem cells. Therefore, it is speculated that the ASC coating stimulated the activity of hematopoietic stem cells in the bone marrow. Specifically, ASC-25/H would release more calcium and increase the local calcium concentration, and as a result produce more new bone formation in the bone marrow. Other factors, including mechanical factors, may also influence bone formation, and further studies should be conducted to elucidate the mechanism of new bone formation in the bone marrow. A push-in test was performed to evaluate the bonding between the implant and the surrounding bone. Both CA-coated implants produced tighter bonding to bone than Ti. Surface roughness did not influence the push-in values. Lin et al. [40] also reported that surface modification with hydroxyapatite nanoparticles increased the push-in values two weeks after implantation into the femur of rats. In the present study, we only monitored the peak of the load-displacement curve; detailed observation of the failure surface after the push-in test will be necessary to analyze the bonding behavior in detail. In the present study, the ASC method was applied to disc-shaped or rectangular-plate Ti. To apply the ASC method in dental clinics, CA coatings for cylindrical or screw-shaped Ti implants must be developed. For that purpose, a rotating jig with a small heater is being developed. Cylindrical or screw-type specimens have been set on the rotating jig, and various conditions, such as rotating speed, heating, and spraying methods, are now under investigation. Experiment Specimens We prepared two shapes of commercially pure Ti in this study: disc-shaped Ti specimens (diameter 1.5 mm, thickness 1.0 mm, JIS 2 type, 99.9% mass, Furuuchi Chemical Corp., Tokyo, Japan) and rectangular plate specimens (2 × 1.5 × 0.5 mm, JIS 2 type, 99.9% mass, Furuuchi Chemical Corp., Tokyo, Japan). Disc-shaped Ti specimens were used to characterize the CA thin films and in the SBF immersion experiments, and rectangular plate specimens were used for the animal experiments. Emery paper, ranging in grit size from #100 to #1200, was used in succession to polish the Ti surface under running water. Afterwards, the polished specimens were washed with acetone for 15 min and deionized water for 10 min under ultrasonication. Preparation of ASC Solution and Spray Coating The ASC solution was prepared according to previous reports [28,29]. Briefly, calcium hydroxide (Wako Pure Chemical Industries, Osaka, Japan) was suspended in deionized water, and CO2 gas was then introduced into the suspension under ultrasonication until a clear solution was obtained.
Phosphoric acid (Wako Pure Chemical Industries, Osaka, Japan) was added to the clear solution to give a Ca/P ratio of 1.67. The final clear solution was stored in a refrigerator until use. Figure 11 shows a schematic drawing of the CA coating process using the ASC method [28,29]. Figure 11. A schematic representation of the apparatus for aqueous spray coating on titanium. The aqueous solution was sprayed onto a disc-shaped or rectangular plate Ti substrate through the nozzle of an airbrush (HP-SAR, ANEST IWATA, Kanagawa, Japan). A sheathed heater was used to keep the Ti substrate at 40 °C. The Ti substrate was placed in the center of the stainless-steel plate at a perpendicular distance of 200 mm from the spray nozzle. The air pressure during spraying was set at 0.2 MPa and the spraying speed at 5 mL/min. Two as-sprayed samples, ASC-5 and ASC-25, were obtained by spraying 5 mL or 25 mL of the ASC solution from the spray nozzle onto the Ti substrate, respectively. The sprayed samples were then heated at 600 °C for 10 min under Ar gas flowing at a rate of 0.5 mL/min using a high-speed desktop electric heating furnace (EPKPO12-K, Isuzu Seisakusho, Tokyo, Japan). Hereafter, the heated ASC-5 and ASC-25 samples are abbreviated as ASC-5/H and ASC-25/H. Surface Morphologies and Surface Roughness of CA Film The surface morphology of the CA film deposited on the Ti substrate was observed under a scanning electron microscope (SEM; JSM-5600LV, JEOL Ltd., Tokyo, Japan) at an accelerating voltage of 15 kV after sputter coating with Au using an ion coater (QUICK COATER SC-701, Sanyu Electron, Tokyo, Japan). Three-dimensional observations of the surface morphology were performed with a shape-analyzing laser microscope (VK-X250, KEYENCE, Osaka, Japan). Images were acquired in three-dimensional ranges of decreasing size down to 25 × 25 µm². Two surface parameters, the three-dimensional arithmetic height (Sa) and the surface deployment area rate (Sdr), were obtained. Adhesiveness of CA Films The adhesiveness of the CA thin film to the Ti surface was evaluated using a diamond-stylus scratch method (Nano Scratch Tester, Anton Paar, Graz, Austria). A diamond stylus (Rockwell type, tip radius: 200 µm) was moved over each specimen surface under a linearly increasing load until failure occurred. The scratch test parameters were as follows: the load increased from 1 to 10 N, and the speed of the moving stage was 5 mm/min (scratch length: 4.95 mm). The critical load (Lc) was defined at the first crack in the coating. Scratch tests were performed at three places on each of the four specimen types, ASC-5, ASC-25, ASC-5/H, and ASC-25/H, and Lc values were calculated. Durability of CA Coating Following heat treatment, CA-coated discs (ASC-5/H or ASC-25/H) were immersed in 20 mL of phosphate-buffered saline (PBS) solution with pH 7.4 or citrate-buffered solution with pH 5.4 in a polypropylene bottle for one week. During immersion, the solutions were changed every day. After immersion, the specimens were dried in a desiccator. The change in morphology of the CA films was observed by SEM at an accelerating voltage of 15 kV after sputter coating with Au.
SBF Immersion Hanks' balanced salt solution (HBSS), without organic species, was employed as an SBF [31]. Ti and ASC-25/H discs were immersed in 20 mL of HBSS adjusted to pH 7.4 at 37 °C in a polypropylene bottle. Each specimen was placed either horizontally or vertically in HBSS for the immersion experiments, as previously described [32]. For horizontal placement, discs were placed directly at the base of the bottle. For vertical placement, the discs were hung from a nylon wire, which was attached to each disc with a plastic clip. To ensure that the discs were constantly exposed to fresh medium, the medium and container were changed daily. At one, three, and 14 days after immersion, the discs were rinsed with double-distilled water and immediately dried in a desiccator. The surface appearance of each disc after immersion in HBSS was observed using SEM at an accelerating voltage of 15 kV after sputter coating with Au. The crystallographic structure of the crystals deposited on the CA-coated specimens was analyzed after 14 days of immersion using X-ray diffraction (XRD, θ-2θ, MXP-18 AHF22; Bruker AXS, Kanagawa, Japan) with a Cu-Kα X-ray source operated at 50 kV and 50 mA. Fourier transform infrared (FT-IR) spectroscopy (FT/IR-600, JASCO, Tokyo, Japan) was also performed using the KBr method. Animal Experimentation The animal experiments were approved by the animal experiment committee of the Tsurumi University School of Dental Medicine (approval No. 27A055, No. 28A059, and No. 28A067). A total of 36 six-week-old male Wistar rats weighing 180-200 g were used. Twenty-four rats were used for histological and histomorphometrical evaluation, and 12 rats were used for push-in tests. All animals were housed in a temperature-controlled room with a 12 h alternating light-dark cycle and were provided water and food (standard rat chow) ad libitum during the experimental period. Each animal received one rectangular plate implant, so a total of 36 implants were inserted. In particular, four Ti, four ASC-5/H, and four ASC-25/H were implanted for two weeks, and four Ti, four ASC-5/H, and four ASC-25/H were implanted for four weeks; these implants were used for histological and histomorphometrical evaluation. In addition, four Ti, four ASC-5/H, and four ASC-25/H plates were implanted for two weeks and used in push-in tests. Before the animal experiments, all plate implants were sterilized using ethylene oxide gas. Surgical interventions were conducted under general anesthesia by an intraperitoneal injection of ketamine hydrochloride (0.8 mg/kg) and medetomidine hydrochloride (0.4 mg/kg). Local anesthesia was induced with lidocaine hydrochloride (1.8 mL) and epinephrine bitartrate (0.045 mg) (ORA Injection, Showa Yakuhin Kako, Tokyo, Japan). Surgical procedures were performed in accordance with the methods reported by Suzuki et al. [41]. The right hind limb was shaved after sterilization with ethanol, and a longitudinal incision was made on the distal surface of the right hind limb to expose the femur. A bone defect was generated using a very gentle surgical technique with continuous internal cooling with physiological saline solution. A cortical bone defect measuring 0.8 × 2.0 mm was created through the cortex and medulla at approximately 15 mm from the head of the femur using a carbide bur (#700, Shofu, Kyoto, Japan) at a low rotational speed.
After insertion of the implants into the bone defects by press-fitting, the muscle tissue and skin were closed in separate layers with the single-knot technique using nonabsorbable sutures (BioFit-D 5-0, Washiesu Medical Corp., Tokyo, Japan; Nylon 4-0, Mani, Utsunomiya, Japan). To reduce the risk of perioperative infection, a prophylactic antibiotic equivalent to latamoxef sodium (0.01 mg/kg, Shiomalin, Shionogi & Co., Osaka, Japan) was administered postoperatively by subcutaneous injection. The animals were sacrificed at two and four weeks after implantation, and the femurs were harvested and fixed in 10% neutralized formalin for histological observation. Histological and Histomorphometrical Evaluation The fixed specimens were dehydrated in a graded series of ethanol and embedded in methylmethacrylate. After polymerization, non-decalcified thin sections were prepared using a cutting-grinding technique (EXAKT Cutting-Grinding System, BS-300CP band system and 400 CS micro-grinding system, EXAKT Apparatebau, Norderstedt, Germany) [42]. Sections of approximately 50-70 µm were cut in a transverse direction perpendicular to the long axis of the implants. The undecalcified sections were stained with methylene blue and basic fuchsin and evaluated using a light microscope (BX51, Olympus Corp., Tokyo, Japan; magnification 200×). The percentages of bone-to-implant contact (BIC) and bone mass (BM) around the implants were estimated by histomorphometrical image analysis (WinROOF, Visual System Division, Mitani, Tokyo, Japan). BIC was calculated as the percentage of the length of bone-implant contact in the cortical bone and in the bone marrow. BM was defined as the percentage of newly formed bone within the region of interest (ROI) in the bone marrow, as illustrated in Figure 12 (the ROI is indicated with a dotted line). Implant Push-In Test An implant push-in test was conducted to evaluate the bonding between the implant body and the surrounding bone [40]. Two weeks after implantation, the femurs were extracted and the long axis of the embedded implant plate was set vertical to the base using wax. The implant was pushed with a custom-made pushing rod (diameter 0.8 mm) at a crosshead speed of 1 mm/min using a universal testing machine (Shimadzu Autograph AG-IS 20 kN, Shimadzu, Kyoto, Japan). The applied load and displacement of the implant were monitored, and the push-in value was determined as the peak of the load-displacement curve. Statistical Analysis Data for surface roughness, scratch tests, BIC, and BM of the different implants were analyzed using one-way analysis of variance and the post hoc Tukey test for multiple comparisons among means. An unpaired t-test was used to determine whether BIC and BM differed between implantation times (two-week or four-week treatments). Statistical analyses were conducted with Origin Pro 9.0 J (OriginLab Corp., Northampton, MA, USA). p values of less than 0.05 were considered significant, and data are expressed as the mean ± standard deviation.
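As a rough illustration of this statistical pipeline (not the Origin Pro workflow actually used; group sizes and values below are placeholders), the one-way ANOVA with Tukey's post hoc test and the unpaired t-test could be run in Python as follows:

import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder data: BIC (%) per implant group at one time point.
df = pd.DataFrame({
    "group": ["Ti"] * 4 + ["ASC-5/H"] * 4 + ["ASC-25/H"] * 4,
    "bic":   [45, 48, 50, 47, 68, 72, 70, 71, 63, 66, 65, 67],
})

# One-way ANOVA across the three implant groups.
groups = [g["bic"].values for _, g in df.groupby("group")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's post hoc test for pairwise comparisons among means (alpha = 0.05).
print(pairwise_tukeyhsd(df["bic"], df["group"], alpha=0.05))

# Unpaired t-test comparing two-week vs. four-week BIC for one coating (placeholder values).
two_week, four_week = [55, 58, 54, 57], [68, 72, 70, 71]
t_stat, p_t = stats.ttest_ind(two_week, four_week)
print(f"t-test: t = {t_stat:.2f}, p = {p_t:.4f}")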
Conclusions In the present study, thin CA films were deposited onto Ti using the ASC method. Heat treatment improved the bonding of the CA thin film to Ti, and acidic conditions increased the dissolution of the CA thin film. Crystal formation after immersion in SBF progressed on the CA-coated specimens, and the direction of sample placement further influenced crystal formation and growth. Animal implantation experiments revealed that ASC-5/H showed a greater BIC than ASC-25/H in the cortical part four weeks after implantation, whereas new bone formation in the bone marrow was enhanced more by ASC-25/H than by ASC-5/H. The bonding of the CA coating to titanium is clinically acceptable. Therefore, we conclude that CA coating of Ti by the ASC method could be used for clinical applications, including dentistry.
9,217.6
2017-12-01T00:00:00.000
[ "Materials Science", "Medicine" ]
A Feel for Numbers: The Changing Role of Gesture in Manipulating the Mental Representation of an Abacus Among Children at Different Skill Levels Abacus mental arithmetic involves the skilled acquisition of a set of gestures representing mathematical algorithms to properly manipulate an imaginary abacus. The present study examined how the beneficial effect of abacus co-thought gestures varied at different skill and problem difficulty levels. We compared the mental arithmetic performance of 6- to 8-year-old beginning (N = 57), intermediate (N = 65), and advanced (N = 54) learners under three conditions: a physical abacus, hands-free (spontaneous gesture) mental arithmetic, and hands-restricted mental arithmetic. We adopted a mixed-subject design, with level of difficulty and skill level as the within-subject independent variables and condition as the between-subject independent variable. Our results showed a clear contrast in calculation performance and gesture accuracy among learners at different skill levels. Learners first mastered how to calculate using a physical abacus and later benefitted from using abacus gestures to aid mental arithmetic. Hand movement and gesture accuracy indicated that the beneficial effect of gestures may be related to motor learning. Beginners were proficient with a physical abacus, but performed poorly and had low gesture accuracy during mental arithmetic. Intermediates relied on gestures to do mental arithmetic and had accurate hand movements, but performed more poorly when restricted from gesturing. Advanced learners could perform mental arithmetic with accurate gestures and scored just as well without gesturing. These findings suggest that for intermediate and advanced learners, motor-spatial representation through abacus co-thought gestures may complement visual-spatial representation of a mental abacus to reduce working memory load. INTRODUCTION Abacus arithmetic is an ideal model for examining the changing beneficial effect of co-thought gestures for learning mathematics at different skill and problem difficulty levels. According to Gesture as Simulated Action (GSA) theory, learners spontaneously gesture to activate motor programs that assist working memory as imagery tasks pass a threshold of difficulty (Jeannerod, 2001; Hostetter and Alibali, 2008). Gestures, thus, are the "visible embodiment" of simulated actions that reveal implicit knowledge and strategies for solving mathematical problems and enhance learning (Goldin-Meadow et al., 2001; Broaders et al., 2007; Cook et al., 2010). We examine the beneficial effect of spontaneous co-thought gestures performed during mental arithmetic as the visible embodiment of mathematical algorithms for manipulating the mental representation of an abacus. The role of co-thought gestures, i.e., non-communicative hand movements without accompanying speech, in mathematics learning is less well understood than that of co-speech gestures. In two recent papers, co-thought gestures were found to change from action simulation to representation of action plans (Chu and Kita, 2008; Alibali et al., 2011). Participants spontaneously produced more co-thought gestures the greater the angle of mental rotation, and solved more problems correctly when allowed to gesture than in the gesture-prohibited condition (Alibali et al., 2011; Chu and Kita, 2011). This beneficial effect of co-thought gestures was generalizable across tasks involving similar visual-spatial transformations. Co-thought gestures also changed over time with learning experience.
Experienced participants not only gestured less than novices over subsequent trials solving spatial problems, but also changed the type of gestures they used. In a study of mental rotation, Chu and Kita (2008) found that learners' spontaneous gestures changed from action simulation (e.g., a curved palm rotating to the right so as to represent an act of rotating) to representation of an object with the hands (e.g., a flat palm flipping outward so as to represent the object being rotated) in subsequent trials. The authors proposed that both co-thought gestures and co-speech gestures are produced by the same action generation process, which internally represents and plans purposeful actions that have a direct physical impact on the world, such as manipulating an object or locomotion (Chu and Kita, 2016). To support their action generation hypothesis, they found that participants produced more co-thought and co-speech gestures when stimulus objects afforded actions (i.e., could be easily manipulated), compared to when objects were not easily handled (i.e., having spikes) (Chu and Kita, 2016). Similar to Chu and Kita's motivation for proposing the action generation hypothesis, Pouw et al. (2014) have argued that there is a need to extend GSA to explain how co-thought gestures recruit the motor system to benefit cognitive processes. No studies have examined the possible beneficial effect of co-thought gestures in learning mathematics. Early studies of abacus mental arithmetic have all noted that learners move their hands and fingers when performing mental calculations, as if manipulating a real abacus (Hatano and Osawa, 1983; Stigler, 1984). Yet, these studies have not explained why learners spontaneously gesture, nor examined whether the beneficial effects of these hand gestures vary with the skill levels of learners and the difficulty levels of problems. The abacus is a historically significant cultural artifact that affects how the brain processes calculations as part of a living tradition of embodied mathematics. Abacus arithmetic has become one of the most common and widespread forms of early childhood mathematics education throughout Asia. In the highly competitive educational systems of countries like Singapore, Taiwan, and South Korea, nearly all children attend supplemental classes till late in the evening. The trend has been for children to start at an increasingly early age, with most beginning around 5 or 6 years old. Abacus mental arithmetic is taught multimodally, as both visual and motor operations. The abacus uses a finite set of rules or algorithms for moving the abacus beads to perform addition or subtraction of single digits (Supplemental Materials 3: How to Use an Abacus and Example Problems with Gesture Solutions). As Stigler et al. (1986) have pointed out, any arithmetic problem can be solved on an abacus using a fixed sequence of these algorithms. In many abacus schools, these algorithms for moving the beads are taught as stylized two-handed movements using the thumbs and index fingers held over two columns of beads for each operation (Supplemental Materials 1: Abacus Gestures 1-70). Teachers often correct students on the proper form of these hand movements using an abacus as instructional pedagogy (Supplemental Materials 2: The Abacus Hand Movement Lexicon and Correct and Incorrect Gestures). This manner of instruction is similar to how a pianist might learn proper hand positioning and fingering.
Although mental arithmetic is practiced at all stages, learners are allowed to use different physical aids during training. In a beginning abacus class, children primarily learn by manipulating a physical abacus using correct hand movements. An abacus is an array of beads with five beads in each column. There is a single upper row of beads separated by a horizontal bar from four lower rows of beads. When a bead in the upper row is pushed downward with the index finger to touch the horizontal bar, it registers a digit value of five for that column. When one of the beads in the four lower rows is pushed upward with the thumb to touch the horizontal bar, it registers a digit value of one for that column. Hence, each column can register a digit value from 0 to 9. When a column is designated as the one's column (10^0), each successive column to the left is a successive positive power of 10 (10^n) and each column to the right a successive negative power of 10 (10^-n). Any arithmetic problem can be solved by concatenating a fixed sequence of these hand movements to move the beads according to algorithms for complements of 5 and 10 (see Supplementary Materials 3 for more details). As a transitional or intermediate stage of instruction, children use a picture card or static diagram of an abacus. Instead of moving beads on a physical abacus, learners touch the picture card or spontaneously gesture over it. This reduces the physical tool to an abstract mathematical diagram or sign. At the advanced stage, learners solve problems purely mentally without a physical abacus or any visual aid. However, as problem size and complexity of operations increase, learners may revert back to using one of the physical aids. Learners at all stages of instruction, especially at intermediate and advanced performance levels, spontaneously produce co-thought gestures which closely mimic the hand movements when using a physical abacus. These movements can sometimes be exaggerated or vary in form. It is important to note that teachers instruct learners on the proper hand movements when using a physical abacus; but they do not instruct learners on how to spontaneously gesture. The only criterion is to solve the problems mentally. Thus, it is totally the learners' decision to gesture or not. Overall, as abacus learners acquire mental arithmetic skill, they rely less on a physical abacus to perform mental arithmetic. They gradually internalize or embody the abacus tool. Whether and how these spontaneous co-thought gestures facilitate mental arithmetic remains unclear. The Present Study The present study examined whether the beneficial effect and form of abacus co-thought gestures are different among beginning, intermediate, and advanced learners who are asked to solve one-, two-, and three-digit arithmetic problems. We tested the abacus mental arithmetic performance of 6- to 8-year-old beginning, intermediate, and advanced learners. Each learner was randomly assigned to one of three conditions for performing calculations at three difficulty levels (one-digit, two-digit, and three-digit): physical abacus, hands-free (spontaneous gesture) mental arithmetic, and hands-restricted mental arithmetic. Based on the current state of research, there are several competing predictions about how abacus co-thought gestures may benefit learners at different skill levels when solving problems of varying degrees of difficulty.
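Before turning to these predictions, the column encoding described above can be made concrete with a minimal Python sketch. This is our own illustration rather than material from the study, and the class and function names (AbacusColumn, read_value) are invented for the example: one upper bead is worth five, each of the four lower beads is worth one, and a multi-column reading applies the place values just described.

# Minimal sketch of the column encoding described above; names are illustrative only.
class AbacusColumn:
    """One column: an upper bead worth 5 and four lower beads worth 1 each."""
    def __init__(self, upper_down=False, lower_up=0):
        assert 0 <= lower_up <= 4, "a column has only four lower beads"
        self.upper_down = upper_down  # upper bead pushed down to the bar
        self.lower_up = lower_up      # number of lower beads pushed up to the bar

    def digit(self):
        """Digit value 0-9 registered by this column."""
        return (5 if self.upper_down else 0) + self.lower_up

def read_value(columns):
    """Columns are listed left to right; the rightmost is the ones (10**0) column."""
    return sum(col.digit() * 10 ** i for i, col in enumerate(reversed(columns)))

# Example: 7 in the tens column (5 + 2) and 3 in the ones column gives 73.
print(read_value([AbacusColumn(True, 2), AbacusColumn(False, 3)]))  # 73

In such a sketch, adding or subtracting a digit would then be expressed as a short sequence of bead moves chosen by the complement-of-5 and complement-of-10 rules mentioned above.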
According to Image Maintenance theory, co-thought gestures, as bodily acts, should refresh the mental image of an abacus on a "visuospatial scratchpad" (Wesp et al., 2001, p. 592) and have a beneficial effect for learners at all levels of skill and problem difficulty. Moreover, according to Chu and Kita's action generation hypothesis, abacus co-thought gestures should become increasingly representational and abstract as learners internalize the action strategy and become more advanced in skill. Spontaneous gestures, in other words, would be less mimetic of physically manipulating an abacus. This would mean more advanced learners should have less accurate movements compared to beginning learners. However, neither the Image Maintenance theory nor the action generation hypothesis is clear about the underlying mechanism by which gestures should have a beneficial effect. In contrast, we hypothesize that abacus co-thought gestures facilitate mental arithmetic as motor programs that complement visual-spatial representation to reduce working memory load. We predict that abacus co-thought gestures will be beneficial only for learners who have acquired motor skills that closely reflect simulated action on a physical abacus. In other words, contrary to Image Maintenance theory, we predict that the beneficial effect of spontaneous abacus gestures will vary depending on skill level. And, contrary to the action generation hypothesis, more advanced learners' spontaneous gestures will more accurately mimic action on a physical abacus. According to GSA theory, spontaneous gestures activate motor programs that assist working memory as imagery tasks pass beyond a threshold of difficulty. As Beilock et al. (2004b) have noted, motor skills operate largely outside of working memory and thus reduce working memory load. This is based on Fitts and Posner's (1967) proposal that working memory load is greatest at the beginning stages of learning a motor skill because movements must be closely monitored and are often prone to error. With practice, these movements become increasingly automated as motor programs that require little conscious effort to produce increasingly accurate movements. Additionally, numerous studies have shown that motor-spatial representation complements visual-spatial representation by encoding visual-spatial sequences as motor plans to reduce working memory load (Sirigu and Duhamel, 2001; Wymbs et al., 2012; Langner et al., 2014; Smithson and Nicoladis, 2014). Such motor-spatial representation takes longer to acquire than visual-spatial representation. Bapi et al. (2000) and Hikosaka et al. (2002) have shown that in visual-motor tasks, learners acquire the sequence of visual-spatial coordinates faster than motor-spatial coordinates. However, after a motor sequence is learned, learners can execute it faster and with little conscious effort. A recent EEG and fMRI case study by Ku et al. (2012) has shown that abacus mental arithmetic may involve two parallel cortical loops, first activating one for visual-spatial processing, shortly followed by a second for motor-spatial processing (see also Hikosaka et al., 1999, 2002). We thus expected that abacus learners would master how to use a physical abacus before becoming proficient in using abacus co-thought gestures. This is because being able to see the beads on a physical abacus as a visual-spatial sequence may demand less working memory compared to maintaining a motor-spatial mental representation of it.
Beginning learners should be able to perform calculations well with a physical abacus. However, without the aid of a physical abacus, they would perform poorly under the spontaneous gesture condition and hands-restricted condition. Moreover, the accuracy of their gesturing would be poor because they had not yet acquired the motor-programs for abacus gestures. Intermediate learners would perform mental calculations equally well using spontaneous gestures compared to using a physical abacus for simple one-and two-digit problems. Intermediate learners' gestures should also be highly accurate, following closely the same types of hand movements for moving beads on a physical abacus. In other words, intermediate learners would have acquired motor-programs for abacus gestures to aid in visual-spatial representation of the mental abacus. However, as the demands on working memory increase with problem difficulty, the beneficial effect of co-thought gestures in reducing working memory load should attenuate for the most difficult problems. Intermediate learners, may thus perform less well using spontaneous gestures compared to physical abacus for more difficult three-digit problems. Moreover, the beneficial effect of co-thought gestures should be most salient when comparing intermediate learners' ability to perform mental arithmetic under the hands-free spontaneous gesture to that in the hands-restricted condition. These learners would perform poorly in the hands-restricted condition because they had not yet fully automated and internalized the motor programs for abacus gestures to maintain the mental representation of an imaginary abacus. Different from beginning and intermediate learners, advanced learners would have fully internalized and automated abacus gesture motor-programs. Thus, abacus co-thought gestures should not only have a beneficial effect for mental arithmetic performance but also show increasing movement accuracy similar to manipulating a physical abacus. Hence, advanced learners should not even need overt gesturing to refresh motor-spatial mental representation. They would be able to perform mental arithmetic calculations without much conscious effort through the assistance of gestures. Advanced learners' gestures should be highly accurate and their calculation scores in the hands-free and hands-restricted conditions should be comparably high and nearly as high as with a physical abacus. Evidence for these predictions comes from previous abacus studies. Hatano et al. (1977) tested 10 skilled abacus users and found that prohibiting hand movements and finger tapping similarly reduced performance for most subjects, but did not prevent the two most advanced subjects from correctly answering nearly all the problems. Likewise, Frank and Barner (2011) found that finger drumming significantly interfered with performance (p. 7); however, they noted that the most advanced participants were not affected. In a recent study, Brooks et al. (2017), found that advanced learners performed far worse with motor interference. These findings suggest that the role of gestures may vary for abacus learners at different skill levels, similar to our predictions for intermediate and advanced learners. Understanding how the beneficial effect of abacus gestures changes at different levels of skill and problem difficulty can provide us with insights into the role of visual and motor working memory in abacus mental arithmetic. 
Some studies have shown that advanced abacus learners perform significantly better on mental arithmetic tasks compared to untrained controls (Lee et al., 2007;Chen et al., 2011). Such studies claim that mental abacus training focuses on visual working memory, resulting in improved calculating performance. In contrast, Frank and Barner (2011) have shown that mental arithmetic skill is not necessarily attributable to enhanced perceptual ability. They found that abacus experts and untrained controls performed similarly on a visual working memory task in which participants estimated the number of dots on flashcards. The study found that abacus experts perform fast mental calculations by employing a strategy of grouping columns of abacus beads to optimize visual working memory. Barner et al. (2016) likewise found participants' differences in spatial working memory affected their individual ability to perform mental arithmetic, but did not change basic cognitive abilities, such as increasing number span. Neurophysiological studies of abacus mental arithmetic show activation in cortical areas important for both visual and motor imagery. Activation occurs in the parietal cortex (Hanakawa et al., 2003) important for integrating visuospatial and motor input from the hands, as well as in the premotor cortex and Supplementary Motor Area (Chen C.L. et al., 2006;Chen F.Y. et al., 2006). Premotor cortex is important for motor planning and preparation of correct or incorrect movement. Supplementary Motor Area is important for movement sequence from memory and mental rehearsal of movement sequences. These studies have also shown that abacus experts compared to non-experts have reduced demands on frontal-subcortical areas related to the global workspace of executive function (Chen C.L. et al., 2006;Chen F.Y. et al., 2006;Li et al., 2013). This is consistent with claims that both visual grouping strategies and motor learning may reduce demands on working memory. The current study also sheds light on how gesture may assist in transitioning from concrete objects to mental representation when learning arithmetic. The use of concrete manipulatives has been a staple of early mathematics education for decades (Post, 1981). Bruner (1966) has proposed that mathematics instruction should proceed in three stages: enactive, iconic, and symbolic. In the enactive stage, multiple physical objects or manipulates assist learners to grasp mathematical concepts by providing a store of concrete or embodied real-world experiences. By comparing these multiple examples, learners in the iconic stage then strip away extraneous perceptual details to use graphical or pictorial representations. Lastly, learners in the symbolic stage extract abstract concepts, represented in formal notation. Theoretical models such as Bruner's for mathematics learning has been widely applied in curricula, especially for early childhood education (Fyfe et al., 2014). Among many notable examples are the Montessori use of concrete to abstract sensorial material such as beads, rods, and blocks for counting, arithmetic, and decimals; the Rational Number Project for teaching fractions; and the Concrete-Representational-Abstract sequence used by MathVIDS and other curricula for students struggling with basic concepts. Since the 1980s, the Concrete-Pictorial-Abstract method has remained a cornerstone of the Singapore Ministry of Education mathematics curricula (Leong et al., 2015). 
Despite widespread implementation of the enactive-iconic-symbolic model in school curricula, little work has explored the mechanisms underlying how learners shift from embodied concrete perception and action to abstract concepts. This lack of explanation has further led to controversy over whether concrete manipulatives are even effective. Some studies have shown that in certain circumstances instruction with concrete manipulatives led to worse performance (Kaminski et al., 2008). Others have noted that there may be many mechanisms underlying how learners transition from concrete objects to symbolic representation (McNeil and Fyfe, 2012). Participants One hundred and eighty children (half males) participated in this experiment from 2010 to 2011. They were English-speaking Singaporeans and attended abacus classes at the Classical Mental Arithmetic School (CMA). CMA was one of the popular schools in Singapore teaching young learners abacus mental arithmetic using two hands and four fingers. It has 21 branches in Singapore. In the present study, we collected data in five of them. On average, children were 7;1 (years;months) old, ranging from 5;11 to 8;1. All of them were typical primary school students. Abacus training is common among Singaporean children, across socioeconomic and educational backgrounds. The selected participants were thus a representative sample. We further note that children at such a young age are at the very beginning stage of their mathematics education in primary school. We chose a narrow age range to minimize the influence of the students' regular school education. All the procedures were approved by the institutional review board of the authors' university at the time of the study, in compliance with the Declaration of Helsinki. We obtained the parents' informed consent prior to the study. The first author presented preliminary work for this article at the workshop, Culture and Cognition in Asia II: Performative Gesture in Religion and Science (17 June 2010 at the National University of Singapore, https://ari.nus.edu.sg/Event/Detail/1066). Design and Procedures Each child was classified into one of the following three categories: beginning learners (N = 57), intermediate learners (N = 65), and advanced learners (N = 54), based on the CMA program in which the child was enrolled. CMA classifies students into beginning, intermediate, and advanced levels according to a series of finely-grained examinations. Students progress through a series of exercise workbooks in which numbers are aurally presented on CD and must be added or subtracted mentally. These exams neither require students to gesture nor test their movement accuracy. Learners progress through exercises, beginning with 2 one-digit numbers up through 10 one-digit numbers; this is followed by 2 two-digit numbers up through 10 two-digit numbers. At the beginning of each class, students are tested and must score perfectly before being allowed to proceed. Beginning learners must be able to mentally calculate from four one-digit numbers up to three two-digit numbers. Intermediate learners must be able to mentally calculate from four two-digit numbers up to three three-digit numbers. Advanced learners must be able to mentally calculate from four three-digit numbers and above. We then randomly assigned learners from each skill level to one of the following three conditions: (1) physical abacus; (2) hands-free mental arithmetic (spontaneous gesture); (3) hands-restricted mental arithmetic.
Table 1 shows the demographic information of participants in all conditions. We refer to mental arithmetic as MA. Learners in all conditions were tested individually at their CMA branch and asked to solve 60 addition and subtraction questions (20 one-digit, 20 two-digit, and 20 three-digit). All questions were designed by teachers in CMA. Learners were given 30 min to complete the test, which was ample time for all to finish the problems that they were able to do. However, learners who found that the problems were too difficult to manage could stop at any time. The entire experiment was videotaped. Each child was closely monitored by an experimenter, one-on-one, for compliance. None of the children moved his/her hands in the hands-restricted mental arithmetic condition. Learners in the physical abacus condition solved the problems using a physical abacus, which was the same as the one they used in their regular class. Learners in the hands-free MA (spontaneous gesture) condition solved the same problems, but without the assistance of an abacus. With prompting, they were able to spontaneously move their hands to perform mental calculations. Learners in the hands-restricted MA condition also solved the same problems using mental calculation, but were restricted from moving their hands by holding a ball with both hands. Scoring and Coding We calculated the mean proportions of questions with correct answers, which were calculated as the number of correct answers separately divided by the total number of questions at each digit-level in each group of learners in each condition. A teacher at CMA then coded the abacus hand movements and abacus gestures produced by learners in the physical abacus condition and hands-free mental arithmetic condition, respectively. Teachers at CMA were well trained in identifying the abacus hand movements and gestures produced by their students. There are two kinds of hand movements: abacus hand movements, produced when manipulating a physical abacus; and abacus hand gestures, produced while doing mental calculation. The teacher coded both kinds of hand movements using a standard answer key, which provided the sequence of gestures to solve each problem. After identifying a gesture, the teacher determined whether the gesture was correct. We sought to understand how learners at different skill levels employed correct gestures or other movements in mental calculations and how the sequence of these correct gestures compared to hand movements when manipulating a physical abacus. Learners are taught in abacus classes stylized or pedagogically correct hand movements using the index finger and thumb up or down in a single column, either as one hand or as two hands in adjacent columns. When learners perform mental arithmetic, they often spontaneously gesture in the air, mimicking these hand movements to move the beads on physical abacus. Each gesture has specific algorithmic meaning depending on context and is executed in a fixed sequence of gestures to solve a particular mathematical problem. However, learners sometimes do not use these correct hand movements or gestures and make mistakes, such as incorrectly moving their index fingers and thumbs, skipping or combining movements, or using fingers other than the index fingers and thumbs. We compared the proportion of correct gestures produced in the hands-free MA (spontaneous gesture) condition to the proportion of correct hand movements in the physical abacus condition. 
The proportion of correct hand movements or gestures was calculated as the total number of correct hand movements or gestures divided by the total number of fixed algorithmic steps. Inter-coder Reliability To assess inter-coder reliability for the coding of the abacus hand movements/gestures and that of the correct gestures, we randomly selected twelve children (three in each condition) for independent coding by a second trained coder. The coder was also one of the teachers at CMA and she was naive to our hypotheses. The inter-rater agreement was 0.96 (N = 5580; Cohen's kappa = 0.94, p < 0.001) for the coding of the number of abacus hand movements. The inter-rater agreement was 0.92 (N = 4680; Cohen's kappa = 0.94, p < 0.001) for the coding of the abacus hand gestures. As for the coding of the accuracy of the correct abacus hand movements, the inter-rater agreement was 0.88 (N = 5357; Cohen's kappa = 0.84, p < 0.001). With regard to the coding of the accuracy of the correct abacus hand gestures, the inter-rater agreement was 0.85 (N = 4305; Cohen's kappa = 0.82, p < 0.001). RESULTS We examined whether the facilitating role of gesture in solving arithmetic problems varied with the level of abacus skills and the difficulty of problems. We first examined how learners with different levels of abacus skills gestured, by looking at whether these gestures were correct, i.e., following the form of hand movements on an abacus taught in class. We next examined the proportions of correct answers. We investigated these proportions as functions of the method of calculation, level of abacus skills of learners, and level of problem difficulty. The accuracy rate, as the proportion of questions answered correctly, was calculated as the total number of questions answered correctly divided by the total number of questions. Abacus hand movements produced in the physical abacus condition and abacus gestures produced in the mental arithmetic condition were classified into two categories: correct and incorrect. The proportion of correct abacus hand movements or abacus gestures was calculated as the total number of correct abacus hand movements divided by the total number of abacus gestures possibly produced. Figure 1 shows the proportions of correct abacus hand movements or abacus gestures produced in the physical abacus and the mental arithmetic conditions. We ran a repeated measures ANOVA with the difficulty of problems as the independent within-subject variable, condition and skill level as the independent between-subject variables, and the proportion of correct abacus hand movements or abacus gestures as the dependent variable. We found a significant effect for the problem difficulty, F(2,194) = 135.13, p < 0.001, partial η 2 = 0.58, condition, F(1,97) = 32.29, p < 0.001, partial η 2 = 0.25, skill level, F(2,97) = 7.97, p < 0.001, partial η 2 = 0.14, skill level and condition interaction, F(2,97) = 3.25, p = 0.043, partial η 2 = 0.06, problem difficulty and skill level, F(4,194) = 12.06, p < 0.001, partial η 2 = 0.20, problem difficulty and condition, F(2,194) = 29.05, p < 0.001, partial η 2 = 0.23, and threeway interaction, F(4,194) = 3.15, p = 0.02, partial η 2 = 0.06. The post-hoc statistical power for this test with respect to alpha level of 0.05 was 0.95 (G * Power 3.1.9.2; Erdfelder et al., 2005). 
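The accuracy proportions and inter-coder agreement reported above are straightforward to reproduce. The short Python sketch below is our own illustration, not the study's analysis code, and the toy coder labels are invented; it computes a proportion-correct score and Cohen's kappa between two coders using scikit-learn.

# Illustrative computation of the scores used above; the data are made up.
from sklearn.metrics import cohen_kappa_score

def proportion_correct(n_correct, n_total):
    # Proportion of correct answers (or of correct gestures out of all algorithmic steps).
    return n_correct / n_total

# Two coders independently label each gesture as correct (1) or incorrect (0).
coder_a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
coder_b = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0]

raw_agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)
kappa = cohen_kappa_score(coder_a, coder_b)  # chance-corrected agreement
print(f"proportion correct (coder A) = {proportion_correct(sum(coder_a), len(coder_a)):.2f}")
print(f"raw agreement = {raw_agreement:.2f}, Cohen's kappa = {kappa:.2f}")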
Given the significant three-way interaction, we separately looked at the differences in the proportions of correct abacus hand movements or abacus gestures produced in the physical abacus and the mental arithmetic conditions among three groups of learners. As for beginning learners, we found a significant effect for the problem difficulty, F(2,70) = 46.05, p < 0.001, partial η² = 0.57, condition, F(1,35) = 11.19, p < 0.001, partial η² = 0.89, and interaction, F(2,70) = 6.83, p = 0.002, partial η² = 0.16. Bonferroni adjusted pairwise comparisons showed that beginning learners produced correct abacus hand movements more often in one-digit questions than in two-digit questions, p < 0.001, and three-digit questions, p < 0.001, in the physical abacus condition. However, there was no significant difference between two- and three-digit questions, p = 0.53. In the mental arithmetic condition, they produced correct abacus gestures in one- and two-digit questions more often than in three-digit questions, ps < 0.001. As for intermediate learners, we found a significant effect for the problem difficulty, F(2,92) = 3.80, p < 0.030, partial η² = 0.08, and condition, F(1,46) = 18.54, p < 0.001, partial η² = 0.29. The interaction was not significant, F(2,92) = 1.75, p < 0.180. Interestingly, they marginally produced more correct abacus hand movements or gestures when solving two-digit than one-digit and three-digit problems, ps < 0.060. They produced more correct abacus hand movements or gestures in the physical abacus condition than in the mental arithmetic condition, p = 0.001. As for advanced learners, we found no significant effects for the problem difficulty, F(2,76) = 0.30, p = 0.74, partial η² = 0.03, condition, F(1,38) = 1.09, p = 0.30, partial η² = 0.03, and interaction, F(2,76) = 0.31, p = 0.740, partial η² = 0.008. This suggested that advanced learners were capable of producing correct gestures or hand movements at all three levels of problem difficulty and in both conditions. We next examined the proportions of one-digit, two-digit, and three-digit questions answered correctly in three groups of learners in three different conditions. Figure 2 shows the performance of learners. For intermediate learners, there were significant effects for problem difficulty, F(2,134) = 310.68, p < 0.001, partial η² = 0.82, condition, F(2,67) = 21.15, p < 0.001, partial η² = 0.39, and interaction, F(4,134) = 26.47, p < 0.001, partial η² = 0.44. The proportion of one-digit questions answered correctly was comparable across different conditions, F(2,67) = 2.38, p = 0.10. However, there were significant differences in the two- and three-digit questions, two-digit: F(2,67) = 4.11, p = 0.021; three-digit: F(2,67) = 35.08, p < 0.001. Bonferroni adjusted pairwise comparisons showed that the proportion of two-digit questions answered correctly was greater in the physical abacus condition than in the hand movements prevented condition, p = 0.020. There was no significant difference between the physical abacus condition and the mental arithmetic condition, and between the mental arithmetic condition and the hand movements prevented condition, although participants in the mental arithmetic condition tended to perform better than those in the hand movements prevented condition. The proportion of three-digit questions answered correctly was greater in the physical abacus condition than in the mental arithmetic condition, p < 0.001, and in the hand movements prevented condition, p < 0.001.
The proportion was also higher in the mental arithmetic condition than in the hand movements prevented condition, p < 0.005. The findings in the advanced learners were similar to those in the intermediate learners. There were significant effects for problem difficulty, F(2,114) = 91.86, p < 0.001, partial η² = 0.62, condition, F(2,57) = 18.25, p < 0.001, partial η² = 0.39, and interaction, F(4,114) = 17.13, p < 0.001, partial η² = 0.38. The proportion of one-digit questions answered correctly was comparable across different conditions, F(2,57) = 0.99, p = 0.38. However, there were significant differences in the two- and three-digit questions, two-digit: F(2,57) = 4.23, p = 0.019; three-digit: F(2,67) = 21.77, p < 0.001. Bonferroni adjusted pairwise comparisons showed that the proportion of two-digit questions answered correctly was greater in the physical abacus condition than in the hand movements prevented condition, p = 0.020. There was no significant difference between the physical abacus condition and the mental arithmetic condition, and between the mental arithmetic condition and the hand movements prevented condition, p = 0.25. The proportion of three-digit questions answered correctly was greater in the physical abacus condition than in the mental arithmetic condition, p < 0.001, and in the hand movements prevented condition, p < 0.001. The proportion in the mental arithmetic condition was not different from that in the hand movements prevented condition, ns, although participants in the mental arithmetic condition tended to perform better than those in the hand movements prevented condition. Interpretations Our results showed that the beneficial effect of abacus gestures on the accuracy of calculations varied with learners' skill level and problem difficulty. There was a clear contrast in the gesturing behavior and calculation performance of learners at different skill levels. Learners first mastered how to calculate using a physical abacus and later benefitted from using abacus gestures, answering more questions correctly when allowed to gesture compared to not gesturing. This suggested that learners acquired the ability to calculate using a visual-spatial sequence, as the arrangement of abacus beads, followed by a motor-spatial sequence, as abacus gestures. At each skill level, the differences between using a physical abacus, gestures, or no gestures also varied according to problem difficulty. The results indicated that as demands on working memory increased with problem difficulty, gestures assisted mental arithmetic up to a point before learners resorted back to performing better on a physical abacus. Hand movement accuracy, especially for intermediate and advanced learners, also reflected motor learning. The difference in movement accuracy between the physical abacus and hands-free spontaneous gesture conditions showed a trend of increasing movement accuracy with skill level; beginning learners had low movement and gesture accuracy while intermediate and advanced learners had high accuracy. More specifically, beginning learners were able to perform calculations with a physical abacus even up to three-digit problems. However, they performed poorly in both hands-free and hands-restricted conditions to the point that at two-digit and three-digit problems, there was no significant difference between the two mental arithmetic conditions.
This showed that beginners were able to correctly solve some difficult problems when able to see the arrangement of beads on a physical abacus, but did not benefit much from using gestures to manipulate an imaginary abacus for mental calculations. This pattern was also reflected in the poor accuracy of beginners' hand movements. Movement accuracy was greatest with a physical abacus, especially for one-digit problems. But, under the hands-free mental arithmetic condition, gesture accuracy was equally poor for one-digit and two-digit problems, and almost entirely inaccurate for three-digit problems. This clearly indicated that beginners had not yet learned how to calculate using gestures and still needed the aid of a physical abacus. In contrast, gestures facilitated problem solving for intermediate learners in the hands-free condition, compared to both the physical abacus and hands-restricted conditions. The trend showed that one-digit problems were simple enough for intermediates to perform equally well in all conditions. At two-digit problems, intermediates could calculate just as well using gestures as with a physical abacus, but not when their hands were restricted. By three-digit problems the contrast was even clearer. Intermediates performed best with a physical abacus, indicating that intermediate learners' ability to use gestures assisted only up to this point. Yet notably, at three-digit problems, intermediates performed mental arithmetic significantly better when allowed to gesture compared to when their hands were restricted from moving. This clearly showed that intermediate learners had gained the ability to successfully use gestures as well as a physical abacus up to two digits, but not three digits. And, when the demands on working memory were highest at three-digit problems, gestures had a beneficial effect compared to not gesturing during mental arithmetic. This beneficial effect of gestures also seemed to be related to movement accuracy. Overall, intermediates' movement accuracy with gestures was almost as high as with a physical abacus. This trend continued for advanced learners, whose hand movement accuracy was just as high with spontaneous gestures as with a physical abacus. Interestingly, intermediates' movements were significantly more accurate at two-digit problems compared to simpler one-digit problems or more difficult three-digit problems. Studies of motor-skill learning and automaticity have shown that novice and intermediate learners perform better under conditions of online attentional monitoring of their movements, while advanced learners perform better when explicit attentional control is prevented (Beilock et al., 2004a). Intermediates may have had more accurate movements at two-digit problems compared to one-digit problems because they paid closer attention to their movements for the more difficult task. Moreover, their gesture accuracy declined at three digits compared to two digits because, as noted earlier, two digits was the threshold at which intermediates could use gestures as well as a physical abacus. The pattern of intermediate learners' calculation score and movement accuracy thus reflected motor learning. Intermediate learners had acquired the basic motor programs of abacus gestures and were able to apply gestures more reliably and effectively, but had not yet reached full automaticity in their movements. Advanced learners showed a mastery of mental arithmetic even without the use of gestures or a physical abacus.
At two digits, advanced learners performed equally well using just gesture compared to using a physical abacus. In contrast to intermediate learners, advanced learners performed equally well at three digits in the hands-free and hands-restricted conditions. This indicated that advanced learners could use and maintain a mental representation of the abacus even without gesture. In contrast to beginning and intermediate learners, advanced learners' gestural movements were highly accurate, regardless of problem difficulty. This indicated a higher degree of motor automaticity and internalization of the abacus representation for advanced learners compared to intermediate and beginning learners. Theoretical Implications These contrasts among beginning, intermediate, and advanced learners in calculation performance and movement accuracy support the interpretation that abacus co-thought gestures are learned as a motor skill that complements visual-spatial mental representation. A growing body of research shows that motor and visual imagery are complementary processes (Sirigu and Duhamel, 2001). Langner et al. (2014) have shown how the encoding of visual-spatial sequences of dots on the fingers of a schematic hand translated working memory into sequential motor action. Recall was less accurate for longer sequences, but initiated faster after long delays. An fMRI analysis showed that activation in motor areas, especially basal ganglia, predicted recall after long delays. This indicated that visual-spatial sequences were encoded as motor plans, possibly reinforced through mental rehearsal. Similar conversion of visual working memory to motor sequence has been shown in intracranial single-neuron studies of monkey premotor cortex (Ohbayashi et al., 2003). Smithson and Nicoladis (2014) have shown that iconic gesture production facilitates visual-spatial working memory activation during complex visual distractor inference. Additional behavioral and fMRI studies have demonstrated that visual-spatial and motor-spatial sequences are acquired at different rates and skill levels. Bapi et al. (2000, 2006) found that beginning learners on a square grid key-pressing task quickly acquired the visual-spatial sequence as coordinates on a rotated visual display. The spatial sequence is acquired first as it is effector-unspecific, but requires maximum attention or working memory. However, intermediate and advanced learners concurrently learned the visual and motor sequence as motor coordinates on a rotated input keypad. Moreover, they showed significant reduction in reaction time under the motor compared to visual conditions. This indicated that the motor sequence was slower to acquire but quicker to perform once mastered. Hikosaka et al. (1999, 2002) have proposed two parallel cortical systems that independently code visual and motor coordinates. Visuospatial representation forms a loop linking frontoparietal cortex with the associative regions of basal ganglia (anterior striatum) and cerebellum. Motor representation links the Supplementary Motor Area with the motor regions of basal ganglia (posterior striatum) and cerebellum. Wymbs et al. (2012) suggest that Hikosaka's motor loop may be related to the concatenation of chunks (p. 934) and found increased fMRI BOLD activity in the bilateral putamen of the basal ganglia during the concatenation of motor schemas.
Modular Selection and Identification Controller (MOSAIC) theory stipulates that multiple internal models of novel tools are acquired and modularly organized in the cerebellum. After learning an internal model in the cerebellum, the output is sent to premotor regions (Tamada et al., 1999). Imamizu et al. (2000) have further shown that after short but intensive training on a rotated joystick task, cerebral cortex activation decreased in the prefrontal and parietal regions but increased in the premotor and Supplementary Motor Area. We suggest that abacus training may provide an additional experimental paradigm for further research on multimodal representation in the cerebellum and basal ganglia. Analogous to grouping strategies for visual working memory, motor sequences can be grouped or "chunked" as gestures. Wymbs et al. (2012) has shown that visual-spatial sequences can be concatenated as motor-spatial "chunks, " which are executed as a series of schemas. Inspired by musical notation, Wymbs developed a visual-motor sequence task using four fingers on one hand to show that "chunking" of individual movements into a single motor schema reduces the memory load during performance. Chunking forms hierarchical memory structures to support increased speed and accuracy during performance. Single motor schema can be concatenated into a series of motor schemas as longer operations. It is possible that correct abacus gestures form chunked motor sequences representing arithmetic operations, thereby facilitating mental calculations. Once acquired as motor-chunks, learners are then able execute combinations of these gestures in a series to perform more complex calculations. Such conceptual and motor chunking may reduce cognitive load. Skilled learners, who have acquired abacus gestures, thus need only to decide on which gesture to execute, given the arrangement of beads. This reduces working memory load when calculating because changing the arrangement of beads is executed as a motor sequence. While it takes time to learn how to use gestures to do mental arithmetic without the visual assistance of a physical abacus, advanced learners can execute the calculation easily once they have acquired the learned motor-sequence. Previous fMRI studies of abacus mental arithmetic have shown greater activation in non-experts compared to experts of frontal-subcortical areas related to the global workspace of executive function (Chen C.L. et al., 2006;Chen F.Y. et al., 2006). In contrast, experts had less activation of executive areas, but more involvement of right dorsal premotor cortex during mental calculation. This suggests that non-experts, who have not automated abacus gestures as motor-chunks, have a greater working memory load related to executive function. Experts, on the other hand, are able to execute each arithmetic operation without paying close attention to the operation's physical execution. Additional studies of a variety of motor and higher cognitive tasks indicate that expert learners benefit most from mental rehearsal and imaginary practice (Cooper et al., 2001). Mental or covert rehearsal relies on limited working memory without the aid of an external tool like the abacus. Learners with a high level of prior knowledge or skill benefit most from mental practice because they have acquired schemas that free working memory. Experts are thus able to focus on rehearsing or automating these schemas and better able to combine them. 
Educational Applications and Future Directions Recent studies of abacus mental arithmetic and task switching have found that abacus training also improves higher-order math abilities beyond basic arithmetic, multiplication, and division. Long-term learners perform significantly better than untrained peers on more abstract tasks including algebraic number filling (e.g., 4 + _ = 3 + 7), number sequence recognition, numerical working memory, and visual-spatial counting and matching (Hu et al., 2011; Li et al., 2013; Wang et al., 2015). Hence, abacus gestures may promote learning not just as a physical action but also support abstract representation (Novack et al., 2014). Spontaneous gestures may thus provide multimodal representations of number complements and relationships that allow learners to grasp complex calculations. This is consistent with previous non-abacus studies that have shown that spontaneous gestures generally aid in learning mathematics (Goldin-Meadow et al., 2009). Notably, our study focused on 6- to 8-year-old children at an early stage of their mathematics education. It would be useful to test if spontaneous abacus gestures not only have a beneficial effect on mathematics performance but also on the rate of mathematics learning. CONCLUSION Abacus co-thought gestures have a clear beneficial effect for maintaining a mental representation of the abacus while performing mental arithmetic. These gestures are learned as specific movements using the index fingers and thumbs for moving abacus beads according to algorithms for complements of 5 and 10. Learners first acquire a basic skill in using a physical abacus and then acquire proficiency in using abacus gestures. The results indicate that this beneficial effect and the accuracy of abacus gestures are related to motor learning. Beginners benefit little from using abacus gestures and their movement accuracy is poor. Intermediates perform mental arithmetic better when allowed to spontaneously gesture compared to when their hands are restricted. According to the Gesture as Simulated Action theory, such spontaneous gestures are used when the demands of working memory reach a threshold. Advanced learners' mental abacus scores and gesture accuracy were comparatively high, regardless of whether they gestured or not. This indicates that they had automated the motor programs of abacus gestures. Such automated motor programs can be executed with little conscious effort or demand on working memory. These results are consistent with previous findings on mental arithmetic that found that learners at different skill levels improved in their use of visual strategies. Moreover, our findings suggest that abacus gestures act as motor programs that complement such visual-spatial representation. This interpretation is supported by behavioral and neurophysiological studies which indicate that visual-spatial and motor-spatial learning are two complementary systems.
10,719.8
2018-08-07T00:00:00.000
[ "Psychology", "Education", "Mathematics" ]
Sorption of Lead (II) Ions on Activated Coconut Husk Background: In recent years, various toxic chemicals/compounds have been widely detected at dangerous levels in drinking water in many parts of the world, posing a variety of serious health risks to human beings. One of these toxic chemicals is lead, so this paper aimed to evaluate the efficiency of coconut husk as a cheap adsorbent for the removal of lead under different conditions. Methods: In the spring of 2015, batch studies were performed in the laboratory (Branch of Hamadan, Islamic Azad University) to evaluate the influences of various experimental parameters such as pH, initial concentration, adsorbent dosage, contact time and temperature on the adsorption capacity of coconut husk for the removal of lead from aqueous solution. Results: Optimum conditions for Pb (II) removal were pH 6, an adsorbent dosage of 1 g/100 mL of solution and an equilibrium time of 120 min. The adsorption isotherm was also affected by temperature, since the adsorption capacity increased when raising the temperature from 25 to 45 °C. The equilibrium adsorption isotherm was better described by the Freundlich adsorption isotherm model. Conclusion: It is evident from the literature survey that coconut-based biosorbents have shown good potential for the removal of various aquatic pollutants. Coconut husk-based activated carbon can be a promising adsorbent for the removal of Pb from aqueous solutions. INTRODUCTION Water contamination with heavy metals is a serious problem due to the toxicity of heavy metals [1]. The presence of heavy metals in water sources is becoming a major environmental and public health issue [2]. Heavy metals are of particular concern because of their persistence and toxicity in the environment. About 20 metals are classified as heavy metals, about half of which are harmful to human health because of their toxicity [3]. As one of the most important toxic heavy metals, lead causes severe damage to the liver, nervous system, reproductive system, kidneys and brain in humans. Severe exposure to lead has been associated with sterility, stillbirths, abortion and neonatal deaths. Industrial activities, such as battery manufacturing, metal plating and finishing, printing and pigment, ammunition, soldering material, ceramic and glass industries, and iron and steel manufacturing units, produce large quantities of wastewater containing lead [4]. Common purification methods for the removal of toxic metals, including membrane separation [5], electrochemical precipitation, fertilization and adsorption, emulsion pertraction, and ion exchange, differ with respect to cost, complexity and efficiency [4].
Over the last few decades, biosorption has emerged as an economic and efficient alternative for water and wastewater treatment utilizing natural, cheap, renewable and abundant materials. At present, the biosorption literature has been enriched by a vast number of papers published in different journals [6]. Among different purifying methods, adsorption is one of the most attractive techniques used for removing heavy metals from wastewater [7]. Biosorption is an emerging technique for wastewater treatment utilizing biomaterials such as agricultural wastes [6]. Biomass and other waste materials may also offer a cheap, available and renewable additional supply of activated carbon. These waste materials have low or no economic cost and often present a disposal problem. Converting these low-cost byproducts into activated carbon makes them valuable, helps solve the waste disposal problem, and provides a potentially commercial activated carbon [8]. In recent years, some researchers have been searching for more cost-effective methods to obtain the carbon. As carbon can theoretically be obtained from any carbonaceous material rich in carbon, some low-value and easily available agricultural by-products, such as peanut shell, rice husk, cassava peel, sawdust, olive kernels and wheat straw, have been utilized as precursors of activated carbon to remove heavy metals. More recently, other materials such as algal bloom waste and durian peel have also been explored in order to prepare activated carbon [9]. Babassu coconut mesocarp, an abundant agricultural lignocellulosic by-product, is a fibrous residue left after producing the nuts [10]. Abundant availability, high adsorption capacity, cost effectiveness and renewability are the major factors making coconut husk a commercial alternative for water and wastewater treatment [6]. To make better use of this cheap and vast agricultural waste, it is proposed to convert coconut husk (CH) into activated carbon. Conversion of coconut husk to activated carbon serves a double purpose. First, unvalued agricultural waste is converted into a useful, valuable adsorbent, and second, the use of agricultural byproducts represents a high-potential source of adsorbents, which will contribute to solving part of the wastewater treatment problem in many countries [11]. In the present research, the efficiency of activated coconut husk for the removal of Pb from aqueous solution has been studied and the results have been analyzed.
Preparation of Activated Carbon The coconuts were bought in the spring of 2015, and the coconut husk (CH) was then separated, cleaned and thoroughly washed with distilled water, and then dried in an oven. This dried coconut husk was then treated with H2O2 solution for 24 h to oxidize adhering organic impurities, washed well with double distilled water to remove the excess hydrogen peroxide, and then dried at 110 °C for 1 h under vacuum [12]. The dried coconut husk fibers were activated at 700 °C for 2 h. The material was ground and sieved to the desired particle sizes (<106, 106-125, 125-180, 180-212, 212-250, 250-300, and >300 BSS mesh). Finally, granules of activated coconut husk (ACH) thus obtained were stored in separate vacuum desiccators until required [13]. All experiments were carried out in the laboratory of the Branch of Hamadan, Islamic Azad University. Chemicals A 1000 mg/L Pb (II) stock solution was prepared by dissolving a weighed quantity of Pb(NO3)2 in distilled water. All samples and solutions for adsorption and analysis were prepared by suitable dilution of the freshly prepared stock solution. The pH measurements were made using a Metrohm pH meter. Test solution pH values were adjusted using reagent grade dilute H2SO4 (0.1 N) and NaOH (0.1 N). Study of Adsorption Isotherms Experiments were conducted in batch mode. One hundred mL samples of aqueous solutions of metal ions at different initial concentrations (100-800 mg/L) and at adjusted pH were transferred into 250 mL Erlenmeyer flasks. Specified amounts of the coconut husk were added to these solutions. After 24 h, the solutions were filtered and the Pb ion concentrations in the filtrate were measured. Concentrations of Pb ions in the samples were determined by ICP-OES. The concentration of metal retained in the sorbent phase, qe (mg/g), was calculated from the expression: qe = (C0 - Ce)V/W (1), where C0 and Ce are the initial and final (equilibrium) concentrations of the metal ion in solution (mg/L), W is the mass of the sorbent (g) and V is the solution volume (L). All experiments were carried out under constant conditions. Throughout the study, the initial metal concentrations varied from 100 to 800 mg/L, the pH from two to six, the initial biomass concentrations from 2.5 to 4 g/100 mL, the temperature from 25 to 45 °C, and the contact time from 10 to 120 minutes [14]. Langmuir Isotherm The Langmuir isotherm supposes monolayer adsorption onto a surface containing a limited number of adsorption sites of uniform adsorption energies with no transmigration of adsorbate in the plane of the surface [15]. The linear form of the Langmuir isotherm is given by the following equation: Ce/qe = 1/(Qo b) + Ce/Qo (2), where Ce is the equilibrium concentration of the adsorbate (mg/L), qe is the amount of adsorbate adsorbed per unit mass of adsorbent (mg/g), and Qo and b are Langmuir constants related to adsorption capacity and rate of adsorption, respectively [15]. Freundlich Isotherm The Freundlich isotherm, on the other hand, assumes heterogeneous surface energies, in which the energy term of the Langmuir equation varies as a function of the surface coverage [11]. The well-known Freundlich isotherm is given by the following equation: ln qe = ln KF + (1/n) ln Ce (3), where qe is the amount of adsorbate adsorbed per unit mass of adsorbent (mg/g), Ce is the equilibrium concentration of the adsorbate (mg/L), and KF and n are Freundlich constants, with n giving an indication of how favorable the adsorption process is.
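As a worked illustration of Eq. (1), the short Python sketch below (our own example; the concentrations are hypothetical, not measured values from this study) computes the equilibrium uptake qe for a 100 mL batch containing 1 g of adsorbent.

# Worked example of Eq. (1): qe = (C0 - Ce) * V / W. All numbers are hypothetical.
def equilibrium_uptake(c0_mg_per_l, ce_mg_per_l, volume_l, mass_g):
    # Amount of metal retained on the sorbent, in mg per g of adsorbent.
    return (c0_mg_per_l - ce_mg_per_l) * volume_l / mass_g

# 100 mg/L initial Pb(II), 35 mg/L left at equilibrium, 0.1 L of solution, 1 g of husk.
print(equilibrium_uptake(100.0, 35.0, 0.1, 1.0))  # 6.5 mg/g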
RESULTS Adsorption of heavy metal ions onto the surface of a biological material was affected by the following factors: biomass concentration, pH, metal ion concentration, time and temperature. Effect of Biomass Concentration The effect of the coconut husk (CH) quantity on the removal of Pb ions was investigated by adding different amounts of CH in the range 1-4 g at a temperature of 25 ± 0.1 °C. The number of sites accessible for biosorption depends upon the amount of the adsorbent. The effect of the coconut husk concentration on the metal removal efficiency is shown in Figure 1. Pb ion removal increased linearly with the increasing amount of biosorbent up to a biomass concentration of 4 g/100 mL. Beyond this dosage, the increase in removal efficiency was lower. Effect of pH The effect of the solution pH on the adsorption of Pb ions onto coconut husk was assessed at different values, ranging from 2 to 6, with a stirring time of 120 min. In these experiments, the initial metal concentration and the dose of adsorbent were set at 100 mg/L and 1 g, respectively, for all batch tests. As shown in Figure 2, the metal uptake increased with increasing pH in the range of 2 to 6. At pH values of about 6, sorption capacities reached their maximum values. Effect of Contact Time Experiments were carried out for various contact times with a fixed adsorbent dose, pH and concentration. Typical biosorption kinetics exhibits a fast initial uptake, followed by a slower process. The highest level of heavy metal removal took place within the first 90 min (Figure 3). After this time, the amount of bound metal ions changed only slowly during the course of the process. Effect of Temperature To study the effect of temperature on Pb (II) adsorption over CH, three temperatures (25, 35 and 45 °C) were tested (Figure 4). The results showed that the adsorption process increased with increasing temperature from 25 to 45 °C. The quantities adsorbed passed through a maximum at 45 °C and then started to decrease as the temperature increased to 50 °C. Effect of Initial Concentration of Pb The percentage removal of Pb (II) ions by the adsorbent initially decreased slowly with increasing Pb ion concentration up to 600 mg/L and then declined rapidly when the Pb ion concentration reached 800 mg/L (Figure 5). Adsorption Isotherms The adsorption isotherm demonstrates how the adsorbate molecules distribute between the liquid phase and the solid phase when the adsorption process reaches equilibrium. To find a suitable model that can be used for design, different isotherm models were analyzed. The adsorption isotherm study was carried out on two isotherm models: the Langmuir and Freundlich isotherm models. The correlation coefficient R2 is used to describe the applicability of the isotherm equation. For the Langmuir isotherm, when Ce/qe is plotted against Ce, a straight line with a slope of 1/Qo is obtained (Figure 6). The Langmuir constants b and Qo were calculated from Eq. (2). Figure 6. Langmuir adsorption isotherm of Pb (II) onto CH at 25 °C. For the Freundlich isotherm, the plot of ln qe against ln Ce gives a straight line with a slope of 1/n, as presented in Figure 7, which showed that the adsorption of Pb (II) on the CH was favorable. Therefore, the Freundlich constants KF and n were calculated from Eq. (3). The correlation coefficient (R2) of 0.99 showed that the adsorption process of Pb ions on the prepared activated carbon was well fitted by the Freundlich isotherm.
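The isotherm constants discussed above follow directly from the linearized plots: the Langmuir plot of Ce/qe against Ce has slope 1/Qo and intercept 1/(Qo b), and the Freundlich plot of ln qe against ln Ce has slope 1/n and intercept ln KF. The Python sketch below is our own illustration with invented data points, not the study's measurements, and simply shows how the constants are recovered from the fitted slope and intercept.

# Linearized isotherm fits as described above; the data points are made up.
import numpy as np
from scipy.stats import linregress

ce = np.array([20.0, 60.0, 140.0, 260.0, 420.0])  # equilibrium concentration, mg/L
qe = np.array([18.0, 33.0, 45.0, 55.0, 62.0])     # equilibrium uptake, mg/g

# Langmuir: Ce/qe = 1/(Qo*b) + Ce/Qo  ->  slope = 1/Qo, intercept = 1/(Qo*b)
lang = linregress(ce, ce / qe)
Qo = 1.0 / lang.slope
b = lang.slope / lang.intercept
print(f"Langmuir: Qo = {Qo:.1f} mg/g, b = {b:.4f} L/mg, R^2 = {lang.rvalue ** 2:.3f}")

# Freundlich: ln qe = ln KF + (1/n) * ln Ce  ->  slope = 1/n, intercept = ln KF
freu = linregress(np.log(ce), np.log(qe))
n = 1.0 / freu.slope
KF = np.exp(freu.intercept)
print(f"Freundlich: KF = {KF:.2f}, n = {n:.2f}, R^2 = {freu.rvalue ** 2:.3f}")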
DISCUSSION As presented in Figure 1, the Pb uptake increased as the amount of biosorbent increased up to a biomass concentration of 4 g/100 mL. Increasing the biosorbent dosage caused a rise in the biomass surface area and in the number of potential binding sites [24]. In a comparable study, the effect of the amount of adsorbent on the rate of uptake of Cr (III) was examined at 0.5, 1, and 2 g, and the uptake increased with increasing amount of the adsorbent material [32]. pH can affect the protonation of functional groups (i.e., carboxyl, phosphate and amino groups) on the biosorbent phase, as well as the chemistry of the metal (i.e., its solubility). When the pH decreased, the concentration of protons increased, creating competition between H+ and metal ions for binding sites. Protonated active sites were unable to bind metal ions, so the free ions remained in the solution [14]. Since biosorption is a reversible process, decreasing pH would result in deprotonation. This feature is applied in the regeneration of biosorbents. Another explanation is that with increasing pH, the solubility of metal complexes decreases [33]. A description of the mechanism of biosorption and the parameters affecting its performance is necessary for establishing the operating conditions for biosorption itself and for recycling the solid phase [34]. It is clear from Figure 3 that the uptake of Pb (II) increases slowly with the lapse of time and reaches saturation in 2 h. At the initial contact time, the rate of adsorption was fast due to the large amount of available adsorbent surface. The lower adsorption yield at later times could be due to two reasons. First, the saturation of sites reduced the availability of active surface sites on the adsorbent. Second, the remaining empty surface sites were difficult to occupy due to the repulsive forces of the metal ions already adsorbed on the solid phase [35]. Similar results were observed where the effect of contact time on the removal of Pb (II) and Hg from aqueous solution by rice husk ash showed that adsorption increased with increasing contact time [36]. The equilibrium uptake of metal ions onto coconut husk was affected by temperature and increased with increasing temperature up to 45 °C. This could suggest the superiority of physical sorption over chemical sorption in this range. High temperatures may damage active binding sites and cause a decrease in adsorption. The process involved the conduction of metal ions from the bulk liquid to the solid phase and the sorption of metal ions onto the biosorbent surface, similar to a previous study [14]. The uptake of the Pb ions was studied separately over CH at initial Pb concentrations ranging from 100 to 800 mg/L. The percentage removal decreased slowly with increasing initial concentration up to about 600 mg/L and beyond this, it declined rapidly. At lower concentrations, Pb would interact with the binding sites and thus 100% adsorption occurs. At higher concentrations, as the binding sites become saturated, more Pb ions are left un-adsorbed in solution. This indicates that energetically less favorable sites become involved with increasing ion contents in the aqueous solution [3].
Adsorption isotherm studies are of fundamental importance in determining the adsorption capacity of Pb(II) onto coconut husk (CH) and in diagnosing the nature of the adsorption. The correlation coefficient R2 of 0.99 showed that the adsorption data of Pb(II) on the prepared activated carbon were well fitted by the Freundlich isotherm. This implies that the adsorption of Pb(II) ions occurs on a heterogeneous surface. The activated carbon prepared in this study had a relatively high adsorption capacity of 65 mg/g compared with some data reported in other papers. The large adsorption capacity of the activated carbon prepared in this work could be due to its relatively large surface area and its special mesoporous structure. The differences between the adsorption capacity of coconut husk and those of the other adsorbents listed in Table 1 might be due to variation in the original nature of the precursors, the processes used to produce the adsorbents, as well as the other conditions applied during the adsorption processes. CONCLUSION Coconut shells are an environmentally friendly potential biosorbent for heavy metals. This study examined the efficiency of this sorbent in the removal of Pb(II) ions from aqueous solution. Biosorption is affected by various factors, such as biomass concentration, pH and temperature. This study demonstrated that under optimum conditions (pH = 6.0, biomass concentration 1 g/100 mL, temperature 25 °C and contact time 120 min), a maximum biosorption capacity of 65 mg/g was obtained with the Langmuir model for Pb(II) ions. The experimental values were evaluated according to the Langmuir and Freundlich isotherms. Both models fit the experimental data, but the Freundlich model is more suitable. Comparison of this work with other biosorbents shows that coconut husk biomass is an efficient biosorbent for this metal ion. Because the carbon is easily prepared from an agricultural by-product such as coconut shells, it would be useful for the economic treatment of polluted water containing heavy metals. Figure 2. The adsorption of Pb onto CH as a function of the equilibrium pH (adsorbent dose: 1 g/100 mL; equilibrium time: 120 min; temperature: 25 °C). Figure 3. The adsorption of Pb onto CH as a function of time (adsorbent dose: 1 g/100 mL; equilibrium pH: 6; temperature: 25 °C). Figure 4. The adsorption of Pb onto CH as a function of temperature (adsorbent dose: 1 g/100 mL; equilibrium time: 120 min; pH: 6). Figure 5. The adsorption of Pb onto CH as a function of the initial concentration (adsorbent dose: 1 g/100 mL; equilibrium time: 120 min; pH: 6). Table 1. Adsorption capacities for some activated carbons.
3,900.4
2016-10-15T00:00:00.000
[ "Engineering" ]
Embedding Words in Non-Vector Space with Unsupervised Graph Learning It has become a de-facto standard to represent words as elements of a vector space (word2vec, GloVe). While this approach is convenient, it is unnatural for language: words form a graph with a latent hierarchical structure, and this structure has to be revealed and encoded by word embeddings. We introduce GraphGlove: unsupervised graph word representations which are learned end-to-end. In our setting, each word is a node in a weighted graph and the distance between words is the shortest path distance between the corresponding nodes. We adopt a recent method learning a representation of data in the form of a differentiable weighted graph and use it to modify the GloVe training algorithm. We show that our graph-based representations substantially outperform vector-based methods on word similarity and analogy tasks. Our analysis reveals that the structure of the learned graphs is hierarchical and similar to that of WordNet, the geometry is highly non-trivial and contains subgraphs with different local topology. Introduction Effective word representations are a key component of machine learning models for most natural language processing tasks. The most popular approach to represent a word is to map it to a low-dimensional vector (Mikolov et al., 2013b;Pennington et al., 2014;Bojanowski et al., 2017;Tifrea et al., 2019). Several algorithms can produce word embedding vectors with distances or dot products capturing semantic relationships between words; the vector representations can be useful for solving numerous NLP tasks such as word analogy (Mikolov et al., 2013b), hypernymy detec-tion (Tifrea et al., 2019) or serving as features for supervised learning problems. While representing words as vectors may be convenient, it is unnatural for language: words form a graph with a hierarchical structure (Miller, 1995) that has to be revealed and encoded by unsupervised learned word embeddings. A possible step towards this can be made by choosing a vector space more similar to the structure of the data: for example, a space with hyperbolic geometry (Dhingra et al., 2018;Tifrea et al., 2019) instead of commonly used Euclidean (Mikolov et al., 2013b;Pennington et al., 2014;Bojanowski et al., 2017) was shown beneficial for several tasks. However, learning data structure by choosing an appropriate vector space is likely to be neither optimal nor generalizable: Gu et al. (2018) argue that not only are different data better modelled by different spaces, but even for the same dataset the preferable type of space may vary across its parts. It means that the quality of the representations obtained from vector-based embeddings is determined by how well the geometry of the embedding space matches the structure of the data. Therefore, (1) any vectorbased word embeddings inherit limitations imposed by the structure of the chosen vector space; (2) the vector space geometry greatly influences the properties of the learned embeddings; (3) these properties may be the ones of a space geometry and not the ones of a language. In this work, we propose to embed words into a graph, which is more natural for language. In our setting, each word is a node in a weighted undirected graph and the distance between words is the shortest path distance between the corresponding nodes; note that any finite metric space can be represented in such a manner. 
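This last observation can be made concrete: given any finite metric, the complete graph whose edge weights are the pairwise distances realises exactly that metric as shortest-path distances, because the triangle inequality rules out shorter indirect paths. A minimal sketch on a toy three-point metric (illustrative values only):

```python
import itertools

# Toy finite metric on three points (illustrative values satisfying the triangle inequality).
D = {("a", "b"): 1.0, ("a", "c"): 2.0, ("b", "c"): 1.5}
dist = {**D, **{(y, x): v for (x, y), v in D.items()}}
points = ["a", "b", "c"]

def shortest_path(u, v):
    """Shortest path in the complete graph whose edge weights are the metric distances."""
    best = dist[(u, v)] if u != v else 0.0
    # With three points the only candidate detours go through the single remaining point;
    # in general one would run Dijkstra, but the triangle inequality makes detours useless.
    for w in points:
        if w not in (u, v):
            best = min(best, dist[(u, w)] + dist[(w, v)])
    return best

assert all(shortest_path(u, v) == (dist[(u, v)] if u != v else 0.0)
           for u, v in itertools.product(points, points))
```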
We adopt a recently introduced method which learns a representation of data as a weighted graph (Mazur et al., 2019) and use it to modify the GloVe algorithm for unsupervised word embeddings (Pennington et al., 2014). The former enables simple end-to-end training by gradient descent; the latter, learning a graph in an unsupervised manner. Using the fixed training regime of GloVe, we vary the choice of a distance: the graph distance we introduced, as well as the ones defined by vector spaces: Euclidean (Pennington et al., 2014) and hyperbolic (Tifrea et al., 2019). This allows for a fair comparison of vector-based and graph-based approaches and analysis of limitations of vector spaces. In addition to improvements on a wide range of word similarity and analogy tasks, analysis of the structure of the learned graphs suggests that graph-based word representations can potentially be used as a tool for language analysis. Our key contributions are as follows: • we introduce GraphGlove, graph word embeddings; • we show that GraphGlove substantially outperforms both Euclidean and Poincaré GloVe on word similarity and word analogy tasks; • we analyze the learned graph structure and show that GraphGlove has a hierarchical, WordNet-like structure and highly non-trivial geometry containing subgraphs with different local topology. Graph Word Embeddings For a vocabulary V = {v_0, v_1, . . . , v_n}, we define graph word embeddings as an undirected weighted graph G(V, E, w). In this graph, • V is a set of vertices corresponding to the vocabulary words; • E = {e_0, e_1, . . . , e_m} is a set of edges; • w(e_i) are non-negative edge weights. When embedding words as vectors, the distance between words is defined as the distance between their vectors; the distance function is inherited from the chosen vector space (usually Euclidean). For graph word embeddings, the distance between words is defined as the shortest path distance between the corresponding nodes of the graph: d_G(v_i, v_j) = min_{π ∈ Π_G(v_i, v_j)} Σ_{e ∈ π} w(e), where Π_G(v_i, v_j) is the set of all paths from v_i to v_j over the edges of G. To learn graph word embeddings, we use a recently introduced method for learning a representation of data in the form of a weighted graph (Mazur et al., 2019) and modify the training procedure of GloVe (Pennington et al., 2014) for learning unsupervised word embeddings. We give the necessary background in Section 2.1 and introduce our method, GraphGlove, in Section 2.2. 2.1 Background 2.1.1 Learning Weighted Graphs PRODIGE (Mazur et al., 2019) is a method for learning a representation of data in the form of a weighted graph G(V, E, w). The graph requires (i) inducing a set of edges E from the data and (ii) learning edge weights. To induce a set of edges, the method starts from some sufficiently large initial set of edges and, along with edge weights, learns which of the edges can be removed from the graph. Formally, it learns G(V, E, w, p), where in addition to a weight w(e_i), each edge e_i has an associated Bernoulli random variable b_i ∼ Bern(p(e_i)); this variable indicates whether an edge is present in G or not. For simplicity, all random variables b_i are assumed to be independent, and the joint probability of all edges in the graph can be written as p(G) = ∏_{i=0}^{m} p(e_i). Since each edge is present in the graph with some probability, the distance is reformulated as the expected shortest path distance E_{G ∼ p(G)}[d_G(v_i, v_j)], where d_G(v_i, v_j) is computed efficiently using Dijkstra's algorithm.
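A minimal sketch of the shortest-path distance underlying these graph embeddings is given below; it uses a deterministic toy graph with illustrative weights and plain Dijkstra, and does not reproduce the stochastic-edge machinery or variance-reduction heuristics of PRODIGE.

```python
import heapq
from collections import defaultdict

def shortest_path_distance(adj, source, target=None):
    """Single-source Dijkstra over a weighted word graph.

    adj maps a node to a list of (neighbour, weight) pairs with weight >= 0.
    Returns the distance to `target`, or the full distance dict if target is None.
    """
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        if u == target:
            return d
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist if target is None else float("inf")

# Toy word graph (weights are illustrative, not learned values).
adj = defaultdict(list)
for u, v, w in [("cat", "dog", 0.4), ("dog", "animal", 0.3),
                ("cat", "animal", 0.9), ("animal", "entity", 0.5)]:
    adj[u].append((v, w)); adj[v].append((u, w))

print(shortest_path_distance(adj, "cat", "animal"))  # 0.7, via "dog"
```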
The probabilities p(e i ) are used only in training; at test time, edges with probabilities less than 0.5 are removed, and the graph G(V, E, w, p) can be treated as a deterministic graph G(V, E, w). Training. Edge probabilities p(e i ) = p θ (e i ) and weights w(e i ) = w θ (e i ) are learned by minimizing the following training objective: Here L(G, θ) is a task-specific loss, and is the average probability of an edge being present. The second term is the L 0 regularizer on the number of edges, which penalizes edges for being present in the graph. Training with such regularization results in a graph where an edge becomes either redundant (with probability close to 0) or important (with probability close to 1). To propagate gradients through the second term in (3), the authors use the log-derivative trick (Glynn, 1990) and Monte-Carlo estimate of the resulting gradient; when sampling, they also apply a heuristic to reduce variance. For more details on the optimization procedure, we refer the reader to the original paper (Mazur et al., 2019). Initialization. An important detail is that training starts not from the set of all possible edges for a given set of vertices, but from a chosen subset; this subset is constructed using task-specific heuristics. The authors restrict training to a subset of edges to make it feasible for large datasets: while the number of all edges in a complete graph scales quadratically to the number of vertices, the initial subset can be constructed to scale linearly with the number of vertices. GloVe GloVe (Pennington et al., 2014) is an unsupervised method which learns word representations directly from the global corpus statistics. Each word v i in the vocabulary V is associated with two vectors w i andw i ; these vectors are learned by minimizing Here X i,j is the co-occurrence between words v i and v j ; b i andb j are trainable word biases, and f (X i,j ) is a weight function: f (X i,j ) = min(1, [ X i,j xmax ] α ) with x max = 100 and α = 3/4. The original GloVe learns embeddings in the Euclidean space ;Poincaré GloVe (Tifrea et al., 2019) adapts this training procedure to hyperbolic vector spaces. This is done by replacing w T iw j in for- Table 1). Our Approach: GraphGlove We learn graph word embeddings within the general framework described in Section 2.1.1. Therefore, it is sufficient to (i) define a task-specific loss L(G, θ) in formula (3), and (ii) specify the initial subset of edges. Loss function We adopt GloVe training procedure and learn edge weights and probabilities directly from the co-" " in the loss term occurrence matrix X. We define L(G, θ) by modifying formula (4) for weighted graphs: 1. replace w T iw j with either graph distance or graph dot product as shown in Table 1 (see details below); 2. since we learn one representation for each word in contrast to two representations learned by GloVe, we setb j = b j . Distance. We want negative distance between nodes in a graph to reflect similarity between the corresponding words; therefore, it is natural to replace w T iw j with the graph distance. The resulting loss L(G, θ) is: Dot product. A more honest approach would be replacing dot product w T iw j with a "dot product" on a graph. To define dot product of nodes in a graph, we first express the dot product of vectors in terms of distances and norms. Let w i , w j be vectors in a Euclidean vector space, then Now it is straightforward to define the dot product 2 of nodes in our weighted graph: where d(v i , v j ) is the shortest path distance. 
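The graph "dot product" used in the loss above mirrors the Euclidean polarisation identity w_i·w_j = ½(||w_i||² + ||w_j||² − ||w_i − w_j||²), with norms replaced by shortest-path distances to the dedicated zero node mentioned next. A minimal sketch (the distance function d and the name of the zero node are assumptions):

```python
def graph_dot_product(d, vi, vj, zero="<zero>"):
    """'Dot product' of two graph nodes, defined from shortest-path distances.

    Follows the Euclidean identity <w_i, w_j> = 0.5*(||w_i||^2 + ||w_j||^2 - ||w_i - w_j||^2),
    with ||w|| replaced by the distance d(w, zero) to a dedicated zero node.
    `d` is any shortest-path distance function d(u, v) -> float (e.g. Dijkstra-based).
    """
    return 0.5 * (d(vi, zero) ** 2 + d(vj, zero) ** 2 - d(vi, vj) ** 2)
```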
Note that dot product (8) contains distances to a zero element; thus in addition to word nodes, we also need to add an extra "zero" node in a graph. This is not necessary for the distance loss (5), but we add this node anyway to have a unified setting; a model can learn to use this node to build paths between other nodes. All loss functions are summarized in Table 1. Initialization We initialize the set of edges by connecting each word with its K nearest neighbors and M randomly sampled words. The nearest neighbors are computed as closest words in the Euclidean GloVe embedding space, 3 random words are sampled uniformly from the vocabulary. We initialize biases b i from the normal distribution N (0, 0.01), edge weights by the cosine similarity between the corresponding GloVe vectors, and edge probabilities with 0.9. ; for both, we use the original implementation 4 with recommended hyperparameters. We chose these models to enable a comparison of our graph-based method and two different vector-based approaches within the same training scheme. Corpora and Preprocessing We train all embeddings on Wikipedia 2017 corpus. To improve the reproducibility of our results, we (1) use a standard publicly available Wikipedia snapshot from gensim-data 5 , (2) process the data with standard GenSim Wikipedia tokenizer 6 . Also, we release preprocessing scripts and the resulting corpora as a part of the supplementary code. 3 In preliminary experiments, we also used as nearest neighbors the words which have the largest pointwise mutual information (PMI) with the current one. However, such models have better loss but worse quality on downstream tasks, e.g. word similarity. 4 Euclidean GloVe: https://nlp.stanford. edu/projects/glove/, Poincaré GloVe: https: //github.com/alex-tifrea/poincare_glove. 5 https://github.com/RaRe-Technologies/ gensim-data , dataset wiki-english-20171001 6 gensim.corpora.wikicorpus.tokenize , commit de0dcc3 Setup We compare embeddings with the same vocabulary and number of parameters per token. For vector-based embeddings, the number of parameters equals vector dimensionality. For GraphGlove, we compute number of parameters per token as proposed by Mazur et al. (2019): (|V |+2·|E|)/|V |. To obtain the desired number of parameters in Graph-Glove, we initialize it with several times more parameters and train it with L 0 regularizer until enough edges are dropped (see Section 2.2). We consider two vocabulary sizes: 50k and 200k. For 50k vocabulary, the models are trained with either 20 or 100 parameters per token; for 200k vocabulary -with 20 parameters per token. For initialization of GraphGlove with 20 parameters per token we set K = 64, M = 10; for a model with 100 parameters per token, K = 480, M = 32. In preliminary experiments, we discovered that increasing both K and M leads to better final representations at a cost of slower convergence; decreasing the initial graph size results in lower quality and faster training. However, starting with no random edges (i.e. M = 0) also slows convergence down. Training Similarly to vectorial embeddings, GraphGlove learns to minimize the objective (either distance or dot product) by minibatch gradient descent. However, doing so efficiently requires a special graphaware batching strategy. Namely, a batch has to contain only a small number of rows with potentially thousands of columns per row. 
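A minimal sketch of such a row-wise batch sampler is given below, anticipating the formal step-by-step description (and the importance-sampling correction it applies) in the next paragraph. The co-occurrence structure X, here a dict mapping each word to its non-zero co-occurrence counts, and the vocabulary list are assumptions of the sketch.

```python
import random

def sample_batch(X, vocab, b=64, n=10_000, seed=0):
    """One graph-aware minibatch: b anchor rows, up to n co-occurring columns per row.

    X maps word i to a dict {j: X_ij} of non-zero co-occurrence counts (assumed given).
    Each returned tuple carries an importance weight p_ij / q_ij that compensates for
    the non-uniform sampling relative to a uniform choice over all non-zero pairs.
    """
    rnd = random.Random(seed)
    total_nonzero = sum(len(row) for row in X.values())
    batch = []
    for i in rnd.sample(vocab, b):                           # few rows ("anchor" words) ...
        row = list(X[i].items())
        for j, x_ij in rnd.sample(row, min(n, len(row))):    # ... many columns per row
            p_ij = 1.0 / total_nonzero                       # uniform over non-zero pairs
            q_ij = (1.0 / len(vocab)) * (1.0 / len(row))     # this row-wise strategy
            batch.append((i, j, x_ij, p_ij / q_ij))
    return batch
```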
This strategy takes advantage of the Dijkstra algorithm: a single run of the algorithm can find the shortest paths between a single source and multiple targets. Formally, one training step is as follows: 1. we choose b = 64 unique "anchor" words; 2. sample up to n = 10 4 words that co-occur with each of b "anchors"; 3. multiply the objective by importance sampling weights to compensate for non-uniform sampling strategy. 7 This way, a single training iteration with b · n batch size requires only O(b) runs of Dijkstra algorithm. 7 Let X be the co-occurrence matrix. Then for a pair of words (vi, vj), an importance sampling weight is is the probability to choose a pair (vi, vj) in the original GloVe, qi,j = 1 |V | · 1 |{k:X i,k =0}| is the probability to choose this pair in our sampling strategy. After computing the gradients for a minibatch, we update GraphGlove parameters using Adam (Kingma and Ba, 2014) with learning rate α=0.01 and standard hyperparameters (β 1 =0.9, β 2 =0.999). It took us less than 3.5 hours on a 32-core CPU to train GraphGlove on 50k tokens until convergence. This is approximately 3 times longer than Euclidean GloVe in the same setting. Experiments In the main text, we report results for 50k vocabulary with 20 parameters per token. Results for other settings, as well as the standard deviations, can be found in the supplementary material. Word Similarity To measure similarity of a pair of words, we use cosine distance for Euclidean GloVe, the hyperbolic distance for Poincaré GloVe and the shortest path distance for GraphGlove. In the main experiments, we exclude pairs with out-of-vocabulary (OOV) words. In the supplementary material, we also provide results with inferred distances for OOV words. We evaluate word similarity on standard benchmarks: WS353, SCWS, RareWord, SimLex and SimVerb. These benchmarks evaluate Spearman rank correlation of human-annotated similarities between pairs of words and model predictions 8 . Table 2 shows that GraphGlove outperforms vectorbased embeddings by a large margin. 8 We use standard evaluation code from https://github.com/kudkudak/ word-embeddings-benchmarks Word Analogy Analogy prediction is a standard method for evaluation of word embeddings. This task typically contains tuples of 4 words: (a, a * , b, b * ) such that a is to a * as b is to b * . The model is tasked to predict b * given the other three words: for example, "a = Athens is to a * = Greece as b = Berlin is to b * = (Germany)". Models are compared based on accuracy of their predictions across all tuples in the benchmark. The standard benchmarks contain Google analogy (Mikolov et al., 2013a) and MSR (Mikolov et al., 2013c) test sets. MSR test set contains only morphological category; Google test set contains 9 morphological and 5 semantic categories, with 20 -70 unique word pairs per category combined in all possible ways to yield 8,869 semantic and 10,675 syntactic questions. Unfortunately, these test sets are not balanced in terms of linguistic relations, which may lead to overestimation of analogical reasoning abilities as a whole (Gladkova et al., 2016). 9 The Bigger Analogy Test Set (BATS) (Gladkova et al., 2016) contains 40 linguistic relations, each represented with 50 unique word pairs, making up 99,200 questions in total. In contrast to the standard benchmarks, BATS is balanced across four groups: inflectional and derivational morphology, and lexicographic and encyclopedic semantics. Evaluation. 
Euclidean GloVe solves analogies by maximizing the 3COSADD score: We adapt this for GraphGlove by substituting cos(x, y) with a graph-based similarity function. As a simple heuristic, we define the similarity between two words as the correlation of vectors consisting of distances to all words in the vocabulary: This function behaves similarly to the cosine similarity: its values are from -1 to 1, with unrelated words having similarity close to 0 and semantically close words having similarity close to 1. Another alluring property of sim G (x, y) is efficient computation: we can get full distance vector d G (x) with a single pass of Dijkstra's algorithm. We use sim G (x, y) to solve the analogy task in GraphGlove: For details on how Poincaré GloVe solves the analogy problem, we refer the reader to the original paper (Tifrea et al., 2019). Results. GraphGlove shows substantial improvements over vector-based baselines (Tables 3 and 4). Note that for Poincaré GloVe, the best-performing loss functions for the two tasks are different (cosh 2 d for similarity and d 2 for analogy), and there is no setting where Poincaré GloVe outperforms Euclidean Glove on both tasks. While for GraphGlove best-performing loss functions also vary across tasks, GraphGlove with the dot product loss outperforms all vector-based embeddings on 10 out of 13 benchmarks (both analogy and similarity). This shows that when removing limitations imposed by the geometry of a vector space, embeddings can better reflect the structure of the data. We further confirm this by analyzing the properties of the learned graphs in Section 5. Learned Graph Structure In this section, we analyze the graph structure learned by our method and reveal its differences from the structure of vector-based embeddings. We compare graph G G learned by Graph-Glove (d) with graphs G E and G P induced from Euclidean and Poincaré (cosh 2 d) embeddings respectively. 10 For vector embeddings, we consider two methods of graph construction: 1. THR -connect two nodes if they are closer than some threshold τ , 2. KNN -connect each node to its K nearest neighbors and combine multiple edges. The values τ and K are chosen to have similar edge density for all graphs. 11 We find that in contrast to the graphs induced from vector embeddings: • in GraphGlove frequent and generic words are highly interconnected; • GraphGlove has hierarchical, similar to Word-Net, structure; • GraphGlove has non-trivial geometry containing subgraphs with different local topology. Important words Here we identify which words correspond to "central" (or important) nodes in different graphs; we consider several notions of node centrality frequently used in graph theory. Note that in this section, by word importance we mean graph-based properties of nodes (e.g. the number of neighbors), and not semantic importance (e.g., high importance for content words and low for function words). 10 We take the same models as in Section 4. 11 Namely, K = 13 and τ = 0.112 for Euclidean GloVe, K = 13 and τ = 0.444 for Poincaré Glove. Degree centrality. The simplest measure of node importance is its degree. For the top 200 nodes with the highest degree, we show the distribution of parts of speech and the average frequency percentile (higher means more frequent words). Figure 1 shows that for all vector-based graphs, the top contains a significant fraction of proper nouns and nouns. For G G , distribution of parts of speech is more uniform and the words are more frequent. 
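Returning to the analogy evaluation above, the graph-based scoring can be sketched as follows: the similarity sim_G is the correlation of full shortest-path distance vectors (plain Pearson correlation is used here; the text only says "correlation"), and the answer maximises the usual additive 3COSADD combination with cos replaced by sim_G. A dist_vector(word) helper returning the distance vector from one Dijkstra pass is assumed; at scale these vectors would be precomputed.

```python
import numpy as np

def sim_g(dx, dy):
    """Graph similarity: correlation of two full distance vectors (values in [-1, 1])."""
    dx = dx - dx.mean()
    dy = dy - dy.mean()
    return float(dx @ dy / (np.linalg.norm(dx) * np.linalg.norm(dy) + 1e-12))

def solve_analogy(dist_vector, vocab, a, a_star, b):
    """a : a* :: b : ?  via 3COSADD with cos replaced by sim_G."""
    da, da_star, db = dist_vector(a), dist_vector(a_star), dist_vector(b)
    best_word, best_score = None, -np.inf
    for w in vocab:
        if w in (a, a_star, b):          # usual convention: exclude the query words
            continue
        dw = dist_vector(w)
        score = sim_g(dw, da_star) - sim_g(dw, da) + sim_g(dw, db)
        if score > best_score:
            best_word, best_score = w, score
    return best_word
```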
We provide the top words and all subsequent importance measures in the supplementary material. Eigenvector centrality. A more robust measure of node importance is the eigenvector centrality (Bonacich, 1987). This centrality takes into account not only the degree of a node but also the importance of its neighbors: a high eigenvector score means that a node is connected to many nodes who themselves have high scores. Figure 2 shows that for G G the top changes in a principled way: the average frequency increases, proper nouns almost vanish, many adverbs, prepositions, linking and introductory words appear (e.g., 'well', 'but', 'in', 'that'). 12 For G G , the top consists of frequent generic words; this agrees with the intuitive understanding of importance. Differently from G G , top words for G E and G P have lower frequencies, fewer adverbs and prepositions. This can be because it is hard to make generic words from different areas close for vector-based embeddings, while GraphGlove can learn arbitrary connections. 12 See the words in the supplementary material. k-core. To further support this claim, we looked at the main k-core of the graphs. Formally, k-core is a maximal subgraph that contains nodes of degree k or more; the main core is non-empty core with the largest k. Table 5 shows the sizes of the main cores and the corresponding values of k. Note that the maximum k is much smaller for G G ; a possible explanation is that the cores in G E and G P are formed by nodes in highly dense regions of space, while in G G the most important nodes in different parts can be interlinked together. The Structure is Hierarchical In this section, we show that the structure of our graph reflects the hierarchical nature of words. We do so by comparing the structure learned by Graph-Glove to the noun hierarchy from WordNet. To extract hierarchy from G G , we (1) take all (lemmatized) nouns in our dataset which are also present in WordNet (22.5K words), (2) take the root noun 'entity' (which is the root of the WordNet tree), and (3) construct the hierarchy: the k-th level is formed by all nodes at edge distance k from the root. We consider two ways of measuring the agreement between the hierarchies: word correlation and level correlation. Word correlation is Spearman's rank correlation between the vectors of levels for all nouns. Level correlation is Spearman's rank correlation between the vectors l and l avg , where l i is the level in WordNet tree and l avg i is the average level of l i 's words in our hierarchy. We performed these measurements for all graphs (see Table 6). 13 We see that, according to both correlations, G G is in better agreement with the WordNet hierarchy. The Geometry is Non-trivial In contrast to vector embeddings, graph-based representations are not constrained by a vector space 13 The low performance of threshold-based graphs can be explained by the fact that they are highly disconnected (we assume that all nodes which are not connected to the root form the last level). geometry and potentially can imitate arbitrarily complex spaces. Here we confirm that the geometry learned by GraphGlove is indeed non-trivial. We cluster G G using the Chinese Whispers algorithm for graph node clustering (Biemann, 2006) and measure Gromov δ-hyperbolicity for each cluster. Gromov hyperbolicity measures how close is a given metric to a tree metric (see, e.g., Tifrea et al. 
(2019) for the formal definition) and has previously been used to show the tree-like structure of the word log-co-occurrence graph (Tifrea et al., 2019). Low average δ indicates tree-like structure with δ being exactly zero for trees; δ is usually normalized by the average shortest path length to get a value invariant to metric scaling. Figure 4 shows the distribution of average δhyperbolicity for clusters of size at least 10. Firstly, we see that for many clusters the normalized average δ-hyperbolicity is close to zero, which agrees with the intuition that some words form a hierarchy. Secondly, δ-hyperbolicity varies significantly over the clusters and some clusters have relatively large values; it means that these clusters are not tree-like. Figure 3 shows examples of clusters with different values of δ-hyperbolicity: both tree-like ( Figure 3a) and more complicated (Figure 3b-c). Related Work Word embedding methods typically represent words as vectors in a low-dimensional space; usu- ally, the vector space is Euclidean (Mikolov et al., 2013b;Pennington et al., 2014;Bojanowski et al., 2017), but recently other spaces, e.g. hyperbolic, have been explored (Leimeister and Wilson, 2018;Dhingra et al., 2018;Tifrea et al., 2019). However, vectorial embeddings can have undesired properties: e.g., in dot product spaces certain words cannot be assigned high probability regardless of their context (Demeter et al., 2020). A conceptually different approach is to model words as probability density functions (Vilnis and McCallum, 2015;Athiwaratkun and Wilson, 2017;Bražinskas et al., 2018;Muzellec and Cuturi, 2018;Athiwaratkun and Wilson, 2018). We propose a new setting: embedding words as nodes in a weighted graph. To learn a weighted graph, we use the method by Mazur et al. (2019). Prior approaches to learning graphs from data are eigher highly problemspecific and not scalable Escolano and Hancock (2011); Karasuyama and Mamitsuka (2017); Kang et al. (2019) or solve a less general but important case of learning directed acyclic graphs (Zheng et al., 2018;Yu et al., 2019). The opposite to learning a graph from data is the task of embedding nodes in a given graph to reflect graph distances and/or other properties; see Hamilton et al. (2017) for a thorough survey. Analysis of word embeddings and the structure of the learned feature space often reveals interesting language properties and is an important research direction (Köhn, 2015;Bolukbasi et al., 2016;Mimno and Thompson, 2017;Nakashole and Flauger, 2018;Naik et al., 2019;Ethayarajh et al., 2019). We show that graph-based embeddings can be a powerful tool for language analysis. Conclusions We introduce GraphGlove -graph word embeddings, where each word is a node in a weighted graph and the distance between words is the shortest path distance between the corresponding nodes. The graph is learned end-to-end in an unsupervised manner. We show that GraphGlove substantially outperforms both Euclidean and Poincaré GloVe on word similarity and word analogy tasks. Our analysis reveals that the structure of the learned graphs is hierarchical and similar to that of Word-Net; the geometry is highly non-trivial and contains subgraphs with different local topology. Possible directions for future work include using GraphGlove for unsupervised hypernymy detection, analyzing undesirable word associations, comparing learned graph topologies for different languages, and downstream applications such as sequence classification. 
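The δ-hyperbolicity measurements discussed in the geometry analysis above can be approximated with a simple sampling routine over a cluster's nodes. The sketch below assumes a shortest-path distance function d and a node list of size at least 10; it uses the standard four-point definition and normalises by the average sampled pairwise distance, which is only an approximation of the exact evaluation behind Figure 4.

```python
import itertools
import random

def average_delta_hyperbolicity(d, nodes, n_samples=2000, seed=0):
    """Sampled Gromov four-point delta-hyperbolicity of a finite metric.

    For each sampled quadruple (x, y, z, w) the three pairwise sums
    d(x,y)+d(z,w), d(x,z)+d(y,w), d(x,w)+d(y,z) are formed and
    delta = (largest - second largest) / 2; trees give delta = 0.
    The average delta is normalised by the average sampled distance
    so the result is invariant to rescaling of the metric.
    """
    rnd = random.Random(seed)
    deltas, dists = [], []
    for _ in range(n_samples):
        x, y, z, w = rnd.sample(nodes, 4)
        sums = sorted([d(x, y) + d(z, w), d(x, z) + d(y, w), d(x, w) + d(y, z)])
        deltas.append((sums[2] - sums[1]) / 2.0)
        dists.extend(d(a, b) for a, b in itertools.combinations((x, y, z, w), 2))
    return (sum(deltas) / len(deltas)) / (sum(dists) / len(dists))
```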
Another direction, given the recent success of models such as ELMo and BERT, would be to explore extensions of GraphGlove to the class of contextualized embeddings. A Appendix: Additional benchmarks A.1 Variance study As our method relies on random initialization of a graph in PRODIGE, a natural question is whether a different choice of drawn edges significantly affects the quality of the representations at the end of training. Figure 5 demonstrates that after running the training procedure with the distance-based loss for 5 different random seeds, the final metric values have a standard deviation of less than 1 point on 10 of the 13 tasks, and of at most 1.34 percent on the RareWord dataset. Thus, we can conclude that GraphGlove results are relatively stable with respect to the selection of random edges before training. Some word pairs in each similarity benchmark are out of vocabulary (OOV). In the main evaluation, we drop such pairs from the benchmark. However, there is also a different way to deal with such words. A popular workaround is to compute the distance between w i and an OOV word as the average distance from w i to the other words. In the rare case when both words are OOV, we consider them infinitely distant from each other. We report similarity benchmarks including OOV tokens in Tables 9, 10 and 11.
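A sketch of this OOV workaround, assuming an in-vocabulary distance function d and the vocabulary as a list (a hypothetical helper, not the evaluation script behind Tables 9-11):

```python
def pair_distance(d, vocab, w1, w2):
    """Distance with the OOV workaround: if exactly one word is out of vocabulary,
    use the average distance from the in-vocabulary word to all other words;
    if both words are OOV, treat them as infinitely distant."""
    in1, in2 = w1 in vocab, w2 in vocab
    if in1 and in2:
        return d(w1, w2)
    if not in1 and not in2:
        return float("inf")
    known = w1 if in1 else w2
    others = [v for v in vocab if v != known]
    return sum(d(known, v) for v in others) / len(others)
```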
6,721.4
2020-10-06T00:00:00.000
[ "Computer Science" ]
Porous Invariants We introduce the notion of porous invariants for multipath (or branching/nondeterministic) affine loops over the integers; these invariants are not necessarily convex, and can in fact contain infinitely many 'holes'. Nevertheless, we show that in many cases such invariants can be automatically synthesised, and moreover can be used to settle (non-)reachability questions for various interesting classes of affine loops and target sets. Introduction We consider the reachability problem for multipath (or branching) affine loops over the integers, or equivalently for nondeterministic integer linear dynamical systems. A (deterministic) integer linear dynamical system consists of an update matrix M ∈ Z d×d together with an initial point x (0) ∈ Z d . We associate to such a system its infinite orbit (x (i) ) consisting of the sequence of reachable points defined by the rule x (i+1) = M x (i) . The reachability question then asks, given a target set Y , whether the orbit ever meets Y , i.e., whether there exists some time i such that x (i) ∈ Y . The nondeterministic reachability question allows the linear update map to be chosen at each step from a fixed finite collection of matrices. When the orbit does eventually hit the target, one can easily substantiate this by exhibiting the relevant finite prefix. However, establishing non-reachability is intrinsically more difficult, since the orbit consists of an infinite sequence of points. One requires some sort of finitary certificate, which must be a relatively simple object that can be inspected and which provides a proof that the set Y is indeed unreachable. Typically, such a certificate will consist of an overapproximation I of the set R of reachable points, in such a manner that one can check both that Y ∩ I = ∅ and R ⊆ I; such a set I is called an invariant. Formally we study the following problem for inductive invariants: Meta Problem 1. Consider a system with update functions f 1 , . . . , f n . A set I is an inductive invariant if f i (I) ⊆ I for all i. Given a reachability query (x, Y ) we search for a separating inductive invariant I such that x ∈ I and Y ∩ I = ∅. Meta Problem 1 is parametrised by the type of invariants and targets that are considered; that is, what are the classes of allowable invariant sets I and target sets Y , or equivalently how are such sets allowed to be expressed. Fixing a particular invariant and target domain, a reachability query has three possible scenarios: (1) the instance is reachable, (2) the instance is unreachable and a separating invariant from the domain exists, or (3) the instance is unreachable but no separating invariant exists. Ideally, one would wish to provide a sufficiently expressive invariant domain so that the latter case does not occur, whilst keeping the resulting invariants as simple as possible and computable. For some classes of systems, it is known that distinguishing reachability (1) from unreachability (2,3) is undecidable; it can also happen that determining whether a separating invariant exists (i.e., distinguishing (2) from (3)) is undecidable. We note that the existence of strongest inductive invariants 3 is a desirable property for an invariant domain-when strongest invariants exist (and can be computed), separating (2) from (1,3) is easy: compute the strongest invariant, and check whether it excludes the target state or not; if so, then you are done, and if not, no other invariant (from that class) can possibly do the trick either. 
However, unless (3) is excluded, computing the strongest invariant does not necessarily imply that reachability is decidable. Unfortunately, strongest invariants are not always guaranteed to exist for a particular invariant domain, although some separating inductive invariant may still exist for every target (or indeed may not). In prior work from the literature, typical classes of invariants are usually convex, or finite unions of convex sets. In this paper we consider certain classes of invariants that can have infinitely many 'holes' (albeit in a structured and regular way); we call such sets porous invariants. These invariants can be represented via Presburger arithmetic 4 . We shall work instead with the equivalent formulation of semi-linear sets, generalising ultimately periodic sets to higher dimensions, as finite unions of linear sets of the form {b + p 1 N + · · · + p m N} (by which we mean {b + a 1 p 1 + · · · + a m p m | a 1 , . . . , a m ∈ N}, see Definition 2). Let us first consider a motivating example: Example 1 (Hofstadter's MU Puzzle [7]). Consider the following term-rewriting puzzle over alphabet {M, U, I}. Start with the word M I, and by applying the following grammar rules (where y and z stand for arbitrary words over our alphabet), we ask whether the word M U can ever be reached. The answer is no. One way to establish this is to keep track of the number of occurrences of the letter 'I' in the words that can be produced, and observe that this number (call it x) will always be congruent to either 1 or 2 modulo 3. In other words, it is not possible to reach the set {x | x ≡ 0 mod 3}. Indeed, Rules 2 and 3 are the only rules that affect the number of I's, and can be described by the system dynamics x → 2x and x → x − 3. Hence the MU Puzzle can be viewed as a one-dimensional system with two affine updates, 5 or a twodimensional system with two linear updates. 6 The set {1 + 3Z} ∪ {2 + 3Z} is an inductive invariant, and we wish to synthesise this. The problem can be rephrased as a safety property of the following multipath loop, verifying that the 'bad' state x = 0 is never reached, or equivalently that the above loop can never halt, regardless of the nondeterministic choices made. The MU Puzzle was presented as a challenge for algorithmic verification in [4]; the tools considered in that paper (and elsewhere, to the best of our knowledge) rely upon the manual provision of an abstract invariant template. Our approach is to find the invariant fully automatically (although one must still abstract from the MU Puzzle the correct formulation as the program x → 2x || x → x − 3). Main Contributions. Our focus is on the automatic generation of porous invariants for multipath affine loops over the integers, or equivalently nondeterministic integer linear dynamical systems. -We first consider targets consisting of a single vector (or 'point targets'), and present the classes of invariants and systems for which invariants can and cannot be automatically computed for the reachability question. A summary of the results for linear and semi-linear invariants for these targets is given in Table 1. For completeness we also consider R, R + -(semi)-linear sets, where we complete the picture from prior work by showing that strongest R-semilinear invariants are computable. • We establish the existence of strongest Z-linear invariants, and show that they can be found algorithmically (Theorem 2). These invariants may or may not separate the target under consideration. 
• If a Z-linear invariant is not separating, we may instead look for an N-semi-linear invariant (which generalises both Z-semi-linear and N-linear invariants), and we show that such an invariant can always be found for any unreachable point target when dealing with deterministic integer linear dynamical systems (Theorem 4). 5 One-dimensional affine updates are functions of the form f(x) = ax + b. 6 Via the 2 × 2 matrix with rows (a b) and (0 1), acting on the vector (x, 1).
Bozga, Iosif and Konecný's FLATA tool [2] considers affine functions in arbitrary dimension. However, it is restricted to affine functions with finite monoids; in our one-dimensional case this would correspond to limiting oneself to counterlike functions of the form f (x) = x + b. Finkel, Göller and Haase [9], extending Fremont [10], show that reachability in a single dimension is PSPACE-complete for polynomial update functions (and allowing states can be used to control the sequences of updates which can be applied). The affine functions (and single-state restriction) we consider are a special case, but we focus on producing invariants to disprove reachability. Other tools, e.g., AProVE [11] and Büchi Automizer [14] may (dis-)prove termination/reachability on all branches, but may not be able to prove termination/reachability on some branch. Inductive invariants specified in Presburger arithmetic have been used to disprove reachability in vector addition systems [20]. A generalisation, 'almost semi-linear sets' [21] are also non-convex and can capture exactly the reachable points of vector addition systems. Our nondeterministic linear dynamical systems can be seen as vector addition systems over Z extended with affine updates (rather than only additive updates). Preliminaries We denote by Z the integers and N the non-negative integers. We say that A point y is reachable if there exists m ∈ N and B 1 , . . . , B m such that B 1 · · · B m x (0) = y and B i ∈ {M 1 , . . . , M k } for all 1 ≤ i ≤ m. The reachability set O ⊆ Z d of an LDS is the set of reachable points. Definition 2 (K-(semi)-linear sets). A linear set L is defined by a base vector b ∈ Z d and period vectors p 1 , . . . , p d ∈ Z d such that For convenience we often write {b + p 1 K + · · · + p d K} for L. A set is semi-linear if it is the finite union of linear sets. N-semi-linear sets are precisely those definable in Presburger arithmetic (FO(Z, +, ≤)) [12]. However, we can also consider Z-semi-linear sets (corresponding to FO(Z, +) without order), and the real counterparts (R and R + ). Note that even if K = N we still allow p i ∈ Z d . Definition 3. Given an integer linear dynamical system Note in particular that every inductive invariant contains the reachability set (O ⊆ I). We are interested in the following problem: In our setting, we are interested in classes D of invariants that are linear, or semi-linear. When a separating inductive invariant I exists, we also wish to compute it. Since (semi)-linear invariants are enumerable, the decision problem is, in theory, sufficient-although all of our proofs are constructive. R Invariants: R-linear and R-semi-linear Before delving into porous invariants, let us consider invariants over the real numbers, i.e., described as R-(semi)-linear sets. Strongest R-linear invariants are given precisely by the affine hull of the reachability set, and can be computed using Karr's algorithm [17]. Moreover, we will show that strongest R-semi-linear invariants also exist and can be computed by combining techniques for algebraic invariants [15] and R-linear invariants. R-linear. Recall that a set L is R-linear if L = {v 0 + v 1 R + · · · + v t R} for some v 0 , . . . , v t ∈ Z d that can be assumed to be linearly-independent 8 without loss of generality (and thus t ≤ d). Given two distinct points of L, every point on the infinite line connecting them must also be in L. Generalising this idea to higher dimensions, given a set S ⊆ R d , let the affine hull be Fix an LDS (x (0) , {M 1 , . . . 
, M k }) and consider its reachability Then O a is precisely the strongest R-linear invariant. Karr's algorithm [17,26] can be used to compute this strongest invariant in polynomial time. The next lemma follows from Theorem 3.1 of [26]. Let R 0 = x (0) , r 1 , . . . , r d be obtained as per Lemma 1, with d ≤ d. The R-linear invariant of the LDS is the affine span R 0 a , which can be written as the R-semi-linear. Let us now generalise this approach to R-semi-linear sets. The collection of R-semi-linear sets, { m i=1 L i | m ∈ N, L 1 , . . . , L m are R-linear sets}, is closed under finite unions and arbitrary intersections 9 . Thus for any given set X, the smallest R-semi-linear set containing X is simply the intersection of all R-semi-linear sets containing X. Let us denote by X R this smallest R-semi-linear set. We are interested in O R . Algebraic sets are those that are definable by finite unions and intersections of zeros of polynomials. For example, {(x, y) | xy = 0} describes the lines x = 0 and y = 0. The (real) Zariski closure X z of a set X is the smallest algebraic subset of R d containing the set X. The Zariski closure of the set of reachable points, O z , can be computed algorithmically [15]. An algebraic set A is irreducible if whenever A ⊆ B ∪ C, where B and C are algebraic sets, then we have A ⊆ B or A ⊆ C. Any algebraic set (and in particular a Zariski closure) can be written effectively as a finite union of irreducible sets [3]. Proof. Since A i ⊆ X R = ∪ j L j , and A i is irreducible, we have A i ⊆ L j for some j (as the L j 's are algebraic sets). Since L j is R-linear, and A i a is the smallest , where y ∈ A \ W . Such a point y can always be found using quantifier elimination in the theory of the reals. Each step necessarily increases the dimension, which can occur at most d times, ensuring termination, at which point one has A a = W . Strongest Z-linear Invariants Recall that a Z-linear set {q + p 1 Z + · · · + p n Z} is defined by a base vector q ∈ Z d and period vectors p 1 , . . . , p n ∈ Z d . Equivalently, a Z-linear set describes a lattice, i.e., {p 1 Z + · · · + p n Z}, in d-dimensional space, translated to start from q rather than 0. The image of a Z-linear set L = {q + p 1 Z + · · · + p n Z} by a matrix M is the Z-linear set: M (L) = {M q + (M p 1 )Z + · · · + (M p n )Z}. The following lemma asserts that when two points are in a Z-linear set, the direction between these two points can be applied from any reachable point, and hence this direction can be included as a period without altering the set. Proposition 2. Let L = {q + a 1 p 1 + · · · + a n p n | a 1 , . . . , a n ∈ Z} be a Z-linear set. If x, y ∈ L then for all z ∈ L and all a ∈ Z we have z + (y − x)a ∈ L. In particular, we have L = {q + a 1 p 1 + · · · + a n p n + a (y − x) | a 1 , . . . , a n , a ∈ Z}. Next we show minimality as a straightforward consequence of Proposition 2. Clearly the vectors p 1 , . . . , p n can be added by Proposition 2 because any two points of L 1 differing by p i guarantees that adding p i does not alter the resulting set. Similarly, t 1 , . . . , t m can also be included. Finally, by Proposition 2, the vector s − q can be included because q and s both belong to L 1 ∪ L 2 . A d-dimensional lattice can always be defined by at most d vectors; and thus if d is the dimension of the matrices, no more than d period vectors are needed in total. 
However, Proposition 3 induces a representation which may over-specify the lattice by producing more than d vectors to define the lattice. The Hermite normal form can be used to obtain a basis of the vectors that define the lattice. Consider a lattice L i = {p 1 Z + · · · + p d Z}. The lattice remains the same if p i is swapped with p j , if p i is replaced by −p i , or if p i is replaced by p i + αp j where α is any fixed integer 10 . These are the unimodular operations. The Hermite normal form of a matrix M is a matrix H such that M = U H, where U is a unimodular matrix (formed by unimodular column operations) and H is lower triangular, non-negative and each row has a unique maximum entry which is on the main diagonal. Such a form always exists, and the columns of H form a basis of the same lattice as the columns of M , because they differ up to unimodular (lattice-preserving) operations. There are many texts on the subject; we refer the reader to the lecture notes of Shmonin [25] for more detailed explanations. The columns of a matrix in Hermite normal form constitute a unique basis for the lattice (up to additional redundant zero columns). Hence a basis of minimal dimension can be obtained by computing the Hermite normal form of the matrix formed by placing the period vectors into columns. We now prove the main theorem: Proof (Proof of Theorem 2). We claim that Algorithm 1 returns the strongest Z-linear invariant I. Algorithm 1 proceeds in two phases: -First find a necessary subset L 0 ⊆ I of the invariant having already the same dimension as I. -Then compute a growing sequence L 0 L 1 · · · L m−1 = L m = I, where at each step the algorithm merely increases the density of the attendant sets in order to 'fill in' missing points of the invariant. Recall the set . Applying M 1 , . . . , M k can only increase the density, but not the dimension. As each r i and x (0) are in O, by Proposition 2 we can assume that each of the directions (r i − x (0) ) must be represented in any Z-linear set containing O, and we therefore have that L 0 ⊆ I. In the second phase, we 'fill in' the lattice as required to cover the whole of O. To do this we repeatedly apply the covering procedure of Proposition 3. That is, To keep the number of vectors small, we keep the period vectors of the Z-linear set in Hermite normal form. The vectors ) form a parallelepiped (hyper-parallelogram) that repeats regularly. There are a finite number of integral points inside this parallelepiped. If new points are added in some step, they are added to every parallelepiped. Thus we can add new points finitely many times before saturating or becoming fixed. The volume of the parallelepiped is bounded above by |p 1 | · · · |p d |. At each step, the volume of the parallelepiped must at least halve, thus the volume at step t is vol t ≤ |p 1 | · · · |p d |/2 t . The procedure must saturate at or before the volume becomes 1, which occurs after at most log(|p 1 | · · · |p d |) = i log(|p i |) steps. At each step, for efficiency considerations, we convert the Z-linear set into Hermite normal form to retain exactly d period vectors. Claim (I is the strongest invariant). For every invariant J, we have I ⊆ J. By induction, let us prove that every invariant J must contain L i . Clearly this is the case for L 0 because all points of R 0 ⊆ O must be in J and every period vectors in L 0 can be present, without loss of generality, thanks to Proposition 2. Assume L i ⊆ J. 
Then it must be the case that J contains every M j (L i ), as otherwise it would not be an invariant. It therefore follows that J must contain L i+1 , since the latter is the minimal Z-linear set containing L i and M j (L i ) for all j ≤ k. Finally, since I is itself one of the L i 's, we have I ⊆ J as required. Remark 1. Note that a Z-linear set is not sufficient for the MU puzzle: both 1 and 2 are in the reachability set, thus {1 + 1Z} = Z is the strongest Z-linear invariant. Extensions of Z-linear sets without strongest invariants In this section we show that several generalisations of Z-linear domains fail to admit strongest invariants. Z-semi-linear sets are unions of Z-linear sets, and therefore can include singletons. Consider the deterministic dynamical system starting from point 1 and doubling at each step M = (1, (x → 2x)). This system has reachability set O = 2 k | k ∈ N , which is not even N-semi-linear (our most general class). For this LDS we can construct the invariant 2, 4, 8, ..., 2 k ∪ 2 k+1 p 1 | p 1 ∈ Z for each k. For any proposed strongest Z-semi-linear invariant, one can find a k for which the corresponding invariant is an improvement. N-linear sets generalise Z-linear sets (observe that Z-linear sets are a proper subclass, since {x + p i Z} can be expressed as {x + (−p i )N + p i N}, but {x + p i N} is clearly not Z-linear). Consider the LDS ((x 1 , x 2 ), ( 0 1 1 0 )), with a reachability set consisting of just two points x = (x 1 , x 2 ) and y = (x 2 , x 1 ). There are two incomparable candidates for the minimal N-linear invariant: {x + (y − x)N} and {y + (x − y)N}. Similarly for R + -linear invariants, the sets {y + (x − y)R + } and {x + (y − x)R + } are incomparable half-lines. Z-linear targets We have so far only considered invariants for point targets. We now turn to lattice-like targets, in particular targets specified as full-dimensional Z-linear sets. Furthermore, for unreachable instances, a Z-semi-linear inductive invariant can be provided. Theorem 3 requires the targets to be full-dimensional. For nondeterministic systems reachability is undecidable for non-full-dimensional targets (in particular point targets) [22]. However, even for deterministic systems, when Z-linear targets fail to be full-dimensional the reachability problem becomes as hard as the Skolem problem (see, e.g. [24]), for example by choosing as target the set Towards proving Theorem 3, we first show that full-dimensional linear sets can be expressed as 'square' hybrid-linear sets. Hybrid-linear sets are semi-linear sets in which all the components share the same period vectors, and thus differ only in starting position (whereas semi-linear sets allow each component to have distinct period vectors). By square, we mean that all period vectors are the same multiple of standard basis vectors. Then there is an integral combination of p 1 , . . . , p d such that m i e i is an admissible direction in Y . By the presence of And therefore Y can be written as b∈B {b We now prove Theorem 3. Proof (Proof of Theorem 3). Choose m and B as in Lemma 2, so that Y is of the form b∈B {b + me 1 Z + · · · + me d Z}. We build an invariant I of the form If there exists y ∈ B ∩ I then return Reachable. This is because the same sequence of matrices applied to x (0) to produce y ∈ I would, thanks to the modulo step, wind up inside the set {y + me 1 Z + · · · + me d Z}, which is a part of the target. Otherwise, return Unreachable and I as invariant. 
By construction, I is indeed an inductive invariant disjoint from the target set. Remark 2. By the same argument, Theorem 3 extends to a restricted class of Z-semi-linear targets: the finite union of full-dimensional Z-linear sets. N-semi-linear Invariants We now consider N-semi-linear invariants, our most general class. N-semi-linear invariants gain expressivity thanks to the 'directions' provided by the period vectors. For example, the only possible Z-semi-linear invariant for the LDS (0, (x → x + 1)) is Z, yet the reachability set, N, is captured exactly by an N-linear invariant. We show that a separating N-semi-linear invariant can always be found for unreachable instances of deterministic integer LDS, although the computed invariant will depend on the target. However, finding invariants is undecidable for nondeterministic systems, at least in high dimension. Nevertheless, we show decidability for the low-dimensional setting of the MU Puzzle-one dimension with affine updates. Existence of sufficient (but non-minimal) N-semi-linear invariants for point reachability in deterministic LDS Kannan and Lipton showed decidability of reachability of a point target for deterministic LDS [16]. In this subsection, we establish the following result to provide a separating invariant in unreachability instances. To do so, we will invoke the results from [8] to compute an R + -semi-linear inductive invariant, and then extract from it an N-semi-linear inductive invariant. More precisely, the authors of [8] show how to build polytopic inductive invariants for certain deterministic LDS. Such polytopes are either bounded or are R + -semi-linear sets. In the first case, the polytope contains only finitely many integral points, which can directly be represented via an N-semi-linear set. In the second case, we build an N-semi-linear set containing exactly the set of integral points included in the R + -semi-linear invariant, thanks to the following lemma. Proof (Proof of Theorem 4). We note that every invariant produced in [8] has rational period vectors, as the vectors are given by the difference of successive point in the orbit of the system, and thus Lemma 3 can be applied. The authors of [8] build an inductive invariant in all cases except those for which every eigenvalue of the matrix governing the evolution of the LDS is either 0 or of modulus 1 and at least one of the latter is not a root of unity. This situation however cannot occur in our setting. Indeed, the eigenvalues of an integer matrix are algebraic integers, and an old result of Kronecker [19] asserts that unless all of the eigenvalues are roots of unity, one of them must have modulus strictly greater than 1 (the case in which all eigenvalues are 0 being of course trivial). This concludes the proof of Theorem 4. Undecidability of N-semi-linear invariants for nondeterministic LDS If the enhanced expressivity of N-semi-linear sets allows us always to find an invariant for deterministic LDS, it contributes in turn to making the invariantsynthesis problem undecidable when the LDS is not deterministic. We establish this through a reduction from the infinite Post correspondence problem (ω-PCP) that can be defined in the following way: given m pairs of non-empty words This problem is known to be undecidable when m is at least 8 [13,6]. Theorem 5. The invariant synthesis problem for N-semi-linear sets and linear dynamical systems with at least two matrices of size 91 is undecidable. Proof (Sketch). 
We first establish the result in the case of several matrices in low dimension; this can then be transformed in a standard way to two larger matrices (of size 91). The proof is by reduction from the infinite Post correspondence problem. Given an instance of this problem the pair of words corresponding to each sequence of tiles has an integer representation, using base-4 encoding. An important property of our encoding is that the operation of appending a new tile to an existing pair of words can be encoded by matrix multiplication. Recall that if the instance of ω-PCP is negative, then every generated pair of words will differ at some point. Our encoding is such that this difference of letters creates a difference in their numerical encodings that can be identified with an N-semi-linear invariant. On the other hand, when there is a positive answer to the ω-PCP instance, there can be no N-semi-linear invariant. Nondeterministic one-dimensional affine updates The previous section shows that point reachability for nondeterministic LDS is undecidable once there sufficiently many dimensions, motivating an analysis at lower dimensions. The MU Puzzle requires a single dimension with affine updates (or equivalently two dimensions in matrix representation, with the coordinate along the second dimension kept constant). We consider this one-dimensional affine-update case, and therefore, rather than taking matrices as input, we directly work with affine functions of the form f i (x) = a i x + b i . Theorem 6. Given x (0) , y ∈ Z, along with a finite set of functions Moreover, when y is unreachable, an N-semi-linear separating inductive invariant can be algorithmically computed. We note that decidability of reachability is already known [9,10]. We refine this result by exhibiting an invariant which can be used to disprove reachability. In fact our procedure will produce an N-semi-linear set which can be used to decide reachability, and which, in instances of non-reachability, will be a separating inductive invariant. We have implemented this algorithm into our tool porous, enabling us to efficiently tackle the MU Puzzle as well as its generalisation to arbitrary collections of one-dimensional affine functions. We report on our experiments in Section 6. We build a case distinction depending on the type of functions that appear: Simplifying assumptions Lemma 4. Without loss of generality, redundant functions are redundant; more precisely, we can reduce the computation of an invariant for a system having redundant functions to finitely many invariant computations for systems devoid of such functions. Proof. Clearly the identity function has no impact on the reachability set, and so can be removed outright. For any other redundant function, its impact on the reachability set does not depend on when the function is used, and we may therefore assume that it was used in the first step, or equivalently, using an alternative starting point. Hence the invariant-computation problem can be reduced to finitely many instances of the problem over different starting points, with redundant functions removed. Finally, taking the union of the resulting invariants yields an invariant for the original system. Proof and b = c (as f = g) these two functions are opposing. Two opposing counters. Let us first observe that when there are two opposing counters, we essentially move in either direction by some fixed amount. 
This will entail that only Z-(semi)-linear invariants can be produced, rather than proper N-(semi)-linear invariants. Therefore, starting with x (0) + dZ ∈ I we can 'saturate' the invariant under construction using the following lemma: Consider the function f (x) = ax + b. If x = y + dk ∈ I, then f (x) = ax + b = ay + adk + b = f (y) + adk ∈ I. Now thanks to the presence of counter h(x) = x + d, by choosing the initial k ∈ Z appropriately and applying h(x) sufficiently many times (say m ∈ N times), one can reach f (x) + adk + dm = f (x) + dn for any desired n ∈ Z. Without loss of generality if {x + dZ} is in the invariant, then 0 ≤ x < d. We then repeatedly use Lemma 8 to find the required elements of the invariant. Since there are only finitely many residue classes (modulo d), every reachable residue class {c 1 , . . . , c n } can be found by saturation (in at most d steps), yielding invariant {c 1 + dZ} ∪ · · · ∪ {c n + dZ}. Thanks to Lemma 6, in all remaining cases there is without loss of generality at most one pure inverter. No Counters. If we are not in the preceding case and there are no counters, then there must be growing functions and by Lemma 6, without loss of generality at most one pure inverter. We show that all growing functions increase the modulus outside of some bounded region. Proof. By the triangle inequality we have: This is the only situation in which the invariant is not exactly the reachability set, and requires us to take an overapproximation. If there is one pure inverter g(x) = −x + d then observe that −C is mapped to C + d and C + d is mapped to −C. Thus intuitively we want to use the interval (−C, C + d). However two problems may occur: (a) since d could be less than 0 then C + d may no longer be growing (under the application of the growing functions), and (b) an inverting growing function only ensures that −C is mapped to a value greater than or equal to C, rather than C+d. Hence, we choose C to ensure that C ± d is still growing by at least |d| (under the application of our growing functions). Let C = max C Non-opposing counters. The only remaining possibility (if there do not exist two opposing counters, and not all functions are growing or pure inverters), is that there are counter-like functions, but they are all counting in the same direction. There may also be a single pure inverter, and possibly some growing functions. Pick a counter h(x) = x+d to be the reference counter; the choice is arbitrary, but it is convenient to pick a counter with minimal |d|. As a starting point, we have x (0) + dN ⊆ I. Proof. Let r = g(x) + dm for m ∈ Z. We show r ∈ I. Consider x + dn for n ∈ N, then g(x + dn) = −a(x + dn) + b = −ax + b − adn = g(x) − adn. Hence g(x) − adn + dk, n, k ∈ N, is reachable by applying k times the function h(x). Hence for any m ∈ Z there exists k, n ∈ N such that k − na = m, so that r is indeed reachable. Similarly to the situation with two opposing counters, whenever the invariant contains some Z-linear set, Lemma 8 allows us to saturate amongst the finitely many reachable residue classes. However, the invariant may contain subsets that are not Z-linear. Consider {x + dN} ⊆ I, which is not yet invariant. We repeatedly apply non-inverting functions to {x + dN} to obtain new N-linear sets (not Z-linear sets). When the function applied 'moves' in the direction of the counters this will ultimately saturate (in particular when applying other counter functions). However, in the opposite direction, we may generate infinitely many such classes. 
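The saturation step used in the opposing-counters case can be made concrete: once some class {x + dZ} is known to be in the invariant, each affine map sends it into another residue class modulo d, so the reachable classes can be closed off by a fixed-point computation over Z/dZ. Below is a minimal sketch, assuming the functions are given as coefficient pairs (a, b) for x -> a*x + b; the helper name is ours, not from the paper or the tool.

```python
def saturate_residues(x0, funcs, d):
    """Close the set of residue classes mod d under the affine maps.
    Returns the classes c such that {c + dZ} belongs to the invariant."""
    classes = {x0 % d}
    changed = True
    while changed:
        changed = False
        for c in list(classes):
            for a, b in funcs:          # f(x) = a*x + b
                c2 = (a * c + b) % d
                if c2 not in classes:
                    classes.add(c2)
                    changed = True
    return classes

# Example with two opposing counters x+5 and x-5 plus a doubling map,
# reference period d = 5: the invariant is the union of the reached classes.
print(saturate_residues(1, [(1, 5), (1, -5), (2, 0)], 5))
```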
Clearly we can examine all reachable residue classes defined by our reference counter. Any residue class reachable after an inverting function induces a Z-linear set. So it remains to consider those N-linear sets reachable without inverting functions. The remaining case to handle occurs when we repeatedly induce Nlinear sets until they repeat a residue class in the direction opposite to that of the reference counter. We consider the case for h(x) = x + d with d ≥ 0. The case with h(x) = x − d is symmetric. It remains to detect when a set {x + dN} leads to {y + dN} by a sequence of non-inverting functions with x ≡ y mod d. Then by repeated application of these functions one can reach sets {z + dN} with z arbitrarily small, hence we can replace {x + dN} by {x + dZ}. We give further details in the appendix. Reachability. The above procedure is sufficient to decide reachability. In all cases apart from that in which there are no counters, the invariants produced co-incide precisely with the reachability sets. A reachability query therefore reduces to asking whether the target belongs to the invariant. In the remaining case, the invariant obtained is parametrised by the target via the bound C . The target lies within the region (−C , C +d), within which we can compute all reachable points. Thus once again, the target is reachable precisely if it belongs to the invariant. However, for a new target of larger modulus, a different invariant would need to be built. Complexity. Lemma 11. Assume that all functions, starting point, and target point are given in unary. Then the invariant can be computed in polynomial time. Without the unary assumption, the invariant could have exponential size, and hence require at least exponential time to compute. That is because the invariant we construct could include every value in an interval, for example, (−C, C), where C is of size polynomial in the largest value. As shown in [10], the reachability problem is at least NP-hard in binary, because one can encode the integer Knapsack problem (which allows an object to be picked multiple times rather at most once). Moreover the Knapsack problem is efficiently solvable in pseudo-polynomial time via dynamic programming; that is, polynomial time assuming the input is in unary, matching the complexity of our procedure. The POROUS Tool Our invariant-synthesis tool porous 11 computes N-semi-linear invariants for point and Z-linear targets on systems defined by one-dimensional affine functions. porous includes implementations of the procedures of Theorem 3 (restricted to one-dimensional affine systems) and Theorem 6. porous is built in Python and can be used by command-line file input, a web interface, or by directly invoking the Python packages. porous takes as input an instance (a start point, a target, and a collection of functions) and returns the generated invariant. Additionally it provides a proof that this set is indeed an inductive invariant: the invariant is a union of N-linear sets, so for each linear set and each function, porous illustrates the application of that function to the linear set and shows for which other linear set in the invariant this is a subset. Using this invariant, porous can decide reachability; if the specific target is reachable the invariant is not in itself a proof of reachability (since the invariant will often be an overapproximation of the global reachability set). 
Rather, equipped with the guarantee of reachability, porous searches for a direct proof of reachability: a sequence of functions from start to target (a process which would not otherwise be guaranteed to terminate). Table 2. Results varying by size parameter (last row includes all instances tested). Times are given in seconds, with the average and maximum shown (except reachability proof time, which are all approximately 30s due to instances that terminate just before the timeout). Experimentation. porous was tested on all 2 7 − 1 possible combinations of the following function types, with a ≥ 2, b ≥ 1: positive counters (x → x + b), negative counters (x → x − b), growing (x → ax ± b), inverting and growing (x → −ax ± b), inverters with positive counters (x → −x + b), inverters with negative counters (x → −x − b) and the pure inverter (x → −x). For each such combination a random instance was generated, with a size parameter to control the maximum modulus of a and b, ranging between 8 and 1024. The starting point was between 1 and the size parameter and the target was between 1 and 4 times the size parameter. Ten instances were tested for each size parameter and each of the 2 7 − 1 combinations, with between 1 and 9 functions of each type (with a bias for one of each function type). Our analysis, summarised in Table 2, illustrates the effect of the size parameter. The time to produce the proof of invariant is separated from the process of building the invariant, since producing the proof of invariant can become slower as |I| becomes larger; it requires finding L k ∈ I such that f i (L j ) ⊆ L k for every linear set L j ∈ I and every affine function f i . In every case porous successfully built the invariant, and hence decided reachability very quickly (on average well below 1 second) and also produced the proof of invariance in around half a second on average. To demonstrate correctness in instances for which the target is reachable porous also attempts to produce a proof of reachability (a sequence of functions from start to target). Since our paper is focused on invariants as certificates of non-reachability, our proof-of-reachability procedure was implemented crudely as a simple breadth-first search without any heuristics, and hence a timeout of 30 seconds was used for this part of the experiment only. Our experimental methodology was partially limited due to the high prevalence of reachable instances. A random instance will likely exhibit a large (often universal) reachability set. When two random counters are included, the chance that gcd(b 1 , b 2 ) = 1 (whence the whole space is covered) is around 60.8% and higher if more counters are chosen. Overall around 86% of instances were reachable (of which 84% produced a proof within 30 seconds). Of the 14% of unreachable instances, all produced a proof, with the invariant taking around 0.2 seconds to build and 0.6 seconds to produce the proof. The 30-second timeout when demonstrating reachability directly is several orders of magnitudes longer than answering the reachability query via our invariant-building method. A typical academic/consumer laptop was used to conduct the timing and analysis (a four-year-old, four-core MacBook Pro). 
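For reference, the kind of crude breadth-first search used to produce explicit reachability witnesses can be sketched in a few lines. This is an illustrative reimplementation under our own naming, not the porous code itself, and like the tool's search it is only guaranteed to terminate when the target is indeed reachable, hence the external timeout used in the experiments.

```python
from collections import deque

def find_witness(x0, funcs, target, max_nodes=10**6):
    """Breadth-first search for a sequence of affine maps f(x) = a*x + b
    leading from x0 to target. Returns the list of (a, b) used, or None."""
    parent = {x0: None}            # value -> (previous value, function applied)
    queue = deque([x0])
    while queue and len(parent) < max_nodes:
        x = queue.popleft()
        if x == target:
            path = []
            while parent[x] is not None:
                x, f = parent[x]
                path.append(f)
            return list(reversed(path))
        for a, b in funcs:
            y = a * x + b
            if y not in parent:
                parent[y] = (x, (a, b))
                queue.append(y)
    return None

# Toy query: from 1, using x -> 2x and x -> x - 3, search for a witness reaching 10.
print(find_witness(1, [(2, 0), (1, -3)], 10))
```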
Conclusions and Open Directions We introduced the notion of porous invariants, which are not necessarily convex and can in fact exhibit infinitely many 'holes', and studied these in the context of multipath (or branching/nondeterministic) affine loops over the integers, or equivalently nondeterministic integer linear dynamical systems. We have in particular focused on reachability questions. Clearly, the potential applicability of porous invariants to larger classes of systems (such as programs involving nested loops) or more complex specifications remains largely unexplored. Our focus is on the boundary between decidability and undecidability, leaving precise complexity questions open. Indeed, the complexity of synthesising invariants could conceivably be quite high, except where we have highlighted polynomial-time results. On the other hand, the invariants produced should be easy to understand and manipulate, from both a human and machine perspective. On a more technical level, in our setting the most general class of invariants that we consider are N-semi-linear. There remains at present a large gap between decidability for one-dimensional affine functions, and undecidability for linear updates in dimension 91 and above. It would be interesting to investigate whether decidability can be extended further, for example to dimensions 2 and 3. [26] refers to linear independence, this can be converted to affine independence by increasing the dimension by one. The procedure works via a pruned version breadth-first search, with nodes only expanded if its children are linearly independent of the current set. Hence, the first point found in the tree is the initial point x (0) , and therefore this point is included. The maximum depth of the tree that needs to be explored is d, and so every point included is reached with at most d applications of matrices to x (0) . Hence, if the largest absolute value of a point or matrix entry is µ, after d iterations, the largest absolute value is d d−1 µ d . This is by induction on the largest possible value µ for every entry: Base case: The result of [26] is in polynomial time in the number of arithmetic operations, we observe that this is also polynomial time in the bit-size. The independence checking in the algorithm involves checking linear independence of at most d vectors all having bit size at most log((dµ) d ) = d log(d) + d log(µ), which can be done in polynomial time in the bit-size (for example by Bareiss algorithm for calculating the determinant). Proof. We will prove the result for m + 5 matrices of size 7. This can then be transformed in a usual way to two matrices of size 7m + 35 (See Theorem 9 of [8] for instance). B Proof of Lemma 3 In order to simplify the main part of the proof, let us first show that one can enforce an order between the matrices using affine transformations on one dimension. Let us denote p this dimension, it is initially equal to 1 and its target value is 0. Consider the three following affine transformation: f 1 (p) = 2p − 1, f 2 (p) = 2p − 2 and f 3 (p) = 2p, then the only sequences of transformation allowing to reach the target are of the form f * 3 f 2 f * 1 . Indeed, let I = {p | p ≥ 2 ∨ p ≤ −1}, we have (1) if p ∈ I, then for all i ∈ {1, 2, 3}, f i (p) ∈ I, (2) f 1 (1) = 1 and f 1 (0) ∈ I, (3) f 2 (1) = 0 and f 2 (0) ∈ I and (4) f 3 (1) ∈ I and f 3 (0) = 0. As a consequence, the inductive invariant I ensure that any sequence of transformation that do not have the desired order cannot reach the target. 
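The ordering gadget above is easy to check mechanically: one can verify that the set I = {p : p >= 2 or p <= -1} is closed under all three maps and that, starting from p = 1, the target p = 0 is only reached by sequences of the shape f3* f2 f1* (as a composition, i.e. some applications of f1, then f2 once, then f3). A small hedged sketch follows; it checks closure only on a finite window of integers, so it is an illustration rather than a proof.

```python
from itertools import product

f = {1: lambda p: 2 * p - 1, 2: lambda p: 2 * p - 2, 3: lambda p: 2 * p}

def in_I(p):
    return p >= 2 or p <= -1

# Closure of I under f1, f2, f3, checked on a finite window of integers.
assert all(in_I(f[i](p)) for p in range(-50, 51) if in_I(p) for i in (1, 2, 3))

# Enumerate all short sequences from p = 1 and record those reaching 0.
hits = []
for n in range(1, 7):
    for seq in product((1, 2, 3), repeat=n):
        p = 1
        for i in seq:
            p = f[i](p)
        if p == 0:
            hits.append(seq)

# Every witness applies some f1's, then f2 once, then f3's,
# i.e. the composition f3* f2 f1* described in the text.
print(hits[:5])
```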
In the following, we will call type 1, 2 or 3 the transformations we define, depending on whether they implictly contain the function f 1 , f 2 or f 3 . We reduce an instance {(u 1 , v 1 ), . . . , (u m , v m )} of the ω-PCP problem to the invariant synthesis problem. In order to simplify future notations, given a finite or infinite word w, we denote by |w| the length of the word w and given an integer i ≤ |w|, we write w i for the i'th letter of w. Given a finite or infinite word w on alphabet {1, . . . , m} we denote by u w and v w the words on the alphabet {0, 2} such that u w = u w1 u w2 . . . and v w = v w1 v w2 . . We work with 5 dimension, (s, c, d, n, k), and define the following transformations: -For i ≤ m, the type 1 transformation Simulate i on (s, c, d, n, k) encode the action of reading the pair (u i , v i ) and increases the counters n and k: it These m+5 transformations need 7 dimensions in total: the five above, (s, c, d, n, k), the one used for ordering the transformations,p, and one dimension constantly equal to 1, required to use affine transformations. We now show that there is a solution to the given instance of the ω-PCP problem iff there does not exist a N-semi-linear invariant for the system with initial point x = (0, 1, 1, 0, 0, 1, 1), target y = (0, 0, 0, 1, 0, 0, 1) and using the matrices inducing the transformations defined above. Assume first that there is a solution w to the ω-PCP instance. Consider the sequence of points (x n ) obtained as follows: for all j ∈ N, denoting w ≤j the prefix of w of length, x j = (s j , c j , 0, n j , k j , 0, 1) = Transfer Simulate w ≤j x where Simulate w ≤j represents the transformation Simulate w j . . . Simulate w2 Simulate w1 . We have that s j and c j are negative. Indeed, let (s, c, d) be the three first components of Simulate w ≤j x, we have that s = c[u wi ] − d[v wi ]. As w ≤j is a prefix of a solution to the ω-PCP instance, assuming |u w i| ≤ |v w i| this implies that Due to the above, by applying to the points x j a number of time the transformations Inc s and Inc c , we obtain the sequence of points (y j ) where y j = (0, 0, 0, n j , k j , 0, 1). We claim that any semi-linear invariant containing all the points y j also contains a point of the shape (0, 0, 0, 0, n j + d, k j , 0, 1) where d is a positive integer. This will imply the result as from such a point, one can reach the target by d − 1 applications of Dec k and k j applications of Dec and thus there is no semi-linear invariant of the system that does not intersect the target. Let us now prove the above claim. Let I be a semi-linear set containing every point (y j ) (which we will see as two-dimensional objects by projecting on the 4th and 5th dimension). Then there exists a linear set I ⊆ I that contains infinitely many vectors of (y j ). This set I is defined by an initial vector, and a set of period vectors. As I contains infinitely many vectors of (y j ) where the ratios between the first and second component is increasing, one of the period vectors is of the form (d, 0) where d is a strictly positive integer. Let j be such that y j ∈ I , then (n j + d, k j ) ∈ I which implies the claim. As a consequence, every N-semi-linear set over-approximating the system intersects with the target. Conversely, assume that there is no solution to the ω-PCP instance. There exists n 0 ∈ N such that for every infinite word w on alphabet {0, . . . , m} there exists n ≤ n 0 such that u w n = v w n . 
Indeed, consider the tree which root is labelled by (ε, ε) and, given a node (u, v) of the tree, if for all n ≤ min(|u|, |v|) we have u n = v n , then this node has m children: the nodes (uu i , vv i ) for i = 1 . . . m. This tree is finitely branching and does not contain any infinite path (which would induce a solution to the ω-PCP instance). Thus, according to König's lemma, it is finite. We can therefore choose the height of this tree as our n 0 . We define the invariant I = I 1 ∪ I 2 ∪ I 3 where . . , m} * ∧ |w| ≤ n 0 + 1 , ∧ w ∈ {1, . . . , m} * ∧ |w| ≤ n 0 + 1 ∧ s, t, n, k ∈ N and I 3 = (s, c, d, n, k, p, By definition, I is an N-semi-linear set, contains x and does not contain y. The difficulty is to show stability under the transformations. • Let z = Simulate w (x) ∈ I 1 , for some w ∈ {1, . . . , m} * with |w| ≤ n 0 + 1. By ordering if we apply a transformation outside Transfer or a Simulate i for some i, we reach I 3 . As c ≥ d, this shows that Simulate wi z ∈ I 3 . -Transferz ∈ I 2 . • Let z ∈ I 2 and f be one of the transformations, then f (z) ∈ I 2 if f increased (resp. decreased) a negative (resp. positive) component. Otherwise f (z) ∈ I 3 . There is three possibilities (1) p = 2 and thus f (z) ∈ I 3 , (2) f = Transfer then p = 0 and either s ≥ 1 or c ≥ 1 and thus f (z) ∈ I 3 or (3) f = Simulate i for i ≤ m. In the latter case without loss of generality, assume that d c (this is completely symmetric in c and d ). We have that by assumption on |s| since max i c ≥ c , max i d/3 ≥ d (as m i ≥ 4) and max i 4. This shows that f (z) ∈ I 3 . Therefore I is inductive and thus a N-semi-linear invariant of the system. This concludes the reduction. Lemma 12. For , k coprime, the sequence a n = (n mod k) for n ∈ N cycles through every modulo class {0, . . . , k − 1}. Proof. Any path longer than k visits some class twice, and if the shortest cycle is k, then it visits every class. Suppose there is a cycle of length less than k; then n = c+mk and (n+i) = c + m k and hence i = (m − m)k, with i < k. Since is an integer i divides (m − m)k then i = pr for p, r ∈ N such that m −m p is integer and k r is integer. Observe that since r ≤ i < k we have k r > 1. But this implies that k r divides k and , contradicting gcd(k, ) = 1. Proof. Let b = kd, c = d, where k, are co-prime. We show there exists m, n ≥ 0 such that mb − cn = d. We have mb − cn = d ⇐⇒ mkd − n d = d ⇐⇒ mk − n = 1. Then choose m = 1+n k . By Lemma 12 there exists n such that n is in any modulo class modulo k, and thus too for 1 + n and so k divides 1 + n for some n. Hence the set {x + dN} is included in the reachability set: we obtain x + jd by applying function f mj times and applying function g nj times. Similarly, we can find m , n ≥ 0 such that m b − cn = −d and thus {x + dZ} is within the reachability set. D.2 Extended argument for non opposing counters The following shows that if {x + dN} does lead to {y + dN}, with y < x and y ≡ x mod d, then indeed we can reach {z + dN} for any z ≡ x mod d by reapplying the same set of functions which lead from x to y. . Consider x (0) ∈ I and a path x (0) , f i1 , x (1) , f i2 , . . . , f im , x (m) such that x (j) = f ij (x (j−1) ), x (j) ≤ −B, x (m) < x (0) and x (0) ≡ x (m) mod d. Remark 3. By symmetry, Lemma 13 also holds for the opposite direction. That is when h(x) = x − d, d > 0, inequalities are inverted and C is used in place of −B. We now consider inductively applying non-inverting functions to sets {x + dN} ∈ I. 
Then add {f i (x) + dN} provided it is not already a subset of some set already in I. If {f i (x) + dN} is new and a new modulo class we can again apply Lemma 10, from whence we may also need to apply Lemma 8. However, when this procedure does not saturate there eventually exists be a sequence of actions in which {x + dN} leads to {y + dN} with x ≡ y mod d according to a path in Lemma 13. In particular y < x < −B since if x < y then {y + dN} ⊆ {x + dN}, some modulo class must repeat after at most d steps, and eventually the procedure must stay < −B for at least d steps. Then, according to Lemma 13, a new Z-linear set can be added ({x + dZ}) (which again can be saturated using Lemma 8). We repeat this process until all N-linear sets are invariant. This process terminates, as each application of Lemma 13 adds a new Z-linear set with period d, of which there are at most d. D.3 Proof of Lemma 11 Lemma 11. Assume that all functions, starting point, and target point are given in unary. Then the invariant can be computed in polynomial time. Proof. In the no-counter case, by Lemma 9, there is an interval [−C, C] of size approximately |b|+|M | |a|−1 , where |b|, |M |, |a| are all numbers represented in the input, and thus is of polynomial of size. This means the gap is of polynomial size, and thus the saturation algorithm, which must in each step add a point or terminate, is of polynomial time. In each counter-case there is a reference counter period d arising directly from the input as the counter part of some function, or in the case of two opposing counters, possibly the sum of two counter parts. For this period d there are at most 3d possible types of non-singleton invariant ({x + dN} or {x − dN} for some x and x + dZ for x ∈ {0, . . . , d} ). Singletons only arise in the interval [−C, C] if they exist. Hence, there are at most O(2C + 3d) steps which change the invariant. In the case of two opposing counters, immediately all invariants are of the form x + dZ for x ∈ {0, . . . , d}, and the reachable modulo classes can be found in O(dk) (recall k is the number of functions), by breadth first search. In the case of all counters in the same direction, there are two phases, each has a bounded number of steps. First we consider updates which move in the direction of the counters and secondly we consider updates which move against the counters. In the case of moving with the counters, outside of [−C, C] all functions are growing. Hence, by conducting breadth first search on a priority queue that always expands the minimal element we can find the sets of the form x + dN for x ∈ {0, . . . , d} in polynomial time. Only inside [−C, C] does the search result in smaller elements (which there are at most 2C such steps), and in the remaining case we either expand to find an element already covered, or we find the smallest element in that modulo class. Thus this step takes O(dk + 2C) time. Secondly we search for cycles in the direction opposing the counters, to see if we can turn any x + dN sets into x + dZ sets, that is invariants induced by Lemma 13. There can be a path of length at most d steps outside of [−C, C] before a cycle is found, so the running time is O(2Cd). E Tool The tool's output, when, applied to the MU Puzzle can be seen to produce the invariant {1 + 3Z} ∪ {2 + 3Z} of Example 1: The web-interface can be found at http://invariants.davidpurser.net.
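To connect back to Example 1: under the usual arithmetisation of the MU Puzzle in which only the number of I's matters, the available moves act on that count as x -> 2x and x -> x - 3, starting from 1 with target 0 (the string MU). The short sketch below reproduces the invariant {1 + 3Z} ∪ {2 + 3Z} by residue-class saturation; it is our own illustrative code, not the tool's output format.

```python
def mu_invariant():
    """Saturate residue classes mod 3 for the MU Puzzle viewed as the
    one-dimensional affine system x -> 2x, x -> x - 3 started at x = 1."""
    d = 3
    classes = {1 % d}
    frontier = [1 % d]
    while frontier:
        c = frontier.pop()
        for a, b in ((2, 0), (1, -3)):
            c2 = (a * c + b) % d
            if c2 not in classes:
                classes.add(c2)
                frontier.append(c2)
    return classes

classes = mu_invariant()
print("invariant:", " ∪ ".join(f"{{{c} + 3Z}}" for c in sorted(classes)))
print("MU reachable?", 0 in classes)   # False: class 0 mod 3 is never reached
```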
14,536.2
2021-06-01T00:00:00.000
[ "Mathematics" ]
LHC Optics Measurement with Proton Tracks Detected by the Roman Pots of the TOTEM Experiment Precise knowledge of the beam optics at the LHC is crucial to fulfil the physics goals of the TOTEM experiment, where the kinematics of the scattered protons is reconstructed with the near-beam telescopes -- so-called Roman Pots (RP). Before being detected, the protons' trajectories are influenced by the magnetic fields of the accelerator lattice. Thus precise understanding of the proton transport is of key importance for the experiment. A novel method of optics evaluation is proposed which exploits kinematical distributions of elastically scattered protons observed in the RPs. Theoretical predictions, as well as Monte Carlo studies, show that the residual uncertainty of this optics estimation method is smaller than 0.25 percent. Introduction The TOTEM experiment [1] at the LHC is equipped with near beam movable insertions -called Roman Pots (RP) -which host silicon detectors to detect protons scattered at the LHC Interaction Point 5 (IP5) [2]. This paper reports the results based on data acquired with a total of 12 RPs installed symmetrically with respect to IP5. Two units of 3 RPs are inserted downstream of each outgoing LHC beam: the "near" and the "far" unit located at s = ±214.63 m and s = ±220.00 m, respectively, where s denotes the distance from IP5. The arrangement of the RP devices along the two beams is schematically illustrated in figure 1. Each unit consists of 2 vertical, so-called "top" and "bottom", and 1 horizontal RP. The two diagonals top left of IP-bottom right of IP and bottom left of IP-top right of IP, tagging elastic candidates, are used as almost independent experiments. The details of the set-up are discussed in [3]. Each RP is equipped with a telescope of 10 silicon microstrip sensors of 66 µm pitch which provides spatial track reconstruction resolution σ(x, y) of 11 µm [4]. Given the longitudinal distance between the units of ∆s = 5.372 m the proton angles are measured by the RPs with an uncertainty of 2.9 µrad. During the measurement the detectors in the vertical and horizontal RPs overlap, which enables a precise relative alignment of all the three RPs by correlating their positions via common particle tracks. The alignment uncertainty better than 10 µm is attained, the details are discussed in [4,5]. The proton trajectories, thus their positions observed by RPs, are affected by magnetic fields of the accelerator lattice. The accelerator settings define the machine optics which can be characterized with the value of β * at IP5. It determines the physics reach of the experiment [3]: runs with high β * = 90 -2500 m are characterized by low beam divergence allowing for precise scattering angle measurements while runs of low β * = 0.5 -11 m, due to small interaction vertex size, provide higher luminosity and thus are more suitable to study rare processes. In the following sections we will analyze two representatives of these LHC runs, corresponding to machine optics with β * = 3.5 m and 90 m, respectively [2,6]. In order to reconstruct the kinematics of proton-proton scattering precisely, an accurate model of proton transport is indispensable. TOTEM has developed a novel method to evaluate the optics of the machine by using angle-position distributions of elastically scattered protons observed in the RP detectors. The method, discussed in detail in the following sections, has been successfully applied to data samples recorded in 2010 and 2012 [8][9][10][11][12]. 
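As a quick numerical cross-check of the figures quoted above, the angular resolution follows from the single-unit spatial resolution and the lever arm between the near and far units: with two position measurements of σ(x) ≈ 11 µm separated by Δs = 5.372 m, the track angle has σ(Θ) ≈ √2·σ(x)/Δs. The short sketch below assumes the two measurements are uncorrelated and of equal resolution, which is our simplification.

```python
import math

sigma_x = 11e-6      # single-unit track position resolution [m]
delta_s = 5.372      # distance between near and far RP units [m]

# Angle from two positions: theta = (x_far - x_near) / delta_s,
# so the two position uncertainties add in quadrature.
sigma_theta = math.sqrt(2) * sigma_x / delta_s
print(f"sigma(theta) ≈ {sigma_theta * 1e6:.1f} µrad")   # ≈ 2.9 µrad
```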
Section 2 introduces the so-called transport matrix, which describes the proton transport through the LHC lattice, while machine imperfections are discussed in section 3. The proposed novel method for optics evaluation is based on the correlations between the transport matrix elements. These correlations allow the estimation of those optical functions which are strongly correlated to measurable combinations and estimators of certain elements of this transport matrix. Therefore, it is fundamental to study these correlations in detail, which is the subject of section 4. The corresponding eigenvector decomposition of the transport matrix is used to gain insight into the magnitude of the reduction of uncertainties in the determination of LHC optics that can be obtained from using TOTEM data and provides the theoretical baseline of the method. Section 5 brings the theory to practice, by specifying the estimators, obtained from elastic track distributions measured in RPs. Finally, the algorithm that we applied to estimate the LHC optics from TOTEM data is described and discussed in section 6. The uncertainty of this novel method of LHC optics determination was estimated with Monte Carlo simulations, that are described in detail in section 7. The trajectory of protons produced with transverse positions ‡ (x * , y * ) and angles (Θ * x , Θ * y ) at IP5 is described approximately by a linear formula Proton transport model where d = (x, Θ x , y, Θ y , ∆p/p) T , p and ∆p denote the nominal beam momentum and the proton longitudinal momentum loss, respectively. The single pass transport matrix is defined by the optical functions [13]. The horizontal and vertical magnifications v x,y = β x,y /β * cos ∆µ x,y and the effective lengths L x,y = β x,y β * sin ∆µ x,y (4) ‡ The ' * ' superscript indicates that the value is taken at the LHC Interaction Point 5. are functions of the betatron amplitudes β x,y and the relative phase advance and are of particular importance for proton kinematics reconstruction. The D x and D y elements are the horizontal and vertical dispersion, respectively. Elastically scattered protons are relatively easy to distinguish due to their scattering angle correlations. In addition, these correlations are sensitive to the machine optics. Therefore, elastic proton-proton scattering measurements are ideally suited to investigate the optics the LHC accelerator. In case of the LHC nominal optics the coupling coefficients are, by design, equal to zero m 13 , ..., m 42 = 0 . Moreover, for elastically scattered protons the contribution of the vertex position (x * , y * ) in (1) is canceled due to the anti-symmetry of the elastic scattering angles of the two diagonals. Also, those terms of (1) which are proportional to the horizontal or vertical dispersions D x,y vanish, since ∆p = 0 for elastic scattering. Furthermore, the horizontal phase advance ∆µ x = π at 219.59 m, shown in figure 2, and consequently the horizontal effective length L x vanishes close to the far RP unit, as it is shown in figure 3. Therefore, dL x /ds is used for the reconstruction of the kinematics of proton-proton scattering. In summary, the kinematics of elastically scattered protons at IP5 can be reconstructed on the basis of RP proton tracks using (1): The vertical effective length L y and the horizontal magnification v x are applied in (7) due to their sizeable values, as shown in figures 4 and 5. 
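In practice the reconstruction of equation (7) amounts to dividing the measured track observables by the relevant optical functions. The following is a minimal sketch for one arm, neglecting the vertex contributions (which, as noted above, cancel for elastic events between the two diagonals) and assuming the vertical effective length L_y and the derivative dL_x/ds are already known at the RP location; the numerical values below are placeholders, not measured TOTEM optics.

```python
# Placeholder optical functions at the far RP unit (illustrative values only).
L_y = 22.4           # vertical effective length [m]
dLx_ds = -0.53       # derivative of the horizontal effective length

def reconstruct_angles(y_far, theta_x):
    """Elastic-proton kinematics at IP5 from RP observables: the vertical
    scattering angle from the track position, the horizontal one from the
    local track angle (since L_x is close to 0 near the far unit)."""
    theta_y_star = y_far / L_y          # Theta*_y = y / L_y
    theta_x_star = theta_x / dLx_ds     # Theta*_x = Theta_x / (dL_x/ds)
    return theta_x_star, theta_y_star

# A track at y = 2.0 mm with a local horizontal angle of 50 µrad:
print(reconstruct_angles(2.0e-3, 50e-6))
```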
As the values of the reconstructed angles are inversely proportional to the optical functions, the errors of the optical functions dominate the systematic errors of the final, physics results of TOTEM RP measurements. The proton transport matrix T (s; M), calculated with MAD-X [14], is defined by the machine settings M, which are obtained on the basis of several data sources: the magnet currents are first retrieved from TIMBER [15] and then converted to magnet strengths with LSA [16], implementing the conversion curves measured by FIDEL [17]. The WISE database [18] contains the measured imperfections (field harmonics, magnet displacements and rotations) included in M. Machine imperfections The real LHC machine [2] is subject to additional imperfections ∆M, not measured well enough so far, which alter the transport matrix by ∆T : The most important transport matrix imperfections are due to: -the magnet current-strength conversion error: σ(k)/k ≈ 10 −3 , -the beam momentum offset: σ(p)/p ≈ 10 −3 . Their impact on the important optical functions L y and dL x /ds is presented in table 1. It is clearly visible that the imperfections of the inner triplet (the so called MQXA and MQXB magnets) are of high influence on the transport matrix while the optics is less sensitive to the strength of the quadrupoles MQY and MQML. Generally, as indicated in table 1, for high-β * optics the magnitude of ∆T is sufficiently small from the viewpoint of data analysis. However, the sensitivity of the low-β * optics to the machine imperfections is significant and cannot be neglected. The proton reconstruction is based on (7). Thus it is necessary to know the effective lengths L x,y and their derivatives with an uncertainty better than 1-2 % in order to measure the total cross-section σ tot with the aimed uncertainty of [19]. The currently available ∆β/β beating measurement with an error of 5−10 % does not allow to estimate ∆T with the uncertainty, required by the TOTEM physics program [20]. However, as it is shown in the following sections, ∆T can be determined well enough from the proton tracks in the Roman Pots, by exploiting the properties of the optics and those of the elastic pp scattering, so that the aimed 1% relative uncertainty in the determination of the total pp cross-section becomes within the reach of TOTEM. Correlations in the transport matrix The transport matrix T defining the proton transport from IP5 to the RPs is a product of matrices that describe the magnetic field of the lattice elements along the proton trajectory. The imperfections of the individual magnets alter the cumulative transport function. It turns out that independently of the origin of the imperfection (strength of any of the magnets, beam momentum offset) the transport matrix is altered in a similar way, as can be described quantitatively with eigenvector decomposition, discussed in section 4.1. Correlation matrix of imperfections Assuming that the imperfections discussed in section 2 are independent, the covariance matrix describing the relations among the errors of the optical functions can be calculated: where T r is the relevant 8-dimensional subset of the transport matrix which is presented as a vector for simplicity. The optical functions contained in T r differ by orders of magnitude and, are expressed in different physical units. Therefore, a normalization of V is necessary and the use of the correlation matrix C, defined as is preferred. 
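The normalisation referred to here is the standard one in which each covariance is divided by the product of the corresponding standard deviations, C_ij = V_ij / √(V_ii·V_jj). A short sketch of building C, and of the eigen-decomposition used below to identify the dominant imperfection directions, using NumPy (the 3×3 matrix is illustrative, not the 8-dimensional TOTEM case):

```python
import numpy as np

def correlation_matrix(V):
    """Normalise a covariance matrix V into a correlation matrix C."""
    sd = np.sqrt(np.diag(V))
    return V / np.outer(sd, sd)

# Illustrative covariance matrix of three optical-function errors.
V = np.array([[4.0, 3.0, -1.0],
              [3.0, 9.0, -2.0],
              [-1.0, -2.0, 1.0]])
C = correlation_matrix(V)
eigvals, eigvecs = np.linalg.eigh(C)        # eigenvalues in ascending order
print(np.round(C, 3))
print("dominant eigenvalue:", round(eigvals[-1], 3))
```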
An identical behaviour of uncertainties for both beams was observed and therefore it is enough to study the Beam 1. In case of the β * = 3.5 m optics the following error correlation matrix is obtained: The non-diagonal elements of C, which are close to ±1, indicate strong correlations between the elements of ∆T r . Consequently, the machine imperfections alter correlated groups of optical functions. Since the two largest eigenvalues λ 1 = 4.9 and λ 2 = 2.3 dominate the others, the correlation system is practically two dimensional with the following two eigenvectors Therefore, contributions of the individual lattice imperfections cannot be evaluated. On the other hand, as the imperfections alter approximately only a two-dimensional subspace, a measurement of a small set of weakly correlated optical functions would theoretically yield an approximate knowledge of ∆T r . Error estimation of the method Let us assume for the moment that we can precisely reconstruct the contributions to ∆T r of the two most significant eigenvectors while neglecting that of the others. The error of such reconstructed transport matrix can be estimated by evaluating the contribution of the remaining eigenvectors: where and N = (ν 1 , ..., ν 8 ) is the basis change matrix composed of eigenvectors ν i corresponding to the eigenvalues λ i . The relative optics uncertainty before and after the estimation of the most significant eigenvectors is summarized in Table 2. Nominal values of the optical functions T r,i and their relative uncertainty before ( V i,i / |T r,i |) and after (δ∆T r,i / |T r,i |) the determination of the two most significant eigenvectors (β * = 3.5 m, Beam 1). limit ourselves only to the first two most significant eigenvalues, the uncertainty of optical functions due to machine imperfections drops significantly. In particular, in case of dL x /ds and L y a significant error reduction down to a per mil level is observed. Unfortunately, due to ∆µ x = π (figure 2), the uncertainty of L x , although importantly improved, remains very large and the use of dL x /ds for proton kinematics reconstruction should be preferred. In the following sections a practical numerical method of inferring the optics from the RP proton tracks is presented and its validation with Monte Carlo calculations is reported. Optics estimators from proton tracks measured by Roman Pots (β * =3.5 m optics) The TOTEM experiment can select the elastically scattered protons with high purity and efficiency [8,9]. The RP detector system, due to its high resolution (σ(x, y) ≈ 11 µm, σ(Θ x,y ) ≈ 2.9 µrad), can measure very precisely the proton angles, positions and the angle-position relations on an event-by-event basis. These quantities can be used to define a set of estimators characterizing the correlations between the elements of the transport matrix T or between the transport matrices of the two LHC beams. Such a set of estimatorsR 1 , ...,R 10 (defined in the next sections) is exploited to reconstruct, for both LHC beams, the imperfect transport matrix T (M) + ∆T defined in (8). Correlations between the beams Since the momentum of the two LHC beams is identical, the elastically scattered protons will be deflected symmetrically from their nominal trajectories of Beam 1 and Beam 2: which allows to compute ratios R 1,2 relating the effective lengths at the RP locations of the two beams. From (1) and (18) we obtain: where the subscripts b 1 and b 2 indicate Beam 1 and 2, respectively. 
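In practice the ratios R_1 and R_2 are extracted as the slope of the correlation between the track coordinates of the two beams for the same elastic event. A hedged sketch of such a slope estimate is given below, taking the slope from the leading eigenvector of the 2×2 sample covariance (a common choice when both coordinates carry comparable measurement error); the simulated numbers are illustrative only and not TOTEM data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy elastic events: collinearity implies y_b2 ≈ -(L_y,b2 / L_y,b1) * y_b1.
true_ratio = 1.02
theta_y = rng.normal(0.0, 30e-6, 20000)                     # scattering angles [rad]
y_b1 = 22.4 * theta_y + rng.normal(0, 12e-6, theta_y.size)  # RP positions [m]
y_b2 = -22.4 * true_ratio * theta_y + rng.normal(0, 12e-6, theta_y.size)

cov = np.cov(np.vstack([y_b1, y_b2]))
eigvals, eigvecs = np.linalg.eigh(cov)
v = eigvecs[:, -1]                                          # leading eigenvector
slope = v[1] / v[0]
print(f"fitted slope {slope:.4f}  (injected {-true_ratio:.4f})")
```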
Approximations present in (19) and (20) The width of the distributions is determined by the beam divergence and the vertex contribution, which leads to 0.5% uncertainty on the eigenvector's slope parameter. Single beam correlations The distributions of proton angles and positions measured by the Roman Pots define the ratios of certain elements of the transport matrix T , defined by (1) and (2). First of all, dL y /ds and L y are related by The corresponding estimatorsR 3 andR 4 can be calculated with an uncertainty of 0.5% from the distributions as presented in figure 8. Similarly, we exploit the horizontal dependencies to quantify the relations between dL x /ds and L x . As L x is close to 0, see figure 3, instead of defining the ratio we rather estimate the position s 0 along the beam line (with the uncertainty of about 1 m), for which L x = 0. This is accomplished by resolving for s 0 , where s 1 denotes the coordinate of the Roman Pot station along the beam with respect to IP5. Obviously, dL x (s)/ds is constant along the RP station as no magnetic fields are present at the RP location. The ratios L x (s 1 )/ dLx(s 1 ) ds for Beam 1 and 2, similarly to the vertical constraints R 3 and R 4 , are defined by the proton tracks: which is illustrated in figure 9. In this way two further constraints and the corresponding estimators (for Beam 1 and 2) are obtained: Coupling / rotation In reality the coupling coefficients m 13 , ..., m 42 cannot be always neglected, as it is assumed by (6). RP proton tracks can help to determine the coupling components of the transport matrix T as well, where it is especially important that L x is close to zero at the RP locations. Always based on (1) and (2), four additional constraints (for each of the two LHC beams and for each unit of the RP station) can be defined: The subscripts "near" and "far" indicate the position of the RP along the beam with respect to the IP. Geometrically R 7,...,10 describe the rotation of the RP scoring plane about the beam axis. Analogously to the previous sections, the estimatorsR 7,...,10 are obtained from track distributions as presented in figure 10 and an uncertainty of 3% is achieved. Optical functions estimation The machine imperfections ∆M, leading to the transport matrix change ∆T , are in practice determined with the χ 2 minimization procedure: defined on the basis of the estimatorsR 1 ...R 10 , where the arg min function gives the phase space position where the χ 2 is minimized. As it was discussed in section 4.1, although the overall alteration of the transport matrix ∆T can be determined precisely based on a few optical functions' measurements, the contributions of individual imperfections cannot be established. In terms of optimization, such a problem has no unique solution and additional constraints, defined by the machine tolerance, have to be added. Therefore, the χ 2 function is composed of the part defined by the Roman Pot tracks' distributions and the one reflecting the LHC tolerances: The design part where k i and φ i are the nominal strength and rotation of the ith magnet, respectively. Thus (28) defines the nominal machine (k i , φ i , p i ) as an attractor in the phase space. Both LHC beams are treated simultaneously. Only the relevant subset of machine imperfections ∆M was selected. The obtained 26-dimensional optimization phase space includes the magnet strengths (12 variables), rotations (12 variables) and beam momentum offsets (2 variables). 
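Schematically, the minimisation of (27) can be organised as an ordinary χ² fit over the 26 imperfection parameters, with the transport model re-evaluated at every step. The sketch below only illustrates the structure of the objective: estimators_from_model stands in for the MAD-X re-evaluation of R_1...R_10 and is a hypothetical placeholder, as are the tolerance and measurement arrays.

```python
import numpy as np
from scipy.optimize import minimize

R_meas = np.zeros(10)          # track-based estimators R^_1..R^_10      (placeholder)
sigma_R = np.full(10, 0.01)    # their uncertainties                     (placeholder)
tol = np.full(26, 1e-3)        # machine tolerances, 26 parameters       (placeholder)

def estimators_from_model(delta_m):
    """Placeholder for the MAD-X evaluation of the ten estimators for a
    given vector of machine imperfections delta_m (26 components)."""
    return np.zeros(10)

def chi2(delta_m):
    design = np.sum((delta_m / tol) ** 2)      # nominal machine acts as an attractor
    measured = np.sum(((estimators_from_model(delta_m) - R_meas) / sigma_R) ** 2)
    return design + measured

result = minimize(chi2, x0=np.zeros(26), method="Nelder-Mead",
                  options={"maxiter": 2000})
print(result.x[:3], result.fun)
```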
Magnet rotations are included into the phase space, otherwise only the coupling coefficients m 13 , ..., m 42 could induce rotations in the (x, y) plane (25), which could bias the result. The measured part contains the track-based estimatorsR 1 ...R 10 (discussed in detail in section 5) together with their uncertainty. The subscript "MAD-X" defines the corresponding values evaluated with the MAD-X software during the χ 2 minimization. Table 3 presents the results of the optimization procedure for the β * = 3.5 m optics used by LHC in October 2010 at beam energy E = 3.5 TeV. The obtained value of the effective length L y of Beam 1 is close to the nominal one, while Beam 2 shows a significant change. The same pattern applies to the values of dL x /ds. The error estimation of the procedure is discussed in section 7. Table 3. Selected optical functions of both LHC beams for the β * = 3.5 m optics, obtained with the estimation procedure, compared to their nominal values. Monte Carlo validation In order to demonstrate that the proposedR i optics estimators are effective the method was validated with Monte Carlo simulations. In each Monte Carlo simulation the nominal machine settings M were altered with simulated machine imperfections ∆M within their tolerances using Gaussian distributions. The simulated elastic proton tracks were used afterwards to calculate the estimatorsR 1 ...R 10 . The study included the impact of -magnet strengths, -beam momenta, -magnet displacements, rotations and harmonics, -settings of kickers, -measured proton angular distribution. The error distributions of the optical functions ∆T obtained for β * = 3.5 m and E = 3.5 TeV are presented in figure 11 and Table 4. The Monte-Carlo study of the impact of the LHC imperfections ∆M on selected transport matrix elements dL x /ds and L y for β * = 3.5 m at E = 3.5 TeV. The LHC parameters were altered within their tolerances. The relative errors of dL x /ds and L y (mean value and RMS) characterize the optics uncertainty before and after optics estimation. 1.8 · 10 −2 1.5 −7 · 10 −2 0.21 Table 5. The Monte-Carlo study of the impact of the LHC imperfections ∆M on selected transport matrix elements dL x /ds and L y for β * = 90 m at E = 4 TeV. The LHC parameters were altered within their tolerances. The relative errors of dL x /ds and L y (mean value and RMS) characterize the optics uncertainty before and after optics estimation. First of all, the impact of the machine imperfections ∆M on the transport matrix ∆T , as shown by the MC study, is identical to the theoretical prediction presented in table 2. The bias of the simulated optics distributions is due to magnetic field harmonics as reported by the LHC imperfections database [18]. The final value of mean after optics estimation procedure contributes to the total uncertainty of the method. The errors of the reconstructed optical functions are significantly smaller than evaluated theoretically in section 4.2. This results from the larger number of design and measured constraints (27), employed in the numerical estimation procedure of section 6. In particular, the collinearity of elastically scattered protons was exploited in addition. Finally, the achieved uncertainties of dL x /ds and L y are both lower than 2.5 for both beams. Conclusions TOTEM has proposed a novel approach to estimate the optics at LHC. 
The method, based on the correlations between the transport matrix elements, determines the optical functions that are strongly correlated with combinations of transport matrix elements measurable from Roman Pot track distributions. For low-β * LHC optics, where machine imperfections are more significant, the method allowed us to determine the real optics with a per mil level uncertainty, and also to assess the errors of the transport matrix from the tolerances of the various machine parameters. For high-β * LHC optics, where machine imperfections have a smaller effect on the optical functions, the method remains effective and likewise reduces the uncertainties to the desired per mil level. The method has been validated with Monte Carlo studies for both high- and low-β * optics and was successfully used in the TOTEM experiment to calibrate the LHC optics directly from data taken in physics runs, as required for precision TOTEM measurements of the total pp cross-section.
5,108.6
2014-06-02T00:00:00.000
[ "Physics" ]
Ion sputtering as a method for generation of cluster particles: Investigations of the emission and fragmentation of silicon oxide clusters sputtered from a Si surface have been performed. It is shown that the formation of these clusters can be qualitatively described within the framework of modern concepts, and that their main formation channels are determined in accordance with the mechanism of combinatorial synthesis. Introduction The sputtering of targets by beams of accelerated primary ions is one of the most productive methods for generating cluster particles [1]. It is known that some of the sputtered particles correspond to cluster ions. Cluster ions are formed in vibrationally excited states and decay on their way from the target to the detector [2]. The discovery of fragmentation of sputtered clusters [2] has significantly complicated the understanding of the cluster emission mechanism, since their decays change the measured mass spectra, the kinetic energy spectra, and the distribution of their internal energies. Investigation of cluster decay processes makes it possible to obtain information both on the emission of clusters and on the chemical and physical properties of these particles. The physical and chemical properties of a cluster, which range from those of atoms to those of a solid depending on the cluster size and structure, can serve as a basis for the synthesis of crystals with unique properties, promising for solving topical problems of modern nanotechnology [3]. In recent studies, considerable attention has been paid to metal oxide clusters, which play an essential role in modern micro- and nanoelectronics technology, surface chemistry, and catalysis, as well as to the search for methods of their synthesis and the study of their fundamental properties. The study of silicon oxide, which plays an important role in these fields, is especially relevant. The purpose of this work is an experimental study, by secondary ion mass spectrometry (SIMS), of the emission and fragmentation of heteronuclear silicon oxide clusters under ion sputtering, and an analysis of the obtained results from the point of view of existing theoretical concepts. These results are important for understanding the nature of the formation of sputtered clusters. From a practical point of view, they can also be relevant to problems in the power industry, such as obtaining cheap and environmentally friendly hydrogen fuel. Experimental research We used the unique capabilities of a double-focusing ion microanalyzer with reversed Nier-Johnson geometry (Fig. 1) [4] to study the formation and fundamental properties of stable hydrogenated silicon oxide clusters Si n O m H x -.
The clusters were synthesized by ion sputtering of the silicon surface by O 2 + ions while simultaneously exposing the surface to the atmosphere of water wapour.The energy of primary beams was 18,5 keV.To study the emission of Si n O m H k clusters, we used the technique to one described earlier in [5,6], which consists in introducing oxygen or H 2 O vapor onto the bombarded target.The dissociation of Н 2 О upon interaction with the silicon surface with the formation of atomic hydrogen will make it possible to synthesize the silicon oxide clusters and hydrogen atoms in the ion impact zone and generate stable cluster configurations Si n O m H k -.In our experiments, H 2 O vapors were injected through a specially designed injection system directly into the bombardment area.The pressure range varied from P=2*10 -6 Pa (residual vacuum) to P=4-5*10 -3 Pa (the maximum allowable pressure of our vacuum system) The method of investigation of fragmentation processes of sputtered cluster ions in field-free zones L 1 and L 2 is described in [4]. Research results The results of study of yields of Si n O 2n+1 -clusters sputtered from the Si surface by O 2 + ions (Fig. 2,3) at different pressures of oxygen and water vapor in the bombardment chamber, showed that Si n O 2n and Si n O 2n+1 -have an increased intensity in the mass spectra for all methods of their generation in agreement with [5,6].-and Si n O 2n+1 -clusters.When operating in the dynamic matching mode [4], the time of re-arrival of the primary beam at a given point of the sputtered surface is approximately 0.02 sec.Obviously, at a pressure in the bombardment chamber P=4*10 -3 Pa, this time is quite sufficient for the formation of at least one monolayer of gas molecules on the surface.In the case of water entering the surface, the decomposition of H 2 O molecules on the radiationdamaged target surface into hydrogen and a hydroxyl group is also possible.Later, with the development of the sputtering cascade caused by the incident ion, oxygen atoms located on the surface are captured into the resulting cluster structures, and a sufficient number of them ensures an increase in the yields of Si n O 2n -and Si n O 2n+1 -clusters.It is also interesting to note that the total decrease in the yields of homonuclear Si n -and heteronuclear Si n O m -clusters with m<2n in each cluster series n with an accuracy of several tens of percent (i.e., practically with an accuracy determined by the measurement technique) is equal to the corresponding increase in the yields of Si n O 2n -and Si n O 2n+1 -clusters with the same number of silicon atoms n.Thus, an increase in the yield of Si n O 2n -and Si n O 2n+1 -clusters at an optimal concentration of oxygen atoms on the bombarded surface indicates an increased stability of these clusters.In the case of water being injected onto the bombarded target, hydrogen atoms apparently saturate the existing free bonds in the cluster and stabilize the resulting cluster structures, leading to the effective formation of hydrogenated clusters Si n O 2n+1 H k -(k=1-3) (Fig. 3.). 
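For orientation, the cluster series discussed here are identified in the mass spectra simply by their nominal mass-to-charge ratios. The small helper below, which lists the m/z values expected for the Si n O 2n+1 H k - series, is our own illustration (atomic masses from standard tables) and not part of the measurement procedure described in the paper.

```python
# Monoisotopic atomic masses in unified atomic mass units (u).
M_SI, M_O, M_H = 27.9769, 15.9949, 1.0078

def cluster_mass(n, m, k):
    """Nominal mass of a Si_n O_m H_k cluster (electron mass neglected)."""
    return n * M_SI + m * M_O + k * M_H

# Expected m/z (singly charged anions) for the Si_n O_{2n+1} H_k series.
for n in range(1, 4):
    for k in range(0, 4):
        print(f"Si{n}O{2*n+1}H{k}-  m/z ≈ {cluster_mass(n, 2*n + 1, k):7.2f}")
```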
The values of the dissociation energies for the Si n O 2n+1 -cluster with n = (2-7), obtained by us according to the above-described method from the experiment, are in the range 2.8-4.8eV [5].Based on these data, the calculated values of the excitation energies were obtained, which lie in the range 3.68-17.58eV for clusters with n=2-7 and 0.26-0.35eV, respectively, per one oscillator.Analisys of ion yields illustrated that 1) the yields of the Si n O 2n+1 -magic cluster with H 2 O and O 2 inject are increased compared to the spectrum without injecing, 2) hydrated oxides of the type Si n O 2n+1 H -, Si n O 2n+1 H 2 -, Si n O 2n+1 H 2 -, and the latter begins to appear only in magic clusters with n≥2.In magic clusters, the peaks of one hydrated (hydride) oxide are somewhat higher than the previous peak of Si n O 2n+1 -oxide.Hydrogenation is a simple method for stabilizing the silicon surface against oxidation and thus is important in microelectronics.Reaching an inactive silicon surface, ideally ending in hydrogen (hydrogen), has been accomplished in the last 40 years.However, oxidation still takes place even on hydrogen terminated atomistic flat silicon surfaces.Electrochemical hydrogen passivation for easily emitting silicon pores is an essential but unstable process.Recently silicon nanowires have been produced for large scale synthesis.An essential requirement for their widespread use has been fulfilled.This is both technological and scientific importance of finding ways to stabilize them, so as to skillfully avoid the problems of degradation and low photoluminescence.To achieve this goal, the study of hydrogenated silicon clusters in relation to their certain local structural stability should be urgent, but so far, such information is not attainable.In this experiment, we studied the reaction of an aqueous molecule on different hydrogenated silicon clusters in order to relate stability to the local hydrogen configuration, and to shed light on the way to achieve stability, non-reactivity of hydrogenated silicon structures.Since size effects often appear for properties such as energy gap when the size of cluster structures reaches nanometers, we focus our study on the effect of size on oxidation and thus on stability.In [7], models of hydrogenated surfaces Si (001) and Si (111) -Si 9 H 14 -SiH 2 and Si 10 H 15 -SiH 3 were obtained, as well as smaller Si 2 H 6 -SiH 2 and Si 5 H 10 -SiH 2 , SiH 3 -SiH 3 and Si 4 H 9 -SiH 3 .These reactions were studied to develop a dimensional dependence of reactivity with water and the stability of local structures.The reaction starts from both sides and then a transition state (TS) is formed.The reaction is eventually completed by an H 2 molecule that is released and forms OH attached to the silicon cluster.It turns out that the reaction can go through: Si 9 H 14 -SiH 2 + Н 2 О→IC→TS→ Si 9 H 14 -SiHOH + H 2 The appearance of a weak intermediate complex of a similar reaction -IC, which is 1.4 kcal/mol lower than the reaction.In this case, the O-Si1 distance = 4.054 Å.Thus, it is similar to the fact that the forces for this weak chelator are the dipole-dipole interaction.The system then has to go through a transient state TS in order to reach the product.In the case of ТS with the О-Si1 bond, the distance between the attached О of the Н 2 О atom and the attached silicone atom on the surface of 1.855Å is slightly longer than in the product; accordingly, the formed Н 1 -Н 2 bond is much longer than the regular value in the product.Meanwhile, one of the two 
hydrogen atoms in the water molecule, H 1 , moves away from the oxygen atom, resulting in an H 1 -O distance of 1.110 Å, longer than in the IC. The hydrogen atom H 2 is 1.842 Å away from the silicon atom. The energy barrier for this reaction is 44.5 kcal/mol relative to the reactants. From the thermodynamic point of view the reaction is favourable, since it is exothermic by 14.9 kcal/mol. Similar reactions occur for the other complexes; each of them involves an intermediate complex with an energy of almost 2.3 kcal/mol. As the cluster size increases for each type of silicon-hydrogen configuration, the energy barrier decreases, which indicates an increase in the reactivity of the system. Reactions with SiH 3 show a smaller energy barrier than those with SiH 2 . To give further support to the reactivity trend, rate constants were calculated at 1 atm for the reaction with water in the different hydrogenated configurations. For reactions on a dihydride, the calculated reaction rate for medium clusters shows a much larger increase relative to small clusters, but it grows only slightly with a further increase in cluster size. Confirmation of the above also follows from an analysis of the frontier orbitals. It has previously been found that the nature of a chemical reaction can be determined by the overlap between the highest occupied molecular orbital (HOMO) of one molecule and the lowest unoccupied molecular orbital (LUMO) of the other. A smaller energy difference between the HOMO of one molecule (electron donor) and the LUMO of the other (electron acceptor) indicates a more favourable reaction. The trend is set by the relative positions of the HOMO and LUMO of the individual hydrogen-silicon clusters: as the cluster size increases, the HOMO generally rises and the LUMO falls, so that the energy gap decreases. Since the size effect on the energy barrier is well known for silicon clusters with dimensions in the nanometer range, the correlation of reactivity with this well-documented size effect could provide an important implication for the size-dependent reactivity found here. Based on the small silicon clusters, it is expected that the reactivity and rate constants of large clusters will also stabilize at a given temperature and pressure. Conclusion The performed experiments indicate that ion sputtering is an effective method for generating heteronuclear oxide clusters Si n O m - and Si n O m H k - of various sizes. Qualitatively, the formation of these heteronuclear clusters can be described in terms of modern concepts [8], and the main channels of their formation are determined in accordance with the mechanism of combinatorial synthesis [6,[9][10][11]. Fig. 2. Diagram of yields of Si n O m - clusters (n = 1-3, m = 1…2n+1) obtained at an O 2 + current of I = 100 nA on the surface of the Si target; I - residual vacuum P = 6.5*10 -6 Pa, the surface was cleaned with an ion beam after H 2 O had been injected into the chamber; II - H 2 O admitted into the chamber up to P = 2.5*10 -3 Pa.
Fig. 3. Diagram of yields of Si n O m - and Si n O m H - clusters under bombardment of the Si target by O 2 + ions with H 2 O vapor in the chamber. Observation of the change in the yields of the "magic" [5,6] Si n O 2n+1 - clusters showed that their intensities grow with increasing pressure and reach a maximum at P = 4-5*10 -3 Pa. In this case, the absolute values of their intensities when oxygen and when water vapor are admitted into the chamber at the same pressure differ little. The most significant difference is that when H 2 O is injected, additional intense peaks of Si n O 2n H k - and Si n O 2n+1 H k - (k = 1-3) clusters appear in the mass spectrum of sputtered clusters. Fig. 2 shows the overall change in the yield of Si n - and Si n O m - clusters (n = 1-3) when water vapor is admitted into the bombardment chamber at a primary O 2 + ion current of I 0 = 100 nA. As can be seen from Fig. 2, the presence of additional oxygen atoms on the sputtered surface significantly changes the mass spectrum of emitted clusters. The peak intensities of both homonuclear Si n - and heteronuclear Si n O m - clusters with m<2n decrease significantly (by one or two orders of magnitude) when H 2 O is injected onto the surface, whereas the yields of Si n O 2n - and Si n O 2n+1 - clusters increase several times. This increase in the yields of Si n O 2n - and Si n O 2n+1 - with H 2 O vapor can be associated, firstly, with an increase in the concentration of oxygen atoms in the ion bombardment zone and, secondly, with an increased stability of the Si n O 2n - and Si n O 2n+1 - clusters.
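The per-oscillator excitation energies quoted in the discussion above (0.26-0.35 eV for Si n O 2n+1 - clusters with n = 2-7) can be roughly checked by spreading the total excitation energy over the vibrational degrees of freedom of the cluster. The short Python sketch below assumes the oscillator count is the usual 3N-6 vibrational modes of a nonlinear N-atom cluster, with N = 3n+1 for Si n O 2n+1; this counting rule is an illustrative assumption, not a procedure stated by the authors.

# Rough check: excitation energy per vibrational mode for SinO(2n+1) clusters.
# Assumption (not from the paper): oscillators = 3N - 6 modes, with N = 3n + 1 atoms.
excitation_eV = {2: 3.68, 7: 17.58}   # endpoint values quoted in the text

for n, e_total in excitation_eV.items():
    n_atoms = 3 * n + 1               # n Si atoms plus (2n + 1) O atoms
    n_modes = 3 * n_atoms - 6         # vibrational degrees of freedom
    print(f"n={n}: {n_atoms} atoms, {n_modes} modes, {e_total / n_modes:.2f} eV per oscillator")

# Output: about 0.25 eV/mode for n=2 and 0.29 eV/mode for n=7,
# close to the 0.26-0.35 eV range quoted in the text.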
3,446.8
2023-01-01T00:00:00.000
[ "Materials Science", "Physics" ]
Structure of the Upper and Lower Surfaces of Human Corpus Callosumg : The corpus callosum in the interval between the cerebral hemispheres is a plate of white matter, uneven in thickness, in which two surfaces are distinguished - the upper and lower ones, bent according to its lateral profile. The objective of the study was to study the individual variability of location of the lateral and medial longitudinal strips on the upper surface of the corpus callosum, as well as structural features of its lower surface. The material was the brain of men and women (10 specimens each) of the second period of adulthood, who died for the causes not related to the pathology of the central nervous system. After two weeks of fixation in a 10% formalin solution, the brain was prepared by separating the cerebral hemispheres and other parts of the brain from the corpus callosum, resulting in exposure of its upper and lower surface, which was photographed using a digital camera. As evidenced by the obtained data, the width of the trunk of the corpus callosum in men varies from 9 to 16 mm, whereas in women the difference between the minimum (11.0 mm) and the maximum (20.0 mm) values is greater than in men, when in fact there is only small difference of the arithmetic mean value. Thus, we offer to consider the lateral longitudinal strips to be the boundaries of the corpus callosum hemispherical part and the distance between them determines the width of this formation, which in average is 13.0 ± 2.5 mm in men and 14.4 ± 2.7 mm in women. In the meantime, the nature of the individual variability of the width of the corpus callosum trunk in women is more diverse than in men. Introduction According to the information contained in the manuals of human anatomy, published at different times, including recent years, the corpus callosum located between the cerebral hemispheres is an uneven (in terms of thickness) plate of white matter, comprised of two surfaces -the upper and lower ones, bent to its side profile (Salvolini et al., 2010;Ardekani et al., 2012;Raybaud, 2010). On the upper surface there are some transverse strips (striae transversalis) that are visible in some places through a thin layer of induseum griseum. They are the outer reflection of the bundles of interhemispheric (cortico-cortical) nerve fibers transiting through the corpus callosum. The induseum griseum, according to the literature, has been studied quite superficially. In addition, the upper surface of the interhemispheric part of the corpus callosum attracts attention due to the presence of longitudinally stretched eminential strips, two of which are medially close (striae longitudinalis medialis) and there is a pair of lateral ones (striae longitudinalis lateralis) bordering the cingulate gyrus (gyrus cinguli). In the anterior part these strips, girdling the genu of corpus callosum, reach the subcallosal gyrus, and in the posterior part, they continue under the splenium of corpus callosum, reaching the hippocampal zone in the form of a dentate gyrus showing the annular arrangement in the limbic brain structures (Luders et al., 2010;Battal et al., 2010). According to the literature, these strips are represented by bundles of nerve fibers that provide associative interactions between distant ancient formations of the pallium. The lower surface of the corpus callosum is remarkable because of the fact that, slightly posteriorly from the middle of corpus callosum trunk, it is coalesced with the body of the fornix, also belonging to the limbic brain. 
But the consideration of the morphological connections of the corpus callosum does not end at this point. It should be noted that the space between the anterior part of the trunk, the genu and rostrum of the corpus callosum, on the one hand, and the columns of the fornix, on the other, are tightened by two, medially located thin plates of brain matter, which are separated by a narrow space called the septum pellucidum (Prakash et al., 2010). The above brief sketch of the outer structure of the corpus callosum aims to show how it is described in the literature. The objective of the study was to study the individual variability of location of the lateral and medial longitudinal strips on the upper surface of the corpus callosum, as well as structural features of its lower surface. Applying the Dissection of the Corpus Callosum The material obtained at Kharkiv Regional Bureau of Forensic Medical Examination was the brain of men and women (10 specimens each) of the second period of adulthood, who died for the causes not related to the pathology of the central nervous system. After two weeks of fixation in a 10% formalin solution, the brain was prepared by separating the cerebral hemispheres and other parts of the brain from the corpus callosum, resulting in exposure of its upper and lower surface, which was photographed using a digital camera. Morphometric analysis of the upper surface of the corpus callosum was performed by means of Adobe Photoshop CS6 Extended software. Some of the preparations were examined using routine histological methods with Van Gieson staining. Some plates, about 0,5 mm thick, were cut from part of the corpus callosum preparations. Then they were subjected to epoxy resin lamination applying the well-known method (Kostilenko et al., 2008) according to the following scheme: 1 -substitution of alcohol in tissues with acetone; 2substitution of acetone in the tissues with epoxy resin and immersion of the preparations into pure, immediately prepared, epoxy resin. The next step was to remove the preparations from the still unpolymerized epoxy resin and place them on the pre-prepared plastic sheeting, which were covered with the same size sheetings. After that, each such layered block was individually placed between two evenly sized glasses that were tightly clamped. After complete polymerization, polished sections of different thickness were made from the obtained epoxy plates with the preparations of corpus callosum. Then they were painted with 1% solution of methylene blue on 1% borax solution. The study of the obtained preparations, as well as their photographic documentation were carried out with the help of binocular magnifier MBS-9 and a light microscope "Konus, equipped with a digital photo set-top box. The research methods described in the publication were applied in compliance with human rights in accordance with the legislation in force in Ukraine, meet international ethical requirements and do not violate ethical norms in science and standards of biomedical research. 
Studying the Upper Surface of the Corpus Callosum As a result of the study of the upper surface of the corpus callosum, we can say that the name "strips" can be applied only with some stretch to the longitudinally oriented formations on the outer surface of the corpus callosum: the pair of them, in contralateral position, occupies a boundary position between the free part of the corpus callosum and the medial parts of the gyrus cinguli, although in appearance they have the shape of a rounded cord (about 1.5 mm thick) passing over a rough, uneven outer surface. Therefore, while retaining this name (lateral longitudinal strips), we must refer it to string-shaped conducting cords (i.e., comprised of nerve-fiber bundles) that are separated from the corpus callosum and connect the opposing distal centers of the limbic brain. It should be noted that in some cases there are individual variants in the form of small branches that are immersed in the medial direction into the thickness of the corpus callosum. In other cases, the lateral longitudinal strips do not have the form of continuous strands but resemble a suture line due to the wavy bending of their transverse rollers (Fig. 1, 2). However, these lateral strands (lateral longitudinal strips) can be considered lateral landmarks of the latitudinal boundary of the free (interhemispheric) part of the corpus callosum, and therefore we can determine its width by simply measuring the transverse distance between them. These metrics are indicative only, since only 10 preparations of the corpus callosum each from men and from women were taken to obtain them. However, judging by the random sampling error, they are quite reliable. It should be noted that these results are limited to the trunk section of the corpus callosum. According to them, the width of the trunk section of the corpus callosum in men varies from 9 to 16 mm (the arithmetic mean value is 13.0 ± 2.5 mm), whereas in women the variation between the minimum (11.0 mm) and the maximum (20.0 mm) values is greater than in men, with only a small difference in the arithmetic mean of 14.4 ± 2.7 mm. In other words, the individual variability of the width of the corpus callosum trunk is more diverse in women than in men. In the intermediate position between these nerve fibers (lateral longitudinal strips) along the upper surface of the corpus callosum there is, according to the literature, a pair of similar formations called the medial longitudinal strips. However, according to our observations, they do not in all cases show a paired arrangement; that is, their shape varies individually. Often this formation looks like a single longitudinal strand corresponding to the median plane of the brain (Fig. 1, 2); in other cases it splits in some places. Along with this, in our small sample there were also variants of a relatively wide split. In such cases, on the upper surface of the trunk section of the corpus callosum there were four strands, approximately equally spaced from each other, that is, two medial and two lateral ones (Fig. 2). But in no case did we come across such a formation in the classic version, that is, in the form of two parallel medial longitudinal strips. Of course, this does not mean that such variants do not exist.
Probably, they simply did not get into our sample of preparations, which testifies to the possible existence of many other individual forms of the outer contour of the obligate formations connected with the upper surface of the corpus callosum, which are closely fused with the subjacent part and therefore inseparable from it. Additional individual evidence of this is the local immersion of the longitudinal strips, occurring periodically along their length, into the thickness of the corpus callosum. This, as noted above, sometimes gives them the appearance of a suture. Besides, their shape depends largely on the relief of the surface on which they are longitudinally laid; and this already concerns another type of formation of the upper surface, called transverse strips in the literature (i.e., oriented at right angles to the longitudinal ones). In addition, the literature does not pay much attention to the study of the lower surface of the corpus callosum. It is only known that in its trunk part it forms the upper wall of the central parts of the lateral ventricles, which are separated from each other in the median plane by the septum pellucidum, fused with the lower surface of the corpus callosum and supplemented at the back by the body of the fornix. It is pertinent to recall that the fornix begins with columns from the mammillary bodies (corpus mamillare), which then merge into the body of the fornix, which fuses with the lower surface of the corpus callosum (at the border between the posterior part of the trunk and the splenium). From here the fornix splits up, heading for the anterior poles of the temporal lobes, where it continues into the right and left hippocampus. Given that the fused part of the fornix adheres from below to the corpus callosum, its indirect role in switching interactions between the limbic brain and the neopallium through the corpus callosum collector system becomes apparent. Not to be overlooked is the fact that the space between the anterior part of the trunk, the genu and the rostrum of the corpus callosum, on the one hand, and the columns of the fornix, on the other, is tightened by two medially spaced thin plates of brain matter, which are separated by a narrow space of approximately 1 mm. This is the so-called septum pellucidum. All of the above is easy to verify during the preparation of the corpus callosum to gain access to its lower surface. Thus, when the septum pellucidum and fornix are removed, the remnants of the described formations can be clearly seen on its lower surface. In this case, the septum pellucidum looks like a double, medially located rim that splits up backwards towards the site of adhesion of the fornix (Fig. 1). Note that this double rim of the septum pellucidum coincides exactly with the projection of the medial longitudinal strip located on the upper surface of the corpus callosum. It may seem that the septum pellucidum, passing through the corpus callosum, protrudes on its upper surface; further studies have shown that this is not the case. In terms of our consideration of the outer structure of the corpus callosum itself, its lower surface is interesting because under the ependyma there are quite clearly visible, transversely placed, roller-like elevations, similar to the same elevations on its upper surface, with the only difference being that they are more uniformly wide, without any dichotomous division.
These elevations form orderly rows on either side of the mid-point attachment of the septum pellucidum (Fig. 1). It is in this order that they extend into the thickness of the white matter of the cerebral hemispheres. Based on the above data, we can conclude that the corpus callosum itself, in the sense that it provides a predominantly commissural connection between the contralateral cortical centers of the neopallium, consists of a certain number of cord formations that are visualized at the macroscopic level (the naked eye). Due to the fact that they are not mentioned in the literature, we propose to call them commissural cords of the corpus callosum or its funicular components, which can be considered as first-order subcutaneous units. The Surface of The Corpus Callosum It is necessary to pay attention to the surface layer of the upper surface of the corpus callosum -the so-called indusium griseum (Fig. 3). To get clearer visualization of the microscopic structure of the surface, we made extremely thin epoxy resin plates of corpus callosum, the thickness of which did not exceed 0.5 mm. As a result, they could be viewed in a passing microscope at relatively high magnifications (with 10 and 20 х lenses). Fig. 4 shows that this indusium griseum generally has a predominantly pileous structure, which is a close set of double-loop fibers extending from the thickness of the commissural cords at right angles to the upper surface of the corpus callosum. On their apical parts in the surface plane there are star-shaped (due to the numerous, radially directed apophyses) cells located in a regular order. Although they are very similar to dendrites of neurons, these nerve cells do not belong to nerve cells, which is confirmed by the literature devoted to the study of the structural organization of the outer surface of the brain. According to these sources, the central nervous system, as a whole, is completely covered with glial cells along the entire surface, both internal (from the ventricles) and outer ones (from the soft, vascular membrane). On the outer surface of the brain (the corpus callosum is no exception) there is a limiting glial membrane (membrana limitans gliae superficialis), represented by lamellar apophyses of astroglia, which overlap each other, and the bodies of astrocytes themselves. This membrane is separated from the superimposed soft, vascular membrane by the basal membrane only. Therefore, our own data concerning the surface coverage of the corpus callosum are consistent with the ideas about the nature of the terminal limitation of the outer surface of the brain substance itself. So, like everywhere, the corpus callosum is covered with the thinnest layer of astroglia. But we still do not know exactly what the main thickness of the indusium griseum located below the corpus callosum is. Let's just mention that, as noted above, it consists of a close set of double-loop fibers coming out of the thickness of the commissural cords, are directed at right angles to the limiting glial membrane. Careful examination of the preparations reveals that in this continuous fibrous coating of the corpus callosum upper surface there are some assemblies having a columnar shape; in Figure 4, they are marked with arcuate brackets. It was found out that the width of these columnar assemblies is commensurate with the width of the fascicular portions, as subunits of commissural cords. Conclusions 1. 
We propose to consider the lateral longitudinal strips that run along the upper surface of the corpus callosum to be the marginal boundaries of its free (interhemispheric) part. The distance between them determines the width of this formation, which in men varies individually from 9 to 16 mm (on average 13.0 ± 2.5 mm), whereas in women the difference between the minimum (11 mm) and the maximum (20 mm) values is slightly larger, while the mean value of 14.4 ± 2.7 mm differs only slightly. In other words, the individual variability of the width of the corpus callosum trunk section is more diverse in women than in men. 2. Our research results are consistent with the literature and confirm the current knowledge concerning the nature of the terminal limitation of the corpus callosum outer surface, which is covered by a layer of astroglia.
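The width statistics in these conclusions rest on samples of ten specimens per sex. A minimal Python sketch of how the reported means, standard deviations, and the random sampling error mentioned in the results can be checked is given below; the individual measurements are not published, so the arrays are hypothetical values consistent only with the reported ranges, not the authors' data.

import statistics as st

# Hypothetical widths (mm) consistent with the reported ranges, NOT the raw data.
men = [9, 11, 12, 12, 13, 13, 14, 14, 15, 16]      # reported range 9-16 mm
women = [11, 12, 13, 13, 14, 14, 15, 16, 18, 20]   # reported range 11-20 mm

for label, sample in (("men", men), ("women", women)):
    mean = st.mean(sample)
    sd = st.stdev(sample)              # sample standard deviation
    se = sd / len(sample) ** 0.5       # standard error of the mean for n = 10
    print(f"{label}: mean = {mean:.1f} mm, SD = {sd:.1f} mm, SE = {se:.2f} mm")

# With the real measurements these lines should reproduce 13.0 ± 2.5 mm (men)
# and 14.4 ± 2.7 mm (women); the SE shows how reliable a 10-specimen mean is.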
4,093.6
2021-07-19T00:00:00.000
[ "Biology", "Medicine" ]
Research on Properties of X-Ray Detection Film Based on Thallium Doped Cesium Iodide As X-ray detection imaging has a wide range of applications in medicine, industry, public safety, etc., it is of great significance to study its imaging mechanism and improve its imaging performance. Based on the process of X-ray luminescence in the scintillator material, this paper established a simulation model using a microcrystalline column structure to investigate the relationship between the thickness of the detection film and the light conversion efficiency. With the help of the simulation tool MATLAB, the Monte Carlo method was used to simulate the light conversion process of X-ray in the film, and the results were obtained as follows. Under the condition of other parameters unchanged, the luminous efficiency reached the peak value with the increase of the film thickness, and then gradually decreased with the increase of film thickness. The reason why the conversion efficiency in the early stage increases with the increase of the film thickness is that the film is in a saturated state, and increasing the thickness can cause more X-ray particles to be converted. As the film thickness increases, more fluorescent photons are absorbed as they propagate in the film, resulting in a gradual decrease in conversion efficiency. Therefore, an appropriate film thickness can be selected based on the simulation results to obtain the ideal light conversion efficiency. Introduction X-rays have a very strong penetrating ability. When Xrays penetrate an object, each part of the object will absorb X-rays to varying degrees, which can also be regarded as modulating X-rays. The modulated rays carry structural information inside the object, and then convert the information carried by the X-rays into image information to realize X-rays imaging. Key researches on X-ray imaging have been carried out at home and abroad. The Shanghai Institute of Ceramics used non-vacuum technology to successfully prepare thallium-doped cesium iodide crystals in 2004, which overcomes the defects of the traditional international vacuum growth method. In 2009, the Department of Nuclear and Quantum Engineering of Korea Advanced Institute of Technology developed and tested the pixelated CsI:Tl scintillator film for X-ray image sensors. In 2015, Siegen University in Germany reported a high-energy particle detector composed of cesium iodide and silicon. By combining a low-noise fully depleted CCD detector with a CsI:Tl scintillation screen, a high quantum efficiency (QE) energy dispersion area detector can be realized in the range of less than 1 keV to more than 100 keV. With the continuous development of technology, X-ray detection technology has been widely used in industry, medicine and other fields. At present, scintillator detector has attracted much attention for its low cost and excellent performance, and it is a key part of the X-ray detection imaging process. Research on how to improve the light conversion performance of the scintillator is of great significance for improving the imaging quality. This paper established a theoretical model and used Monte Carlo simulation to study the relationship between the thickness of the scintillator material and the light conversion efficiency, which can not only improve the clarity of imaging, but also optimize the structure of imaging equipment on this basis, which has positive significance for the development of imaging technology. 
Principle of imaging According to quantum theory, the reason why X-rays can excite photons after entering the scintillator is that the inner electrons of the atoms in the scintillator are excited by the X-rays and jump to high-energy electron orbits to generate a hole. The outer electrons of the scintillator atoms fill the holes generated by the excited electrons, release energy, and produce fluorescence. During this whole process, Compton effect and photoelectric effect will occur, as shown in Figure 1. The principle of X-ray detection and imaging is shown in Figure 2. Since the X-ray entering the scintillator is modulated by the object, its intensity is the information of the object's structure, and the number of photons excited by X-rays of different intensities is different, so the light and shade of the image can reflect the information of the object structure. In this way, how to ensure the directional transmission of the excited photons of the scintillator material and the sensitivity of the scintillator material to X-rays is the focus of the next research direction. Since the crystalline pillar formed during the growth of the film material has a function similar to that of an optical fiber, the natural microcrystalline pillar structure can effectively solve the directional transmission of photons, and increasing the light conversion efficiency of the scintillator material can ensure its sensitivity to X-rays. Modeling and simulation Since the area of each unit of the detection film is small, the incident X-ray can be approximated as a continuous ideal layer structure with vertical incidence. Assuming that the emitted photons of the X-ray point source are only monochromatic photons with energy , when the X-ray incident depth is z, the attenuation energy can be expressed as: In formula (1), μ is the absorption coefficient of CsI:Tl crystal for X-ray particles, in / ; ρ is the density of CsI:Tl crystal, in / . Differentiate formula (1) to z to obtain the X-ray energy absorbed by the CsI:Tl crystal per unit length: If the crystalline pillar is divided into a layer thickness of 1 um from top to bottom, the energy absorbed by the ith crystal layer is: Obtained by the empirical formula, every 20 eV of energy is absorbed, and a visible photon is produced. And the direction of the photons generated by each layer of film satisfies free excitation (direction angle obeys random distribution). Based on this, we can get the number and direction of photons generated in each film. Supposing the film thickness is L, the ratio of the ray energy absorbed by the scintillator film due to the photoelectric effect to the incident ray energy in the thickness z~z+dz at the point Q (x, y, z) is: In the formula, is the photoelectric absorption coefficient of fluorescent photons in the scintillator film. Figure 3 Schematic diagram of type A fluorescence propagation In an ideal continuous layer of scintillator film, since fluorescent photons are emitted in all directions with equal probability, the probability of a certain fluorescent photon emission direction in the cone solid angle element ~ can be calculated as: Because the fluorescent photons are partially absorbed by the film during the transmission, the linear absorption coefficient of the film for photons during the process is , and the reflection coefficient of the reflective film Al is . 
Then the transfer function of the fluorescent photon with the exit angle generated by the X-ray excitation at the depth z can be expressed as: According to the Monte Carlo method, the above ratio is integrated on z, and the probability and transfer function are integrated on the angle π to obtain the mean value of the fluorescence transmittance , , , that is, the solution to the problem with Monte Carlo method. Here L is the thickness of the film. According to the formula of total reflection, the critical angle when the total reflection of the fluorescent photon occurs in the crystalline pillar is about 34°. Therefore, when the incident angle of fluorescent photons is larger than 124° or less than 34°, it will all propagate in the crystalline pillar. As shown in Figure 3, both type A and type B light can be totally reflected in the crystalline pillar. Type B fluorescence is collectively referred to as type A fluorescence. For type A fluorescence, the angle between it and the crystalline pillar does not change when it spreads in the crystalline pillar, and the path length of the fluorescent photon is equal to the path length in the ideal continuous layer, so the transmitted rate of the fluorescence photon in the ideal continuous layer is still applicable in type A fluorescence. When the emission angle of the fluorescent photon on the inner wall of the crystalline pillar cannot satisfy the total reflection, such as the fluorescence in Figure 4, this type of fluorescence should account for 17% of the total fluorescence. We refer to this type of fluorescence as type B fluorescence. Because this type of fluorescence cannot be totally reflected, B fluorescence will pass through the crystal pillars and propagate in the gap between the crystal pillars. Under ideal circumstances, we consider that the gap between the crystal pillars is extremely narrow. In this case, type B fluorescence can be regarded as a relay in a continuous ideal layer, and its light transmittance can still be expressed by formula (7). In the actual process, there will be some semicrystalline pillars. When the fluorescent photons propagate in this kind of pillars, they will be reflected between the pillars and the reflective layer. We refer this type of fluorescence as type C fluorescence, as shown in Figure 5. The transfer function of type C fluorescent photons is: and are the reflection coefficients on the Al substrate and the Al wall, respectively, and is the critical angle of total reflection. Simulation results The simulation results of the microcrystalline pillar structure are shown in Figure 6. Result analysis Results can be seen from the above figure as follows. When the other parameters remain unchanged, the luminous efficiency gradually reaches the peak value with the increase of the film thickness, and then gradually decreases with the increase of the film thickness, and the rising speed is faster, and the falling speed is slower. The reason why the initial luminous efficiency increases with the increase of the film thickness is that the luminous efficiency of the film is in a saturated state at this time, and a large number of X particles have almost excited all the photoelectric effects that can occur. In this case, as the film thickness increases, more X particles have a photoelectric effect, and the luminous efficiency increases rapidly. 
The reason for the subsequent decrease is that, as the film thickness continues to grow beyond the peak, the path that the excited fluorescent photons must travel inside the film lengthens, losses grow, and more photons are absorbed, resulting in a slow decline in luminous efficiency.
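The thickness dependence described above can be reproduced qualitatively with a few lines of Monte Carlo code. The sketch below is written in Python rather than the MATLAB used by the authors, and the attenuation and self-absorption coefficients are illustrative placeholders, not the CsI:Tl values from the paper; it only demonstrates the generate-then-reabsorb competition that produces a peak in light output versus film thickness.

import numpy as np

rng = np.random.default_rng(0)

def escape_fraction(thickness_um, mu_x=0.02, mu_opt=0.005, n_photons=200_000):
    """Fraction of generated fluorescence that escapes the exit face.

    mu_x   : X-ray linear attenuation coefficient (1/um), placeholder value
    mu_opt : optical self-absorption coefficient (1/um), placeholder value
    """
    depth = rng.exponential(1.0 / mu_x, n_photons)   # depth of each X-ray interaction
    interacted = depth < thickness_um                # photons that deposit energy in the film
    cos_t = rng.uniform(0.05, 1.0, n_photons)        # emission direction, grazing rays excluded
    path = (thickness_um - depth) / cos_t            # distance to the exit face
    escape = np.exp(-mu_opt * path)                  # survival probability of the fluorescence
    return float(np.mean(np.where(interacted, escape, 0.0)))

for L in (50, 100, 200, 400, 800):
    print(f"L = {L:4d} um  relative light output = {escape_fraction(L):.3f}")

With these placeholder coefficients the output rises with thickness while the film is still thin enough to miss many X-rays, peaks, and then falls as self-absorption of the fluorescence dominates, matching the trend described in the result analysis.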
2,377.2
2021-01-01T00:00:00.000
[ "Physics" ]
Effect of the Asset Quality on the Bank Profitability This study investigates whether non-performing loans effect the bank’s profitability in Turkey. The study applies a panel regression method to the quarterly data set including 1809 observation belongs to 55 Banks in Turkey during the period from 1 st quarter of 2005 to 3 rd quarter of 2016. It is found that there is a significant, negative relationship between non-performing loans and bank profitability which is measured by return on equity and return on asset. The higher non-performing loans, the lower asset quality, leads to the lower return on equity and return on asset, and the lower non-performing loans, the higher asset quality, leads to the higher return on equity and return on asset. Introduction Although the asset quality is important for all companies, it has significant importance on profitability of banks that are crucial components of financial markets and proper process of the banking operations as well as the financial system and accordingly national economy. Asset quality in banks is related to the quality of loans provided by the bank and the quality of loans can be measured with the non-performing loan (NPL) where NPL consist of overdue loans and follow-up loans. According to Bernanke, Lown, and Friedman (1991), non-performing loans or lower asset quality, in economies that have bank based financial systems which is also known as "credit crunch", may defer economical recovery by decreasing operating profit margin or eroding capital base for new loans. For Klein (2013), non-performing loans will effect profitability of banks which is their main profit source and ultimately financial stability of economy. Lower asset quality or non-performing loans reaching substantial amount may lead to bank bankruptcies and economic slowdown (Adhikary, 2006;Barr & Siems, 1994;Berger & DeYoung, 1997;Demirguc-Kunt, 1989;Whalen, 1991). Considering that one of the main reasons for 2008 global crisis is lower quality assets, which can be defined as toxic assets, measuring non-performing loans, analyzing their effects well and producing required economic policies have significant importance for whole economy as well as the banks themselves. Accordingly, especially within last 25 years, regulations are put in to effect by national and international institutions in order to determine asset quality with regards to the importance of it. In 1995 at the United States of America, United States Federal Reserve Board bring "Standards for safety and soundness" into force which stipulates regular reporting obligation on asset quality for board of directors of banks in order to evaluate the risks on deformation of asset quality and to form asset quality supervision systems by financial institutions in order to define problems that may arise with regards to asset quality (Eze & Ogbulu, 2016). and Gorus (2016), Ozurumba (2016), Sarıtaş, Uyar, and Gökçe (2016) are examples of lower asset quality or non-performing loans affecting profitability of banks negatively. On the other hand, where Adebisi and Matthew (2015), Güneş (2015), Samırkaş, Evci, and Ergün (2014) did not come up with a correlation between Return on Equity (ROE) and NPL; Afiriyie and Akotey (2013) and Bhattarai (2016) found positive correlation between ROE and NPL and Buchory (2015) found positive correlation between Return on Assets (ROA) and NPL. 
Within the scope of this study, the effect of the asset quality (non-performing loans) on bank profitability (ROE or ROA) is investigated for Turkish banking sector. In this manner, quarterly solo financial statements, prepared in accordance with International Financial Reporting Standards (IFRS), that belong to the period from 1 st quarter of 2005 to 3 rd quarter of 2016, of 55 banks operating in Turkish banking sector is observed. Panel regression method is used to determine the relationship between "the ratio of the follow-up loans to asset" and "ratio of provisions for overdue loans to total loans" which are independent variables and ROA/ROE which are dependent variables. Our study, under which the effect of non-performing loans to bank profitability is investigated, separates from the other studies made for Turkey as it compromises of two different variables at the same time and as it directly measures the effect of non-performing loans to bank profitability and it uses recent and quarterly data. As per the results of bilateral fixed effect panel regression analysis; negative relationship between variable ROA/ROE, indicating the bank profitability, and "the ratio of the follow-up loans to asset" and "ratio of provisions for overdue loans to total loans" measuring the asset quality is identified. Under Turkish banking sector, increase of non-performing loans decreases the bank profitability and decrease of the non-performing loans increases bank profitability. Literature summary is provided hereinafter of this study and under third section information on data and method is provided and within the scope of fourth section, empirical results are defined. The fifth and the last section of this study comprises of result and suggestions. Literature Review As known, the main asset effecting the asset quality negatively in banks is the non-performing loans. Mainly, non-performing loans arises in cases where the principal or interest amounts of loans provided by banks are not paid back as planned. In Basel criteria determined by Basel Committee within the scope of effective supervision of banking sector, asset quality is measured regarding the capital adequacy. In that, within the scope of regulations of Basel on capital adequacy, in order to measure capability of a bank to solvency, the ratio of capital to risk weighted assets is used and this ratio is expected to be at least 8%. Weighting of assets in accordance with the risk means elimination of possible impairment in assets and accordingly increase of the asset quality. While measuring asset quality prudentially, it is possible to use risk weighting approach in order to measure asset quality pertaining to previous periods non-performing loans can be taken into consideration. While the high amounts of non-performing loans indicate the lower asset quality; lower amounts non-performing loans indicates higher asset quality. By Adhikary (2006), inadequate audit and supervision function, lack of required regulations and lack of effective debt improvement strategies are shown with regards to the reasons of non-performing loans. Choudhury and Adhikary (2002) states that non-performing loans are not in one type and they can be categorized under different groups according to the time period elapsed following their due date. Non-performing loans are not basic results of loan providing process but accepted as a typical by-results of financial crisis. 
However, non-performing loans have substantial potential to increase the severity and duration of a financial crisis and to complicate macroeconomic management when they arise incidentally from the loan-granting process. Hence, non-performing loans may result in a loss of investor trust in the banking system, the accumulation of unproductive economic resources, and a collapse of the resource allocation process (Woo, 2000). Non-performing loans that are created by borrowers on purpose and left unsettled may produce contagious financial fragility by alienating good borrowers. According to Muniappan (2002), non-performing loans affect not only the profitability of banks, through the cost of carrying an asset that does not generate income, but also negatively affect capital adequacy. Non-performing loans also lead bank management to spend excessive time and effort; this is an indirect cost that the bank has to bear as a result of low asset quality. Non-performing loans not only cause a loss of interest income but also affect future profit flows through the lost opportunity of realizing profitable investments. Additionally, non-performing loans risk damaging the reputation of banks. An increase in non-performing loans, once they reach a substantial amount, limits the bank's opportunities for co-financing and syndication with other banks by negatively affecting its credit rating as well as its profitability. Nevertheless, the relationship between profitability and non-performing loans is not clear-cut (Bhattarai, 2016). Despite on-going efforts to control the lending activities of banks, non-performing loans constitute a main concern of both international and national regulatory authorities. According to a report published by the IMF in 2007, the ratio of total non-performing loans differs radically between countries, especially between developing and developed countries (Boudriga, Boulila Taktak, & Jellouli, 2009, p. 287). While countries such as Egypt, Nigeria, the Philippines, Morocco, Algeria, and Tunisia (more than 15%) have trouble with low-quality loans, there is no indication that countries such as Sweden, Norway, Finland, Australia, and Spain (less than 1%) face problems arising from the erosion of asset quality. Besides, in recent years a significant number of studies have concentrated on the key role of asset quality in predicting bank failure (Barr & Siems, 1994;Berger & DeYoung, 1997;Demirguc-Kunt, 1989;Whalen, 1991). To the best of our knowledge, although no study for Turkey has directly investigated the effect of asset quality on bank profitability, there are studies that accept non-performing loans as one of the explanatory variables in investigating the factors determining bank profitability. The studies of Taşkın (2011) and Akbaş (2012) on the determinants of bank profitability accepted the ratio of loan loss provisions to assets as the measure of non-performing loans, whereas Sarıtaş et al. (2016) accepted the ratio of non-performing loans to assets. All of these studies take ROE or ROA as the measure of bank profitability and, as a result, find a negative relationship between non-performing loans and ROA and ROE.
Ozgur and Gorus (2016) accepted the ratio of non-performing loans to total cash loans as the measure of non-performing loans and found a negative relationship between ROA and NPL. In the studies conducted by Güneş (2015) and Samırkaş et al. (2014), no relationship was found between non-performing loans, among the other factors, and ROE or ROA. Data In this study, quarterly solo financial statements prepared in accordance with IFRS for the period from the 1st quarter of 2005 to the 3rd quarter of 2016, belonging to 55 banks operating in the Turkish banking sector, are used, giving 1809 observations. The data are obtained from the public database of The Banks Association of Turkey. Within the scope of our study, data of deposit banks are used alongside the data of investment and participation (the Turkish type of Islamic banking) banks. Accordingly, the number of banks used by years is as shown in Table 1. The variables considered within the scope of this study are calculated from the balance sheet and income statement accounts stated below. Return on equity (ROE) and return on assets (ROA) are used for bank profitability, as in numerous studies (Adebisi & Matthew, 2015;Bhattarai, 2016;Güneş, 2015;Ozgur & Gorus, 2016;Sevim & Eyüboğlu, 2016;Taşkın, 2011). The variable EQ2TA, the ratio of total equity to total assets, is used to control for the equity size of the bank where the dependent variable is ROE and for the asset size of the bank where the dependent variable is ROA. Descriptive statistics regarding the data used in our study are given in Table 3. As shown in Table 4, the correlation coefficients between the variables are at acceptable levels. In particular, the coefficient of 0.25 between the ratio of non-performing (follow-up) loans to assets (TK2TA) and provisions for non-performing (overdue) loans to total loans (PRO2L) indicates that there will be no multicollinearity problem. These two variables measure loan quality, in other words non-performing loans, from different points of view: overdue loans are the pre-phase of follow-up loans and carry the possibility of turning into follow-up loans. Methodology In our study, the effect of non-performing loans on bank profitability is investigated for the Turkish banking sector with the following specification: ROE i,t (or ROA i,t ) = α + β 1 PRO2L i,t + β 2 TK2TA i,t + β 3 EQ2TA i,t + ε i,t . Here, ROE i,t indicates the return on equity of bank i in year t, ROA i,t indicates the return on assets of bank i in year t, PRO2L i,t indicates the ratio of provisions for non-performing (overdue) loans to total loans of bank i in year t, TK2TA i,t indicates the ratio of non-performing (follow-up) loans to assets of bank i in year t, and EQ2TA i,t indicates the ratio of equity to assets of bank i in year t. The hypotheses stated below are used in our study in order to test the effect of non-performing loans on bank profitability. H1a: Provisions for non-performing (overdue) loans affect bank profitability negatively. H1b: Non-performing (follow-up) loans affect bank profitability negatively. Empirical Results In empirical studies, analysis is usually made under the assumption that the data are stationary. However, data, including panel data, are sometimes not stationary; in other words, they have a unit root. According to some researchers, data that are not stationary, or that have a unit root, cause the variable to have a non-constant mean over time. This situation results in a high autocorrelation problem together with a low Durbin-Watson statistic (Kutty, 2010). In this study, all variables were tested for a unit root.
In this manner, the unit root tests of Levin, Lin, and Chu (2002), Im, Pesaran, and Shin (2003), Phillips and Perron (1988) and the Augmented Dickey and Fuller (1979) test are used. The results of the unit root tests are given in Table 5. Note. A probability value lower than 5% shows that the hypothesis H0 of "there is a unit root" is rejected. As can be seen from Table 5, the variables used within the scope of our study are stationary; the test results of at least three of the four different tests are statistically significant at the 1% level. The results of the equation used to test the effect of non-performing loans on bank profitability are given in Table 6. In Table 6, separate estimations are made for ROA and ROE, which are taken as the dependent variables measuring the effect of non-performing loans on bank profitability. According to the estimation results in Table 6, 30% and 47% of the variability of ROE and ROA, respectively, can be explained by the variability of the independent variables, the ratio of non-performing (follow-up) loans to assets and the ratio of provisions for non-performing (overdue) loans to total loans, together with the control variable equity to assets. The F statistics indicating the overall significance of the models are significant at the 1% level. Again, as shown in Table 6, there is a negative relationship between TK2TA and both ROE and ROA at the 1% significance level. An increase in non-performing (follow-up) loans within total assets negatively affects bank profitability, and a decrease in non-performing (follow-up) loans within total assets increases bank profitability. Likewise, there is a significant negative relationship between PRO2L and ROA at the 1% significance level. While an increase in provisions for non-performing (overdue) loans to total loans affects profitability negatively, a decrease in these provisions increases bank profitability. The equity-to-asset ratio included in the equation as a control variable is positively correlated with ROE at the 5% significance level and with ROA at the 1% significance level. It can be concluded that using equity to finance assets affects profitability positively. The results concerning the relationship between non-performing loans and profitability are in line with the results of studies conducted by Abata (2014). Conclusion As the asset quality of banks has significant importance for the financial system of the country, and accordingly for the national economy, besides its effects on bank profitability, it is necessary to measure, oversee, and examine effectively the impacts of non-performing loans and accordingly to initiate effective economic policies. Within this scope, especially during the last quarter century, regulations and criteria aiming to ensure high asset quality have been put into force by both national and international organizations, and risk models have been developed in this regard. There are numerous studies in the literature directly investigating the effect of non-performing loans on bank profitability, besides studies taking non-performing loans as an explanatory variable among the determinants of bank profitability and examining the relationship between NPL and profitability. Despite counter findings, throughout the studies it is revealed that non-performing loans affect bank profitability negatively.
In this study, whether there is a relationship between non-performing loans (asset quality) and bank profitability (return on equity or return on assets) is investigated for the Turkish banking sector, using quarterly solo financial statements prepared in accordance with IFRS for the period from the 1st quarter of 2005 to the 3rd quarter of 2016 for 55 banks operating in the Turkish banking sector. While investigating the relationship in question, asset quality is measured with the ratio of non-performing (follow-up) loans to total assets and the ratio of provisions for non-performing (overdue) loans to total loans, and these independent variables are used to explain bank profitability. According to the results obtained from the two-way fixed-effects panel regression analysis, a significant negative relationship is found between the ROA/ROE variables indicating bank profitability and the non-performing loan variables indicating asset quality. It is found that in the Turkish banking sector an increase in non-performing loans decreases bank profitability and a decrease in non-performing loans increases bank profitability. Since asset quality may affect economic growth as well as bank profitability, it would be beneficial to investigate the relationship between asset quality and economic growth for Turkey.
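As an illustration of the two-way fixed-effects specification estimated above, the Python sketch below sets up a bank-quarter panel and regresses ROA on TK2TA, PRO2L, and EQ2TA with bank and quarter dummies. The data frame is synthetic and the variable names simply mirror the paper's; this is a minimal sketch of the estimation approach, not the authors' code or data.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Synthetic bank-quarter panel (placeholder for the 55-bank, 2005Q1-2016Q3 data set).
banks = [f"bank{i}" for i in range(20)]
quarters = pd.period_range("2005Q1", "2016Q3", freq="Q")
df = pd.DataFrame([(b, str(q)) for b in banks for q in quarters], columns=["bank", "quarter"])
n = len(df)
df["tk2ta"] = rng.uniform(0.00, 0.10, n)   # follow-up loans / assets
df["pro2l"] = rng.uniform(0.00, 0.08, n)   # provisions / total loans
df["eq2ta"] = rng.uniform(0.05, 0.25, n)   # equity / assets
df["roa"] = 0.02 - 0.15 * df.tk2ta - 0.10 * df.pro2l + 0.05 * df.eq2ta + rng.normal(0, 0.005, n)

# Two-way fixed effects implemented with bank and quarter dummies.
fit = smf.ols("roa ~ tk2ta + pro2l + eq2ta + C(bank) + C(quarter)", data=df).fit()
print(fit.params[["tk2ta", "pro2l", "eq2ta"]])   # expect negative, negative, positive signs

On the real data the signs of the tk2ta and pro2l coefficients, and their significance levels, are what Table 6 of the paper reports; here they are built into the synthetic data only to make the sketch self-checking.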
4,025
2017-06-02T00:00:00.000
[ "Business", "Economics" ]
Posterior Asymptotic Normality for an Individual Coordinate in High-dimensional Linear Regression We consider the sparse high-dimensional linear regression model $Y=Xb+\epsilon$ where $b$ is a sparse vector. For the Bayesian approach to this problem, many authors have considered the behavior of the posterior distribution when, in truth, $Y=X\beta+\epsilon$ for some given $\beta$. There have been numerous results about the rate at which the posterior distribution concentrates around $\beta$, but few results about the shape of that posterior distribution. We propose a prior distribution for $b$ such that the marginal posterior distribution of an individual coordinate $b_i$ is asymptotically normal centered around an asymptotically efficient estimator, under the truth. Such a result gives Bayesian credible intervals that match the confidence intervals obtained from an asymptotically efficient estimator for $b_i$. We also discuss ways of obtaining such asymptotically efficient estimators for individual coordinates. We compare the two-step procedure proposed by Zhang and Zhang (2014) and a one-step modified penalization method. Consider the regression model Y = Xb + ε, where the design matrix X is of dimension n × p. We are particularly interested in the case where p > n, for which b itself is not identifiable. In such a setting identifiability can be attained by adding a sparsity constraint on |b| 0 , the number of nonzero b i 's. That is, the model consists of a family of probability measures {P b : b ∈ R p , |b| 0 ≤ s * }, and the observation Y is distributed N (Xb, I n ) under P b . We are interested in Bayesian inference on the vector b when Y is actually distributed N (Xβ, I n ) for some truth β. If p were fixed and X were full rank, classical theorems (the Bernstein-von Mises theorem, as in [8, page 141]) give conditions under which the posterior distribution of b is asymptotically normal, centered at the least squares estimator, with variance (X T X) −1 under P β . The classical theorem fails when p > n. Although sparse priors have been proposed that give good posterior contraction rates [3] [5], posterior normality of b is only obtained under strong signal-to-noise ratio (SNR) conditions, such as the SNR conditions of Castillo et al. [3, Corollary 2], which force the posterior to eventually have the same support as β. Effectively, their conditions reduce the problem to the classical, fixed-dimensional case. However, that is not the most interesting scenario. Without the SNR condition, Castillo et al. [3, Theorem 6] pointed out that under the sparse prior the posterior distribution of b behaves like a mixture of Gaussians. However, there is hope of obtaining posterior normality results without the SNR condition if one considers the situation where only one component of b is of interest, say b 1 , without loss of generality. All the other components are viewed as nuisance parameters. As shown by Zhang and Zhang [9] in a non-Bayesian setting, it is possible to construct estimators that are efficient in the classical sense of expansion (2). We will use o p (•) as a shorthand for a stochastically small order term under P β throughout this document. Here X i denotes the i'th column of X, and the o p (•) indicates that a term is of stochastically smaller order under P β . Later we also write X −i to denote the n×(p−1) matrix formed by all columns of X except for X i . The | • | norm on a vector refers to the Euclidean norm.
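The display referred to as expansion (2) is not reproduced above. A form consistent with the surrounding discussion, namely an efficient-score expansion of the kind used throughout the de-biasing literature, with the 1/|X 1 | 2 variance that reappears in the posterior normality statement later, is written out below in LaTeX; it is a hedged reconstruction under that reading, not a quotation of the paper.

% A plausible form of expansion (2), assuming the usual efficient-score expansion:
\[
  \hat\beta_1 = \beta_1 + \frac{X_1^{T}\epsilon}{|X_1|^{2}} + o_p\!\left(\frac{1}{\sqrt{n}}\right),
  \qquad \epsilon \sim N(0, I_n),
\]
% so that, when |X_1| is of order \sqrt{n}, one obtains the weak convergence
\[
  |X_1|\left(\hat\beta_1 - \beta_1\right) \rightsquigarrow N(0, 1) \quad \text{under } P_\beta .
\]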
Approximation (2) is useful when |X 1 | is of order √ n, in which the expansion (2) implies weak convergence [6, page 171]: under P β (Such behavior for |X 1 | is obtained with high probability when X is generated i.i.d.from the standard normal distribution).More precisely, Zhang and Zhang [9] proposed a two-step estimator that satisfies (2) under some regularity assumptions on X and no SNR conditions.They required the following behavior for X. There exists a constant c 1 > 0 for which max Assumption 2. (REC(3s * , c 2 )) There exists constants c 2 , c > 0 for which Assumption 3. The model dimension satisfies Remark 1. Assumption 2 is known as the restricted eigenvalue condition [1, page 1710] required for penalized regression estimators such as the LASSO estimator [7, page 1] and the Dantzig selector [2, page 1] to enjoy optimal l 1 and l 2 convergence rates. The goal of this paper is to give a Bayesian analogue for Theorem 1, in the form of a prior distribution on b such that as n, p → ∞, the posterior distribution of b 1 starts to resemble a normal distribution to centered around an estimator in the form of (2).Note that the sparse prior introduced by Castillo et al. [3] does not meet our goal since the marginal posterior distribution of b 1 under the sparse prior converges weakly to a mixture of normal distributions without consistent model selection. Theorem 2. Under assumptions 1, 2 and 3 and the constraint , there exists a prior on b for which the posterior distribution of where β1 is an estimator of β 1 with expansion (2). The measure used here to quantify the discrepancy between probability measures is the bounded-Lipschitz metric [4, page 1].The convergence of a sequence of distributions to a fixed distribution in bounded-Lipschitz metric is equivalent to weak convergence. 2 The prior and its background stories. How does de-biasing work? In sparse linear regression, penalized likelihood estimators such as the LASSO are often used and tend to give good global properties.One desirable property is the following bound on the l 1 loss. where λ n is as defined in assumption 1.For example, Bickel et al. [1,Theorem 7.1] showed that under the REC condition (assumption 2) the LASSO estimator satisfies (5). In general, penalized likelihood estimators introduce bias for the estimation of individual coordinates.To eliminate this bias, Zhang and Zhang [9] proposed a two-step procedure.First find a β, perhaps via a LASSO procedure that satisfies (5).Then define The idea behind this estimator is to penalize the magnitude of all coordinates except the one of interest.Under assumptions 1, 2 and 3, the one-step estimator β(ZZ) is asymptotically unbiased with expansion (2).The same asymptotic behavior can be obtained in a single step, as in the next theorem.The idea of penalizing all coordinates but one to eliminate the bias is seen more clearly here. Choose η n to be a large enough multiple of nλ n .Under assumptions 1, 2 and 3, the one-step de-biasing estimator of β 1 achieves l 1 control (5) and de-biasing simultaneously.The estimator for the first coordinate satisfies Proof.In the proof of theorem 3 we will refer to the one step estimator as β. We will first show that β satisfies (5).We know that when the penalty involves all coordinates of b, then the bound on the l 1 norm is true [1, Theorem 7.1].It turned out that leaving one term out the of penalty does not ruin that property. 
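The two-step procedure sketched above, a pilot fit with global l 1 control followed by a plug-in correction for the coordinate of interest, can be illustrated in a few lines of Python. The defining display for the estimator β̂(ZZ) 1 is not written out in the text, so the plug-in used below, regressing the partial residual on X 1 , is an assumed form consistent with assumption 1 and with the expansion discussed earlier; the regularization level and the simulated data are purely illustrative.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, s = 100, 300, 5
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = [1.0, -0.8, 0.6, 0.5, -0.4]
y = X @ beta + rng.standard_normal(n)

# Step 1: a pilot fit with global l1 control (LASSO on all coordinates).
lam = 2.0 * np.sqrt(np.log(p) / n)        # a common theoretical scaling, not tuned
pilot = Lasso(alpha=lam, fit_intercept=False).fit(X, y).coef_

# Step 2: de-bias coordinate 1 by regressing the partial residual on X_1.
x1, X_rest = X[:, 0], X[:, 1:]
beta1_debiased = x1 @ (y - X_rest @ pilot[1:]) / (x1 @ x1)
se1 = 1.0 / np.sqrt(x1 @ x1)              # the noise standard deviation is 1 here

print(f"pilot beta_1     = {pilot[0]:.3f}")
print(f"de-biased beta_1 = {beta1_debiased:.3f}  (truth 1.0, approx s.e. {se1:.3f})")

The pilot coefficient is typically shrunk toward zero, while the plug-in step removes most of that bias, which is the behavior the l 1 control in (5) is used to guarantee.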
As in the proof of [1, Theorem 7.1], we compare the evaluation of the penalized likelihood function at β̂ and at the truth β, using the definition of β̂. Plugging in Y = Xβ + ε, the expression reduces to … From here we need to discuss two cases. First consider the case where 1 is in S, the support of β. The expression above is bounded by … By choosing η_n to be a large enough multiple of nλ_n, we have … Since the left-hand side is nonnegative, the above implies … Therefore, under the assumption REC(c_0/c_1, κ), we can further bound the prediction loss by … So far we have shown, with high probability, … Under the REC assumption, we can go back and bound the l_1 loss. Therefore, with (7) we have …
The proof for the other case turns out to be messier, but the general idea remains the same. When 1 ∈ S^C, we can bound the right-hand side of (6) by … Choosing η_n to be a large multiple of √(n log p), as in the 1 ∈ S case, we have … Again, use Assumption 2 to deduce the l_1 control (5).
Observe that the penalty term does not involve b_1. We only need to show that the second term in (9) is of order o_p(1/√n). Bound the absolute value of that term with …, by Assumption 1 and the l_1 control (5). That is then bounded by …
Remark 2. With some careful manipulation, the REC(3s*, c_2) condition in Assumption 2 can be reduced to REC(s*, c_2). The proof would require an extra step of bounding …
The ideas in the proofs for the two de-biasing estimators β̂(ZZ)_1 and β̂(one-step) are similar. Ideally we would like to run the regression displayed in (10); that would give a perfectly efficient and unbiased estimator. However, β_{-1} is not observed. It is natural to replace it with an estimator which is made globally close to the truth β_{-1} using a penalized likelihood approach. As seen in the proof of Theorem 3, most of the work goes into establishing the global l_1 control (5). The de-biasing estimator is then obtained by running an ordinary least squares regression like (10), replacing β_{-1} by some estimator satisfying (5), so that the solution to the least squares optimization is close to the solution of (10) with high probability.
Bayesian analogue of de-biasing estimators. We would like to give a Bayesian analogue of the de-biasing estimators discussed above. As pointed out in the last section, it is essential to establish l_1 control on the vector b_{-1} − β_{-1}. Castillo et al. [3] and Gao et al. [5] have proposed sparsity-inducing priors that penalize the submodel dimension and provided theoretical guarantees such as LASSO-type contraction rates under the posterior distribution. We build on the prior construction of Gao et al. [5], who gave conditions under which a good l_1 posterior contraction rate holds.
Lemma 1. (Corollary 5.4, [5]) If the design matrix X satisfies … for some positive constants c, δ, then there is a constant c_3 > 0 and a large enough D > 0 for which …
We slightly modify the sparse prior of Gao et al. [5] to give good, asymptotically normal posterior behavior for a single coordinate. As we discussed in the last section, classical approaches to de-biasing exploit the idea of penalizing all coordinates except the one of interest. Our prior construction mimics that idea by putting the sparse prior only on b_{-1}.
The prior.
Denote by H the matrix projecting R^n onto span(X_1). Under the model where Y ∼ N(Xb, I_n), the likelihood function has the factorization … The likelihood factorizes into a function of b*_1 and a function of b_{-1}. Therefore, if we make b*_1 and b_{-1} independent under the prior, they will be independent under the posterior. We put a Gaussian prior on b*_1 to mimic the ordinary least squares optimization step in the classical approaches. We put a sparse prior analogous to that of Gao et al. [5, section 3] on b_{-1}, using W as the design matrix in the prior construction. By Lemma 1, b_{-1} is close to β_{-1} in l_1 norm with high posterior probability as long as κ_0((2 + δ)s*, W) is bounded away from 0. We make b*_1 and b_{-1} independent under the prior distribution. The product distribution corresponds to a prior distribution on the original vector b. Note that under the prior distribution b_1 and b_{-1} are not necessarily independent. This modified prior also has the effect of eliminating a bias term, in a fashion analogous to that of the two-step procedure β̂(ZZ)_1.
The joint posterior distribution of b*_1 and b_{-1} factorizes into two marginals. In the X_1 direction, the posterior distribution of b*_1 is asymptotically Gaussian centered around … After we reverse the reparametrization, we want the posterior distribution of b_1 to be asymptotically Gaussian centered around an efficient estimator β̂_1 = β_1 + … That can be obtained from the l_1 control on b_{-1} − β_{-1} under the posterior. In the next section we will give the proof of our main posterior asymptotic normality result (Theorem 2) in detail.
Since that prior and the likelihood for b*_1 are both Gaussian, we can work out the exact posterior distribution. Note that without conditioning on b_{-1}, the posterior distribution of b_1 is not necessarily Gaussian.
The goal is to show that the bounded-Lipschitz metric between the posterior distribution of b_1 and N(β̂_1, 1/|X_1|^2) goes to 0 under the truth. From Jensen's inequality and the definition of the bounded-Lipschitz norm we have … For simplicity, denote the posterior mean and variance in (12) by ν_n and τ_n^2 respectively. The bounded-Lipschitz distance between two normals N(µ_1, σ_1^2) and N(µ_2, σ_2^2) is bounded by … Therefore, to obtain the desired convergence in (4), we only need to show …
To show (13), notice that the integrand is bounded. Hence it is equivalent to show convergence in probability. Write … The first term is no longer random in b, and it can be made as small as we wish, since it is decreasing in … For the second term, we will apply Lemma 1 to deduce that this term also goes to 0 in P_β µ(b_{-1} | Y) probability. To apply the posterior contraction result we need to establish the compatibility assumption (11) on W.
Lemma 2. Under Assumptions 1, 2, 3 and the constraint …, the matrix W satisfies the compatibility condition (11) for some c, δ > 0.
We will prove the lemma after the proof of Theorem 2. To show (14), note that the integrand is not a random quantity. It suffices to show …, which is certainly true for a sequence {σ_n} chosen large enough. Combining (13), (14), and the bound on the bounded-Lipschitz distance, we have shown …
Proof of Lemma 2. We will justify the compatibility assumption on W in two steps. First we will show that the compatibility assumption for the X matrix follows from the REC Assumption 2. Then we will show that the compatibility constants of X and W are not very far apart. Let us first show that under Assumption 2, there exist constants 0 < δ < 1 and c > 0 for which κ_0((2 + δ)s*, X) = inf … Denote the support of g by S. We have … The second term is of order o(1) under Assumption 3.
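The exact Gaussian posterior for b*_1 mentioned above is a routine conjugate computation. The sketch below uses an illustrative prior standard deviation sigma_n (not a value from the paper) and a simplified one-dimensional parametrization; it shows how a posterior mean ν_n and variance τ_n^2 arise and how they approach the N(β̂_1, 1/|X_1|^2) limit as the prior flattens.

```python
import numpy as np

def posterior_b1_star(X1, Y, sigma_n):
    """Conjugate normal-normal update for the coefficient in the X_1 direction.

    Assumed simplified model (a sketch, not the paper's exact parametrization):
        the data enter only through z = X1^T Y, with z ~ N(|X1|^2 * b, |X1|^2),
        and the prior is b ~ N(0, sigma_n^2).
    Returns the exact posterior mean and variance for this toy model.
    """
    g = X1 @ X1                          # |X1|^2
    tau2 = 1.0 / (g + 1.0 / sigma_n**2)  # posterior variance
    nu = tau2 * (X1 @ Y)                 # posterior mean
    return nu, tau2

# As sigma_n grows, (nu, tau2) approaches (X1^T Y / |X1|^2, 1 / |X1|^2),
# matching the N(beta_hat_1, 1/|X1|^2) limiting shape discussed in the text.
rng = np.random.default_rng(1)
X1 = rng.standard_normal(200)
Y = 0.7 * X1 + rng.standard_normal(200)
print(posterior_b1_star(X1, Y, sigma_n=10.0))
```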
For reference, the sparse prior placed on b_{-1} (with W as the design matrix, as described above) is specified by the following three components; a small sketch of the corresponding log density is given after the list.
1. The size s of the dimension of the sub-model in the direction orthogonal to X_1 has probability mass function π(s) ∝ exp(−Ds log p).
2. Given s, the support is drawn uniformly: S | s ∼ Unif(Z_s := {S ⊂ {1, ..., p} : |S| = s, X_S is full rank}).
3. Given the subset selection S, the coefficients b_S have density f_S(b_S) ∝ exp(−η_n |X_S b_S|) for suitably chosen η_n.
Since b*_1 and b_{-1} are independent under the posterior distribution, the Gaussian law above is also the distribution of b*_1 given Y and b_{-1}; that in turn determines the distribution of |X_1|(b_1 − β̂_1) given Y and b_{-1}.
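As a concrete reading of the three prior components listed above, the following sketch evaluates the log prior density of a candidate (s, S, b_S) up to additive constants; the normalization of f_S (which depends on the design) and the full-rank restriction in Z_s are ignored, and D and eta_n are user-chosen inputs rather than values prescribed by the paper.

```python
import numpy as np
from math import lgamma, log

def log_sparse_prior(W, S, b_S, D, eta_n):
    """Unnormalized log prior of (s, S, b_S) for the sparse prior placed on b_{-1}.

    Components (mirroring the list above):
      1. pi(s) proportional to exp(-D * s * log p): penalizes large sub-models.
      2. S | s uniform over size-s supports: contributes -log C(p, s)
         (the full-rank restriction in Z_s is ignored in this sketch).
      3. f_S(b_S) proportional to exp(-eta_n * |W_S b_S|): penalizes the fitted mean.
    """
    n, p = W.shape
    s = len(S)
    log_pi_s = -D * s * log(p)
    log_unif_S = -(lgamma(p + 1) - lgamma(s + 1) - lgamma(p - s + 1))
    log_f_S = -eta_n * np.linalg.norm(W[:, S] @ np.asarray(b_S))
    return log_pi_s + log_unif_S + log_f_S
```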
3,581.2
2017-04-09T00:00:00.000
[ "Mathematics", "Computer Science" ]
SymmSketch: Creating symmetric 3D free-form shapes from 2D sketches This paper presents SymmSketch—a system for creating symmetric 3D free-form shapes from 2D sketches. The reconstruction task usually separates a 3D symmetric shape into two types of shape components, that is, the self-symmetric shape component and the mutual-symmetric shape components. Each type can be created in an intuitive manner. Using a uniform symmetry plane, the user first draws 2D sketch lines for each shape component on a sketching plane. The z-depth information of the hand-drawn input sketches can be calculated using their property of mirror symmetry to generate 3D construction curves. In order to provide more freedom for controlling the local geometric features of the reconstructed free-form shapes (e.g., non-circular cross-sections), our modeling system creates each shape component from four construction curves. Using one pair of symmetric curves and one pair of general curves, an improved cross-sectional surface blending scheme is applied to generate a parametric surface for each component. The final symmetric free-form shape is progressively created, and is represented by 3D triangular mesh. Experimental results illustrate that our system can generate complex symmetric free-form shapes effectively and conveniently. Introduction In computer graphics and digital entertainment, sketching, which was an important art genre in the early community [1], is still a common way to convey ideas quickly. Sketch-based modeling is a popular research topic for the creation of 3D models due to its natural and straightforward manner to represent real world objects [2][3][4]. From the viewpoint of practical applications, sketch-based modeling provides a very popular approach for interactively generating 3D shapes [2]. It can offer the user a simple way to access and interpret 3D objects, and thus can effectively avoid the tedious processes associated with professional 3D modeling software. Moreover, due to the symmetry properties of many real-world objects, it is useful to provide the user with a sketchbased reconstruction system which works for 3D symmetric shapes [15][16][17][18]. Traditional techniques for modeling symmetric objects from two construction lines are limited to mirror-symmetric shapes with circular cross-sections [19,20]. In order to provide more freedom for controlling the local geometric features of reconstructed free-form shapes (e.g., noncircular cross-sections), we present SymmSketcha novel system for creating symmetric complex 3D shapes from 2D sketches. Using the symmetry information in the 2D input sketches (see Fig. 1(a)), 3D symmetric construction curves can first be computed. 3D asymmetric general construction curves can then be calculated from the symmetric ones (see Figs. 1(b) and 1(c)). Each shape component can be generated from a pair of symmetric curves and a pair of general curves, and the complex symmetric 3D free-form shapes can finally be created (see Figs. 1(d), 1(e), and 1(f)). The main contributions of our work can be summarized as follows: • A progressive method to create symmetric 3D free-form shapes that consist of two types of shape components: self-symmetric shape components, and shape components that are symmetrically related to another shape with respect to a symmetry plane. • A computational theory to recover z-depth information for 3D construction curves which combine a pair of symmetric construction curves and a pair of asymmetric (general) construction curves. 
• An improved cross-sectional surface blending scheme to generate each shape component of a symmetric 3D shape; the cross sections need not be circular. The rest of the paper is organized as follows. Related work is reviewed in Section 2. Section 3 explains our z-depth computation approach for generating 3D construction curves. A description of our free-form objects modeling system is given in Section 4. Section 5 shows experimental results and comparisons with existing methods. Section 6 concludes the paper and suggests some future work. Related work To create 3D free-form shapes, designers tend to directly express their design ideas in 2D sketches, and the system should correctly interpret the input sketches to generate the final 3D shapes [21,22]. Based on a single input 2D line drawing, many 3D object reconstruction algorithms have been proposed. Here, we only review previous work concerning sketch-based modeling approaches for 3D free-from shapes. The readers may refer to the surveys [2] and [5] and the references therein for other related techniques. Many researches consider 3D free-from object reconstruction starting from visible and hidden details in 2D sketches of the underlying shape [6,7]. Given an interactive input of 2D free-form strokes, the Teddy system [7] provides the user a sketching interface for easily designing free-form objects and constructing plausible 3D polygonal meshes. Using sketched 2D outlines of 3D objects, Schmidt et al. [8] developed the ShapeShop system that can create the solid models using hierarchical implicit volume models, BlobTrees. Karpenko and Hughes [9] proposed SmoothSketch, a system for inferring plausible complex free-form shapes from a wide class of visible-contour sketches. The FiberMesh system [10] presented by Nealen et al. is also a free-form shape design system which can reconstruct 3D objects from a collection of input curves. Using a non-linear optimization framework, FiberMesh can automatically generate a smooth surface with sharp creases and darts that are controlled by the user input strokes. Bae et al. [11] presented the 3D curve sketching system ILoveSketch which allows professional designers to design conceptual 3D curved models in an easy pencil-and-paper way. By unifying the modeling and rigging stages of the 3D character animation pipeline, the RigMesh system [12] provides an easy-to-use interface for generating complex 3D rig characters. By performing sketch-based co-retrieval and co-placement of 3D relevant models, Xu et al. [13] presented a Sketch2Scene system which can automatically turn input 2D freehand sketches into semantically valid and well arranged 3D scenes. The ArtiSketch system [14] can reconstruct articulated 3D objects from several articulated 2D sketches, and novel poses of the 3D model can be generated by manipulating the model skeletons. By employing a Laplacian framework for sketch-based editing, Nealen et al. [23] provided an intuitive interface for the user to edit shape silhouettes or create sharp features on 3D triangular meshes. However, due to the lack of depth information in the input 2D sketches, the object reconstruction process is non-deterministic and the final generated 3D object cannot be unique [24]. To overcome these difficulties, many recent solutions add constraints to the original sketches [19,25] or match sketches to existing 3D models using an evocative system [26][27][28]. Li et al. 
[29] proposed a modeling system for generating piecewise planar 3D objects from object edges drawn on the input image. Using optimization of 3D information, Wang et al. [30] developed an approach for creating curved objects from single 2D line drawings. Xue et al. [31] presented a 3D modeling approach for recovering regular geometry from a single image of a symmetric object. Chen et al. [32] also introduced an interactive 3-sweep technique for extracting simple man-made editable objects from an input image. The 3D shapes reconstructed by these methods are comparatively simple and are restricted to some specific class of objects. Andre et al. [20] presented a simple 3D reconstruction scheme in which each object part can be constructed from two construction lines. However, much user interaction is needed to reconstruct complex objects correctly because their method has no information of the relative depths of multiple parts. Cordier et al. [19] exploited mirror symmetry to create 3D models; extra operations are not needed to assemble shape components. Actually, the symmetry assumption is one of the least restrictive for 2D sketches due to the fact that many realworld 3D objects, such as animals, buildings, and many other organic structures, exhibit certain kinds of symmetry [15][16][17]. Under orthographic or perspective projection, the 3D structure of a shape with certain kinds of symmetry can fully be reconstructed as long as one can determine the symmetric pairs [18]. Using hand-drawn input sketches of bilaterally symmetric objects,Öztireli et al. [33] presented a 3D modeling algorithm based on a set of planar curves. Using a set of mirrorsymmetric curves, Cordier et al. [34] also introduced a 3D reconstruction method for generating some particular types of wire-frame objects. The aim of our modeling system is to use natural modeling techniques to generate symmetric complex free-form objects from user input sketches. Traditional techniques for modeling symmetric objects are limited to generating simple 3D shapes with circular cross-sections due to the reconstruction scheme which uses only two construction lines [19]. Compared with such traditional modeling systems, the most important difference of our SymmSketch system is that each shape component is generated from four construction curves, a pair of symmetric curves and a pair of asymmetric general curves which provide more freedom to control the local geometric features of the final 3D free-form shapes, including non-circular cross-sections. Definitions and assumptions To clarify the use of hand-drawn input sketches for generating 3D construction curves, some definitions commonly used in practical applications are given and some assumptions are also presented to simplify the 3D reconstruction process. The reader should refer to Fig. 2. The sketching plane is a plane on which the user can draw 2D sketch lines interactively. Without loss of generality, the sketching plane can be selected as the XOY plane z = 0. The user input 2D sketches on the sketching plane can thus be considered as the orthogonal projections of the construction curves of a 3D shape. In our modeling system, only one sketching plane is adopted because it is difficult and inconvenient to draw several overlapping strokes belonging to different components, projected onto The symmetry plane is a special plane with respect to which the mirror-symmetric shape components exhibit their property of symmetry. 
Symmetric free-form shapes are invariant under reflection in their symmetry plane. Our modeling system is designed to use only a single symmetry plane for reconstructing the whole 3D shape. Symmetric curves are user input sketches whose corresponding 3D construction curves are symmetric with respect to the symmetry plane. Each point sampled on these sketched curves is called a symmetric point. Here, the property of symmetry holds for the curves in 3D space rather than for the projected ones on the sketching plane. This is because a curve which is mirror-symmetric in 3D space may not be mirror-symmetric if projected to a 2D plane. General curves are user input asymmetric general sketches for representing shape components. Each point sampled on these general curves is called a general point. Construction curves are 3D curves in 3D space that are recovered from the symmetric or general input sketches using our z-depth computation approach. We assume that each component of the reconstructed complex shapes are generated from four construction curves: two symmetric and two asymmetric general construction curves. To effectively reconstruct complex free-form shapes, unlike traditional methods [19,20], our modeling system generates each component of a free-form object by taking four sketches on the sketching plane as input. When the user draws the 2D sketches on the sketching plane, each sketch curve is uniformly sampled with an equal number of sample points and lifted up to a 3D construction curve after determining the z-coordinates of these sample points. To reconstruct a complex 3D freeform shape, the four corresponding construction curves are processed simultaneously, and the zdepth information of the construction curves can be calculated using the property of mirror symmetry. In particular, two corresponding recovered symmetry points of two symmetric construction curves must be mirror-symmetric with respect to the symmetry plane, whilst the vector connecting the two recovered symmetric points is perpendicular to the vector connecting the two corresponding recovered general points. For simplicity, the input sketches can be considered to be the orthogonal projections of 3D construction curves onto the XOY sketching plane. The x and y coordinates of vertices on the input sketches can simply be taken as the coordinates of the corresponding reconstructed 3D vertices (see Fig. 2). Thus the main remaining problem of 3D object reconstruction is the estimation of the z coordinates of the object vertices. Z-depth computation approach In order to create symmetric complex free-form objects, we suppose that each 3D shape can be separated into two types of components: selfsymmetric shape components, and mutual pairs of symmetric shape components with respect to a unique predefined symmetry plane. For each point on the surface of a self-symmetric component, there is a symmetric corresponding point locating on the same component. For each point on one of the mutuallysymmetric components, there is a corresponding symmetric point on the other component. Using the user input sketches for these two types of shape components, the z-coordinate information of the construction curves can be determined by two different schemes, based on the property of mirror symmetry. Z-depth computation for selfsymmetric components Due to the property of mirror symmetry, the z coordinates of sample points on the symmetric curves and general curves of a self-symmetric shape component can be calculated as follows. 
Without loss of generality, the symmetry plane Π_s is simply assumed to pass through the origin of the coordinate system with normal direction N_s. Let {v^g_0, ..., v^g_{n−1}} and {v^g'_0, ..., v^g'_{n−1}} be another two sets of n points sampled from the recovered general sketches, respectively. Each pair of two different points satisfies: …; (2) the line determined by v^g_i and v^g'_i intersects the middle axis of the symmetric curves. Given the symmetry plane Π_s as shown in Fig. 2, we choose one pair of symmetric vertices v^s_i, v^s'_i on the symmetric curves and one pair of general vertices v^g_i, v^g'_i on the general curves located on the symmetry plane Π_s. The dashed lines connecting v^s_i to v^s'_i, and v^g_i to v^g'_i, intersect at one point. Thus, the following equations link these four points: …, where N_g is a vector perpendicular to the vector N_s. Using the coordinates of the vertices, these four equations can be expressed as follows: … By combining Eqs. (1) and (2), the z-coordinates z^s_i, z^s'_i of the symmetric points v^s_i, v^s'_i can be calculated as follows: … Finally, by combining Eqs. (3)-(6), the z-coordinates z^g_i, z^g'_i of the general points v^g_i, v^g'_i can be determined as follows: …
Z-depth computation for mutually-symmetric components If one shape component is related by mirror symmetry to another component of the same object (see Fig. 3), the z coordinates of the sample points on the two pairs of symmetric curves can easily be obtained using Eqs. (5) and (6). Thus, the z-depth information of the sample points on only one pair of the general curves needs to be computed using Eqs. (7) and (8), and that of the other pair of general curves can easily be determined using the property of mirror symmetry. For example, for a general point v^g_i(x^g_i, y^g_i, z^g_i) sampled from the general curves (on the blue curve in Fig. 3), the symmetrically related point v^g'_i(x^g'_i, y^g'_i, z^g'_i) on the related shape component with respect to the symmetry plane (the red curve) can be determined as follows: … By combining Eqs. (9)-(11), the coordinates x^g'_i, y^g'_i, z^g'_i of this general point can be calculated as follows: … Next, we show how the above z-depth computation approach is used by our SymmSketch system to create symmetric complex free-form shapes.
3D reconstruction of symmetric free-form shapes To reconstruct symmetric 3D shapes, Cordier et al. [19] used only two symmetric curves, which results in final shapes consisting of cylinder-like components with circular cross-sections. Their method can only generate relatively simple 3D shapes due to its limitation of working from only two symmetric curves. The motivation of our work is to provide more freedom to control the reconstructed shape to meet specific geometric requirements and artistic wishes. By adopting a pair of symmetric construction curves and a pair of general construction curves, our modeling system can create complex free-form shapes with relatively flat or more curved components. In this section, we discuss how to classify the user's input sketches and determine the four construction curves for each shape component. Furthermore, using the 3D construction curves, we also describe how to generate a complex mirror-symmetric 3D free-form shape.
Overview of our object reconstruction algorithm Taking the planar sketches that contain symmetric and general curves as input, the 3D coordinates of the sample points can be recovered by using the z-depth computation in Section 3. The final output 3D symmetric free-form shapes are created and represented as triangular meshes.
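The z-depth recovery above leans on one elementary operation: reflecting a recovered 3D point across the symmetry plane to obtain its mirror partner (used, for example, for the second pair of general curves of mutually-symmetric components). Since the displayed Eqs. (9)-(14) are not reproduced above, the sketch below shows only the standard reflection formula for a plane through the origin with unit normal n, which is the geometric step those equations implement; it is a stand-in, not the paper's exact derivation.

```python
import numpy as np

def reflect_across_plane(point, normal):
    """Mirror a 3D point across a plane through the origin with the given normal.

    The reflection of v is v - 2 (v . n) n for a unit normal n; this realizes the
    mirror-symmetry constraint used when lifting sketch points to 3D.
    """
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    v = np.asarray(point, dtype=float)
    return v - 2.0 * (v @ n) * n

# Example: with the symmetry plane x = 0 (normal along x), a recovered general
# point is mapped to its partner on the mutually-symmetric component.
print(reflect_across_plane([1.5, 0.3, -2.0], [1.0, 0.0, 0.0]))   # -> [-1.5, 0.3, -2.0]
```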
The high-level framework of our 3D reconstruction algorithm can be summarized as follows. Step 1: Discretization of 2D sketching lines. After the user draws the 2D sketches on the sketching plane, our modeling system automatically discretizes the input sketches as polygons whose vertices are uniformly sampled on the smooth quadratic B-spline curves that interpolate the input sketch points. Each pair of symmetric or general curves is stored successively according to the order in which the user draws them. Step 2: Calculation of 3D construction curves. In our modeling system, 3D curves are constructed from two types of curve, curves Π s = (c s 0 , c s 0 , · · · , c s n−1 , c s n−1 ) symmetric with respect to a unique symmetry plane (such as the purple curves in Fig. 2), and asymmetric general curves Π g = (c g 0 , c g 0 , · · · , c g n−1 , c g n−1 ) (such as the blue curves in Fig. 2). Our reconstruction algorithm first processes all symmetric curves. 3D coordinate information for these symmetric construction curves can be computed directly with respect to the symmetry plane. 3D coordinate information for each pair of general construction curves is calculated together with the symmetric curves for different types of shape components. Step 3: Generation of parametric surfaces using a cross-sectional blending scheme. Given one pair of 3D symmetric construction curves and one pair of 3D general construction curves, an improved cross-sectional surface blending scheme is applied to generate a parametric surface for each shape component. The final complex free-form shapes are progressively generated and may comprise several shape components. Discretization of 2D sketching lines As shown in Fig. 2, under orthogonal projection, the user interactively sketches 2D symmetric or general curves for each shape component on the sketching plane. Following Blair's scheme [1] for illustrating shapes using several profiles, for each selfsymmetric component, the user should sketch a pair of symmetric curves, and then draw a pair of general curves (see Figs. 4(a)-4(d)). These four curves usually connect at their endpoints. For two mutualsymmetric shape components, the user sketches two pairs of symmetric curves for both components and then draws the pairs of general curves for each component respectively (see Figs. 4(e)-4(h)). Each pair of mutually-symmetric curves on these two components usually have no common connection at their endpoints. As they are input, the sketches are stored with a unique index denoting the order in which they are drawn. Thus, each pair of symmetric or general curves is handled together. Each pair of general curves is coupled with the corresponding symmetric curves. Sketch vertices are automatically uniformly sampled from the sketched curves which are represented as quadratic B-spline curves interpolating the sketch input data [35]. For simplicity, each pair of sketch curves is sampled using the same number of sample points. How we then perform z-depth recovery is outlined in the next section. Calculation of 3D construction curves The input sketch lines are stored in the order drawn. The system first groups these input sketches into nodes L = {I 0 , I 1 , · · · , I i , · · · , I n−1 }. Each set of four curves drawn by the user are stored in the same node. The original nodes in L are divided into two types of nodes L p = {I p 0 , I p 1 , · · · , I p i , · · · , I p n−1 } Fig. 4 Interactively sketching shape components step by step. 
and L q = {I q 0 , I q 1 , · · · , I q i , I q i+1 , · · · , I q n−2 , I q n−1 }. Each node I p i in L p contains the sketch curves for a single self-symmetric shape component; each pair of symmetric curves always meets at their endpoints (see Figs. 4(a)-4(d)). The nodes I q 2j and I q 2j+1 in L q contain the sketch curves of a pair of mutuallysymmetric shape components belonging to the same model (for 0 j n − 1 2 ) (see Figs. 4(e)-4(h)). In general, to be able to reconstruct free-form objects effectively, there must be an even number of nodes stored in L q . Node I q 2j contains two pairs of symmetry curves depicting two shape components, whilst node I q 2j+1 stores two pairs of general curves representing two shape components. If this requirement is not met, the modeling system will ask the user to redraw the sketched curves. The different types of nodes in L p and L q require different computational schemes to recover the zdepth information of the vertices sampled on their sketch curves. • To create the self-symmetric shape component from a node I p i ∈ L p , the system first calculates the z coordinates of the symmetric points using Eqs. (5) and (6). The z coordinates of the general points are then computed from the 3D symmetric points using Eqs. (7) and (8). • To create two mutually-symmetric shape components from nodes I q 2j , I q 2j+1 ∈ L q , we note that node I q 2j contains two pairs of symmetric curves whose corresponding construction curves may be computed using Eqs. (5) and (6). The generated symmetric construction curves are re-paired so that two pairs of symmetric curves are re-inserted into I q 2j and I q 2j+1 respectively (see Fig. 5). Each node I q 2j and I q 2j+1 contains one pair of symmetric construction curves and one pair of general curves. The z coordinates of the general points of I q 2j can be calculated from the symmetric ones in the same node using Eqs. (7) and (8). The general curves of I q 2j+1 are finally treated as a symmetric image of the general curves of I q 2j , and their z-depth information is computed using Eqs. (12)- (14). An example of our procedure for re-pairing and re-inserting different sketches is given in Fig. 5. The head, body, and tail of the doll are self-symmetric, so is each represented by four sketch curves in I p i respectively. For the mutually-symmetric ear parts, following the drawing order, initially the four symmetric curves for depicting the two ears are stored in one node and the four general curves are stored in another node (see the top right figure of Fig. 5). After the re-pairing step, each pair of symmetric curves is separated into two nodes respectively, and the symmetric curves for the two ears are included in I q 2j and I q 2j+1 respectively (see the down right figure of Fig. 5). As a result, all 3D coordinate information for the input sketches can be determined correctly. The input 2D sketches are thus lifted to provide 3D construction curves for subsequent free-form surface generation. Generation of parametric surfaces For each node I i , the z-depth information of its sketch curves has been recovered and the 3D construction curves have been obtained. The next step is to generate a smooth surface to fit these construction curves for each node. To reconstruct 3D free-form objects, Severn et al. [36] generated a parametric blending surface by sweeping a variable sized circle along a medial axis of two planar sketches. 
Here, given four 3D construction curves, our cross-sectional surface blending scheme sweeps a ring of two semi-ellipses in such a way that the final parametric surface passes through these construction curves (see Fig. 6(a)). As noted in Section 3.1, each of the four construction curves are sampled using the same number of points, and corresponding sets of four points sampled on different curves are used to generate a closed planar sweeping curve. However, the selected sets of four points are not in general located on a single plane. To overcome this issue, our improved cross-sectional blending scheme first determines a plane passing through the center of the line segment connecting two symmetric points. Its normal is determined by the cross product of the two directions connecting the general points and the symmetric points respectively. Then, the two general points are updated to be the intersection points of this plane with the two general curves. As a result, the four points are now located in the same plane and can be interpolated by two parameterized semiellipses (see Fig. 6(a)) as follows. Let C s i (u), C s i (u) be the symmetric curves and C g i (u), C g i (u) be the general curves. For each fixed parameter u, the generated sweep contour is generated by two semi-ellipses t u (v) as follows: one semi-ellipse passes through t u (0) = C s i (u), t u π 2 = C g i (u), t u (π) = C s i (u), and the other passes through Overall, the reconstructed parametric surface S(u, v) is created by translating t u (v) along the medial axis of C s i (u) and C s i (u). The final shape component is the parametric blending surface S(u, v) = t u (v); it is discretized as a triangle mesh (see Fig. 6(b)). Building up complex symmetric freeform shapes To create complex symmetric free-form shapes, our The parametric surface generated for one shape component. Four sample points determine a sweep contour made up of two semi-ellipses (a) that are smoothly connected at the symmetric points. (b) shows the final generated shape component. SymmSketch system separates the whole object into several components. Each shape component is created successively to build up the final shape. Once the user has finished the sketches for one component, our modeling system creates the corresponding 3D shape component interactively. This progressive modeling process provides the user with an intuitive design approach. The user can continue sketching after they have finished inputting the previous shape component. They can also remove some existing inaccurate sketches to allow them to be redrawn. The user can also add extra sketches to add further details to the object, increasing its complexity. Figure 7 illustrates an example of progressively creating a symmetric 3D free-form shape using our SymmSketch system. Experimental results and discussion All algorithms in this paper have been implemented in C++ using OSG (Open Scene Graph) for graphics display, running on a 2.6 GHz Pentium(R) Dual-Core PC. From the user input 2D sketches, the main steps of our reconstruction approach include the computation of 3D construction curves for each shape component, and the generation of smooth parametric surfaces using an improved crosssectional blending scheme. The experimental results show its effectiveness for building up various types of 3D free-form symmetric shapes. 
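To make the sweep-contour construction described above concrete, the sketch below builds one closed cross-section from four coplanar sample points: two symmetric points and two general points. Each half is a semi-ellipse centered at the midpoint of the symmetric points, so the contour interpolates all four points; parameter names and the sampling density are illustrative choices, not values from the paper.

```python
import numpy as np

def sweep_contour(p_s, p_s2, p_g, p_g2, samples=64):
    """Closed cross-section through four coplanar points, built from two semi-ellipses.

    p_s, p_s2 : the pair of symmetric points (the ellipse centre is their midpoint)
    p_g, p_g2 : the pair of general points, one interpolated by each half
    Returns an array of shape (2*samples, 3) tracing the closed contour.
    """
    p_s, p_s2, p_g, p_g2 = map(lambda q: np.asarray(q, float), (p_s, p_s2, p_g, p_g2))
    c = 0.5 * (p_s + p_s2)                     # common centre of both semi-ellipses
    a = p_s - c                                # shared semi-axis towards the symmetric points
    v1 = np.linspace(0.0, np.pi, samples, endpoint=False)
    half1 = c + np.outer(np.cos(v1), a) + np.outer(np.sin(v1), p_g - c)    # p_s -> p_g -> p_s2
    v2 = np.linspace(np.pi, 2.0 * np.pi, samples, endpoint=False)
    half2 = c + np.outer(np.cos(v2), a) - np.outer(np.sin(v2), p_g2 - c)   # p_s2 -> p_g2 -> p_s
    return np.vstack([half1, half2])

# Example: a non-circular (flattened) cross-section, wider towards p_g than towards p_g2.
ring = sweep_contour([1, 0, 0], [-1, 0, 0], [0, 0.8, 0], [0, -0.3, 0])
```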
Creation of symmetric 3D free-form shapes Our SymmSketch system is suitable for various types of 3D free-form shapes, particularly mirrorsymmetric complex shapes comprising several components. Our modeling system has been tested by several novice student users which shows its effectiveness. After about 5 min training in the use of 2D sketching approach, user can create common objects and simple cartoon characters interactively (see Fig. 8). A user's freehand sketch may not very accurately represent a real symmetric object. However, our reconstruction approach is insensitive to small human errors in the input drawings as our approach for recovering z-depth information for construction curves intrinsically maintains the property of mirror symmetry. Alternatively, in order to accurately create 3D free-form shapes, user can include existing 2D line drawings to assist sketching tasks. Designers can simply depict the sketch lines on the sketching plane according to the provided line drawings. To create complex 3D free-form objects, our SymmSketch system can effectively control shape components to produce desired artistic effects, such as the body of a duck, the head of an ostrich or dog, and the plumage of a bird as shown in Fig. 9. Here, column (a) in Fig. 9 shows line drawings taken from real papers, column (b) gives the user drawn sketches based on these drawings, and columns (c) and (d) show the final generated 3D free-form objects from two different view directions. We can see that local sharp features can be generated by our modeling algorithm, such as the beak of the bird in the last row of Fig. 9. However, we can also see that the reconstructed 3D shape may be a little thicker than the real object. This is because the construction curves may not be the silhouettes of the reconstructed objects. Comparison with other 3D object reconstruction algorithms Given single view 2D sketches, our reconstruction approach can generate complex free-form shapes effectively and conveniently. Our reconstruction approach relies only on the 2D sketch content itself, and thus avoids any fuzzy or complicated operations to assemble multiple shape components into a single object. Moreover, an important advantage of our SymmSketch system is that it can provide a good degree of freedom for effectively controlling the final reconstructed shapes by employing four construction curves. Unlike the most closely related approach presented by Cordier et al. [19], our 3D object reconstruction method can create complex free-form shapes with specific geometric features, such as the sharp jagged tail of a woman's hair clasp, a flat wine bottle, and a peaked cap as shown in Fig. 10. Limitations of our algorithm Our symmetric free-form object reconstruction approach always creates each shape component from four construction curves, whether it is a selfsymmetric shape component or a shape component symmetrically related to another one. One limitation of our approach is that to correctly recognize and reconstruct complex 3D shapes, the input sketches should be drawn on a sketching plane in a specific order to allow inference of the free-form shape (see Fig. 4). Furthermore, our modeling system allows only a single symmetry plane for creating the whole shape. Of course, it could generate more general free-form objects if multiple symmetry planes were allowed. 
However, such a scheme would make the user input task more difficult to permit handling several symmetry planes on one sketching plane, and would need some new user interactions for sketching different shape components. Another limitation of our modeling method is that some fork-like shapes can be difficult to recover correctly. For example, Fig. 11 shows an unexpected reconstruction result of a funnel like shape using our method for a fork-like structure-the whole symmetric object is recovered as a single component. In future, some flexible mechanism for generating such shapes should be investigated. Conclusions and future work This paper has presented SymmSketch, a system for creating symmetric 3D free-form shapes which consist of two types of shape components, self-symmetric components and ones that are symmetrically related to another component with respect to a symmetry plane. The user needs draw only a few strokes, after which our reconstruction method can automatically infer the relative depth of different shape components due to the presence of mirror symmetry. Our experimental results illustrate the effectiveness of our system for generating various types of free-form symmetric shapes. Our future work will consider the following issues. The current system prevents the user from drawing the sketches in an arbitrary order. An algorithm for matching the symmetric curves automatically would help to overcome this issue. Our modeling system can only reconstruct free-form shapes with symmetry with respect to a unique symmetry plane. We hope to develop a 3D modeling system for generating complex shapes which may include (a) (b) (c) (d) Fig. 11 An unexpected reconstruction: the intent was to create a forked structure (a), but the result was a funnel-like shape (c) and (d). several local mirror symmetries with respect to different symmetry planes. Renato Pajarola received a Dr. Sc. Techn. in computer science in 1998 from the Swiss Federal Institute of Technology (ETH) Zürich. After a postdoc in the Graphics, Visualization & Usability Center at Georgia Tech., he joined the University of California Irvine in 1999 as an assistant professor where he founded the Computer Graphics Lab. He is a now full professor in the Department of Informatics, University of Zürich, Switzerland. His research interests include real-time 3D graphics, scientific visualization, and interactive 3D multimedia.
7,652.4
2015-03-01T00:00:00.000
[ "Computer Science" ]
Controlling factors of benthic macroinvertebrates distribution in a small tropical pond , lateral to the Paranapanema River ( São Paulo , Brazil ) Aim: The aim of the present study was to examine the benthic fauna in a marginal pond lateral to the Paranapanema River and to identify the main controlling factors of its distribution. Considering the small size of the lacustrine ecosystem, we expected that seasonal variations of the benthic community attributes are more important than spatial variations; Methods: Two samplings, one in March and another in August, were carried out at nine sites in the pond. Sediment samples were obtained through a Van Veen grab for invertebrate sorting, granulometric analysis, and for quantification of organic matter in sediment. Other abiotic factors were measured, such as water transparency, dissolved oxygen, pH, electric conductivity, temperature, and depth of sediment sampling sites. Regarding the comparative analysis at spatial scale, no significant variations in density of the benthic invertebrate community were found. Results: In relation to the studied abiotic factors, only depth presented significant differences among sampling sites; All the measured environmental parameters presented significant differences among sampling months, except depth and the physical and chemical characteristics of the sediment. The abundance of Chaoboridae and Chironomidae was the unique attribute with a significant difference in comparing the two months. A higher abundance of taxa occurred in August, especially for Oligochaeta, Nematoda, Chaoboridae, and Chironomidae; Conclusions: Because of the low structural complexity of the studied pond, we concluded that the changes in benthic macroinvertebrate community attributes were mainly due to seasonal effects. temporal variation, as compared with the organisms in environments with high structural complexity, which can persist in these sites for a longer time.According to Formigo (1997), composition and density of macroinvertebrate communities are relatively stable from one year to the next in nonperturbed systems.However, seasonal fluctuations linked to the dynamics of vital cycles of each species can result in extreme variations in community structure in some environments. In lakes marginal to tropical rivers, as those located in flood plains, aquatic biota is mainly influenced by the regime of flood pulses, as it was observed in the High Paraná River (Higuti and Takeda, 2002) and in the High Paranapanema River (Davanso and Henry, 2006). The aim of the present study was to examine composition and diversity of benthic macroinvertebrates in different sites of a pond which is marginal to a river.Relationships between the different taxa of benthic macroinvertebrates and environmental factors were determined.Because it is a pond with small size, we expected seasonal variations in benthic community to be more important than spatial ones, due to the structural homogeneity of the environment. 
Material and Methods The environment selected for the study (Mian Pond, Figure 1) is located near the mouth zone of the Paranapanema into the Jurumirim Reservoir (São Paulo, Brazil).It is a small pond (perimeter: 1,592 m; mean depth: 3.7 m; length: 749 m; maximum width: 115 m).The Mian pond presents permanent connectivity to the river and one of the margins is formed by a riparian forest while the other is a non-preserved area next to a soybean field, being partially protected from the wind.Nine sites distributed in three transects were selected for the samplings (Figure 1). In each site, three sediment samples were collected in March and August 2009 with a 0.064 m 2 Van Veen grab for the analysis of benthic fauna and an additional three samples for the determination of granulometric composition and quantification of organic matter in sediment.Depth profiles of the lake bottom at sampling site transects are shown in Figure 2. In the field, sampled sediment for benthic fauna analysis was washed in 250 µm mesh net and fixed with 4% formaldehyde.Afterwards, the organisms were sorted, identified, and counted under a stereoscopic microscope.They were identified up Introduction Comparative studies of small water bodies have been used to examine the relationships between environmental factors and the structure of aquatic communities (Heino, 2000).Benthic macroinvertebrates present great importance in some ecological processes, such as energy fluxes and nutrient cycling (Leal et al., 2003;Henry and Santos, 2008).Bioturbation of sediment surface and fragmentation of leaves from riparian vegetation are some of the processes of nutrient release to water carried out by benthic organisms (Caliman et al., 2007;Callisto et al., 2009). Considering the benthic macroinvertebrates, insects are the predominant taxonomic group in abundance and biomass in the majority of tropical lakes (França and Callisto, 2007).In all these environments, a predominance of Diptera, Chironomidae (Stenert et al., 2004;Roque et al., 2004), and Chaoboridae (Fukuhara et al., 1987) has been recorded. According to Jonasson (1996), the distribution, composition, and diversity of macroinvertebrates, as well as of other aquatic communities, are affected by abiotic and biotic factors and also by mutual interactions among the organisms.Thus, benthic macroinvertebrate communities clearly indicate the ecological conditions of inhabited aquatic ecosystems.According to Kownacki et al. (2000), benthic fauna composition in aquatic environments depends mainly on factors such as substratum type, water trophic status, and hydro-period. Oxygen and depth also constitute essential factors to macroinvertebrate distribution (Santos and Henry, 2001).Density of these organisms is remarkably lower at great depths, but the existence of some species tolerant to low oxygen concentrations is evidenced at these sites (Hirabayashi and Hayashi, 1994).Other important factors for benthic species distribution are the availability of food resources (Sanseverino et al., 1998) and the interspecific trophic interactions, such as competition and predation (Walker, 1998). 
Habitat complexity can determine the composition of a local community. According to Barreto (1999), diversity of biological communities in more complex sites tends to increase due to the presence of environments with minor stress, ample availability of shelter against predators, and protection against physical disturbances, which serve to assist in survival, recovery, and persistence of the organisms. Therefore, communities in habitats with low complexity usually present great 1969), electrical conductivity (through a Hach Mod. 2511 conductivimeter) and pH (through a Micronal Mod. 322 pH meter). All of them, except transparency, were measured at the surface and at 10 cm from the bottom. Sediment granulometric composition was determined according to the Wentworth scale (Suguio, 1973) and organic matter content through calcination loss (in a furnace at 550 °C for 1 hour). The following water physical and chemical variables were determined: temperature (by mercury thermometer), transparency (by Secchi disk), dissolved oxygen (through the Winkler method, modified by azide addition, Golterman and Clymo, and sediment organic matter and granulometric composition). All the statistical analyses were carried out using the Statistica 6.0 software (Statsoft, 2002). Results A small variation in rainfall (<3 mm) was observed between the two sampling months (Figure 3). Considering that in July there was a high level of rainfall, atypical for the dry season, no significant difference in depth was found in comparing the two studied periods (Figure 4). In almost all the sites, silt/clay was dominant in the sediment, followed by very fine and fine sands the measured environmental variables and for the benthic groups (Chaoboridae, Chironomidae, Nematoda, and Oligochaeta) that presented great total abundance of organisms (>4%). Next, data were log(x + 1) transformed and submitted to a two-way analysis of variance (ANOVA) involving sampling sites (n = 9) and periods (n = 2). Normality and variance homogeneity were obtained. A 0.1 level was used for testing the significance of the ANOVA. Pearson correlations were computed to assess significant relationships (p < 0.03) between abundance of benthic taxa and environmental data (water temperature, depth, transparency, pH, dissolved oxygen and electrical conductivity, by site 5 (19.76%) in August, and the lowest value (1.99%) was found at site 9 in August. The variation in density of taxonomic groups is shown in Figure 7. Groups with the highest density were Chaoboridae, Oligochaeta, Nematoda, and Chironomidae. Chaoboridae (exclusively composed of Chaoborus sp.) was the predominant group (relative abundance corresponding to 44% of total abundance) in relation to the other organisms. Chaoboridae constituted more than 50% of the organisms in four sites (sites 1 and 6 in both months, and sites 3 and 7 only in March) and was absent only at site 9 in August. The maximum density (344 individuals.m - ) was observed at site 4 in March. Oligochaeta predominated in three sites (site 2 in March and sites 8 and 9 in August). The highest densities were recorded in August (maximum of 240 individuals.m - ), and this group rose to around 21% of all the organisms in the entire study. Nematoda occurred in almost all the sampled sites and presented the highest relative abundance (50% of the total density of organisms) at site 9 in March. The highest density (286 individuals.m - ) occurred at site 2 in August. Considering both months, Nematoda density corresponded to approximately 17% of the organisms.
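The analysis workflow described above (log(x + 1) transformation, a two-way ANOVA over sites and months, and Pearson correlations against the environmental variables) can be reproduced with standard tools; the sketch below uses Python with pandas, statsmodels, and scipy instead of Statistica 6.0, and the file and column names are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy.stats import pearsonr

# Assumed layout: one row per grab sample, with a density column per taxon
# and columns for the measured environmental variables.
df = pd.read_csv("mian_pond_benthos.csv")                   # hypothetical file name

df["log_chiro"] = np.log(df["chironomidae_density"] + 1)     # log(x + 1) transformation

# Two-way ANOVA: sampling site (9 levels) and month (March/August) as factors,
# judged at the 0.1 significance level used in the study.
model = smf.ols("log_chiro ~ C(site) + C(month)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Pearson correlation between a taxon's (transformed) density and an environmental variable.
r, p = pearsonr(df["log_chiro"], df["bottom_dissolved_oxygen"])
print(f"r = {r:.2f}, p = {p:.3f}")
```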
Chironomidae was a group with low abundance (≈2%) in the first month of the study and appeared in only two sampling sites. In August, however, Chironomidae specimens were observed at almost all sites (except site 9), and the highest density (260 individuals·m⁻²) was recorded at site 3. Only depth presented a significant difference among sampling sites (F = 3.30; p = 0.04). Comparing the two periods of the year, ANOVA showed significant differences for water temperature (highest value in March), water electrical conductivity at surface and bottom (highest values in August), water pH at surface and bottom (highest values in August), dissolved oxygen at the bottom (highest concentration in August), and water transparency (highest value in March) (Figure 8). Regarding the abundance of benthic macroinvertebrates, significant differences were recorded for Chaoboridae (highest value in March) and Chironomidae (highest value in August).

Significant Pearson correlations between biological and environmental variables are presented in Table 1. Chaoboridae density showed positive correlations with water temperature and silt/clay (the finest fraction of the sediment) and negative correlations with the larger sediment particles (very coarse, coarse, and mean sand). Chironomidae density was associated negatively with water temperature and transparency, and positively with water electrical conductivity, pH (at both depths), and dissolved oxygen at the bottom.

Higher values of silt/clay were observed in March (Figure 5). At site 4, the sediment was composed of the different fractions at similar levels in March and August (Figure 5). Site 9 showed the greatest change in sediment composition from March to August. No variation pattern in the organic matter content of the sediment was evident among sites when comparing the sampling months (Figure 6); the highest value (21.46%) was recorded in March at site 3.

Discussion

In this study, low rainfall variation was observed between the two sampling months, which had been selected to be representative of the rainy and dry seasons. This observation can characterize an "atypical" year, considering that the precipitation in August differed from the previous occurrences in the region (Martins and Henry, 2004; Henry, 2003, 2005; Davanso and Henry, 2006, 2007). This certainly affected the environmental variables of the aquatic ecosystem, except water temperature.

Although the effects of the precipitation regime on the benthic fauna were only slightly detectable, significant seasonal alterations in community structure were observed, mainly due to variation in temperature, dissolved oxygen, and sediment composition between the seasons. Alterations in the benthic fauna were not detected on a spatial scale, since depth was the only factor, among all the chemical and physical variables considered, with a significant difference among sampling sites. Nevertheless, it was not likely to cause significant differences in the ecological attributes of the community at these sites. These observations evidence the great influence of seasonality on the organization of the macroinvertebrate community.

In relation to the benthic macroinvertebrates recorded in lakes of the same study region (Davanso and Henry, 2006, 2007), Mian Pond was the only one that did not show a significant difference in the spatial distribution of the fauna. This finding can be explained by its reduced size, simple shape (not dendritic), and relatively homogeneous sediment composition, morphological characteristics that offer low habitat complexity for the colonization and permanence of new species. Therefore, the simple structure of the pond must be considered an important factor in the homogeneity of spatial composition registered in the water body, resulting in few local alterations in abiotic factors and, consequently, in the benthic community.

In addition to the morphometric characteristics of the pond, another aspect that contributed to the high homogeneity of the environment, especially in sediment composition, is the absence of macrophytes. According to Corbi and Trivinho-Strixino (2002) and Beckett et al. (1992), the highest densities of benthic macroinvertebrates are recorded in environments rich in macrophytes. Macrophytes can increase the structural complexity of the environment, as the patches produce a high amount of organic matter in the sediment that lies just below them and in their surrounding area. Through this, some spatial differences may be produced in benthic composition, since the patches are unequally distributed along the ponds.

Low values of dissolved oxygen at the pond bottom were found in both sampling months, especially in March (<3 mg·L⁻¹ at all sampling sites). According to Hepp (2002), values <4 mg·L⁻¹ are enough to cause the mortality of many invertebrates not adapted to hypoxic conditions near the sediment. The variation in pond depth appears not to directly influence the observed alterations in oxygen concentration, since Mian Pond is a shallow environment. Differences in oxygen among sites were insignificant and appear to be unimportant for the spatial distribution of the organisms. Oxygen concentrations at the bottom were lower in the first month sampled. This fact can be explained by the high amount of allochthonous matter introduced into the pond due to the high water flux and discharge of the Paranapanema River in January and February 2009. Organic matter deposition at the pond bottom increased biological processes, especially decomposition, producing high oxygen consumption. Oxygen was therefore probably the determining factor for the lower benthic fauna densities in March (except for Chaoboridae).

Chaoboridae, Oligochaeta, Nematoda, and Chironomidae were predominant in the benthic fauna. According to Pamplin et al. (2006) and Higuti and Takeda (2002), the order Diptera, including the families Chaoboridae and Chironomidae, and the class Oligochaeta represent the most noticeable and relevant organisms of macroinvertebrate benthic assemblages.

Considering the two sampling periods, there were significant alterations in the densities of both groups, with Chaoboridae being the most significant in total abundance in March and Chironomidae in August. These modifications in density can be attributed to the significant environmental variations between the months of the study, especially in water temperature. This observation is evidence of the powerful influence of seasonality on the distribution of these organisms. Strixino and Trivinho-Strixino (1980) observed that aquatic ecosystems with low mean depths (approximately 3 m) are able to maintain a high water temperature, thus enabling the proliferation and maintenance of Chaoboridae and Chironomidae during the entire year. However, in our study, Chironomidae showed a significant negative correlation with water temperature, similar to that observed by Davanso and Henry (2006) in a pond near Mian Pond. On the other hand, a significant association between Chaoboridae organisms and high temperature was recorded in the present study, a condition also verified by Cleto-Filho and Arcifa (2006), who found the highest densities of Chaoborus in hot periods in Monte Alegre Lake.

According to Roque et al. (2004) and Brito Junior et al. (2005), Chironomidae larvae are the most representative and abundant group of the benthic macroinvertebrates due to their high capacity to adapt to environmental conditions that many other groups cannot tolerate. Nevertheless, the anoxic conditions in March may have caused the absence of these organisms, since Chironomidae densities presented a significant positive correlation with bottom oxygen concentrations. With the increase in oxygen concentration in August, these aquatic insects were again recorded.

The low pH values in March, significantly different from those observed in August, were probably related to the intense degradation of organic matter, since acidity in the water derives from ions released during decomposition. This seems to have caused some disadvantage to certain taxa in March, such as Chironomidae, which showed a positive correlation with water pH.

Transparency was a secondary factor responsible for the seasonal variation in density of these two benthic groups, as it depends on rainfall intensity. The rainfall peak in July, one month before sampling, was responsible for the re-suspension of fine material from the bottom into the water column through continuous mixing of the water. Thus, the significantly higher water transparency in March, compared to August, seems to have negatively affected the fauna. Again, Chironomidae, which was negatively correlated with this variable, was the most affected. Leech and Johnsen (2009) concluded that changes in the transparency of fresh waters may alter species depth distributions and affect predator-prey behavior, so that increases in transparency may benefit visual predators. Possibly, in this study, high transparency facilitated the visualization of Chironomidae by predators such as fishes.

Regarding the granulometric composition of the pond sediment, silt/clay was dominant at almost all the sampled sites, followed by the very fine and fine sand fractions. This characteristic is another factor that enhanced the great development of Chaoboridae in Mian Pond, especially in March, when the fine fractions were higher, as densities presented positive correlations with silt and negative correlations with the coarse sand fractions.

Considering the associations between benthic macroinvertebrate distribution in the aquatic environment and the abiotic factors described previously, we conclude that reduced spatial variation of the benthos occurred in Mian Pond. Despite the fact that in 2009 no evident hydrologic variations were observed between the rainy and dry seasons, due to the "atypical" rainfall in July, the seasonal effects on the benthic fauna were more expressive than the spatial ones, mainly because of the response of this community to environmental factors such as water temperature, dissolved oxygen in the deep zones, and characteristics of the bottom sediment. Even though there was a wide variation in depth among the sampled sites, it was not able to change the structure of the benthic community, reinforcing the idea that in environments with low structural complexity, the physical and chemical factors of water and sediment are more homogeneous, having a smaller effect on the fauna.
Figure 2. Bathymetric profile of the Mian Pond bottom, with the respective depths and sampling sites on the transects (the numbers 54, 64, and 68 correspond to the pond widths at the three transects).
Figure 4. Depth values at the nine sampling sites of Mian Pond in March and August 2009.
Figure 6. Mean (n = 3) organic matter content in the sediment at the sampled sites and months.
Figure 8. Mean (± standard error) values of environmental variables and of Chironomidae and Chaoboridae abundance in March and August in the Mian Pond (**significance level: 0.1).
Table 1. Significant Pearson correlations (p < 0.03) between biological and water abiotic and sediment variables (Temp: temperature; Con/B: electrical conductivity at bottom; pH/B: pH at bottom; O2/B: oxygen at bottom; Tr: transparency; VCS: very coarse sand; CS: coarse sand; MS: mean sand; S+C: silt and clay).
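A correlation screen of the kind summarized in Table 1 (Pearson r between taxon densities and environmental variables, retained at p < 0.03) can be reproduced with a few lines of SciPy. The sketch below is purely illustrative: the column names and the input table are hypothetical placeholders, not the study's data.

```python
# Illustrative sketch of a correlation screen like the one behind Table 1:
# Pearson correlations between taxon densities and environmental variables,
# keeping only pairs below a significance threshold (here p < 0.03).
from scipy.stats import pearsonr

def significant_correlations(table, taxa, env_vars, alpha=0.03):
    """Return (taxon, variable, r, p) tuples with p below alpha.

    `table` is any mapping from column name to a 1-D numeric sequence,
    e.g. a pandas DataFrame with one row per sample (site x month).
    """
    hits = []
    for taxon in taxa:
        for var in env_vars:
            r, p = pearsonr(table[taxon], table[var])
            if p < alpha:
                hits.append((taxon, var, round(r, 2), round(p, 3)))
    return hits

# Hypothetical usage (column names are illustrative, not the study's):
# hits = significant_correlations(
#     samples,                                   # e.g. 9 sites x 2 months
#     taxa=["chaoboridae", "chironomidae"],
#     env_vars=["temp", "cond_bottom", "ph_bottom", "o2_bottom",
#               "transparency", "silt_clay", "coarse_sand"],
# )
```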
4,286.8
2011-01-01T00:00:00.000
[ "Environmental Science", "Biology" ]
Phosphoregulation of Twist1 Provides a Mechanism of Cell Fate Control Basic Helix-loop-Helix (bHLH) factors play a significant role in both development and disease. bHLH factors function as protein dimers where two bHLH factors compose an active transcriptional complex. In various species, the bHLH factor Twist has been shown to play critical roles in diverse developmental systems such as mesoderm formation, neurogenesis, myogenesis, and neural crest cell migration and differentiation. Pathologically, Twist1 is a master regulator of epithelial-to-mesenchymal transition (EMT) and is causative of the autosomal-dominant human disease Saethre Chotzen Syndrome (SCS). Given the wide spectrum of Twist1 expression in the developing embryo and the diverse roles it plays within these forming tissues, the question of how Twist1 fills some of these specific roles has been largely unanswered. Recent work has shown that Twist’s biological function can be regulated by its partner choice within a given cell. Our work has identified a phosphoregulatory circuit where phosphorylation of key residues within the bHLH domain alters partner affinities for Twist1; and more recently, we show that the DNA binding affinity of the complexes that do form is affected in a cis-element dependent manner. Such perturbations are complex as they not only affect direct transcriptional programs of Twist1, but they indirectly affect the transcriptional outcomes of any bHLH factor that can dimerize with Twist1. Thus, the resulting lineage-restricted cell fate defects are a combination of loss-of-function and gain-of-function events. Relating the observed phenotypes of defective Twist function with this complex regulatory mechanism will add insight into our understanding of the critical functions of this complex transcription factor. THE BASIC HELIX LOOP HELIX PROTEIN The bHLH domain is an evolutionarily conserved motif that is well represented from humans to flatworms. The bHLH domain consists of a short stretch of basic amino acids followed by an amphipathic -helix, a loop of varying length and then another amphipathic -helix (for detailed review see [1]). Each of the -helices allows for proteinprotein interactions with other bHLH proteins. The result of dimerization is the juxtaposition of the basic domains creating a combined DNA binding motif that in the majority of proteins allows for binding to a canonical sequence termed an E-box (CANNTG) [1]. Although HLH proteins can be classified into 5-subclasses, it is convenient to generalize categorization into 3 major classes: ubiquitously expressed bHLH factors (E-proteins Class A); tissue specific/restricted bHLH factors (Class B); and the negative regulatory HLH Id factors, which lack a basic DNA binding domain thereby sequestering E-proteins from forming functional transcriptional complexes [1]. Through the study of the Class B myogenic bHLH factors, it was established that these proteins could drive skeletal muscle specification and differentiation via heterodimer formation with bHLH factors from Class A [2][3][4]. Moreover, Id class HLH factors could compete for Eproteins as dimer partners adding a critical regulatory input to the system. As additional class B proteins were discovered, this regulatory model was initially applied; however, it became clear that not all Class B bHLH factors fit this simple paradigm. 
TWIST A bHLH FACTOR REQUIRED FOR MESO-DERM FORMATION In the fly, Twist was identified as a critical factor for the onset of gastrulation and the formation of mesoderm [5][6][7]. Regulated in part by Dorsal, Twist and the Zn-finger factor Snail coordinate with Dorsal to specify mesoderm in the fly. Mechanistically, it was presumed that Twist required a dimer partner from Class A to regulate gene expression [7]; however, in contrast to the established mechanism for the myogenic bHLH factors, Twist appeared capable of functioning as a homodimer. In elegant work from the Baylies laboratory, they showed that Twist conveyed different biological functions depending on the dimer partner choice. Using a tethered dimer approach to link Twist to itself or to Daughterless (the Class A E-protein in fly) via a short glycine linker sequence, the function of specific Twist dimer complexes were assayed. Expression of Twist-Twist homodimers in the fly resulted in mesoderm specification such that ectopic expression led to the formation of somatic muscle in inappropriate locations [7]. Moreover homodimer expression can rescue the early gastrulation defects in Twist mutant files. In contrast, Twist-Daughterless heterodimers antagonize mesoderm gene expression and genetic interactions show a complex gene dosage relationship [7]. These studies were the first to demonstrate that Class B bHLH factors could partner with a non-E-protein partner and facilitated a better understanding of the role played by one the vertebrate orthologs of Twist: Twist1. These studies also beg the question, how is dimer choice controlled? TWIST1 REGULATES MESENCHYMAL CELLS POPULATIONS IN MICE Evolutionary conservation of critical proteins is well established between species. Given the importance of Twist in the fly, it seems logical that Twist orthologs would play equally important roles in higher organisms. Indeed, the identification of Twist-related factors shows the representation in higher species as well as in early organisms such as C. elegans and in mammals there are six Twist orthologs (Twist1, Twist2, Hand1, Hand2, Paraxis, and Scleraxis) [8][9][10][11][12][13][14] (Fig. 1). In mouse, Twist1 function was directly assessed by gene deletion [15]. Twist1 null embryos die around E11.5 and display a number of defects that reflect a functional role in mesenchymal cell populations. Major phenotypes include exencephaly, hypoplastic limb buds, and vascular defects [15] (Fig. 1). These defects correlate to tissues that require cranial neural crest cells (NCC) to emigrate and contribute to the effected tissue [15,16]. Our own data further shows that Twist1 also plays a role in mediating outflow track (OFT) cushion formation within the developing heart and that the defects observed in Twist1 null OFTs result from defects in cardiac NCC cell behavior [17]. Recently, a conditional null Twist1 allele has been reported and the use of this mouse model in looking at tissue-specific Cre deletions will shed additional light on all of the lineages that contribute to these phenotypes [18]. Twist1 heterozygote null mice display a number of phenotypes including dysmorphic facial features and preaxial polydactyly in a partially penetrant fashion. Presentation of these phenotypes is dependent on mouse background and fits the gene dosage model established in the study of drosophila Fig. (1). Regulatory conservation of Twist-family bHLH factors . 
Top shows amino acid alignment of human TWIST1 with murine protein family members Twist2, Hand1 and 2, Paraxis and Scleraxis. The conservation of the phosphoregulated threonine (T) and serine (S) is noted by black shading. Conservation is maintained back to invertebrates [25]. Red-bolded residues shown in the human sequence identify specific point mutations found within SCS patients. Middle panels show a wildtype and Twist1 null embryo at time of death E11.5. Note the pronounced exencephaly (white arrowhead), hypoplastic limb buds (lb), and reduced lateral mesoderm (lm). Bottom shows the phosphoregulatory circuit that governs Twist-family dimer control and DNA binding. PKA is capable of phosphorylation Twist1 whereas only PP2A complexes containing B56 can specifically dephosphorylate the helix I resides. Twist. Interestingly, these haploinsufficient phenotypes are similar to an autosomal dominant, haploinsufficient disease in humans called Saethre Chotzen Syndrome (SCS). Not coincidently, a high percentage of SCS patients have null, mis-sense or non-sense mutations in TWIST1 (see section below). TWIST1 AND SAETHRE CHOTZEN SYNDROME SCS (OMIM101400) affects between 1-25,000 to 1-65,000 live births (for detailed review [19]). Amongst the phenotypic traits of SCS patients are craniosynostosis, low frontal hairline, facial asymmetry, and eyelid ptosis. Limb defects are also observed and include polydactyly, brachydactyly and syndactyly [20]. Although SCS can result from gene mutations in other factors, such as Snail [21,22], the majority of documented SCS cases show a loss-of-function mutation in the human TWIST gene. Identification of TWIST was facilitated by the observations made in regards to the phenotypic similarities between SCS and Twist1 heterozygous null mice as well as the fact that data shows SCS maps to 7p21-p22, which is homologous to mouse chromosome 12 region BC1, the location of Twist1 [19]. To date, 73 known mutations in TWIST have been identified in SCS patients and although a number of these mutations involve large deletions, a number of mutations are point mutations that cluster near the basic DNA-binding domain. Initial presumption was that these mutants would affect DNA binding; however, DNA binding of this subset of TWIST1 SCS alleles was subsequently established [11]. In the study of the Twist1-related proteins Hand1 and Hand2, it was also observed that these factors could form and function as non-E-protein dimers [23,24]. Given that it was well established that Hand1 and Hand2 could and did function as heterodimers with E-proteins, the idea that homodimers could also convey biological function requires that dimer choice must be a regulated process. In an effort to determine how Hand dimer regulation was controlled, we uncovered a phosphoregulatory circuit involving protein kinase A (PKA) or PKC and the trimetric protein phosphatase 2A (PP2A) containing the B56 regulatory subunit which could phosphorylate-dephosphorylate both Hand1 and Hand2 on a serine and threonine just carboxy to the basic domain [25] (Fig. 1). Studies using phospho-deficient and phosphorylation mimic forms of Hand1 showed that changing the charge of helix 1 was sufficient to alter Hand1 affinities for its possible bHLH dimer partners. Moreover, when these Hand1 point mutants were ectopically expressed in vivo, distinct limb phenotypes were obtained [25] (Fig. 2). 
Upon closer examination of the evolutionary conservation of these residues within the Twist-family, it was quickly determined that these residues were conserved in all Twist family members as far back as Drosophila [26]. When TWIST SCS alleles displaying point mutations within the basic domain were compared to the wild type TWIST allele, it was found that these mutants did disrupt the consensus PKA site. Moreover, we noted that a TWIST1 mutation at S123 (relative to the human sequence) was sufficient to cause SCS and this residue was identical to the phosphoregulated serine in both Hand1 and Hand2 [26] (Fig. 1). ALTERED PHOSPHOREGULATION OF TWIST1 CAN CAUSE SCS Work done by a number of groups showed that ectopic expression of Hand2 within the developing limbs in both mice and chick results in preaxial polydactyly [27,28]. Hand2 is expressed within the developing limb buds and is associated with an auto-regulation loop with the morphogen Sonic hedgehog (shh). Shh expression within the limb in part defines the zone of polarizing activity (ZPA), which imparts positional identity to the forming hand. Hand2 over expression expands expression of Shh resulting in ectopic ZPA formation and thus extra digits [27,28]. Interestingly, Twist1 haploinsufficiency phenocopies the Hand2 gain-of-function phenotype suggesting that gene dosage and possible functional interactions between Twist1 and Hand2 are critical for modulating digit positional identity. Indeed validating this hypothesis, dimer interactions between Twist1 and Hand2 can occur in vivo and partial coexpression within the developing limb, confirms biological relevance to the observed Twist1-Hand2 dimer formation [26]. To directly investigate if phosphoregulation of Twist1 modulated Twist1 dimer choice, Fluorescence Resonance Energy Transfer (FRET) [29] was used to assay dimer interaction strength of Twist1 with itself, ubiquitous E12, and Hand2 [26]. Results of these studies show that wild type and phosphorylation mimic Twist1 displayed similar affinities for itself, E12 and Hand2 albeit at altered interaction strengths [26]. In contrast, the Twist1 hypophosphorylation mutant (which models an established SCS TWIST1 allele) showed a distinct dimer affinity profile from the wild type protein, suggesting that TWIST1 dimer choice within a cell would be different dependent upon phosphorylation state [26]. Given that hypophosphorylated Twist1 displayed altered dimerization characteristics from wild type Twist1, phosphorylation analysis of the basic domain TWIST1 SCS alleles was undertaken. As predicted, these mutations showed a decreased ability to be phosphorylated by PKA in vivo supporting the idea that phosphoregulation of these evolutionarily conserved threonine and serine residues can modulate the biological activity of Twist1 [26]. Considering that 5 independent Twist1 SCS point mutations encode proteins with a reduced ability to be phosphorylated and that hypophosphorylated Twist1 displays distinct preferences for various bHLH partners, the idea that this molecular switch modulates Twist1 function is appealing. TWIST1 AND HAND2 DISPLAY ANTAGONISTIC FUNCTION IN THE LIMB In examining the Twist1 FRET interaction data, the interactions with Hand2 are most divergent. For instance, wildtype Twist1 has the highest interaction affinity for Hand2, whereas the SCS helix 1 hypophosphorylation Twist1 mutant has the lowest affinity for Hand2 dimerization [26]. 
This observation, in addition to the observation that Twist1 lossof-function phenocopies Hand2 gain-of-function in regards to polydactyly, led us to conduct a genetic test of this intriguing biochemical model. The experiment was a simple intercross of a Hand2 null allele onto a Twist1 haploinsufficient background, thus taking what was effectively a Hand2 gain-of-function (2 Hand2 alleles to 1 Twist1 allele) and rebalancing the gene dosage to one copy of each bHLH partner. The results of this experiment show a complete rescue of polydactyly on the Twist1 heterozygous background [26]. In similar studies in the chick using retrovirus over expression, Hand2 expression results in polydactyly, which can be partially rescued via coexpression of retrovirus expressing wildtype Twist1. In contrast, coexpression of a SCS helix 1 hypophosphorylation Twist1 mutant retrovirus fails to rescue Hand2 generated polydactyly [26]. These findings support the hypothesis that Twist1 dimer choice is regulated by the actions of PKA and B56 -containing PP2A and can convey a distinct biological function to Twist1. As these residues are also conserved in Drosophila, Twist phosphoregulation likely controls dimer choice in this genetic model system. Interpretations are complicated when partner choice has many inputs: how do you interpret results and what is the best experiment? What is still not clear from this data is the identity of the specific dimer pairs that are regulating specific molecular programs. Within a given cell, multiple bHLH and HLH factors are coexpressed temporally and in a dynamic fashion. The obvious changes in stoichiometry by altering ratios of any bHLH protein will affect the availability of E12 and other factors that can find each other and dimerize. The expression of Id factors further complicates this relationship as Id factors can titrate available E-proteins levels directly. By this logic, over expression of bHLH factors must be viewed in a different light. Swamping a cell with many more copies of one factor will undoubtedly result in E-protein titration, unintended bHLH heterodimers, and over expressed homodimers that will collectively orchestrate many of the resultant phenotypes. Even in "simple" gene knockout studies, the removal of a bHLH transcription factor will clearly result in the loss of regulation of downstream target genes; additionally, the dimer pools within the cell will be altered allowing for the formation of a new dimer pool that will contain bHLH complexes that would not normally form and thus modulate gene expression in unintended ways. Simply put, any gene knockout of a factor that requires a partner for biological activity is very likely to exhibit phenotypes that include direct loss-of-function and deleterious gain-offunction mechanisms. This is exemplified by the observation that Twist1-Hand2 double heterozygous null mice are more phenotypically normal than mice heterozygote for only Twist1. As Fig. (3) schematizes, the balance between Twist1 and Hand2 within the developing limb is critical for normal morphogenesis and that phosphoregulation of Twist1 influ- Fig. (2). Model of Twist-family bHLH protein dimer regulation. Twist-family proteins have been shown to exhibit promiscuous dimerization characteristics that allow for multiple functional partners. 
In addition to expression levels of bHLH proteins within a cell as well as Eprotein titration via Id factors, the phosphorylation state modulates Twist-family protein dimer affinities for its available partners thereby driving biological function. Expression of hypophosphorylation or phosphorylation mimic forms of the protein conveys distinct phenotypes in vivo. (Fig. (2) ences this relationship, thus an increase in Hand2 relative to Twist1 results in polydactyly. Would the gene dosage manipulation work in the opposite direction? That is, would having more Twist1 relative to Hand2 also produce abnormal development? In gain-of-function experiments, wild type, hypophosphorylation and phosphorylation mimic forms of Twist1 were expressed within the developing limbs of mice using the limb specific Prx1 promoter [30]. Results show that Twist1 gain-of-function resulted in medial defects within both the fore-and hindlimbs; however, as predicted by the gene dosage model, no polydactyly was observed. What was observed is that the phosphorylation mutants display unique phenotypes. Consistent with Twist1T125; S127A being an SCS allele, it shows a less severe phenotype then wildtype Twist1 [30]. Given that hypophosphorylated Twist1 shows a reduced antagonism for Hand2, this data fits the model well. Interestingly, the Twist1 phosphorylation mimic shows the most dramatic limb phenotypes including a severe reduction in ossification and medial limb structures; but again, no polydactyly was observed. Clearly, Twist1 gain-of function is mediating limb defects that are distinct from those of Hand2 gain-of-function. One must consider that these are gross over expression experiments and given that the presumed mechanism is dimer formation, we cannot account for the deleterious titration of endogenous bHLH factors that would result in their altered function. One obvious solution to decoding the mechanism underlying these observed phenotypes is to employ the experimental approach used in the study of drosophila twist and employ tethered dimers to look at direct downstream effects. Mouse Twist1 tethered proteins bind DNA and transactivate promoters in a manner similar to when Twist and E12 are expressed as separate polypeptides [30,31]. When expressed in the developing limb, distinct phenotypes for Twist1-Twist1 homodimers, Twist1-E12, and Twist1-Hand2 heterodimers are observed and those phenotypes correlate well with the phenotypes observed by the expression of the monomeric wild type and mutant Twist1 proteins [30]. Interestingly, the expression of Twist1 homodimers displayed similar limb phenotypes to those observed by the expression of the Twist1 phosphorylation mimic. Twist1-E12 tethered dimers show similar defects as those exhibited by the expression of wildtype Twist1. Most surprisingly, Twist1-Hand2 tethered complexes showed polydactyly and a mild loss of some medial structure; a combinatorial effect supporting the possibility of more then antagonistic functions in the limb program. Although the phenotypes are clearly not identical, the differences observed between the monomeric and tethered dimer data likely reflect the effect of endogenous bHLH factor titration from monomer over expression that will not occur when using a tethered dimer pair. To complicate the mechanism still further, it has also been shown in the monomeric analysis that phosphoregulation of Twist1 influences its affinity for E-boxes in a ciselement dependent manner [30]. 
Thus in addition to dimer choice, phosphorylation influences which E-box elements that the Twist1-containing bHLH complexes will bind. In combination with chromatin remodeling, which is the ultimate dictator of transcription factor accessibility, a highly regulated scheme emerges where the overall level of bHLH expression within a cell, combined with the phosphoregulation of the Twist bHLH family members will define a Twistfamily dimer pool within that cell. This dimer pool will then Fig. (3). Gene balance model between Twist1 and Hand2 in the developing limb. Left shows genotypes that convey Twist1 haploinsufficiency resulting in polydactyly where as genotypes to the right convey normal limb development. Of note, point mutations that disrupt phosphorylation (Twist1T125;S127A: TW1AA) of Twist1 result in phenotypes indistinguishable from a genetic imbalance with Hand2. Below is an E17.5 day transgenic mouse embryo expressing Hand2 via the Prx1-limb-specific promoter. Obvious is right forepaw polydactyly with left forepaw showing normal digit formation. Given that Prx1-expression via this promoter fragment is not asymmetric [31], this example shows the critical balance of Twist-Hand2 gene dosage as subtle differences in expression between left and right limbs within the same animal can result in different phenotypes. drive transcriptional programs based on the ability of the Twist dimers formed to access compatible cis-elements available for interaction. Id factors, which will independently influence the amount of E-protein available, also convey dimer choice by a simple swing of mass action. Thus, amphipathic protein structures need to interact to be stable in an aqueous environment and a dramatic change in the access of one will greatly influence the interactions of the others. It is interesting to consider how sensitive biological programs are to this elaborate regulatory mechanism. How many molecules of one factor vs. another will tip the balance between modulating normal vs. abnormal gene expression? How much do post-translational modifications modulate this critical dosage? Although we cannot yet answer these questions, we can see examples within the same animal where such issues must be at play. For example, the Hand2 transgenic shown in Fig. (3) displays asymmetrical polydactyly despite the observation that the Prx1-promoter does not show asymmetrical expression levels between left and right [32]. Does this result reflect a threshold of Hand2 expression that was reached in one but not the opposing limb and/or a variation in phosphorylation state of either Twist1 or Hand2 at a critical point in development? Addressing these questions would require more elegant in vivo and in vitro experimental systems and analysis. To avoid issues of over expression, direct helix I point mutant knockins for both Twist1 and Hand2 would allow for a better assessment of gene dosage within the tissues that need to specifically express these factors. Although tethered dimer knockin animal models would be more artificial, the use of a conditional activation allele expressing such a tethered complex could add valuable insight within specific developmental windows that would lead to a better understanding of the role that Twist1 plays within the mesenchymal cell populations that allow for the complex body structure in multi-cellular organisms. 
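The "swing of mass action" invoked above can be made concrete with a minimal equilibrium model of a bHLH dimer pool. The sketch below is illustrative only: the species list, dissociation constants, and total concentrations are hypothetical placeholders rather than measured affinities; its point is simply that changing one input redistributes every Twist1-containing dimer.

```python
# A minimal mass-action sketch of a bHLH "dimer pool", assuming a single
# well-mixed compartment at equilibrium. All constants are arbitrary
# illustration values, not measured affinities.
import numpy as np
from scipy.optimize import fsolve

# Dissociation constants (arbitrary units): smaller = tighter dimer.
KD = {"TT": 5.0, "TE": 1.0, "TH": 0.5, "EI": 0.2}

def dimer_pool(totals):
    """totals: dict of total Twist1 (T), E-protein (E), Hand2 (H), Id (I)."""
    def residuals(free):
        t, e, h, i = np.abs(free)            # keep concentrations positive
        tt, te, th = t * t / KD["TT"], t * e / KD["TE"], t * h / KD["TH"]
        ei = e * i / KD["EI"]
        return [
            t + 2 * tt + te + th - totals["T"],   # Twist1 conservation
            e + te + ei - totals["E"],            # E-protein conservation
            h + th - totals["H"],                 # Hand2 conservation
            i + ei - totals["I"],                 # Id conservation
        ]
    guess = [totals[k] / 2 for k in "TEHI"]
    t, e, h, i = np.abs(fsolve(residuals, guess))
    return {"T:T": t * t / KD["TT"], "T:E": t * e / KD["TE"],
            "T:H": t * h / KD["TH"], "E:Id": e * i / KD["EI"]}

# Raising Id alone drains free E-protein and shifts Twist1 toward T:T and T:H.
print(dimer_pool({"T": 1.0, "E": 1.0, "H": 1.0, "I": 0.1}))
print(dimer_pool({"T": 1.0, "E": 1.0, "H": 1.0, "I": 5.0}))
```

In this toy model, raising total Id from 0.1 to 5 titrates free E-protein into E:Id complexes and shifts Twist1 toward homodimer and Twist1-Hand2 pairs, mirroring the titration argument made above.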
TWIST AND CANCER

In addition to its essential role in modulating the behavior of mesenchymal cell populations critical for development, Twist1 is also an oncogene and is associated with a number of aggressive neoplasias including gastric, liver and, most notably, breast cancers [33][34][35][36][37][38]. The oncogenic role of Twist1 is not in facilitating cell transformation; rather, it facilitates the ability of cells within a primary tumor to undergo a pathological EMT similar to its function in development. EMT allows tumor cells to migrate away from the primary tumor, enter the lymphatic system, and settle into secondary tumor sites, or metastases [37]. Using a mouse mammary tumor model, Yang and colleagues made use of 4 tumor cell lines isolated from the same mammary tumor that displayed distinct abilities to promote metastasis in mice. Subtractive screens identified Twist1 expression as a predictor of metastatic behavior, and the study goes on to show that the most aggressive metastatic cell line could be rendered non-metastatic by siRNA knockdown of Twist1 expression [37]. Conversely, using a gain-of-function approach they show that expression of Twist1 in epithelial cell lines drives EMT, making the cells mesenchymal in phenotype [37]. Taken together, these data suggest Twist1 as a master regulator of EMT. In the developing embryo it allows for cell migration programs critical for normal body patterning, whereas in cancer it allows for secondary tumor formation, which is the ultimate cause of mortality.

FUTURE DIRECTIONS

The pivotal role that Twist1 plays in both embryonic development and disease is well established. In both of these roles, the biological function of Twist1 within mesenchymal cell populations is obvious. In comparison to the remaining Twist-family members, other than loss-of-function phenotypes resulting from targeted gene deletion, Twist1 is the only protein within the family that displays dominant disease phenotypes. It is likely that other family members play critical roles in utero, and the lack of evidence for these factors contributing to postnatal disease may reflect phenotypes that result in early embryonic death. Suspiciously, all family members are expressed within tissues that undergo morphology changes. Has gene expansion through evolution allowed for more specialized functions regulating cell shape and behavior? Currently this is our favorite hypothesis, which we are in the process of testing. Point mutant knockins for the various Twist-family members are underway and should shed insight into such cell behavior. Of note, when considering the role of Twist1 in cancer progression, is the observation that although Twist1 appears necessary for metastasis in the mouse breast cancer model, it is probably not sufficient, given that 3 of the 4 cell lines express comparable levels of Twist1 protein yet 2 of the 3 cell lines are largely non-metastatic [36]. Given that Twist1 protein levels are similar yet metastatic behavior is different, an additional component of Twist1 functional regulation must be required for metastasis. It will be interesting to investigate the role of Twist1 phosphoregulation in the process of tumor progression, thus linking the elaborate control of dimer choice and DNA binding preferences to neoplastic disease.

In support of this hypothesis, PP2A has recently been identified as a tumor suppressor [39], and B56-containing PP2A complexes could play a role in regulating Twist1 function in cancer via control of the phosphorylation state of Twist1. If Twist1 regulation via phosphorylation is indeed a critical component of tumor progression, it will provide a potential therapeutic target to inhibit EMT, thereby reducing the incidence of lethal pathologies. Further investigation of the Twist-family functional mechanism will likely add valuable insights into the roles that this transcription factor family plays in development and disease.
5,776.4
2008-09-30T00:00:00.000
[ "Biology", "Medicine" ]
Synchrotron Mössbauer spectroscopy using high-speed shutters A new method of performing Mössbauer spectroscopy using a fast shutter in combination with microfocused synchrotron radiation is demonstrated. Introduction Synchrotron Mö ssbauer spectroscopy (SMS) provides information on atomic environments by measuring the interaction between nuclear moments and the local electric and magnetic fields. It does this by exciting low-energy ($ 10-100 keV) nuclear resonances with synchrotron radiation and detecting the coherent emission as a function of time (time-domain measurements) or as a function of incident X-ray energy (energy-domain measurements). Time-domain measurements are much more common and have relatively short data collection times (typically < 1 h), while the latter require the production of an ultra-high-energy-resolution X-ray beam ($ 10 neV) (Chumakov et al., 1990;Smirnov et al., 1997;Mitsui et al., 2007) and data collection can take many hours per spectrum. There is also a mixed class of measurements employing time discrimination in a manner that yields an energy spectrum without requiring an ultra-high-resolution X-ray beam. These alternative procedures may be direct or rely on algorithms to reconstruct an energy spectrum from a multitude of time spectra (Coussement et al., 1996;L'abbé et al., 2000;Sturhahn et al., 2003;Callens et al., 2003). Data collection times to acquire adequate statistics for these methods are also typically hours. It would be a significant advance if one could improve signal rates and perform both time-domain and energy-domain measurements in the same set-up with the source characteristics of synchrotron radiation: a polarized beam with a small cross-sectional size and high spectral-brightness. This would allow, for instance, nuclear resonant diffraction from select nuclear transitions to obtain useful site-selective structural information (Stephens & Fultz, 1997) with much greater practicality. Obtaining complementary energy spectra along with time spectra can also help to unambiguously determine hyperfine parameters for complex materials with multiple sites and complicated field distributions. Performing both types of measurements efficiently would require one to detect the nuclear resonant scattering with minimal losses while completely suppressing the electronic charge scattering. Ever since the suggestion to use synchrotron radiation as a source to perform Mö ssbauer spectroscopy (Ruby, 1974), experimentalists have considered various means to tackle this primary technical problem of observing an extremely narrow energy resonance using a broadband X-ray source. Nuclear resonant scattering is distinct from non-resonant electronic charge scattering by way of its sensitivity to hyperfine interactions, which exhibits a strong polarization dependence, and its long excited-state lifetime ($ 100 ns). Numerous methods to separate the nuclear resonant signal from the vast amount of electronic scattering have been employed including, for example, pure nuclear Bragg reflections (Gerdau et al., 1985;Smirnov, 2000), anti-reflecting nuclear-resonant films (Rö hlsberger et al., 1992;Rö hlsberger, 1999), crossed linear-polarizers (Siddons et al., 1993;Toellner et al., 1995) or the nuclearlighthouse effect (Rö hlsberger et al., 2000). 
The most common method has been to take advantage of the long excited-state lifetime to separate the delayed nuclear resonant scattering from the prompt electronic charge scattering using a fasttiming detector and time-filtering methods. The best detectors for this purpose are avalanche photodiode detectors (APD) (Kishimoto, 1992;Baron et al., 2006), but they are unable to operate under the enormous X-ray load of eV-bandwidth synchrotron radiation. One typically reduces the X-ray load by orders of magnitude by using a high-resolution monochromator (HRM) to reduce the bandwidth of the synchrotron radiation to around 1 meV (Toellner, 2000). For an efficient 1 meV monochromator at a third-generation synchrotron source, this results in an X-ray load (10 10 photons s À1 ) that still overwhelms fast-timing detection systems. One usually relies on sample absorption, additional detectors, HRM inefficiency and additional absorbers to reduce the X-ray load further so that the detection system can operate properly. HRMs have spectral efficiencies that are typically 5-50%, but can be lower. Theoretically, reducing the bandwidth of an HRM could improve the signal-to-background ratio, but this becomes increasingly more difficult and usually with less efficiency in practice. A method that could suppress more electronic scattering and circumvent HRM inefficiency could potentially improve signal rates by one to two orders of magnitude and open up new possibilities in measurements using SMS. Here we suggest a new method to improve signal rates significantly and suppress electronic scattering by orders of magnitude more than a HRM by using a very fast shutter combined with a microfocused synchrotron beam. By placing a shutter after a sample or material containing the nuclear resonant isotope, one can protect the detection system during the synchrotron excitation pulse by having a closed shutter. Then, having the shutter open in a time that is small compared with the nuclear level lifetime would allow one to detect the nuclear resonant emission without any non-resonant scattering. The difficulty in doing this is that a shutter would have to have a very fast transition time, i.e. the time from fully closed to fully open would have to be of the order of 10 ns. In addition, the shutter would have to sustain a repetition rate that matches the synchrotron pulse frequency. Depending on the time structure of the X-ray pulses delivered by a synchrotron source, this implies a repetition rate in the range 10 5 -10 7 Hz. Also, the attenuation would have to be sufficient for the detector to withstand the prompt transmitted nonresonant radiation along with the delayed nuclear resonant radiation. In practice, this implies an attenuation of at least six orders of magnitude for present-day synchrotron beamlines. There are also benefits to increasing the attenuation significantly further. Specifically, complete attenuation of all the electronic scattering would allow, in addition to time-domain measurements, the production of a pure beam of Mö ssbauer photons that can be used for SMS in the energy domain or for other ultra-high-energy-resolution measurements with X-rays. Applications of SMS using X-ray free-electron lasers will produce a prompt X-ray flash owing to electronic charge scattering that is orders of magnitude larger than what is currently dealt with at third-generation synchrotron sources. 
This will overwhelm many of the current detection methods, but a high-speed shutter in combination with a microfocused beam has the potential to mitigate this problem.

Feasibility test

We performed a feasibility test at the BioCARS 14-ID beamline of the Advanced Photon Source. The storage ring was operated with a 'hybrid fill pattern', which has a single X-ray pulse followed 1.594 µs later by a 493 ns-long segmented pulse-train (http://www.aps.anl.gov/Facility/Storage_Ring_Parameters). This pattern repeats with a frequency of 271.554 kHz (period of 3.6825 µs). The single X-ray pulse originates from an electron bunch in the storage ring that produces 16 mA of stored current. The synchrotron radiation was filtered to a bandwidth of 1.9 eV (FWHM) at 14.4125 keV (corresponding to the first nuclear level in ⁵⁷Fe) using a cryogenically cooled silicon double-crystal monochromator. The beam passed through a Kirkpatrick-Baez (K-B) mirror system to focus the beam and remove higher spectral harmonics. After the mirror system, but before the focal spot, the beam passed through a nuclear resonant material: 3 µm α-Fe or 12 µm stainless steel (composition 55% Fe, 25% Cr, 20% Ni), both enriched to 95% ⁵⁷Fe. The K-B mirror system was operated so that the focal spot of 30 µm (vertical) by 100 µm (horizontal) was located approximately at the midpoint of a fast shutter system (Cammarata et al., 2009). The rotation axis of the periodic shutter was orthogonal to the X-ray beam. This produced a vertical shutter speed transverse to the beam of 520 m s⁻¹ and a duty cycle of 987.5 openings per second (once per rotation) of the tunnel in the spinning rotor, which closes and opens about the beam from top and bottom. Vertical and horizontal clean-up slits after the shutter were used to restrict the beam size to values smaller than the focal spot size. An APD with time-filtering electronics was used for time-differential measurements, producing time spectra with zero representing the arrival time of the incident X-ray pulse. Fig. 1 shows a schematic of the measurement set-up: K-B mirror system (A), nuclear resonant foil (B), high-speed shutter (C), clean-up slits (D) and APD timing detector (E), with the X-ray beam focused to the centre of the tunnel in the rotating shutter.

Adjusting the phase of the rotating shutter such that its transmission window began after the arrival of the single X-ray pulse allowed a direct measure of the nuclear resonant signal from the α-Fe foil. With a 20 µm (vertical) slit after the shutter, we detected 54 counts s⁻¹ in a time window of 60-330 ns after the excitation pulse. The expected energy-integrated transmission for this time window is 7.9Γ₀, as obtained from the CONUSS software package, where Γ₀ = 4.67 neV is the width of the nuclear excited state. A time spectrum of the nuclear resonant emission in the forward direction for the α-Fe foil is shown in Fig. 2. Hyperfine fields at the nucleus produce nuclear level splittings that result in different resonant transition energies interfering and producing temporal beating that dominates the time spectrum. At our placement of the transmission window the shutter suppressed the excitation pulse by approximately 2 × 10⁻⁹, resulting in a counting rate of non-resonant scattering (at zero time) of 8.5 counts s⁻¹. A very similar measurement of the stainless steel foil produced a time spectrum that is also shown in Fig. 2.
The time spectrum also shows temporal beating, but this is due to multiple scattering that occurs in thick materials. We detected 11.5 counts s⁻¹ in a time window of 60-330 ns after the excitation pulse, while the counting rate at zero time was 3 counts s⁻¹. The expected energy-integrated transmission for this stainless steel foil is 1.7Γ₀. During initial testing we obtained a time spectrum without any nuclear resonant material to measure the background owing to spurious X-ray pulses. This was with a slightly larger slit opening (30 µm). This is shown in Fig. 3 and produced an integrated counting rate of 0.8 counts s⁻¹ within a time window that starts 20 ns after the excitation pulse. These spurious pulses arrive at integer multiples of the storage ring's RF period (2.8 ns) after the main X-ray pulse, and are due to electrons in the storage ring that occupy stable orbital positions other than those of the main electron bunches. This measurement of the spurious pulses cannot substitute for a proper measurement of the background, but it is representative of the nature and magnitude of the background lying within the transmission window. The actual background to the time spectra of Fig. 2 would be somewhat less owing to the smaller slits and absorption in the foil; we estimate it to be approximately 0.2 counts s⁻¹ in a time window of 60-330 ns after the excitation pulse.

For SMS measurements the shutter's performance depends critically on its attenuation, overall transition time and full-open duration. The nuclear resonant measurements demonstrate excellent attenuation (10⁻⁹) of the electronic scattering owing to the excitation pulse, and a full-open duration of approximately 270 ns. The overall transition time can be estimated from the time spectra as the time after the excitation pulse at which the nuclear resonant signal is no longer suppressed by the shutter. The overall transition time is approximately 60 ns and has two principal contributions: the phase instability of the transmission window and the time for the shutter's edge to traverse the microfocused beam. We measured the phase instability of the shutter's transmission window using the 493 ns-long segmented pulse-train. We reduced the beam size to approximately 1 µm and collected time spectra of the pulse-train. By this we effectively made the one-shot transition time small (<1 ns) and were able to assess the long-term (20 min) phase instability from the closed-to-open time duration reflected in those time spectra. From this procedure we estimated the phase instability, as reflected in the movement of the transmission window, to be approximately ±20 ns. The measured phase instability has a fast component (jitter) that is reportedly 2 ns r.m.s. (Lindenau et al., 2004) and a slow component that clearly dominates. In addition to the phase instability, the time to traverse a microfocused beam contributes to the overall transition time and is estimated to be approximately 20 ns for a 20 µm beam size.

Fig. 3. Measurement of contamination within the transmission window owing to spurious X-ray pulses from the storage ring. Intensities are relative to the X-ray excitation pulse. Spurious pulses are suppressed during the transition time (0-60 ns) of the shutter. The inset shows a magnified region of raw data in counts and clearly displays the 2.8 ns period of the storage ring's RF. Data collection time was 15 h.

Discussion

The measurements demonstrate the clear potential of using a fast shutter to perform SMS.
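As a rough cross-check of the timing budget just described, the sum of the edge-crossing time and the slow phase drift can be compared with the quoted ~60 ns overall transition time. The sketch below uses only numbers quoted in the text; how the transition is partitioned between traversal and drift is a matter of definition, so this is an order-of-magnitude illustration rather than part of the published analysis.

```python
# Order-of-magnitude check of the shutter transition time from quantities
# quoted in the text; not the paper's own analysis.
def transition_time(beam_size_m, edge_speed_m_s, jitter_s):
    """Return (edge traversal time, traversal + window drift) in seconds."""
    traversal = beam_size_m / edge_speed_m_s
    return traversal, traversal + jitter_s

# Current set-up: 20 um beam, 520 m/s edge speed, ~20 ns slow window drift.
trav, total = transition_time(20e-6, 520.0, 20e-9)
print(f"20 um beam at 520 m/s : traversal {trav*1e9:.0f} ns, with drift {total*1e9:.0f} ns")

# Faster disc considered later in the Discussion: 10 um beam at 1000 m/s, 2 ns jitter.
trav, total = transition_time(10e-6, 1000.0, 2e-9)
print(f"10 um beam at 1000 m/s: traversal {trav*1e9:.0f} ns, with jitter {total*1e9:.0f} ns")
```

With the quoted values the first case lands in the few-tens-of-nanoseconds range, consistent with the ~60 ns overall transition time estimated from the time spectra.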
There are two losses in the current set-up that could be removed in a future implementation: the low duty cycle of the shutter and the large focal spot. Operating at a higher duty cycle with an alternate shutter design would give a signal rate increase of as much as a factor of 275. This could be achieved by using a multi-slotted disc with its rotation axis parallel to the X-ray beam, as shown in Fig. 4. Operating with a smaller focal spot would contribute by eliminating the loss at the slit (a factor of 1.5) and by allowing access to earlier times after the excitation pulse, where more of the signal resides (a factor of 2). These factors alone would increase the measured signal rates for our two test materials by a factor of 825, which would lead to signal rates that are much higher than has ever been demonstrated for comparable resonant foils. This still assumes that one uses only 16 mA of the 100 mA of electron current in the storage ring. Further gains would be possible by using other storage-ring fill patterns and by improving high-speed shutter technology to achieve shorter transition times through reduction of both the phase instability and the shutter traversal time.

As shown in the insets of Fig. 2, spurious X-ray pulses contaminate the measurement and limit the usefulness of the technique for low-signal-rate applications. From the observed counting rates we estimate that the electron bunches responsible for the spurious X-ray pulses contain an average of approximately ten electrons. This translates to a bunch purity of 10⁻¹¹ for the individual spurious pulses relative to the 16 mA excitation pulse. As this is already very good, it is unlikely that one will be able to suppress the background adequately for low-signal applications by improving the bunch purity within the storage ring. This gives considerable impetus for employing a second shutter upstream (but still near a focal spot) of any nuclear resonant medium to suppress spurious X-ray pulses by being closed when the downstream shutter is open: an anti-shutter. Such an anti-shutter could also serve to alter the time period between excitation pulses and thus allow measurements to be performed using the different timing modes that are implemented at various synchrotrons. The attenuation required for suppression of spurious pulses is quite moderate (15 attenuation lengths), while that for altering the excitation period would be substantially more (35 attenuation lengths). Note that a rotating shutter with greater attenuation will be more massive (thicker) and thus necessitate slower shutter speeds so as not to exceed maximum allowed tensile stresses.

Substantial suppression of both excitation and spurious pulses would allow one to produce a new source for Mössbauer studies. By removing the prompt electronic scattering from a thin nuclear resonant absorber one can produce a polarized X-ray beam with a Lorentzian spectral profile that approaches that of the nuclear level. In the case of ⁵⁷Fe, a near-single-line resonant absorber, such as potassium ferrocyanide [K₄⁵⁷Fe(CN)₆], placed on a velocity transducer between a shutter-anti-shutter pair would result in a new Mössbauer source that could be used to perform SMS in the energy domain. The velocity transducer would allow scanning the incident beam in energy through Doppler shifting, in the same manner as in traditional Mössbauer spectroscopy. The shutter will truncate the time response of the nuclear resonant emission, and this will modify the spectral composition of the X-ray beam after the shutter.
This must be considered in order to produce a useful Mössbauer source. Assuming the shutter suppresses the electronic charge scattering completely, the spectral distribution I(ω) of the X-ray beam after the shutter owing to the transmission window (assuming a single Lorentzian-shaped resonance of width Γ in the thin resonant absorber limit) is given by

I(ω) ∝ e^(−t₁/τ) [1 + e^(−Δ/τ) − 2 e^(−Δ/2τ) cos((ω − ω₀)Δ)] / [(ω − ω₀)² + 1/(4τ²)],

where t₁ and Δ are the beginning and duration of the time window, respectively, while ω, ω₀ and τ = ℏ/Γ are the spectral frequency, resonant frequency and mean lifetime, respectively. In the limit of a thin resonant absorber, the spectral shape is affected by the width of the time window, but not by the starting time. The starting time of the transmission window only affects the intensity. This would not be true for a thick resonant absorber owing to multiple scattering. Therefore, in order to produce a source with an acceptable spectral profile, it is important both that a thin resonant absorber be used and that a time window of three to six mean lifetimes be available, to avoid excessive spectral sidebands owing to an artificially truncated time response.

Fig. 4 shows a two-shutter set-up suitable for SMS measurements in either the time domain or the energy domain. For energy-domain measurements one could enhance the detection rate by using a much more efficient X-ray detector in place of a fast-timing detector, which is typically less efficient. This modified set-up would allow the production of a pure Mössbauer beam with a spectral width similar to that of a traditional radioactive source, but without the unwanted spectral components that emanate from radioactive materials owing to electronic and nuclear fluorescences. Also, it would allow polarization control and produce many orders of magnitude more spectral brightness owing to the much greater collimation and much smaller beam size. A near-single-line synchrotron Mössbauer source has been produced previously using pure nuclear Bragg diffraction from a ⁵⁷Fe-enriched single crystal of iron borate (⁵⁷FeBO₃) (Smirnov et al., 1997) and continues to be improved upon with good results, but high-speed shuttering offers the potential for a larger source strength and the possibility of use with other resonant isotopes. This would allow nuclear Bragg/Laue diffraction from crystalline or polycrystalline samples to be performed with greater practicality. Also, measurements of microscopic samples would benefit enormously over traditional Mössbauer spectroscopy owing to the enhanced intensity associated with a focused beam.

The primary restriction of the proposed scheme is the need for a microfocused beam in at least one dimension, along the direction of shutter motion. The position of this one-dimensional focus dictates the location of the shutter-anti-shutter pair. In a time-domain set-up where one measures time spectra as in Fig. 2, the sample could be placed at the focal spot (position C in Fig. 4), with the shutter and anti-shutter immediately downstream and upstream, respectively, of the sample environment. In the energy-domain set-up there are multiple possibilities. One option would be to place a near-single-line resonant absorber on a velocity transducer at the focal spot (position C), while the sample environment would be located immediately after the downstream shutter (position F), where the beam size will be larger (in one dimension) but with otherwise little restriction on the sample environment.
Alternatively, the sample could be placed between the shutter-anti-shutter pair, with a near-single-line resonant absorber placed after the downstream shutter on a velocity transducer. This has the advantage of a small beam at the sample and the possibility of performing both time-domain and energy-domain measurements coincidently, but has the drawback of a lower 'resonant absorption effect' (i.e. signal-to-background ratio) in the energy spectra. Note that the location of the focus associated with the orthogonal beam dimension is not significantly restricted and may be dedicated to the sample environment in either the time-domain set-up or the energy-domain set-up.

Although two-dimensional microfocusing is unnecessary in principle, it presents significant advantages in practice. In particular, a small beam (in both transverse dimensions) eases the restrictions on high-speed shutter design. For third-generation synchrotron sources, a high-speed shutter composed of a metal disc with a rotation axis parallel to the X-ray beam will need openings with a periodic spacing of 100 µm to 1 mm. The actual spacing will depend on the precise time structure of the synchrotron pulses and the operating parameters of the rotating disc. For high-repetition-rate operation, the periodic shutter spacing can become sufficiently small (e.g. 100 µm) that one might need micromachining methods to fabricate the shutter openings. In this case, two-dimensional focusing would allow acceptable aspect ratios for the micromachined features. Currently, fast shutters for X-ray beams that use rotating metal discs (0.5 mm-thick titanium alloy) with periodically spaced openings have demonstrated maximum tangential speeds in excess of 1000 m s⁻¹ (Lindenau et al., 2008). The time for an ideal shutter edge of such a device to traverse a microfocused beam of 10 µm would result in a traversal time of 10 ns. Smaller focal spots would produce even shorter traversal times. Also, one has to combine this traversal time with the phase-instability time to obtain the actual transition time from fully closed to fully open. Various X-ray switching techniques for other applications have demonstrated very fast transition times from closed to open, but are inadequate for SMS owing to low transmission when open and inadequate suppression when closed (Grigoriev et al., 2006; Tanaka et al., 2002). Improving high-speed shutter technology towards greater phase stability of the transmission window (for multi-slotted discs) will produce even higher signal rates and increase its suitability for short-lived resonant isotopes. A high-speed shutter is best suited for low-repetition-rate sources involving very high instantaneous pulse intensities. Consequently, this method can be employed for applications of SMS using an X-ray free-electron laser, to suppress the enormous quantity of electronic scattering that would otherwise overwhelm conventional methods.

TST thanks A. I. Chumakov for helpful comments. Use of the Advanced Photon Source was supported by the US Department of Energy, Basic Energy Sciences, Office of Science, under Contract No. DE-AC02-06CH11357. Use of the BioCARS Sector 14 was supported by the National Institutes of Health, National Center for Research Resources, under grant number RR007707. The time-resolved set-up at Sector 14 was funded in part through a collaboration with Philip Anfinrud (NIH/NIDDK).
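The window-truncation effect described in the Discussion can also be illustrated numerically. The sketch below evaluates the thin-absorber, single-line window spectrum written out above; the lifetime is the standard value for the 14.4 keV level of ⁵⁷Fe, while the window times are illustrative choices rather than values taken from the experiment.

```python
# Numerical illustration of the window-truncated spectral profile discussed in
# the Discussion (thin, single-line resonant absorber). The expression is the
# squared Fourier transform of an exponential decay observed only during the
# window [t1, t1 + delta]; window values are illustrative, not the paper's.
import numpy as np

TAU = 141.0e-9            # mean lifetime of the 14.4 keV level of 57Fe, s

def window_spectrum(detuning_hz, t1, delta, tau=TAU):
    """Relative spectral intensity vs detuning (omega - omega0)/2pi in Hz."""
    w = 2 * np.pi * detuning_hz
    num = np.exp(-t1 / tau) * (1 + np.exp(-delta / tau)
                               - 2 * np.exp(-delta / (2 * tau)) * np.cos(w * delta))
    return num / (w**2 + 1.0 / (4 * tau**2))

detuning = np.linspace(-30e6, 30e6, 2001)   # Hz, many natural linewidths wide

# Same window width, different start times: identical shape, different intensity.
a = window_spectrum(detuning, t1=60e-9, delta=270e-9)
b = window_spectrum(detuning, t1=120e-9, delta=270e-9)
print(np.allclose(a / a.max(), b / b.max()))     # True: shape unchanged

# Short window (~1 lifetime) vs long window (~5 lifetimes): sidebands appear.
short = window_spectrum(detuning, t1=60e-9, delta=1 * TAU)
long_ = window_spectrum(detuning, t1=60e-9, delta=5 * TAU)
print((short / short.max())[0], (long_ / long_.max())[0])   # far-wing levels
```

Normalizing the two equal-width windows gives identical line shapes, while the one-lifetime window shows a markedly higher relative far-wing level, consistent with the recommendation that the transmission window span three to six mean lifetimes.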
5,231.8
2010-11-23T00:00:00.000
[ "Physics" ]
Generation of extended plasma channels in air using femtosecond Bessel beams Extending the longitudinal range of plasma channels created by ultrashort laser pulses in atmosphere is important in practical applications of laser-induced plasma such as remote spectroscopy and lightning control. Weakly focused femtosecond Gaussian beams that are commonly used for generating plasma channels offer only a limited control of filamentation. Increasing the pulse energy in this case typically results in creation of multiple filaments and does not appreciably extend the longitudinal range of filamentation. Bessel beams with their extended linear foci intuitively appear to be better suited for generation of long plasma channels. We report experimental results on creating extended filaments in air using femtosecond Bessel beams. By probing the linear plasma density along the filament, we show that apertured Bessel beams produce stable single plasma channels that span the entire extent of the linear focus of the beam. We further show that by temporally chirping the pulse, the plasma channel can be longitudinally shifted beyond the linear-focus zone, an important effect that may potentially offer additional means of controlling filament formation. © 2008 Optical Society of America OCIS codes: (320.2250) Femtosecond phenomena; (320.7110) Ultrafast nonlinear optics; (350.5400) Plasmas References and links 1. A. Braun, G. Korn, X. Liu, D. Du, J. Squier, G. Mourou, ”Self-channeling of high-peak-power femtosecond laser pulses in air,” Opt. Lett. 20, 73–75 (1995). 2. Q. Luo, H. Xu, S. Husseini, J.-F. Daigle, F. Théberge, M. Sharifi, S. Chin, ”Remote sensing of pollutants using femtosecond laser pulse fluorescence spectroscopy,” Appl. Phys. B 82, 105–109 (2006). 3. C. Hauri, W. Kornelis, F. Helbing, A. Heinrich, A. Couairon, A. Mysyrowicz, J. Biegert, U. Keller, ”Generation of intense, carrier-envelope phase-locked few-cycle laser pulses through filamentation,” Appl. Phys. B 79, 673–677 (2004). #100088 $15.00 USD Received 12 Aug 2008; revised 17 Sep 2008; accepted 18 Sep 2008; published 19 Sep 2008 (C) 2008 OSA 29 September 2008 / Vol. 16, No. 20 / OPTICS EXPRESS 15733 4. J. Kasparian, R. Ackermann, Y. André, G. Méchain, G. Méjean, B. Prade, P. Rohwetter, E. Salmon, K. Stelmaszczyk, J. Yu, A. Mysyrowicz, R. Sauerbrey, L. Wöste, J. Wolf, ”Electric events synchronized with laser filaments in thunderclouds,” Opt. Express 16, 5757–5763 (2008). 5. C. DAmico, A. Houard, M. Franco, B. Prade, A. Mysyrowicz, A. Couairon, V. Tikhonchuk, ”Conical Forward THz Emission from Femtosecond-Laser-Beam Filamentation in Air,” Phys. Rev. Lett. 98, 235002 (2007). 6. A. Couairon, A. Mysyrowitz, ”Femtosecond filamentation in transparent media,” Phys. Rep. 441, 47–189 (2007). 7. M. Mleinek, E. Wright, J. Moloney, ”Dynamic spatial replenishment of femtosecond pulses propagating in air,” Opt. Lett. 23, 382–384 (1998). 8. W. Liu, F. Théberge, E. Arévalo, J.-F. Gravel, A. Becker, S. Chin, ”Experiment and simulations on the energy reservoir effect in femtosecond light filaments,” Opt. Lett. 30, 2602-2604 (2005). 9. G. Fibich, S. Eisenmann, B. Ilan, Z. Zigler, ”Control of multiple filaments in air,” Opt. Lett. 29, 1772–1774 (2004). 10. J. Durnin, J. Miceli, J. Eberly, ”Diffraction-free Beams,” Phys. Rev. Lett. 58, 1499–1501 (1987). 11. J. H. McLeod, ”Axicons and their uses,” J. Opt. Soc. Am. 50, 166–169 (1960). 12. G. Druart, J. Taboury, N. Guérineau, R. Haı̈dar, H. Sauer, A. Kattnig, J. 
13. V. Garcés-Chávez, D. McGloin, H. Melville, W. Sibbett, K. Dholakia, "Simultaneous micromanipulation in multiple planes using a self-reconstructing light beam," Nature 419, 145–147 (2002).
14. Y.-F. Xiao, H.-H. Chu, H.-E. Tsai, C.-H. Lin, J. Wang, S.-Y. Chen, "Efficient generation of extended plasma waveguides with the axicon ignitor-heater scheme," Phys. Plasmas 11, L21–L24 (2004).
15. P. Polesana, A. Dubietis, M. Porras, E. Kučinskas, D. Faccio, A. Couairon, P. Di Trapani, "Near-field dynamics of ultrashort pulsed Bessel beams in media with Kerr nonlinearity," Phys. Rev. E 73, 056612 (2006).
16. A. Dubietis, P. Polesana, G. Valiulis, A. Stabinis, P. Di Trapani, A. Piskarskas, "Axial emission and spectral broadening in self-focusing of femtosecond Bessel beams," Opt. Express 15, 4168–4175 (2007).
17. P. Polesana, A. Couairon, D. Faccio, A. Parola, M. Porras, A. Dubietis, A. Piskarskas, P. Di Trapani, "Observation of Conical Waves in Focusing, Dispersive, and Dissipative Kerr Media," Phys. Rev. Lett. 99, 223902 (2007).
18. S. Akturk, B. Zhou, B. Pasquiou, A. Houard, M. Franco, A. Couairon, A. Mysyrowicz, "Generation of long plasma channels in air by using axicon-generated Bessel beams," in Proc. CLEO 2008, San Jose, California, May 4–9, 2008, Paper CWI7.
19. A. Couairon, "Filamentation length of powerful laser pulses," Appl. Phys. B 76, 789–792 (2003).
20. M. Kolesik, J. Moloney, "Unidirectional Optical Pulse Propagation Equation," Phys. Rev. Lett. 89, 283902 (2002).
21. M. Kolesik, J. Moloney, "Nonlinear optical pulse propagation simulations: From Maxwell's to unidirectional equations," Phys. Rev. E 70, 036604 (2004).
22. R. Gadonas, V. Jarutis, R. Paškauskas, V. Smilgevicius, A. Stabinis, V. Vaičaitis, "Self-action of Bessel beam in nonlinear medium," Opt. Commun. 196, 309–316 (2001).

Introduction

Since the original report on the generation of extended plasma channels by intense femtosecond laser pulses in air [1], this phenomenon has been the subject of active research motivated by various potential applications such as remote spectroscopy [2], generation of few-cycle optical pulses [3], lightning control [4], and generation of THz radiation [5]. The fundamental mechanisms responsible for the stable self-guided propagation of ultrafast high-intensity laser pulses in Kerr media are now well understood, although particular details can still be puzzling due to the richness and complexity of the highly nonlinear physics involved [6]. It has been found by numerical simulations, and later confirmed experimentally, that only a small fraction of the intensity of the ultrafast laser beam is confined in the plasma channel, while the remaining portion of the beam propagates in a close-to-linear regime and is thus subject to ordinary diffraction. However, this linear photon bath is instrumental to the self-guided propagation of the plasma channel, as it continuously supplies the energy expended in plasma generation and heating [7,8].
Of particular practical interest in remote spectroscopy and lightning control is the creation of extended filaments. In theoretical and experimental studies, it is common to use fundamental Gaussian beams for the initiation of plasma channels, but Gaussian beams allow for only limited control of filamentation. In particular, in order to create a longer filament with a Gaussian beam the focusing of the beam has to be weakened, but then the wavefront distortions that are inevitably present in the beam cause spontaneous creation of filaments in the hot spots of the beam and not on the geometrical beam axis. Increasing the energy of the laser pulse in this case only leads to the creation of multiple filaments instead of extending the propagation of a single filament. Furthermore, the multiple filaments are randomly distributed within the laser beam and their locations fluctuate on a pulse-to-pulse basis. The fluctuating multi-filament pattern can be stabilized by introducing aberrations, e.g. by weakly focusing the beam with a tilted lens [9].

It has been known for quite some time that optical beams with transverse profiles in the form of a Bessel function propagate in free space in a diffraction-free manner [10]. Instead of having a localized longitudinal range where the optical intensity is high (such as the Rayleigh range of a Gaussian beam), Bessel beams have an extended linear focus. The extent of the linear focus is determined by the size of the (typically truncated) input Bessel beam. The diffraction-free nature of Bessel beams has been utilized in diverse applications of linear optics such as illumination and imaging [11,12] and optical trapping [13]. The use of Bessel beams in nonlinear optics in general, and in light-string science in particular, has been explored to a much lesser extent. Such beams have been previously used for creating few-centimeter-long high-density plasma channels for particle acceleration and X-ray generation [14]. Various experiments on filamentation in condensed media (fused silica and water) with ultrafast Bessel beams have also been reported [15,16,17]. It has been pointed out to us by one of the reviewers of this paper that an experiment on filamentation in air using a femtosecond Bessel beam has been very recently reported in a conference presentation [18]. In [18], a 50 fs-long pulse with 8 mJ of energy was focused with an axicon lens in air and created a ∼1 m-long plasma channel.

In this paper, we report experiments on generating plasma channels in air by femtosecond Bessel beams at 800 nm center wavelength, and with various pulse energies and durations. In our experiments, the extended linear focus of the Bessel beam is 2.25 m long. We found that for 50 fs-long pulses with energies of up to 14.5 mJ, the created plasma channel spans the entire linear-focus zone of the beam. In the range of pulse energies attainable in the experiments, only the central peak of the beam has sufficient intensity to create a filament, while the peripheral rings are not strong enough to initiate filamentation on their own. As a result, the single stable filament is pinned to the geometrical axis of the beam and its location experiences negligible pulse-to-pulse fluctuations. This behavior is compared with the case of filamentation of weakly focused Gaussian beams, in which increased pulse energy is shown to create multiple filaments.
A particularly interesting outcome of our experiments is an observed longitudinal shift of the filamentation region beyond the linear-focus zone that occurs for a certain value of the temporal chirp introduced into the laser pulse. The effect is found to be independent of the sign of the chirp. At the optimum pulse length (found to be in the 500 fs range) the filament is shifted beyond the linear Bessel zone by as much as 50 cm. A similar effect in the case of a Gaussian beam was previously described in [19], where the existence of an optimum pulse duration that maximizes the length of the filament was theoretically predicted. The experimentally observed longitudinal extension of the plasma channel by pulse chirping may offer additional means of control over filament formation.

Experimental setup

The experimental setup is shown schematically in Fig. 1. The high-energy femtosecond pulses are generated by a commercial Ti:Sapphire laser system that operates at a pulse repetition rate of 10 Hz and delivers up to 25 mJ of energy in a sub-50 femtosecond pulse at 800 nm wavelength. The output beam has a diameter of 11 mm (1/e² intensity), with a beam-quality factor, M², of 1.5 as specified by the manufacturer. The nearly Gaussian output beam is transformed into a Bessel beam using an axicon lens with an apex angle of 179.48°. The axicon is preceded by an iris with a diameter of 9.2 mm. Using the iris is necessary in our case in order to fit the beam into the finite-aperture axicon without using a telescope, as well as to confine the filamentation inside the laboratory space.

The approximate extent of the linear focus for an apertured beam focused by an axicon lens is given by the following expression:

z0 ≈ 2 r0 / [(n − 1)(π − α)],   (1)

where r0 is the radius of the circular aperture limiting the transverse dimension of the incident laser beam, n is the index of refraction of the lens material, and α is the tip (apex) angle of the axicon in radians. In our experimental geometry z0 equals 2.25 m. In the linear regime, at propagation distances larger than z0 the beam diffracts in the form of an expanding ring with a central dark region, so that the on-axis intensity beyond z0 is close to zero. The energy of the linearly polarized laser pulses can be continuously varied using an attenuator based on a half-wave plate followed by a polarizer. After the aperture and the axicon, the maximum pulse energy attainable from our system is 14.5 mJ, and the duration of the pulses is (50±5) fs as derived from a measurement with a single-shot intensity autocorrelator.
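As a quick numerical cross-check of the focal-zone estimate above, the short sketch below evaluates the small-angle axicon expression with the aperture radius and apex angle quoted in the text. The refractive index of fused silica near 800 nm is an assumed value, and the formula is the standard geometric estimate rather than a quotation of the paper's own equation.

```python
import math

# Small-angle estimate of the axicon linear-focus length; n is an assumed value
# for fused silica near 800 nm, the other numbers are taken from the text.
r0 = 9.2e-3 / 2                   # radius of the 9.2 mm iris (m)
n = 1.45                          # assumed refractive index of the axicon material
alpha = math.radians(179.48)      # apex (tip) angle of the axicon

deflection = (n - 1) * (math.pi - alpha) / 2   # ray deflection angle behind the axicon (rad)
z0 = r0 / deflection                           # extent of the linear focus (m)
print(f"z0 = {z0:.2f} m")                      # ~2.25 m, consistent with the value quoted above
```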
In order to access the local charge density along the filamentation path we use a simple setup shown in the bottom part of Fig. 1. In this system, two flat 3.75 cm-long electrodes are charged to 1 kV from a DC voltage source. The distance between the electrodes is 1.5 mm. In the absence of a plasma channel between the electrodes no current flows in the system, thus the voltage drop across the 1 MΩ load resistor connected in series with the plates is zero. As the femtosecond laser pulse creates a filament between the electrodes, the freed electric charges in the filament are accelerated by the DC electric field. The majority of the freed charges recombine. However, a small fraction of the charges reaches the electrodes, causing a spike of electric current through the circuit. The amplitude of this current impulse is measured by recording the impulse voltage drop across the load resistor with a self-triggered storage oscilloscope.

Direct exposure of the electrodes to the laser light is prevented by placing a flat metal screen with a 1 mm-wide slit in front of the electrodes. The screen is positioned immediately before the electrodes so that the filament passing through the slit reaches the gap between the electrodes undisturbed by the screen.

The above technique yields a direct measure of the total linear charge density in the plasma channel, spatially averaged along the 3.75 cm-long electrodes. We experimentally verified that the amplitude of the electrical signal recorded by this system is linear in the applied electric field (i.e. it is linearly proportional to the applied DC voltage and inversely proportional to the distance between the electrodes). In addition, the measurement is relatively insensitive to the exact location of the filament in the transverse plane between the electrodes. To reduce the uncertainty associated with the pulse-to-pulse fluctuations of the laser intensity, at each data point the measurement was averaged over ∼100 pulses.
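The paragraphs above describe how the oscilloscope records a voltage impulse across the 1 MΩ load resistor. A minimal post-processing sketch is given below; only the resistor value is taken from the text, while the trace amplitude and its time scale are invented for illustration, and the collected charge is only a lower bound on the plasma charge since most carriers recombine before reaching the electrodes.

```python
import numpy as np

# Hypothetical reduction of a single stored oscilloscope trace from the flat-electrode probe.
R_LOAD = 1e6                                  # load resistor from the text (ohm)
t = np.linspace(0.0, 1e-6, 1000)              # assumed time axis (s)
v_load = 0.5 * np.exp(-t / 200e-9)            # assumed voltage impulse across the resistor (V)

i_peak = v_load.max() / R_LOAD                # peak current through the circuit (A)
q_collected = np.trapz(v_load / R_LOAD, t)    # charge collected by the electrodes (C)
print(f"peak current: {i_peak * 1e6:.2f} uA, collected charge: {q_collected * 1e12:.2f} pC")
```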
To compare filamentation of the Bessel beam with that of a Gaussian beam, the conductivity measurements were first performed on a filament created by focusing the beam with an ordinary fused-silica lens. The focal length of the lens was 1.3 m, which was chosen such that the locations of the maximum intensity for the lens and for the axicon approximately coincided in the linear propagation regime. The experimental results obtained by focusing with the lens are shown in Fig. 2. In Fig. 2(a), the linear on-axis intensity is plotted as a function of the longitudinal position along the beam path. The linear intensity was measured using a photodetector with a 100 µm pinhole in front of the detector. To ensure a linear regime of propagation, the laser beam was strongly attenuated using the polarization-based attenuator and several neutral density filters placed in the beam path. The results of this measurement are in good agreement with the calculation based on the Kirchhoff diffraction integral.

Results and discussion

In Fig. 2(b), we plot the results of the conductivity measurements obtained with the setup described above, in the case of focusing with the ordinary lens. The measurements were performed for four different values of the pulse energy, as specified in the inset of Fig. 2(b).

In Fig. 3 we show the results for the case of focusing with the axicon. The approximate extent of the linear focus as given by equation (1) (z0 = 2.25 m) is indicated by the dashed vertical line. Fig. 3(a) is a plot of the on-axis intensity in the linear propagation regime. The experimental data is in close agreement with the calculation based on the Kirchhoff diffraction integral.

The experimental data for the plasma density measured with the flat-electrode setup is shown in Fig. 3(b), for four different values of pulse energy. Note that the scale of the vertical axis in Fig. 3(b) is the same as that in Fig. 2(b). From the data, the plasma density in this case is lower than that in the case of focusing with the lens; the created continuous filament is longer and spans the entire extent of the linear focus. A single stable filament is produced up to the highest pulse energy attainable from the laser system, contrary to the case of lens focusing, in which three filaments were observed at the highest pulse energy.

In Fig. 3(c) we show the results of numerical simulations of the experiment based on the Unidirectional Pulse Propagation Equation (UPPE) [20] and a phenomenological model of air [6,21]. In the figure, the linear plasma density (i.e. the integral of the total number of the generated electrons over the entire cross-section of the beam) is plotted against the propagation distance. The simulations qualitatively support our experimental observations, although there are differences. In particular, the simulations show oscillatory behavior of the linear plasma density along the entire filamentation region. Such oscillations are present in the experimental data, but only in the beginning of the filament and at low pulse energies. The discrepancy may be attributed to the non-ideal profile of the input beam used in the experiments. The results described so far were obtained using the shortest pulses attainable from our system (50 fs). In what follows, we will discuss filamentation with temporally chirped Bessel beams. We found that by chirping the pulses the filamentation region can be longitudinally shifted beyond the linear-focus zone.

The experimental results with chirped femtosecond Bessel beams are summarized in Fig. 4. The dashed vertical line indicates the approximate extent of the linear focus (1). In all cases shown, the on-axis intensity in the linear propagation regime is the same as that shown in Fig. 3(a). In Fig. 4(a), we show the linear plasma density for the highest pulse energy of 14.5 mJ, but at different durations of the chirped pulse. From the data, chirping the pulse reduces the amount of generated plasma and gradually shifts the filamentation region in the propagation direction. The longitudinal extent of filamentation is at a maximum when the pulse length equals 500 fs, and the filamentation rapidly disappears for longer pulses. We experimentally confirmed that this effect is independent of the sign of the chirp. Furthermore, the extended filamentation shows a threshold-like behavior with respect to the pulse energy, as shown in Fig. 4(b).

Shifting the filamentation zone by pulse chirping is a practically important effect, as it may potentially offer additional means of controlling filament formation. A similar phenomenon has been predicted in [19] for the case of Gaussian beams. In [19], the existence of the optimum pulse duration that maximized the length of the plasma channel resulted from the interplay between multi-photon absorption, which is higher for shorter pulses, and avalanche ionization, which kicks in once the pulse duration exceeds the electron collision time in air (τc ∼ 350 fs) [6].

In the case of a Bessel beam, additional effects may be responsible for the extension and longitudinal shift of the filament. In particular, the shift may be related to the formation of a strong on-axis wave component at the pump wavelength, an effect that has been previously reported in a condensed Kerr medium [22]. If the energy in this on-axis component reaches the critical threshold for self-focusing, it will initiate filamentation that may prolong the plasma channel formed by the primary Bessel beam.

Conclusion

We reported experimental results on filamentation of truncated femtosecond Bessel beams in air. Our experiments show that the use of Bessel beams allows for the creation of extended and stable plasma channels, and thus may be beneficial in various practical applications of filaments such as remote spectroscopy and lightning control. Additional spatial control of filamentation is possible by chirping the pulses.

Fig. 1. Schematic of the experiment. Top: Filamentation with the apertured femtosecond Bessel beam. Bottom: Setup for probing the local charge density in the plasma channel.
Fig. 2. Case of focusing with a lens with focal length of 1.3 m. a: On-axis intensity in the linear propagation regime (low intensity). Experimental data is shown with circles, solid line is a calculation based on the Kirchhoff diffraction integral. b: Plasma density along the filament probed with the flat-electrode setup. Different curves correspond to four different values of the pulse energy as specified in the inset.
Fig. 3. Filamentation initiated by the apertured Bessel beam with pulse duration of 50 fs. a: Linear on-axis intensity fitted by calculation based on the Kirchhoff diffraction integral. b: Plasma density measured for different values of pulse energy. Units on the vertical axis are same as in Fig. 2(b). c: Results of the numerical simulations for the total charge integrated over the entire cross-section of the beam, for different input pulse energy. The linear plasma density is shown in units of the number of electrons per centimeter.
Fig. 4. a: Linear plasma density for temporally chirped femtosecond Bessel beams. Pulse energy is 14.5 mJ in all cases. Different curves correspond to different pulse widths as specified in the inset. b: Same for 500 fs-long pulse. Different curves correspond to three different values of the pulse energy as specified in the inset. Units on the vertical axes of both graphs are same as in Figs. 2(b) and 3(b).
Grey Rutile TiO2 with Long-Term Photocatalytic Activity Synthesized Via Two-Step Calcination

Colored titanium oxides are usually unstable in the atmosphere. Herein, a gray rutile titanium dioxide is synthesized by two-step calcination, successively in a high-temperature reduction atmosphere and in a lower-temperature air atmosphere. The as-synthesized gray rutile TiO2 exhibits higher photocatalytic activity than that of white rutile TiO2 and shows high chemical stability. This is attributed to interior oxygen vacancies, which can improve the separation and transmission efficiency of the photogenerated carriers. Most notably, a formed surface passivation layer will protect the interior oxygen vacancies and provide long-term photocatalytic activity.

Introduction

Among titanium oxides, TiO2 is well investigated in the photocatalysis research field because of its high chemical stability, low cost, and nontoxicity [1]. However, it can only absorb ultraviolet light, resulting in low photocatalytic efficiency. To expand its light absorbance range and enhance the separation efficiency of the photogenerated carriers, many efforts, such as doping with other elements, sensitizing with dyes, and coupling with metal or nonmetal nanoparticles or different semiconductor materials, have been made to solve the aforementioned problems [2][3][4][5][6][7]. Very recently, TiO2 nanotubes synthesized via the electrochemical anodization of titanium foil exhibited visible light response characteristics for the photodecomposition of formaldehyde [8]. It has been reported that when TiO2 is partially reduced by H2 or CO, or bombarded by high-energy particles (laser, electron, or Ar+), the obtained colored TiO2 powders show visible light photocatalytic activity. In 2010, a blue titanium dioxide with a mixture of anatase and rutile phases was synthesized via hydrolysis and the reduction of isopropyl titanium, showing a higher photocatalytic activity than that of commercial anatase TiO2. The higher photocatalytic activity was attributed to the presence of Ti3+ in the interior of the titanium dioxide crystal [9]. In 2011, Giamello et al. used an isotope labeling method to study the existence of Ti3+ in rutile titanium dioxide in detail [10]. In the same year, a black TiO2 with a strong absorption of visible light was synthesized via a high-temperature hydrogenation reduction of P25 TiO2 by Chen et al. The higher photoactivity of the black TiO2 was attributed to the reduced band gap of titanium dioxide caused by the generation of a surface disordered structure [11]. Another simple method to produce colored TiO2 is the addition of fluorine species during TiO2 preparation [12,13]. In 2014, Xu et al. synthesized stable blue TiO2 nanoparticles with a non-stoichiometric TiO2−x core and stoichiometric TiO2 shell structure for the photodecomposition of methylene blue (MB) dyes under visible light irradiation [12].

Materials

Hexanoic acid (HA), tetrabutyl titanate (TBOT), methylene blue (MB) dye, and glucose were purchased from Sinopharm Chemical Reagent Company. All chemicals were of AR grade. The ultrapure water used in the experiments was obtained from a Milli-Q (electric resistivity 18.2 MΩ·cm) water purification system.

Synthesis of TiO2-GR and TiO2-WR

First, uniform spherical anatase TiO2 particles (Figure 1a and Figure S1a) with a diameter of 200-300 nm were synthesized via a previously reported method [33].
In a typical process, hexanoic acid (0.46 g) dissolved in ethanol (230.0 mL) and TBOT (1.70 g, 10% ethanol solution) were mixed by stirring at room temperature. Then, 35.0 mL of H2O was dropped into the mixture with vigorous stirring for 12 h at room temperature. The products were obtained after centrifugal separation and were then ready for the subsequent two-step calcination procedure. Second, the as-prepared TiO2 nanospheres were first calcined in a tubular high-temperature furnace with a continuous argon flow at 900 °C for 3 h. They were then further calcined at 500 °C in an air atmosphere for 10 h, and gray rutile TiO2 particles with polyhedron morphology were obtained (TiO2-GR, Figure 1b and Figure S1b). A white rutile TiO2 used as a reference sample (TiO2-WR, Figure 1c and Figure S1c) was prepared by the calcination of the as-prepared TiO2 nanospheres at 900 °C for 3 h in an air atmosphere. The as-prepared photocatalysts were stored in an air atmosphere at room temperature.

Photocurrent Measurements

The photocurrent measurements were carried out on an electrochemical analyzer (CHI660D Instruments, Shanghai Chenhua Instrument Co., Ltd., Shanghai, China) using a standard three-electrode system. The as-prepared samples, a commercial Pt gauze electrode (Gaoss Union Technology Co., Ltd., Wuhan, China, 2 cm × 2 cm, 60 mesh), and a saturated calomel electrode were used as the working electrode, counter electrode, and reference electrode, respectively. The working electrode was prepared as follows: 0.05 g of the sample was ground with 0.10 g terpinol for 10 min to make a uniform slurry. Then, the slurry was evenly dripped onto a 4.0 cm × 1.0 cm indium tin oxide-coated glass (ITO glass) electrode masked by an adhesive tape with a thickness of 0.5 mm and smoothed with a doctor's blade; the formed film therefore had a thickness of about 0.5 mm. Next, these electrodes were dried in an oven and calcined at 350 °C for 30 min in an air atmosphere. The electrode was immersed in a 0.10 M NaClO4 aqueous solution to measure the transient photocurrent under 300 W Xe arc lamp irradiation with an incident light power density of 130 mW/cm2 at 0.4 V vs. the saturated calomel electrode.
Photoactivity Measurements

The photocatalytic discoloration of MB dyes was performed on a reformative XPA-7 photocatalytic reaction instrument (Xujiang Electromechanical Plant, Nanjing, China). The incident light power density was 162 mW/cm2, measured by a handheld optical power meter (Newport 1916-R, Newport Corporation, CA, USA). The light exposure area of the quartz bottle was about 19.1 cm2. The discoloration effect was measured using the absorption spectroscopic technique. In a typical process, an aqueous solution of the MB dyes (10.0 mg/L, 30.0 mL) and 20.0 mg of the as-prepared photocatalysts were mixed in a 50 mL cylindrical quartz tube and left overnight in darkness to reach the adsorption equilibrium for the MB dyes. Then, the mixture was exposed to 1000 W Xe lamp irradiation with or without light cutoff filters (λ > 420 nm), under ambient conditions and magnetic stirring. At given time intervals, the reaction solution was sampled and analyzed by a UV-visible spectrophotometer (UV 2250, Shimadzu, SHIMADZU (CHINA) Co., Ltd., Shanghai, China).
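The section above only states that the MB absorbance is tracked with a UV-vis spectrophotometer. A minimal sketch of how such data are commonly reduced is given below; treating the relative absorbance at the MB absorption maximum (about 664 nm) as C/C0 via the Beer-Lambert law and fitting an apparent first-order rate constant are assumptions of this illustration, and the sample values are invented, not taken from the paper.

```python
import numpy as np

# Hypothetical reduction of a photocatalytic MB discoloration run (values are not from the paper).
irradiation_time_min = np.array([0, 20, 40, 60, 80])          # assumed sampling times (min)
absorbance = np.array([1.00, 0.74, 0.55, 0.41, 0.30])          # assumed absorbance near 664 nm

c_over_c0 = absorbance / absorbance[0]                         # relative dye concentration (Beer-Lambert)
k_app, _ = np.polyfit(irradiation_time_min, -np.log(c_over_c0), 1)  # apparent first-order rate constant
print(f"apparent rate constant: {k_app:.4f} per minute")
```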
Results and Discussion

A spherical anatase TiO2 (Figure 1a and Figure S1a) with a white color was fabricated as the raw material for the gray TiO2-GR via the hydrolysis of TBOT in the presence of alkyl chain carboxylic acids [33]. It was determined that alkyl chain carboxylic acids remained on the surface of the TiO2 nanospheres, which were used as a reductant for the subsequent high-temperature reduction of the titanium dioxide [4]. Both the gray rutile TiO2 and the reference sample (TiO2-WR) exhibited polyhedron morphology (TiO2-GR, Figure 1b and Figure S1b). After calcination at 900 °C in an Ar atmosphere, it can be seen from the High Resolution Transmission Electron Microscope (HRTEM) pattern that a surface layer composed of a large number of microcrystals surrounded by a disordered structure formed on the obtained gray TiO2 (Figure S2a). The disordered structure is believed to be mainly caused by the presence of oxygen vacancies, which are responsible for the black color of TiO2 [14,30]. Then, after further calcination at 500 °C in an air atmosphere, a dense layer with an ordered lattice was formed by the refilling of oxygen atoms into the oxygen vacancies on the outermost layer of the TiO2 particles (Figure S2b). The formed dense layer, with a size of 2-5 nm, lies on the outermost layer of the TiO2-GR particle and would act as a surface passivation layer to hinder the further diffusion and infiltration of oxygen molecules into the interior oxygen vacancies. As a result, the interior lattice disordered structure would be retained. The lattice width of the surface passivation layer is 0.21 nm, which is ascribed to the (210) crystal faces of the rutile TiO2 (JCPDS 21-1276; Figure S2b). However, no such surface layer structures were observed on the surface of the TiO2-WR particles (Figure S2c); the lattice widths are 0.32 nm and 0.25 nm, which belong to the (110) and (101) crystal faces of rutile TiO2, respectively.

The Rietveld analysis (TOPAS V 6.0) of the XRD patterns shows that TiO2-GR has an average particle size of 48.5 nm. This result is different from that intuitively observed from the HRTEM patterns, which is attributed to the different detection areas of XRD and HRTEM; the HRTEM patterns mainly reflect the surface layer crystal structure of the TiO2 particles. Therefore, it can be inferred that the as-synthesized TiO2-GR nanoparticles are mainly composed of microcrystals with an average size of about 48.5 nm, while their surface layers are composed of smaller microcrystals. Additionally, the analysis results indicate that the broadening of the diffraction peaks is mainly due to grain refinement, and no microstrain is present. However, compared with the cell parameters of the standard rutile TiO2 (PDF# 87-0710), both TiO2-GR and TiO2-WR show a lattice expansion, with average lattice distortions of 0.11% and 0.13%, respectively. It is proposed that this is mainly caused by the different treatments during the high-temperature calcination process. No peaks centered at 2θ = 25.9°, ascribed to carbon (JCPDS 26-1079), were observed in the XRD patterns of the gray TiO2 [34].
The chemical state of the surface species of TiO2-GR and TiO2-WR was determined by X-ray photoelectron spectroscopy (XPS) and further analyzed by an XPS peak-fitting program (version 4.0, Hong Kong, China). The C 1s XPS peaks of TiO2-GR, centered at 284.6 eV (FWHM = 4.55 eV) and 282.9 eV (FWHM = 1.96 eV) and similar to those of TiO2-WR, were ascribed to the *C and (*CO)Ti species caused by the carbon contaminant (Figure 3a,b) [35,36]. It was reported that if a carbon atom is doped into the crystal lattice of TiO2, a bonding energy peak ascribed to C* or Ti*-C emerges at 281.6 eV or 454.90 eV, respectively [37,38]. However, no such carbon bonding energy peaks were observed for either TiO2-GR or TiO2-WR. This indicates that there were no carbon atoms doped into the crystal lattice of the prepared gray TiO2, which means that the gray color did not originate from carbon residues. As shown in Figure 3c,d, the Ti 2p XPS peaks of TiO2-GR were centered at 464.53 eV (FWHM = 2.84 eV) and 458.20 eV (FWHM = 3.82 eV), ascribed to Ti4+ 2p1/2 and Ti4+ 2p3/2 of TiO2, respectively [39], similar to those of TiO2-WR. Two reduced titanium ion XPS peaks centered at 462.43 eV (FWHM = 3.91 eV) and 455.92 eV (FWHM = 2.63 eV) were observed for TiO2-GR, which could be ascribed to the low valence state titanium of nonstoichiometric TiO2−x (0 < X < 2), mainly including Ti3+ 2p1/2 of Ti2O3 and Ti2+ 2p3/2 of TiO [40,41]; this is consistent with the O 1s XPS peak results. However, TiO2-WR showed no reduced titanium ion XPS peaks. The O 1s XPS peaks mainly consisted of three components (Figure 3e,f). The two peaks centered at 529.47 eV (FWHM = 3.30 eV) and 531.58 eV (FWHM = 4.48 eV) were ascribed to the lattice oxygen of the stoichiometric TiO2 [42] and nonstoichiometric TiO2−x (0 < X < 2) [43,44], respectively, and the latter may also include some hydroxyl oxygen species [45]. The small O 1s peak centered at 527.51 eV (FWHM = 2.21 eV) could be attributed to the attached ionic oxygen of CO or O2 [46].
It has been reported that the produced Ti3+ originates from the oxygen vacancies on the surface of the gray TiO2. The removed oxygen atoms leave behind two excess electrons per oxygen vacancy, which can be captured by the neighboring Ti atoms and induce the formation of Ti3+ ions showing EPR signals [47]. Therefore, Electron Paramagnetic Resonance (EPR) is a powerful method for identifying the presence of oxygen vacancies in solid materials. A low-field signal with a g-value close to the free-electron value (g = 2.0023) is generally attributed to an unpaired electron trapped on an oxygen vacancy site [11]. Herein, as shown in Figure 4, an EPR signal with a g-value of 1.997 is attributed to the Ti3+ centers in the rutile phase environment. As a comparison, no EPR peaks at the same position were observed in the TiO2-WR EPR signals. It is believed that surface Ti3+ would tend to adsorb atmospheric O2, which would be reduced to O2− and show an EPR signal at g ≈ 2.02 [11]. The absence of such a peak in the TiO2-GR EPR signals indicates that, after the long calcination in an air atmosphere, the surface oxygen vacancies are refilled by oxygen atoms and the Ti3+ is mainly present under the formed surface passivation layer, which is proposed as a key factor for the observed excellent stability of TiO2-GR.
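For readers unfamiliar with how g-values such as the 1.997 quoted above are extracted, the sketch below applies the EPR resonance condition g = hν/(μB·B). The microwave frequency and resonance field are hypothetical X-band numbers chosen only to reproduce a value near 1.997, not settings reported in the text.

```python
# Resonance condition of EPR: h * nu = g * mu_B * B. The field and frequency below are
# hypothetical X-band values, not experimental settings from the paper.
H_PLANCK = 6.62607015e-34      # Planck constant (J s)
MU_BOHR = 9.2740100783e-24     # Bohr magneton (J/T)

freq_hz = 9.50e9               # assumed microwave frequency (Hz)
b_res_t = 0.3399               # assumed resonance field (T)

g = H_PLANCK * freq_hz / (MU_BOHR * b_res_t)
print(f"g = {g:.3f}")          # ~1.997, the Ti3+ signature discussed above
```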
The photocatalytic activity of the as-prepared TiO2-GR was determined by the photocatalytic discoloration of the MB dyes. As shown in Figure 5a,b, the gray TiO2 showed a higher photoactivity than that of TiO2-WR under visible light or full-spectrum light irradiation. Herein, different from most of the previously reported colored TiO2 materials, the as-synthesized gray TiO2 showed a long-life photocatalytic activity (Figure 5c,d). The TiO2-GR samples retained their color and properties even after six months of storage, and did not exhibit any reduction in their photocatalytic activity after six photocatalysis cycles. This indicates that the gray TiO2 has an excellent chemical stability, which is ascribed to the special nanostructure caused by the two-step calcination treatment. As shown in Figure S2b, the formed surface passivation layer will protect the interior oxygen vacancies. As a result, the Ti3+ in the most superficial layer will disappear, which has been confirmed by the aforementioned EPR results. The thermal stability of TiO2-GR was further studied by thermogravimetric analysis (TGA) in open air, as shown in Figure S3. The sample was thermally stable up to 650 °C in open air, with negligible weight variation. The slight weight gain and loss before 125 °C are ascribed to the adsorption and desorption of O2, CO2, or H2O on the surface of TiO2-GR in the air. A distinguishable weight loss from 380 °C is ascribed to the dissociation of the surface -OH. Above 650 °C, the obvious weight increase is ascribed to the refilling of the interior oxygen vacancies, indicating that the surface passivation layer would be destroyed at this temperature. This shows that the as-prepared TiO2-GR has high thermal stability.

The higher photocatalytic activity is attributed to the presence of oxygen vacancies, which can create a higher light absorbance and improve the separation and transmission efficiency of photogenerated carriers; this is preliminarily confirmed by the UV-vis spectra, photocurrent, and photoluminescence spectra. The suggested photocatalysis mechanism is shown in Figure 6a. It is proposed that the photocatalysis of TiO2-GR may undergo two different photogenerated carrier transfer pathways when pumped by UV light and visible light separately. In both pathways, the oxygen vacancies play a vital role.
Compared with TiO2-WR, the as-prepared TiO2-GR exhibits a broad spectral absorption in the visible light region (Figure 6b). This can be attributed to transitions from the TiO2 valence band to the oxygen vacancy levels, or from the oxygen vacancies to the TiO2 conduction band, pumped by visible light [30], which is responsible for the distinguishably higher photoactivity of TiO2-GR compared with TiO2-WR. These results are consistent with the photocurrent density results under visible light irradiation (Figure 6c). This indicates that, in this case, the visible light absorption of TiO2-GR does lead to charge carrier generation and contributes directly to the photocurrent. However, as shown in Figure 6c,d, the photocurrent density under a full-spectrum light condition is about 100 times that under a visible light condition, which indicates that the contribution of visible light to the improvement of the photocatalytic activity is very limited. This is consistent with previously reported results [48]. Therefore, it is proposed that the main factor for the higher photocatalytic activity of TiO2-GR is the photogenerated carrier transfer path pumped by UV light [49]. In this process, the oxygen vacancies (Vo) are still proposed to be a key factor for the improvement of the separation efficiency of the photogenerated carriers. First, the Vo can act as trap sites for the temporary storage of electrons, which can be further pumped to the conduction band to react with the substrates, resulting in a suppressed recombination of photogenerated carriers [50]. Herein, the suppressed recombination of photogenerated carriers is preliminarily confirmed by the photoluminescence spectra. If the recombination of carriers were suppressed, the photoluminescence of the semiconductor material would be quenched to some degree [51,52]. As shown in Figure 6e, compared with the reference TiO2-WR sample, the TiO2-GR samples show a much lower photoluminescence intensity. This indicates that the as-prepared gray TiO2-GR has a much higher photoinduced charge separation efficiency than the white TiO2-WR material. In addition, because of the presence of free electrons bound loosely to the titanium atoms in the oxygen vacancies [47], the surface electric conductivity of TiO2-GR will be improved, thereby improving the carriers' transmission efficiency [26], which is also helpful for improving the photocatalytic activity.
Conclusions

In summary, gray rutile titanium dioxide was synthesized via two-step calcination, performed successively in a high-temperature reduction atmosphere and in a lower-temperature air atmosphere. The results indicate that, compared with the white rutile titanium dioxide, the as-prepared gray titanium dioxide exhibits the typical characteristics of black- or blue-colored TiO2, such as the presence of Ti ions in a low valence state, a surface disorder structure, and oxygen vacancies, which are caused by the loss of oxygen atoms under reduction reaction conditions. According to previous reports [14], it is proposed that the presence of Ti3+ or a surface disorder structure is mainly induced by oxygen vacancies. The as-synthesized gray titanium dioxide exhibits a higher photocatalytic activity than does white rutile TiO2. This is attributed to the interior oxygen vacancies, which can create a higher light absorbance and improve the separation and transmission efficiency of photogenerated carriers. Most notably, it is proposed that the two-step calcination produces a surface passivation layer on the surface of the gray titanium dioxide particles, protecting the interior oxygen vacancies and thereby providing long-term photocatalytic activity. This study provides a considerable reference for the design and synthesis of other semiconductor photocatalysts rich in oxygen vacancies, with high activity and high stability.
Impact of intelligent agents on the avoidance of spontaneous traffic jams on two-lane motorways

This paper addresses the evaluation of intelligent agents for the reduction and avoidance of spontaneous traffic jams, which arise without an evident reason. Individual vehicles are regarded as intelligent agents that act autonomously. The basis of this work is the Nagel-Schreckenberg (NaSch) model. Its extensions by the velocity-dependent randomization (VDR) model and multiple lanes allow us to simulate realistic traffic and congestion situations on two-lane motorways. Our concept is applied to the model and analyzed using, for example, fundamental diagrams and the average velocity. The results of this paper reveal that traffic jams are avoided when swarm intelligence is used in all vehicles, since human behavior, especially misbehavior, is eliminated and the velocities determined by the intelligent vehicle are directly realized. Moreover, a share of 30% of intelligent vehicles has a significantly positive impact on traffic flow.

Introduction

Climate change is a defining keyword of our time. The reduction of carbon dioxide emissions plays a fundamental role here. Compared to pre-industrial times before 1750, the CO2 concentration has increased by 40% due to human activity [1]. The main source of greenhouse gases is the combustion of fossil fuels, of which a significant amount is attributable to road traffic [2]. The reduction of vehicle emissions is, therefore, essential for climate protection. A high proportion of CO2 emissions is caused by traffic congestion. In addition to traffic jams caused by accidents or construction sites, so-called phantom traffic jams often occur on motorways. The main reasons for them are overloaded roads and the resulting high traffic density. Then small disturbances, such as overreaction, sudden overtaking, or dawdling, are sufficient to form a spontaneous traffic jam [3], [4]. The NaSch model is suitable for macroscopic traffic description by microscopic observation of individual vehicles. It is especially useful for simulating phantom traffic jams. In recent years the model has been widely used and adapted to real measurements, for example in [5], [6]. In addition to its use to investigate traffic flow, the model has also been used to examine different traffic phases and transitions [7]-[9]. The model can also be used to investigate approaches to congestion reduction. In our opinion, there is potential for further investigations, especially when considering phantom jams. For this purpose, on the one hand, the influence of vehicular ad hoc networks (VANETs) has been evaluated [10]; on the other hand, the use of reinforcement learning has been analyzed [11] in recent works. Other studies use the open-source traffic simulation package SUMO (Simulation of Urban Mobility) [12], [13], or a swarm-based approach [14]. As far as we know, the avoidance of phantom jams based on the extended NaSch model and the use of intelligent agents in different vehicle types has not yet been sufficiently simulated and investigated. This paper presents the suitability of intelligent vehicles for avoiding spontaneous traffic jams on motorways, taking into account the obligation to drive on the right. For this purpose, we have implemented the NaSch model and its extensions, including human behavior. The application and potential of our concept are analyzed and evaluated using, for example, fundamental diagrams and the average velocity.
In chapter 2, the NaSch model and its extensions, as well as the functionality of the intelligent vehicles used in our approach, are described. In addition, our implementation is presented. In chapter 3, the results are presented and discussed. The conclusion is given in chapter 4.

Nagel-Schreckenberg model

The NaSch model is a cellular automaton for single-lane traffic. Space, time, and also the velocities are discrete. The road is divided into cells that can either be empty or contain exactly one vehicle. The velocity of each vehicle i at time t is described by an integer value vi(t) = 0, 1, 2, ..., vmax,i, where vmax,i is the maximum velocity. In each time step, the following four rules are applied to all cars in parallel [15]:

(1) Acceleration: vi → min(vi + 1, vmax,i)
(2) Slowing down due to vehicles driving ahead: vi → min(vi, di)
(3) Randomization: with probability p, vi → max(vi − 1, 0)
(4) Motion: xi(t + 1) = xi(t) + vi

The first rule clarifies the desire of drivers to move at the maximum possible speed in order to reach their destination as fast as possible. The avoidance of accidents is modeled by the second rule, where di is the distance to the next vehicle ahead. By using the probability p in the third rule, the traffic model is adapted to real behavior, since no driver can maintain a constant velocity over a certain distance. The last rule defines the motion. A vehicle at position x(t) at time t is then moved forward by its recalculated velocity vi during the transition to the next time step t → t + 1.

Velocity-dependent randomization model

The NaSch model is not able to reproduce complex phenomena such as hysteresis or metastable states, as they are observed in real traffic situations. Within an extension, the VDR model, this behavior can be simulated [5]. In this model, the randomization parameter p(v) depends on the velocity v, so that delayed moving off is modeled. In an initial step, this parameter is calculated as follows, where p0 > p:

(0) Randomization parameter for deceleration: p(v) = p0 if v = 0, and p(v) = p if v > 0

After step (0), rules (1)-(4) of the NaSch model are executed.

Two-lane model

As we aim to investigate the formation of spontaneous traffic jams on motorways, we have to integrate a second traffic lane and introduce lane-changing rules [6]. Here, asymmetric rules are used because, in many countries, it is only allowed to overtake on the left lane. In this case, a driver can change from the right to the left lane if the number of cells ahead on the same lane gapr is too low, compared to a chosen dr, and the numbers of cells forward gapl,ahead and backward gapl,back on the other lane are large enough, compared to dl,ahead and dl,back, respectively. Furthermore, there is a lane change probability pchange, so that lane changes happen stochastically:

gapr < dr (6)
gapl,ahead ≥ dl,ahead (7)
gapl,back ≥ dl,back (8)
rand() < pchange (9)

Due to the obligation to drive on the right lane, a vehicle should change to the right, if possible, irrespective of the situation on the left lane. Therefore, the following three conditions are maintained, where the distances on the right lane gapr,ahead and gapr,back are taken into account and compared with dr,ahead and dr,back, respectively:

gapr,ahead ≥ dr,ahead (10)
gapr,back ≥ dr,back (11)
rand() < pchange (12)

A lane change takes place before the second rule of the NaSch model is applied; the update can thus be divided into two sub-steps: first, a vehicle is moved sideways onto the target lane without moving forward, and then the safety distance is maintained.
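To make the update rules above concrete, the sketch below implements one parallel time step of the single-lane NaSch model with the VDR slow-to-start rule on a circular road. It is a minimal illustration rather than the authors' MATLAB implementation: the two-lane rules (6)-(12) and the intelligent-vehicle behavior are omitted, and the array-based representation is an assumption.

```python
import numpy as np

def nasch_vdr_step(position, velocity, v_max, p, p0, road_length, rng):
    """One parallel update of the single-lane NaSch model with the VDR rule.

    position, velocity, v_max: integer arrays (one entry per vehicle), with positions
    sorted along a circular road of `road_length` cells.
    """
    n = len(position)
    # (0) velocity-dependent randomization: p0 for standing vehicles, p otherwise
    p_eff = np.where(velocity == 0, p0, p)
    # number of empty cells to the predecessor (periodic boundary conditions)
    gap = (np.roll(position, -1) - position - 1) % road_length
    # (1) acceleration towards the individual maximum velocity
    v = np.minimum(velocity + 1, v_max)
    # (2) slowing down due to the vehicle driving ahead
    v = np.minimum(v, gap)
    # (3) randomization: dawdle with probability p_eff
    v = np.where(rng.random(n) < p_eff, np.maximum(v - 1, 0), v)
    # (4) motion
    return (position + v) % road_length, v

# Example: three vehicles on a 400-cell ring
rng = np.random.default_rng(0)
pos = np.array([2, 10, 50]); vel = np.array([3, 5, 4]); vmax = np.array([5, 7, 6])
pos, vel = nasch_vdr_step(pos, vel, vmax, p=0.2, p0=0.7, road_length=400, rng=rng)
```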
They drive independently so that human intervention is not necessary. We suppose that the vehicles are equipped with the required sensors to precisely capture the environment, such as ultrasonic and radar sensors, cameras and lidar scanners. That gives them accurate information about the distances to other vehicles and the corresponding velocities within a specific range. Implementation For the implementation in MATLAB, the enhanced NaSch model with a length of 400 cells is used. Each cell has a length of 7.5 m, which is equal to the space a car occupies on average in a jam. A total number of 300 time steps is simulated, where one time step is equal to 1 s, and a closed system with a global density of 12% is used. The average values of 100 executed simulations are considered in the following chapter. Each vehicle is initialized with a maximum possible velocity vi,max between 3 and 7 cells per time step. It is not possible to accelerate beyond this value. With these different maximum velocities, we are able to model different types of vehicles, such as trucks and sports cars. In this way, we achieve a more precise division of different vehicles than in other works, in which two vehicle types are considered [16], [17]. The velocities are initialized with a normal distribution with the expected value E = 5. Besides, each vehicle receives an individual lane change probability pchange between 0.6 and 1.0. This value differentiates between drivers who use every possible opportunity to change the lane and drivers who would rather stay in their current lane. The randomization parameters for deceleration p and p0 are selected once between 0.1 and 0.3 and between 0.6 and 0.9, respectively, at the beginning of the simulation. As soon as a vehicle needs more than four time steps to start, it is assigned p, which increases the likelihood of starting in the following time step. In contrast to other approaches, we perceive this as more realistic modeling since in real traffic situations, other road users would alert the drivers who are not moving off. A lane change to the left is considered if the number of free cells to the vehicle in front is less than its velocity value, compare (6). The conditions for a transition to the right lane are checked in each time step. The parameters for the conditions (7), (8) and (10), (11) are set to: dl,ahead = 3, dl,back = 4, dr,ahead = vi and dr,back = 3, where vi is the current velocity of the vehicle i. The aim is to simulate more realistic traffic in which vehicles change lanes even though there is not always enough distance for the rear driver on the other lane to keep its previous velocity. In the applied model, no-passing zones are not considered. Our simulated autonomous vehicles operate via two functions: On the one hand, the vehicles only change lanes if they do not force the vehicle approaching from behind to decelerate, neither in the current time step nor in the following time step. On the other hand, the vehicles are able to adapt to the vehicle in front and its driving behavior due to knowing its velocity in each time step. Results and discussion To analyze the influence of our concept, the model itself is first executed without autonomous vehicles. The resulting congestion situations are evaluated and a reference is obtained. With this reference, the results of the model using autonomous vehicles can be compared. For the purpose of generating the reference, the term congestion is defined.
This describes the situation in which at least four vehicles in a row come to a standstill, equivalent to v = 0. Intelligent vehicle rate of 100% The use of fully automated vehicles is still a future vision and is considered as a significant research goal. Here, an autonomously driving vehicle replaces the human driver and, thus, eliminates human imperfections such as random behavior. Our results prove these assumptions: Not a single congestion situation is identified, when this concept is applied. Regarding the average mean velocities in Figure 1, it can be seen that the velocity distribution shifts towards higher speeds. In addition, we can observe that no vehicle drives below a speed of v = 3 on average. According to 2.5, this is the minimum initialization velocity. This only occurs due to the fact that no vehicle randomly reduces its speed. As a result, the vehicle behind does not have to decelerate and therefore does not decline beneath the limit of v = 3. This implies that no traffic jams are able to occur. Intelligent vehicle rate of 5% Since a complete adoption of autonomous vehicles is very unlikely for the foreseeable future, we consider also the use of just a certain percentage of intelligent vehicles. That allows us to determine to what extent it is worth driving autonomously to avoid traffic jams. So, we combine both intelligent vehicles and human drivers in the simulations. We take [19] as a basis, which claims that a percentage of 5% of intelligent vehicles is sufficient to harmonize traffic. The analysis of our approach reveals that the number of identified traffic congestions increases by 19.8%, which contradicts the statement in [19]. Over time, the number of traffic jams decreases continuously when no swarm intelligence is used. Considering the use of 5% intelligent vehicles, the number increases initially, instead. Here, most traffic jams occur between the time steps 100 and 150, after which the amount of traffic jams lessens again. This is shown in Figure 3, where the left diagram shows the temporal occurrence of traffic congestion of the reference, and the right diagram illustrates those results when using 5% swarm intelligence. A possible cause for this is the initialization. The vehicles are initialized in a way so they can maintain their initial speed in the first time step in any case. The comparatively few vehicles with integrated intelligent systems then adapt to the vehicles of their environment. They do not push or dawdle. After some time, human behavior predominates, resulting in dawdling or sudden overtaking. The comparatively small number of automated vehicles cannot compensate for this behavior. Traffic jams arise, but at a later point in time. The maximum congestion width, as seen in Table 1, which means the number of vehicles involved in a jam, and the length of a congestion are further global characteristics we investigate. Considering these, a comparatively small positive effect caused by the few fully automated vehicles is observed. The comparison of average speeds shows a large similarity between our reference simulations and those with a 5% proportion of intelligent vehicles. This behavior is presented in Figure 4. Although shorter traffic congestions are observed, the increase in the number of traffic jams does not result in a significant difference of the velocity distribution. The vehicles are temporally shorter but more often involved in traffic congestions. 
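The congestion criterion described above (at least four vehicles in a row at a standstill) can be expressed as a small post-processing routine. The sketch below assumes that "in a row" means directly adjacent occupied cells on one lane and that a lane is stored as an array with -1 for empty cells and the velocity otherwise; both the representation and the function name are illustrative choices, not the paper's code.

```python
import numpy as np

def count_jams(lane, min_run=4):
    """Count congestion events on one lane snapshot: runs of at least
    `min_run` adjacent occupied cells whose vehicles stand still (v = 0).
    lane: 1-D int array, -1 for an empty cell, velocity >= 0 otherwise."""
    standing = np.asarray(lane) == 0
    jams, run = 0, 0
    for cell_is_standing in standing:
        run = run + 1 if cell_is_standing else 0
        if run == min_run:   # register the jam once when the run reaches the threshold
            jams += 1
    return jams
```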
A comparison of the corresponding fundamental diagrams, here again using the example of the right-hand traffic lane, also shows no significant improvements in traffic flow when utilizing intelligent systems in 5% of all vehicles. The diagram in Figure 5 is almost identical with the upper diagram in Figure 2, which illustrates the results of our reference. Further intelligent vehicle rates In this respect, additional intelligent vehicle amounts of 10%, 20%, 30%, 40%, and 50% are investigated. The observed results are presented in Table 1. Evidently, the number and length of traffic jams decrease as the number of fully automated vehicles increases because they behave predictably. It is noticeable, however, that the maximum congestion width increases and therefore the number of vehicles involved. A possible cause could be convoys, which occur in about one-third of the simulations and are mainly caused by human drivers inside the simulation. Due to the imperfections of such drivers, individual dawdling vehicles may be sufficient to put the whole convoy into congested traffic flow. The tendency for a traffic jam to occur decreases, but if a traffic jam occurs, comparatively more vehicles are involved. However, the fact that autonomous vehicles can start directly in the next time step means that these traffic jams dissolve more quickly. Furthermore, it can be observed that the average velocity of vehicles increases with a growing percentage of swarm intelligence. The reason for this is the lower number of traffic jams in which the vehicles lose their speed. Another parameter studied in this paper is the ratio of average velocity to the maximum possible velocity of each vehicle. In addition to an undisturbed and comfortable journey, every driver also wishes to reach his destination as soon as possible. Obviously, this ratio also increases with an enhanced number of autonomous vehicles regarding the comparison. Taking the results into account, we consider that a number of intelligent vehicles of around 30% has a positive impact on traffic flow. The amount of congestion situations shows a significant decrease. The length of traffic jams is reduced by more than a half and the average speed is increased significantly. Conclusion The purpose of this paper is the evaluation of swarm intelligence to reduce and avoid spontaneous traffic jams on two-lane motorways. The basis was the extended NaSch model, which is suitable to model realistic traffic. By knowing the distances and velocities to other vehicles within a certain range, autonomous vehicles were able to optimize the timing of their overtaking. This prevents the deceleration behavior of the vehicle behind. Besides, a predictive behavior with an orientation based on the velocity of the vehicle ahead has been implemented, resulting in more consistent motion. At first, the influence of swarm intelligence implemented in all existing vehicles has been evaluated. The results reveal no congested situations. The intelligent vehicles calculate the time to a potential collision in each time step to adjust their speed accordingly. In this way, they prevent abrupt braking of their vehicle and, therefore, of the vehicle behind them, so that traffic congestions cannot occur. The average velocities increase and the traffic flow is undisturbed. However, the use of completely autonomous driving vehicles for private purposes is currently not permitted due to the non-existing legal situation. The adoption of autonomous vehicles would proceed gradually. 
For this reason, the influence of swarm intelligence installed in only 5% of all vehicles was investigated second. However, the results show no positive impact compared to the reference, in which only human drivers were simulated, since human behavior still predominates. Only the use of intelligent vehicles in about one-third of all existing vehicles leads to a significant reduction of traffic jams and thus to a harmonization of traffic flow. To summarize, our results show that a future legal approval of autonomous vehicles would have many advantages in terms of improved traffic flow. Complete avoidance of congestions would not only lead to a faster arrival at the destination, but also to a reduction in vehicle emissions.
4,132.8
2020-01-01T00:00:00.000
[ "Computer Science" ]
Accuracy of the typicality approach using Chebyshev polynomials Trace estimators allow one to approximate thermodynamic equilibrium observables with astonishing accuracy. A prominent representative is the finite-temperature Lanczos method (FTLM), which relies on a Krylov space expansion of the exponential describing the Boltzmann weights. Here we report investigations of an alternative approach which employs Chebyshev polynomials. This method also turns out to be very accurate in general, but shows systematic inaccuracies at low temperatures that can be traced back to an improper behavior of the approximated density of states with and without smoothing kernel. Applications to archetypical quantum spin systems are discussed as examples. I. INTRODUCTION The (numerically) exact evaluation of thermodynamic quantum equilibrium observables is restricted to small systems due to the exponential growth of the Hilbert space for systems with finite-size single-site Hilbert spaces such as Heisenberg or Hubbard models. For quantum systems with unrestricted single-site spaces the situation is even more severe. Only very few analytically solvable systems are known, which creates a massive need for numerical (approximation) schemes. One rather successful means to approximate thermodynamic quantities rests on trace estimators which approximate a trace by an expectation value with respect to a random vector [1][2][3][4][5][6][7][8][9][10][11][12]. These schemes, sometimes also called typicality or (microcanonical) thermal pure quantum states [13][14][15][16], have been used very successfully in particular in the field of correlated electron systems, but also in quantum chemistry [38,39]. Despite this success, the authors of [8] suggest that an alternative approximation using an expansion of the density of states in terms of Chebyshev polynomials should be more accurate [8]. The major argument is that this expansion does not suffer from the loss of orthogonality during recursive state generation used in Krylov space methods. This property is certainly responsible for the high accuracy obtained in numerical unitary time evolution using a Chebyshev expansion, see e.g. [8,[46][47][48][49][50]. In the present paper, we therefore study several Heisenberg quantum spin systems and derive numerical as well as formal conclusions about the accuracy of the method. We can summarize that the approach via Chebyshev polynomials is indeed accurate, but not more accurate than FTLM [44]. On the contrary, under certain circumstances the employed kernel, which smooths (unphysical) oscillations of the approximated density of states, introduces systematic inaccuracies. The same holds for the mapping of the energy spectrum onto the interval [−1 + ε/2, 1 − ε/2] to comply with the domain of definition of the polynomials. The paper is organized as follows. In Section II we recapitulate the Chebyshev method. In Section III we present our numerical examples. The article closes with a discussion in Section IV. II. METHOD In this section, we briefly introduce the Chebyshev method and its parameters to be able to discuss the method's accuracy. For a more detailed description of the algorithm we recommend [8]. In a quantum mechanical system with a discrete energy spectrum, the microcanonical density of states is defined as ρ(E) = Σ_ν δ(E − E_ν), where the sum runs over all eigenstates ν with energies E_ν (1). The canonical partition function Z(β) is determined by the integral over the density of states weighted with the Boltzmann factor, Z(β) = ∫ dE ρ(E) e^(−βE) (2), with β = 1/(k_B T).
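For the small systems used as benchmarks later in the paper, these quantities can be evaluated directly from the full spectrum obtained by exact diagonalization, since the integral over ρ(E) then reduces to a sum over the eigenvalues E_ν. The following sketch is only meant to make the definitions concrete (function name and unit conventions are arbitrary); it is not part of the Chebyshev scheme itself.

```python
import numpy as np

def exact_thermodynamics(energies, betas, kB=1.0):
    """Reference values from a full spectrum {E_nu}: partition function
    Z(beta) = sum_nu exp(-beta E_nu) and heat capacity
    C(beta) = kB * beta^2 * (<E^2> - <E>^2). Energies are shifted by their
    minimum for numerical stability; Z is rescaled accordingly."""
    E = np.asarray(energies, float)
    E = E - E.min()
    Z, C = [], []
    for beta in np.atleast_1d(betas):
        w = np.exp(-beta * E)
        z = w.sum()
        e_mean = (w * E).sum() / z
        e2_mean = (w * E**2).sum() / z
        Z.append(z)
        C.append(kB * beta**2 * (e2_mean - e_mean**2))
    return np.array(Z), np.array(C)
```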
Correspondingly, the heat capacity is evaluated from the derivatives of the partition function, C(β) = k_B β² ∂²ln Z(β)/∂β². For the susceptibility we employ the S_z symmetry of Heisenberg systems and decompose the density into contributions from all orthogonal subspaces with total magnetic quantum number M, i.e. ρ(E) = Σ_M ρ_M(E). We further calculate only contributions for M ≥ 0, since the respective contributions for negative M are degenerate and can be added accordingly. The idea of the Chebyshev algorithm is to expand the microcanonical density of states ρ(E) in terms of Chebyshev polynomials and then approximate the integral (2) by Gauss-Chebyshev integration. We would like to state already at this stage that some accuracy problems shown later in this article arise if the approximated density of states does not behave like a proper density, e.g., if it becomes negative. Since the Chebyshev polynomials are restricted to the interval [−1, 1], a variable transformation of the Chebyshev polynomials to arbitrary intervals must be introduced as in [51, Sec. 1.3.2]. The Hamiltonian is transformed linearly onto this interval, where, as suggested in [8], a parameter ε is introduced to prevent truncation of the approximated delta peaks corresponding to the extremal eigenvalues. The original energy interval is thus scaled to the interval [−1 + ε/2, 1 − ε/2]. The corresponding scaled density of states ρ(x) is then expanded in terms of Chebyshev polynomials C n (x). It can be shown that the coefficients of the expansion are given by the traces µ n = Tr C n (x). These traces are approximated using the typicality approach, i.e. by averaging ⟨r|C n (x)|r⟩ over random vectors |r⟩ = Σ_ν r_ν |ν⟩ with Gaussian distributed components r ν with respect to a chosen orthonormal basis { |ν⟩ }. The relative error of an estimate Θ n (R) is proportional to 1/√(R dim(H)), as shown in e.g. [8], where R is the number of random vectors and dim(H) the dimension of the Hilbert space. At this point it should be noted that these traces can also be evaluated to numerical accuracy using a complete basis. This possibility will be used to distinguish statistical and systematic deviations later on. Due to the finite order of the expansion, so-called Gibbs oscillations can occur which cause the approximated density of states to have negative values. If one wants to obtain a "physical" representation of the density, i.e. without negative values, one can modify the coefficients µ n by a kernel [8]. The kernel fixes this problem at the cost of introducing a systematic error which vanishes for N deg → ∞. In this paper, we restrict our discussion to the use of the Jackson kernel. In figure captions or legends we will write g n = JK when the kernel is applied, otherwise g n = 1. For an arbitrary function f(x) the Gauss-Chebyshev integration gives rise to the approximation ∫_{−1}^{1} f(x)/√(1 − x²) dx ≈ (π/Ñ) Σ_{k=1}^{Ñ} f(x_k), where the supporting points read x_k = cos(π(k − 1/2)/Ñ). This approximation is an exact identity if f(x) is a polynomial of order 2Ñ − 1 or smaller [51]. In the case at hand, f(x) has to be chosen such that the integrand of Eq. (2) is recovered and is thus no polynomial of order smaller than 2Ñ − 1. However, the approximation through Gauss-Chebyshev integration is still a good choice for numerical purposes as it can be computed through a discrete cosine transform (type III). Deploying f(x) to the quadrature formula gives an approximation of the form Z(β) ≈ Σ_k γ_k e^(−βE(x_k)), where the weights γ_k are linear combinations of the damped moments g n µ n evaluated at the supporting points. If one chooses Ñ ≥ N deg , the sum can be complemented to an upper limit of Ñ with additional g n = 0 terms. The γ_k can then be computed through a discrete cosine transform (type III) of the coefficients g n µ n , which allows a faster computation of the sum. The time needed scales with Ñ ln Ñ instead of Ñ N deg [8].
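To make the workflow above concrete, the following sketch estimates the moments µ_n with Gaussian random vectors, applies the Jackson damping, and evaluates Z(β) by Gauss-Chebyshev quadrature. It is a minimal illustration under the standard kernel-polynomial conventions (e.g. the weight 2 for n ≥ 1 and the usual explicit Jackson coefficients); normalization details, the energy-rescaling parameters a and b, and all function names are our own choices and may differ from the implementation used in the paper. The γ_k are assembled here by a direct sum rather than the faster DCT-III mentioned above.

```python
import numpy as np

def jackson_kernel(n_deg):
    """Jackson damping factors g_n, n = 0..n_deg-1 (standard KPM form)."""
    n = np.arange(n_deg)
    N = n_deg + 1
    return ((N - n) * np.cos(np.pi * n / N)
            + np.sin(np.pi * n / N) / np.tan(np.pi / N)) / N

def chebyshev_moments(h_scaled, n_deg, n_rand, rng):
    """Stochastic estimate of mu_n = Tr C_n(h_scaled) for a Hamiltonian
    (dense or sparse matrix) already rescaled into [-1, 1]."""
    dim = h_scaled.shape[0]
    mu = np.zeros(n_deg)
    for _ in range(n_rand):
        r = rng.normal(size=dim)          # Gaussian components: E[r^T A r] = Tr A
        t_prev, t_cur = r, h_scaled @ r   # C_0|r> and C_1|r>
        mu[0] += r @ t_prev
        mu[1] += r @ t_cur
        for n in range(2, n_deg):         # recursion C_n = 2 x C_{n-1} - C_{n-2}
            t_prev, t_cur = t_cur, 2.0 * (h_scaled @ t_cur) - t_prev
            mu[n] += r @ t_cur
    return mu / n_rand

def partition_function(mu, betas, a, b, g=None, n_quad=None):
    """Z(beta) ~ (1/N_quad) * sum_k gamma_k * exp(-beta E_k), E_k = a*x_k + b,
    with gamma_k = g_0 mu_0 + 2 sum_{n>=1} g_n mu_n C_n(x_k)."""
    n_deg = len(mu)
    n_quad = n_deg if n_quad is None else n_quad
    g = np.ones(n_deg) if g is None else g
    x = np.cos(np.pi * (np.arange(n_quad) + 0.5) / n_quad)   # supporting points
    w = np.full(n_deg, 2.0)
    w[0] = 1.0
    theta = np.arccos(x)
    gamma = np.array([np.sum(w * g * mu * np.cos(np.arange(n_deg) * t)) for t in theta])
    energies = a * x + b
    return np.array([np.sum(gamma * np.exp(-beta * energies)) / n_quad for beta in betas])
```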
If there are known symmetries, the scheme can be performed for each orthogonal subspace H_Γ separately. The approximated partition function can then be written as the sum Z(β) = Σ_Γ Z_Γ(β) over the subspace contributions. All of the following systems possess S_z symmetry, which implies orthonormal subspaces corresponding to the quantum number of the total magnetization Γ = M. In our numerical examples, for larger systems, subspaces with dimension D < 15,000 are fully diagonalized. For smaller systems, i.e. those containing only subspaces of dimension D < 15,000, only subspaces with dimension D < 1,000 are fully diagonalized. In a real application one would of course diagonalize all subspaces numerically exactly where this is possible. To summarize, the Chebyshev method depends on several parameters that can have an effect on the accuracy of the results. These are the order of the expansion N deg , the scaling parameter ε, the number of random vectors R, the use of a kernel g n , and the number of supporting points Ñ. III. NUMERICAL RESULTS The Chebyshev algorithm uses random vectors for trace estimation, compare Eq. (9), therefore the results are expected to exhibit a statistical distribution. To assess this statistical behavior, we perform two kinds of studies: (A) We investigate a thermodynamic observable as a function of the number R of random vectors used for the trace estimator (9), and (B) we study the variance among P realizations per fixed parameter set (for some of the cases presented below). By considering each realization O_i(β) as a random measurement, a mean O(β) and a variance δO(β)² over the P realizations can be defined. If additionally an exact result O_E is known, the systematic deviation of the mean from O_E can be evaluated as well. Figure 1. Systems investigated in Sec. III A, from top to bottom: ladder, chain, sawtooth chain. Periodic boundary conditions will be applied. As specific quantum spin systems we investigate three archetypical systems that show fundamentally different behavior at low temperatures, namely a spin ladder that is gapped in the thermodynamic limit [53], a spin chain that is gapless in the thermodynamic limit [54], and a sawtooth chain in the vicinity of a quantum-critical point [55][56][57], compare Fig. 1. A. Heisenberg ladder In this subsection, the accuracy of the Chebyshev algorithm is investigated using a Heisenberg ladder for various numbers of spins N with spin quantum number s = 1/2 and periodic boundary conditions. The Hamiltonian is a Heisenberg Hamiltonian in which the first subscript i ∈ {1, . . . , N/2} of the spin operators denotes the rung and the second subscript j ∈ {1, 2} denotes the leg of the spin. Thus, the exchange interaction J 1 connects nearest neighbor spins on rungs, and J 2 does the same on legs. Both are chosen to be antiferromagnetic, J 1 = J 2 = 1. Table I contains the standard configuration of parameters used in the following. The order of expansion N deg and the number of random states R are chosen for low computation times and sufficiently accurate results. Their influences on the accuracy are discussed in Sec. III A 2 a and Sec. III A 1. As the parameter ε is introduced as a corrective variable, its influence will be shown separately in Sec. III A 2 b and is omitted for the time being. The same argument holds for the kernel g n , shown in Sec. III A 2 d. To make use of the discrete cosine transform (type III), the number of points of integration Ñ has to be greater than or equal to N deg . The equality is chosen as a starting point.
Statistical deviations Since the approximation of the traces by using random vectors is the cause of the statistical variations in the result, the number of random vectors R is varied to investigate the latter. The values given in Tab. I are used as a standard configuration. In Figs. 2 and 3, the heat capacity and susceptibility for various values of R are plotted next to the result determined by exact diagonalization. In addition, a result determined by the Chebyshev algorithm is shown where the traces µ n are computed numerically exactly instead of approximating them using random vectors. One can see that both the heat capacity (Fig. 2) as well as the differential susceptibility (Fig. 3) match the respective curve derived from exact diagonalization very well for T /|J 1 | > 10 −2 . However, for R = 10 a noticeable deviation in the maximum of the main peak of both observables can be seen. Additionally, all curves of the heat capacity show a "ghost dip" at low temperatures for T /|J i | ≈ 10 −3 − 10 −2 . Nevertheless, for most purposes the achieved accuracy for the standard parameter configuration and R > 100 is more than sufficient at higher temperatures. It is noticeable that in the region of the "ghost dip", all approximate curves deviate from the exact solution independently of R, suggesting a small statistical but significant systematic error. This is confirmed by the fact that the curve determined with numerically exact traces shows this deviation as well. Systematic deviations Next, we discuss how tuning the parameters N deg , ε, Ñ, and g n affects systematic deviations. This is mostly done by observing the behavior of the "ghost dip" of the heat capacity under variation of each parameter. a. The order of expansion can be increased to push the systematic deviations, i.e. the "ghost dip", to lower temperatures. This is demonstrated in Fig. 4. Computation time increases linearly with N deg . b. The scaling parameter ε seems to have a negative rather than the intended positive effect on the heat capacity, as shown in Fig. 5. The best result is achieved for ε = 0. Conversely, when considering the results for the density of states of the largest subspace with M = 0 (see Fig. 6), one can see that the parameter ε has the intended effect of preventing the peaks of the lowest and highest eigenvalues from being cut off. There are also situations where an ε ≈ 10 −6 seemingly decreases the depth of the dip compared to ε = 0. Anyhow, such small values of ε are not sufficient to prevent the cut-off of the density of states, and the improvement was not significant when compared to statistical deviations. c. The number of supporting points Ñ is investigated in Fig. 7. It is difficult to give universal recommendations regarding this parameter. Our experience is that it should be chosen equal to the order of expansion N deg . Other systems could show very different results, but it is important to note that a good choice of Ñ scales with N deg . d. The smoothing with the Jackson kernel causes the unphysical ghost dip to become a ghost peak; a dip, however, is more easily identified as an error than a peak. A good approach could be to always compare both the smoothed and the native result. This can be done without great computational effort. For larger systems a kernel can have an even stronger negative effect, as is demonstrated in Fig. 9 for N = 24 and s = 1/2. The Jackson kernel significantly sets back the convergence of the expansion.
For an order of N deg = 100, which produces very accurate approximations without kernel, the application of the kernel renders the result to become unusable, compare top of Fig. 9. One needs to expand the polynomial to an order of N deg = 500 to counteract the inaccuracy introduced by the kernel, but even then the result with kernel is still not significantly better than the result without the kernel. The result without the kernel seems already sufficiently accurate for k B T /|J 1 | < 10 −2 and N deg = 100. B. Heisenberg ring Since the Heisenberg ladder is a gapped system, i.e a spin system with a non-zero excitation energy between the groundstate and the first excited state in the thermodynamic limit, we would also like to investigate a system that is gapless in the thermodynamic limit. The behavior of thermodynamic functions of the system at low temperatures highly depends on this excitation energy. One could argue that the deviation shown in the previous section are due to this dependency. Hence, in this section we will discuss an antiferromagnetic Heisenberg ring with s = 1/2 for which the excitation energy vanishes in the thermodynamic limit. In Fig. 10 the results for the Heisenberg ring with N = 24 spins is displayed. One can see that the deviations here are very similar to the ones for the Heisenberg ladder with the same system size and choice of parameters, compare Fig. 9, even though here, they occur at slightly lower temperatures. To further investigate the influence of the gap's size on the deviations in the results the curves for the heat capacity per site for various numbers of spins are displayed in Fig. 11. While the differences of the results between k B T /|J| = 2 · 10 −2 and 2 · 10 −1 are mostly due to finite-size effects, so even the exact curves would deviate from each other, the differences between k B T /|J| = 10 −3 and 10 −2 on the other hand are due to the method's inability to reproduce the low temperature behavior of the Heisenberg ring for various system sizes. However, there is no definite trend of better results for larger systems recognizable, see the lower graph in Fig. 11. Thus, the inaccuracies of the results do not directly depend on the gap size. The antiferromagnetic Heisenberg ring with N = 10, s = 5/2 and nearest neighbor interaction is an interesting example as well, as this system is realized as a magnetic molecule (abbreviated Fe 10 ) called the "ferric wheel" [44] that can be accurately described by this model. In Fig. 12 P = 100 estimates with the Chebyshev method using R = 1 random vectors and their mean are compared to an estimate using R = 100 random vectors. They are displayed with and without kernel. One can see that without kernel the estimates with R = 1 are broadly scattered at low temperatures while their mean and the estimate with R = 100 random vectors are almost perfectly aligned with the exact result. When employing the kernel the estimates with R = 1 are distributed less broadly but their mean and the estimate with R = 100 deviate strongly from the result of the exact diagonalization. From the experience collected before, we assume that these deviations can be resolved by using higher orders of expansion N deg , see Fig. 13 with a linear temperature axis. Note that even for N deg = 200 the result without kernel outperforms the one with kernel. Further more, the result with N deg = 100 without kernel is more accurate than the result with N deg = 200 and kernel. 
However, because of the narrower distribution of the estimates when using the kernel we want to investigate the statistical behavior of the results obtained with a higher order of expansions, see Fig. 14 for the results for N deg = 200. One can see that the narrowing of the R = 1 estimates by using the kernel is less significant than in the N deg = 100 case but still non-negligible especially in the low temperature regime. This can be seen in the deviation of mean from the exact result which is greater in the case without kernel. But again the deviation is still there for the same and even slightly higher temperatures. The best result is obtained with the R = 100 estimates without kernel. In this case an order of expansion of N deg = 100 is sufficient. Also for the differential susceptibility (not shown) we obtain that the mean of the R = 1 estimates deviates from the exact diagonalization result without kernel more strongly than with kernel. The R = 100 estimate on the other hand is almost accurate without kernel but deviates as strongly as the mean in the case with kernel. C. Sawtooth chain The sawtooth chain (also known as delta chain) is an example with a highly degenerate spectrum. The Hamiltonian reads with periodic boundary conditions, ferromagnetic nearest neighbor interaction J 1 < 0 and antiferromagnetic next-nearest neighbor interaction J 2 > 0. We select a case with |J 2 /J 1 | = 0.45 which is close to the quantum critical point (QCP) at |J 2 /J 1 | = 1/2 [55]. The typicality approach has shown to be very efficient for this systems in schemes such as FTLM [44]. This can be confirmed for the Chebyshev method as well. In Fig. 15 the R = 1 estimates of the heat capacity are distributed very narrowly around their mean which itself is perfectly aligned with R = 100. While the result without kernel is also aligned with the FTLM estimate, the result with kernel shows significant deviations from the FTLM result. We again show another result for a higher order of expansion N deg = 500, the lower graph in Fig. 16. The deviation due to the kernel can be minimized, but still does not fall below those of the results without kernel. IV. DISCUSSION AND CONCLUSIONS We have seen that the Chebyshev method achieves very accurate results when handled with care. There are many possible choices for the parameters introduced for this method. We tried to identify some "good" choices and some methods to optimize them. In particular we found that the number of points of integrationÑ should be chosen closely to the order of expansion which itself has to be chosen according to the dimension of the problem. In the cases investigated, N deg = 100 − 200 is a sufficient choice as well as R ≥ 100. The parameter ε had no positive effect at least not when trying to approximate thermodynamic functions. So it seems advisable to set it equal to zero. The most interesting "parameter" was whether to smooth the result with the Jackson kernel or not. We found here as well that for the investigated systems there was no positive effect. However, there might be a different use of the kernel. When the expansion of the density of states is completed the kernel can be employed without great computational effort to check the approximation with and without kernel for differences. For good results with a high order of the expansion the kernel changes the results only where they were already wrong, e.g. where heat capacity is negative. 
Therefore, if the kernel does not change the result too much, one can be reasonably sure that the choice of the order of expansion is sufficient. Finally, we can summarize that the approach via Chebyshev polynomials is accurate, but does not show any advantage compared to FTLM [44].
5,055.4
2021-04-27T00:00:00.000
[ "Physics" ]
Micromagnetic Simulation of L10-FePt-Based Transition Jitter of Heat-Assisted Magnetic Recording at Ultrahigh Areal Density The areal density of hard disk drives increases every year. Increasing the areal density has limitations. Therefore, heat-assisted magnetic recording (HAMR) technology has been the candidate for increasing the areal density. At ultrahigh areal density, the main problem of the magnetic recording process is noise. Transition jitter is noise that affects the read-back signal. Hence, the performance of the magnetic recording process depends on the transition jitter. In this paper, the transition jitter of L10-FePt-based HAMR technology was simulated at the ultrahigh areal density. The micromagnetic simulation was used in the magnetic recording process. The average grain size was 5.1 nm, and the standard deviation was 0.08 nm. The recording simulation format was five tracks in a medium. It was found that a bit length of 9 nm with a track width of 16.5 nm at the areal density of 4.1 Tb/in2 had the lowest transition jitter average of 1.547 nm. In addition, the transition jitter average decreased when increasing the areal density from 4.1 to 8.9 Tb/in2. It was found that the lowest transition jitter average was 1.270 nm at an 8 nm track width and a 9 nm bit length, which achieved an ultrahigh areal density of 8.9 Tb/in2. Introduction The trend of the areal density (AD) in magnetic recording technology increases every year [1]. It can increase with increased bit density, increased track density, and reduced grain size [2,3]. However, the effects of reducing grain size decrease the thermal stability, which causes superparamagnetic effects. The thermal stability can increase by increasing the magnetocrystalline anisotropy constant, K u . Magnetic materials are modified to accommodate increasing the areal density for the magnetic recording technology. The L1 0 -FePt medium is currently selected as a candidate because of the suitability of the magnetic properties [4][5][6][7][8]. The high K u , the high saturation magnetization, M s , and the low curie temperature, T c , are the magnetic properties of L1 0 -FePt that have been optimized for the new technology of hard disks, such as MAMR [9] and HAMR [4,[10][11][12][13][14][15]. Heat-assisted magnetic recording (HAMR) technology is chosen to assist magnetic recording at high AD [8,16,17] due to the high K u of the magnetic materials. In addition, one of the main problems that occurs in the magnetic recording process is the noise that decreases the signal-to-noise ratio (SNR) or increases the error of the read-back signal. The noise mainly consists of DC noise and jitter noise; consequently, they cause irregular amplitude and make the read-back signal transition less sharp, respectively. The correlation of the noise and the transition jitter, σ jitter , is strong [18][19][20]. Therefore, the performance of the magnetic recording process is indicated by the σ jitter . The main causes of the σ jitter are the grain size, grain size distribution, grain shape, read width, heat spot geometry, and thermal gradient [10][11][12]19,21,22]. Many publications have simulated magnetic recording to achieve high areal density and high performance. The transition jitter has been used to indicate the performance of the magnetic recording process [12][13][14][15]22]. Valcu and Yeh [22] have improved Voronoipattern media for very close to the microtrack model prediction. 
The transition jitter is used to indicate the efficiency of Voronoi-pattern media, and the detection positions are the zero crossings. It was found that the read width is inversely proportional to the jitter. Niranjan and Victora [12] have shown that these analytical calculations work well for estimating the jitter when comparisons are made with simulation results under different recording conditions and media variations. One of the simulations showed that the grain pitch has a greater effect on the transition jitter than the read width does. Pituso et al. [13,14] have simulated the magnetic recording process in a two-dimensional (2-D) format. The simulation demonstrates magnetic footprints of HAMR technology where heating is based on the relationship of magnetic properties with temperature. The behavior of magnetic properties with temperature is used to identify the hotspot for the simulation. Hernandez et al. [15] proposed parameters that can achieve the high areal density in HAMR technology. In many studies [10-12,19,21,22], the transition jitter is obtained as the standard deviation of the zero-crossing positions of a read-back signal. In this paper, the transition jitter is presented in another form, by indicating the positions of the bit transitions in a 2-D format. Since the magnetic footprint simulation was analyzed for the transition jitter simulation in a 2-D format, this simulation was designed to resemble the experimental magnetic footprint imaging obtained in the 2-D format of spin-stand microscopy. The spin-stand is a machine that can characterize the magnetic footprints for analysis, such as transition curvature analysis [23-25]. Therefore, this paper aimed to maintain a reasonable level of performance while increasing both the linear density and the track density. We also proposed using the magnetic footprint simulation for the transition jitter simulation in a 2-D format. The L10-FePt magnetic material's properties depend on the temperature, which is used to identify the hotspot area for the heating simulation in the Voronoi medium. The micromagnetic modeling is based on the Landau-Lifshitz-Gilbert (LLG) equation. The lowest transition jitter average was investigated at the areal density of 4.1 Tb/in 2 . In addition, the lowest transition jitter average was investigated at ultrahigh areal densities from 4.1 to 8.9 Tb/in 2 in HAMR technology. Materials and Methods In this work, the micromagnetic simulation was based on the LLG equation, as shown in Equation (1) [13,26,27], where M is the magnetization vector, γ is the gyromagnetic ratio, α is the damping constant, and M s is the saturation magnetization. The effective field, H eff , includes the exchange, demagnetizing, anisotropy, and Zeeman fields. The simulation was implemented with the object-oriented micromagnetic framework (OOMMF) software [28]. The magnetic recording simulation process of HAMR technology used the correlation between the temperature and the properties of the magnetic materials to create the hotspot area. Therefore, the hotspot area model used the Brillouin function, as shown in Equations (2)-(4) [13,14], where J is the total angular momentum quantum number, n is a medium film series factor, T is the temperature, T C is the Curie temperature, H k is the anisotropy field, and K u is the magnetocrystalline anisotropy constant.
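The LLG dynamics referred to in Equation (1) can be illustrated with a toy macrospin integrator. In its common explicit form, dm/dt = -γ/(1+α²) [m × H_eff + α m × (m × H_eff)] for the unit magnetization m = M/M_s; whether the paper quotes exactly this form is an assumption, and a real micromagnetic simulation (as done here with OOMMF) evaluates H_eff from the exchange, demagnetizing, anisotropy, and Zeeman contributions of the full grain structure rather than taking it as a constant.

```python
import numpy as np

def llg_step(m, h_eff, gamma, alpha, dt):
    """One explicit Euler step of the LLG equation for a single unit
    magnetization vector m = M / Ms in an effective field h_eff."""
    mxh = np.cross(m, h_eff)
    mxmxh = np.cross(m, mxh)
    dmdt = -gamma / (1.0 + alpha**2) * (mxh + alpha * mxmxh)
    m_new = m + dt * dmdt
    return m_new / np.linalg.norm(m_new)   # renormalize to keep |m| = 1
```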
The shape of the hotspot area was a squircle, the shape of the applied field was rectangular, and they were the same width. The writing model was five tracks in a medium for each bit length, and a single-tone sequence was written on a track of 31 bits and 30 boundaries. The Voronoi grain medium had dimensions of 1000 nm × 1500 nm and a thickness of 6 nm. The medium model had a resolution of 0.25 pixels per 1 nm in the x-y plane. The average grain size was 5.1 nm [15] with a standard deviation of 0.08 nm, and the grain boundary width was about 1-2 nm. The mesh cell size of the micromagnetic simulation was 1 nm × 1 nm in the x-y plane and 3 nm along the z-axis. The magnetic properties at room temperature of the L10-FePt medium were as follows: Ms (300 K) = 1.100 MA/m and Ku (300 K) = 7 MJ/m 3 . The T for heating in the HAMR process was 700 K, and the TC was 710 K. The write head field was 10 kOe along the z-direction, and J was 0.85 at a medium film series factor, n, of 2.15 for the L10-FePt magnetic material [14,15]. The intragrain exchange stiffness constant was 12 pJ/m, and the intergrain exchange stiffness was 0 J/m [13]. MATLAB [29] was used for the Voronoi medium modeling and for the σ jitter, which was obtained from the zig-zag boundary procedure flow chart for the transition jitter simulation, as shown in Figure 1. The σ jitter was the standard deviation of the zero-crossing positions, as shown in Equation (5) [10-12,19,21,22]: σ jitter = [(1/N) Σ i (x i − x m )²]^(1/2), where x i is a zero-crossing position, x m is the average position, and N is the total number of transitions. The transition jitter average was calculated as the summation of the σ jitter of each track in a medium divided by the total number of tracks, N t , as shown in Equation (6). The simulation parameters were determined under the scope of the AD at 4.1 Tb/in 2 for finding the lowest σ jitter , as shown in Table 1. In Table 2, the bit length of 9 nm was selected to investigate the σ jitter at ultrahigh areal densities from 4.1 to 8.9 Tb/in 2 by decreasing the track width.
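A minimal sketch of Equations (5) and (6) as described above: per track, the spread of the zero-crossing positions x_i about the mean x_m, and the average over the N_t tracks. Whether the positions x_i are used directly or first referenced to the nominal transition locations is not fully specified in the text; the optional x_nominal argument and the function names are our own choices.

```python
import numpy as np

def transition_jitter(x, x_nominal=None):
    """Eq. (5): sigma_jitter of one track, the RMS deviation of the
    zero-crossing positions x_i about their average x_m."""
    x = np.asarray(x, dtype=float)
    if x_nominal is not None:            # optionally work with deviations from ideal positions
        x = x - np.asarray(x_nominal, dtype=float)
    return float(np.sqrt(np.mean((x - x.mean()) ** 2)))

def average_jitter(per_track_positions):
    """Eq. (6): mean sigma_jitter over the N_t tracks of the medium."""
    return float(np.mean([transition_jitter(x) for x in per_track_positions]))
```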
Results and Discussions Figure 4 also shows that the fluctuations in the σ jitter at bit lengths of 7 to 10 nm probably arise from the grain shape or the grain size distribution. The σ jitter at the 7 nm bit length increased as the bit length approached the grain size, and a bit length of 6 nm could not be simulated. The micromagnetic simulations also showed that some of the bits did not have magnetization switching in the grain because some parts of the grain were not in the hotspot area. Therefore, the broad zig-zag boundary was the effect of the bit length approaching the grain size. Transition Jitter at Ultrahigh Areal Densities of 4.1-8.9 Tb/in 2 Figure 5 shows the magnetic footprint simulation result that was investigated at the areal density of 4.1 Tb/in 2 , and the magnetic footprint simulation results of 31 bits per track (between the yellow lines) at the ultrahigh areal densities from 4.1 to 8.9 Tb/in 2 . The magnetic footprints of each track in Figure 5 were analyzed to show the transition boundaries in a 2-D format, as shown in Figure 6. The results in Section 3.1 show that the bit length of 9 nm had the lowest σ jitter . In this section, the track width at the 9 nm bit length was varied to investigate the σ jitter at ultrahigh areal densities from 4.1 to 8.9 Tb/in 2 , and Figure 7 shows the σ jitter for the track width and areal density variation. It was found that the σ jitter values at track widths of 8, 10, 12, 14, and 16.5 nm (9 nm bit length) were 1.270, 1.490, 1.493, 1.60, and 1.547 nm, respectively. The lowest σ jitter was 1.270 nm at an 8 nm track width. The σ jitter decreased with increasing areal density from 4.1 to 8.9 Tb/in 2 , i.e., with decreasing track width from 16.5 to 8 nm. This was likely due to the zero-crossing positions being outside the track area, and the trend is consistent with those in the literature [12,30]. Conclusions In this paper, we present the transition jitter simulation in the 2-D format of HAMR technology at ultrahigh areal densities from 4.1 to 8.9 Tb/in 2 . The L10-FePt magnetic material was used as the magnetic medium for future magnetic recording. The OOMMF was used for the recording process simulation, and the MATLAB program was used to simulate the transition jitter in a 2-D format. The areal density of 4.1 Tb/in 2 had the lowest σ jitter of 1.547 nm (9 nm bit length and 16.5 nm track width). Across the areal densities from 4.1 to 8.9 Tb/in 2 , the lowest σ jitter was 1.270 nm (9 nm bit length and 8 nm track width). These results can serve as guidelines for future magnetic recording technology development. Funding: This research was funded by funding for thesis, dissertation, or independent study for graduate students from the Faculty of Engineering, Khon Kaen University, Thailand (Grant No. Mas. Ee-2565/6). Data Availability Statement: Not applicable.
4,181.8
2022-09-20T00:00:00.000
[ "Materials Science" ]
Plant ecophysiological processes in spectral profiles: perspective from a deciduous broadleaf forest The need for progress in satellite remote sensing of terrestrial ecosystems is intensifying under climate change. Further progress in Earth observations of photosynthetic activity and primary production from local to global scales is fundamental to the analysis of the current status and changes in the photosynthetic productivity of terrestrial ecosystems. In this paper, we review plant ecophysiological processes affecting optical properties of the forest canopy which can be measured with optical remote sensing by Earth-observation satellites. Spectral reflectance measured by optical remote sensing is utilized to estimate the temporal and spatial variations in the canopy structure and primary productivity. Optical information reflects the physical characteristics of the targeted vegetation; to use this information efficiently, mechanistic understanding of the basic consequences of plant ecophysiological and optical properties is essential over broad scales, from single leaf to canopy and landscape. In theory, canopy spectral reflectance is regulated by leaf optical properties (reflectance and transmittance spectra) and canopy structure (geometrical distributions of leaf area and angle). In a deciduous broadleaf forest, our measurements and modeling analysis of leaf-level characteristics showed that seasonal changes in chlorophyll content and mesophyll structure of deciduous tree species lead to a seasonal change in leaf optical properties. The canopy reflectance spectrum of the deciduous forest also changes with season. In particular, canopy reflectance in the green region showed a unique pattern in the early growing season: green reflectance increased rapidly after leaf emergence and decreased rapidly after canopy closure. Our model simulation showed that the seasonal change in the leaf optical properties and leaf area index caused this pattern. Based on this understanding we discuss how we can gain ecophysiological information from satellite images at the landscape level. Finally, we discuss the challenges and opportunities of ecophysiological remote sensing by satellites. Introduction In recent decades, climate change has progressed owing to anthropogenic emission of greenhouse gasses, such as CO 2 , and its effect on ecosystems has been apparent from local to global scales (e.g., Gatti et al. 2019;Stöckli and Vidale 2004;Walther et al. 2002). The terrestrial ecosystem is a large carbon sink that absorbs about 30% of anthropogenic CO 2 via vegetation photosynthesis (Friedlingstein et al. 2019). Photosynthesis is fundamental to all other ecosystem processes and functions, including primary production for the food chain, carbon and energy cycles, and finally climate regulation (Chapin et al. 2011). Photosynthesis and vegetation growth are sensitive to environmental conditions over a broad temporal scale from minutes to seasons and years, and over spatial scales from single-leaf to individual plants and plant communities (Osmond and Chow 1988). The methods to measure photosynthesis and the carbon cycle vary with these scales. Ecological mechanisms of the primary productivity of vegetation and their relationship with meteorological conditions have been studied by biometric surveys (Fang et al. 2014;Gough et al. 2008;Ohtsuka et al. 2007). 
Micrometeorological measurements of CO 2 flux from towers allow us to observe the dynamic CO 2 exchange between the atmosphere and vegetation surfaces (Baldocchi et al. 2001; Owen et al. 2007; Saigusa et al. 2005; Yamamoto et al. 1999). Observations with these methods, however, have been limited to the in situ observation sites which researchers can access physically. The impact of climate change on ecosystems differs across geographic locations, and the relationship between meteorological conditions and ecosystems varies with time. Therefore, we need a method to repeatedly observe structure and functions of ecosystems located in remote places such as mountainous landscapes. Remote sensing by satellite is a powerful tool to observe ecosystems over large spatial and temporal scales. Data obtained by Earth-observation satellites have been widely used to monitor spatial and temporal variations in ecosystem functions and structure (Field et al. 1995; Running et al. 2004; Ustin et al. 2004). Several 'vegetation indices', such as the normalized difference vegetation index (NDVI) and the enhanced vegetation index (EVI), are used to monitor vegetation (Bannari et al. 1995; Tucker 1979; Wang et al. 2005). The vegetation indices are calculated from remotely sensed spectral reflectance of the vegetation surface, which is strongly determined by the biophysical and biochemical characteristics of the vegetation, such as leaf shape and area, leaf pigments and water contents, the amount of non-photosynthetic organs (i.e., branches and stems), and their geometrical distribution. Leaf biochemical components are directly linked to the photosynthetic processes, and the canopy geometrical structure determines the light environment within the canopy; hence these leaf and canopy-level components determine photosynthetic production of the whole canopy (Kitajima et al. 2005). Given the growing need for optical remote sensing to measure the dynamics of canopy structure and functions such as primary productivity in a changing environment, it is essential to mechanistically understand the relationships between the biochemical and structural characteristics and optical properties from the single leaf to canopy and landscape scales. In this paper, we review the relationships between optical and physiological properties across scales from the individual leaf to the landscape. Optical measurements can be conducted at a range of scales: in single leaves using an integrating sphere and a spectrometer, in whole canopies using a tower-mounted spectrometer, and across large landscapes using satellite sensors (Fig. 1). We focused on the seasonality of a deciduous broadleaf forest at Takayama, where long-term and multidisciplinary studies on the carbon cycle have been conducted since 1993 (see Muraoka et al. 2015 for details of the research focus and publications). Takayama is located in a mountainous landscape in a cool-temperate region in central Japan, and the forest is dominated by oak and birch. Since this deciduous forest shows a remarkable seasonal change in both single-leaf properties and canopy structure, and hence canopy reflectance spectrum, it is a good example with which to understand the consequences of these changes. On the basis of this understanding, we then show a case of interpreting satellite data from a mountainous landscape where canopy characteristics and environmental conditions are spatially variable.
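As a concrete example of how such indices are computed from band reflectances, the sketch below implements NDVI and EVI. The NDVI formula is the standard normalized difference of NIR and red reflectance; the EVI coefficients shown are the commonly used MODIS values, which may differ from those adopted in any particular study.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and red reflectance."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

def evi(nir, red, blue, g=2.5, c1=6.0, c2=7.5, l=1.0):
    """Enhanced vegetation index with the standard MODIS coefficients."""
    nir, red, blue = (np.asarray(a, float) for a in (nir, red, blue))
    return g * (nir - red) / (nir + c1 * red - c2 * blue + l)
```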
Fig. 1 Scheme of multi-scale measurements of optical data at individual-leaf, canopy, and landscape scales.
Relationships between plant ecophysiological processes and spectral profiles across scales To understand the mechanisms that link optical data (i.e., spectral profiles) to ecophysiological processes in a forest ecosystem, a bottom-up approach from the single-leaf scale would be effective, because smaller-scale phenomena determine the larger scale. Figure 2 shows the scales of ecological and ecophysiological processes, optical data types, and relevant examples. Single-leaf scale The relationship of spectral profiles to physiological properties at the single-leaf scale is fundamental to interpreting remotely sensed vegetation data from the ecophysiological and biophysical perspectives. The leaf optical properties (i.e., reflectance, transmittance, and absorptance spectra) are determined by radiation scattering at the air-water interface and absorption by biochemical components, such as chlorophylls, carotenoids, anthocyanins, lignin, and water, in epidermal and mesophyll cells (Gates et al. 1965; Vogelmann 1993). Pigment contents strongly affect the overall spectral patterns in the photosynthetically active radiation (PAR) region (400-700 nm). Chlorophylls have strong absorbance peaks in the red and blue regions of the spectrum (Sims and Gamon 2002; Ustin and Gamon 2010). Carotenoids have a strong absorbance peak in the blue region (400-500 nm), and epidermal flavonoids absorb UV-A (315-400 nm; Burchard et al. 2000). Noda et al. (2021) measured optical properties of leaves at the canopy top (ca. 14 m above the ground) of oak (Quercus crispula Blume) and birch (Betula ermanii Cham.) at the Takayama site and showed that both reflectance and transmittance in the blue region are always low, even in very young leaves, which have little chlorophyll (Fig. 3). The relatively low chlorophyll content is also sufficient to saturate the absorptance in the red region. On the other hand, the reflectance and transmittance in the green region (ca. 550 nm, slightly shorter than the red region) are highly sensitive to chlorophyll content (Gitelson and Merzlyak 1994a, b). In deciduous trees, leaf chlorophyll content increases rapidly during the leaf development period, as shown by Noda et al. (2015). Noda et al. (2021) showed that the temporal changes in the transmittance and reflectance are larger in the green region than in the red and blue regions during that period: the reflectance in the red and green regions decreased by 39% and 46%, and the transmittance in those regions decreased by 72% and 80%, respectively, from DOY 143 to 197 in 2004 in Q. crispula (Fig. 3). While the green region is largely correlated with the amount of chlorophyll, the spectral pattern at wavelengths slightly longer than the red region, the so-called red-edge region (ca. 700 nm), is commonly used to estimate chlorophyll content (Gitelson and Merzlyak 1998; Gitelson et al. 1996; Sims and Gamon 2002). Since an anthocyanin-rich leaf has low green reflectance due to an anthocyanin absorption peak in that region, the red-edge region is a more reliable indicator (Merzlyak et al. 2008). Figure 4a shows the relationship between reflectance in the green region (ρ green ) and that at 700 nm (ρ 700 ), the most sensitive wavelength in the red-edge region, in Q. crispula from leaf unfolding to leaf fall in 2005 at Takayama (part of dataset published in Noda et al. 2021).
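Following the relationships described above, band-averaged reflectances such as ρ green (545-565 nm) and ρ 700 can be extracted from a measured leaf spectrum, and a Gitelson-type red-edge index can serve as a chlorophyll proxy. The NIR band limits, the index form CI = ρ NIR / ρ 700 − 1, and the function names below are illustrative choices; converting such an index into an absolute chlorophyll content would require an empirical calibration for the species at hand.

```python
import numpy as np

def band_mean(wavelengths, reflectance, lo, hi):
    """Mean reflectance over a wavelength window [lo, hi] in nm."""
    w = np.asarray(wavelengths, float)
    r = np.asarray(reflectance, float)
    mask = (w >= lo) & (w <= hi)
    return float(r[mask].mean())

def leaf_indices(wavelengths, reflectance):
    """Band reflectances used in the text and a simple red-edge index."""
    rho_green = band_mean(wavelengths, reflectance, 545.0, 565.0)
    rho_700 = band_mean(wavelengths, reflectance, 695.0, 705.0)
    rho_nir = band_mean(wavelengths, reflectance, 750.0, 900.0)
    return {"rho_green": rho_green,
            "rho_700": rho_700,
            "rho_NIR": rho_nir,
            "CI_red_edge": rho_nir / rho_700 - 1.0}
```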
We divided these leaf data into three groups according to leaf growth periods: very young (day of year, DOY ≤ 150), mature (150 < DOY ≤ 280), and senescent leaves (DOY > 280). Although ρ green and ρ 700 of mature leaves showed a linear relationship, those of young leaves tended to fall off the regression line for mature leaves. Since very young Q. crispula leaves are often brown because they are rich in anthocyanins (Fig. 4b), their ρ green may be lower than that of leaves with lower anthocyanin content. Some senescing leaves were also off the line because of anthocyanins. This is the basic ecophysiological background of several empirical models used for predicting chlorophyll content from leaf reflectance in the red-edge region (Gitelson and Merzlyak 1998;Gitelson et al. 1996;Sims and Gamon 2002). The mesophyll structure also affects the leaf optical properties over the entire wavelength range. Especially in the near-infrared (NIR) region (ca. 750-900 nm), where no leaf biochemical component absorbs the radiation, the leaf optical properties are determined mainly by the mesophyll structure. By studying 41 species, Slaton et al. (2001) found that reflectance in the NIR (ρ NIR ) is positively correlated with the ratio of the surface area of the mesophyll cells exposed to intercellular air space per unit leaf surface area. The phenological changes in ρ NIR and transmittance in the NIR (τ NIR ) of deciduous broadleaf trees also help us to understand the links between those properties and mesophyll structure: young leaves just after unfolding have low ρ NIR and high τ NIR , and then ρ NIR increases and τ NIR decreases rapidly during leaf development (Demarez et al. 1999;Noda et al. 2021). In general, the initial stage of leaf development is characterized by packed and small mesophyll cells. After leaf unfolding, the mesophyll cell volume and intercellular air space expand rapidly (Miyazawa and Terashima 2001;Niinemets et al. 2012;Sims and Pearcy 1992;Tichá 1985). With such developmental changes of the mesophyll structure, ρ NIR and τ NIR also change. The optical properties in the PAR region are also affected by the mesophyll structure, but their phenological patterns are not simple because of the effect of chlorophyll (Noda et al. 2021). While development of the mesophyll structure increases reflectance in the PAR region (ρ PAR ) and decreases transmittance in that region (τ PAR ), an increase in leaf chlorophyll decreases both ρ PAR and τ PAR . Thus, the effects of these two factors on ρ PAR cancel each other out, and hence ρ PAR decreased little during the leaf development period. On the other hand, development of the mesophyll structure and leaf chlorophyll content both led to decreases in τ PAR , and hence τ PAR decreased rapidly. These relationships can be quantitatively examined by mathematical analysis incorporating radiative transfer theory. PROSPECT (Jacquemoud and Baret 1990; Jacquemoud et al. 2009) is the most popular radiative transfer model for broadleaf species. This model is based on the 'plate model' proposed by Allen et al. (1969), and the mesophyll tissue is assumed to be a pile of compact opaque layers. The model specifies the average number of air/cell wall interfaces within the mesophyll and simulates radiative transfer within the leaf. There are several versions of the PROSPECT model with different algorithms. PROSPECT-5 considers the effect of carotenoids to achieve high accuracy of modeling for young leaves (Feret et al.
2008); PROSPECT-D adds the effect of anthocyanins for old leaves (Féret et al. 2017).
Fig. 4 Relationship between leaf reflectance in the green region (545-565 nm; ρ green ) and at 700 nm (ρ 700 ) (a), and photos of Quercus crispula leaves just after leaf unfolding (b) and a few weeks later (c). Data points in the graph are for a single leaf observed at the Takayama site in 2005. The leaves were divided into three groups according to the date (DOY, day of the year): DOY ≤ 150, young leaves; 150 < DOY ≤ 280, mature leaves; 280 < DOY, senescing leaves. Linear regression between ρ green and ρ 700 for mature leaves is shown
PROSPECT models do not consider leaf dorsiventrality, whereas some other models do (Stuckens et al. 2009;Ustin et al. 2001;Yamada and Fujimura 1991). A model for conifer needles, LIBERTY (Leaf Incorporating Biochemistry Exhibiting Reflectance and Transmittance Yields), has been developed by Dawson et al. (1998).
Canopy scale
Incoming radiation is absorbed, transmitted or reflected by leaves and branches in the canopy and by the ground surface. The optical properties of leaves and branches and their geometrical structure, including leaf-area and leaf-angle distributions, strongly determine the light conditions within the canopy, which determine the total amount of radiation absorbed by leaves and used for photosynthetic production (Kitajima et al. 2005;Monsi and Saeki 1953, republished in 2005;Reich 2012). The radiation scattered by transmission and reflection is observed by optical sensors such as a spectroradiometer mounted on an observation tower or on Earth observation satellites. Canopy reflectance has been used to estimate canopy structure and photosynthetic production (e.g., Muraoka et al. 2013;Wang et al. 2005). In a deciduous forest, phenological phenomena are most apparently represented by the seasonal changes in leaf area index (LAI) due to leaf emergence, growth, and fall (Muraoka and Koizumi 2005;Mussche et al. 2001;Nagai et al. 2017;Nasahara et al. 2008). Leaf optical properties also show seasonal changes with leaf age, as mentioned above. A combination of these components determines the spectral profile of radiation reflected by vegetation canopies. Canopy reflectance can be observed automatically by a spectroradiometer mounted at a fixed location, such as an observation tower used to monitor CO 2 flux, at various sites (Gamon et al. 2006;Nasahara and Nagai 2015). Several studies have reported remarkable seasonal changes in canopy reflectance and vegetation indices and their possible links with canopy phenology (e.g., Motohka et al. 2010;Muraoka et al. 2013;Nagai et al. 2016;Nakaji et al. 2008). Motohka et al. (2010) observed seasonal patterns in canopy reflectance from four vegetation types (two deciduous forests, a paddy field, and a grassland) in Japan. They showed that the patterns of green and red reflectance of the deciduous forests are unique during the early growing season: green reflectance increased after leaf emergence and decreased after canopy closure, while red reflectance continued to decrease after leaf emergence. Figure 5 shows the seasonal patterns in canopy reflectance in the green and red regions at the Takayama site in 2006. With increasing leaf coverage in the early growing season, red reflectance decreased rapidly and then remained low, but green reflectance increased sharply, peaked at around DOY 160, and then decreased.
We were able to identify a bright green color of the forest canopy at the time of the peak of green reflectance (see canopy photographs in Fig. 5). Motohka et al. (2010) also showed that green and red reflectance in the grassland and paddy field were almost constant through the seasons and finally, the authors indicated that the green-red vegetation index (GRVI) would be a good indicator to monitor canopy phenology in deciduous forests, particularly the timing of leaf emergence and fall. How then can we understand the seasonal phenomena in the deciduous forest from the leaf-level optical properties and canopy structure? To examine the consequences of leaf-level optical properties and canopy structure, we combined the data for single leaves and canopy leaf distribution (presented as LAI) over the seasons by using the mathematical radiative transfer model SAIL (Scattering by Arbitrary Inclined Leaves; Verhoef 1984). SAIL is an extension of a 1-D canopy bidirectional model by Suits (1972) and includes a leaf angle distribution to output a more realistic pattern of canopy bidirectional reflectance (Badhwar et al. 1985). Figure 6 shows a schematic diagram of our analysis with SAIL. Figure 7 shows the results of simulation of canopy reflectance in the green and red regions in 2006, the same year as in Fig. 5. Model A This model assumes that the canopy consists of only leaves (no branches or stems), as in the original SAIL model, and considers two cases: (1) leaf optical properties are constant throughout the seasons, and (2) leaf optical properties have seasonal changes (Fig. 6). In both cases, LAI changes seasonally according to long-term field observation data at the Takayama site (Nagai et al. 2017). In this model, since the field observations of LAI were periodical with irregular time intervals, the seasonal data were gapfilled with 5-day steps, assuming linear changes during the leaf development phase and mature phase (Fig. 6). When we calculated the canopy reflectance with the SAIL model assuming constant leaf optical properties (Case 1), as in the middle of the growing season (DOY 200), and seasonal LAI, green and red reflectance was very high at the beginning of the season (DOY 130, timing of snow melt) and dropped sharply after leaf emergence with no peak in green reflectance (Fig. 7a). Then we considered seasonal changes in leaf optical properties (Case 2), which we estimated with PROS-PECT-5 by using the phenological patterns in chlorophyll and carotenoid contents, leaf mass per area, and parameter N (a parameter representative of mesophyll structure for PROSPECT models) of Q. crispula (Noda et al. 2021). In the case of both LAI and leaf optical properties changing with season, both green and red reflectance at the beginning of growing season were high, as in Case 1; then, both dropped sharply again, with a small peak of green reflectance on DOY 145 (Fig. 7b). These results indicate that the canopy reflectance of a deciduous broadleaf forest is determined by a combination of seasonal changes (phenology) of both canopy LAI and leaf optical properties, which are influenced by leaf biochemical components and mesophyll structure. We would also like to highlight a phenomenon in the early growing season (before leaf emergence, DOY < 150). Since the canopy in the original SAIL model consists of leaves only, the effects of tree branches and trunks on radiation scattering and absorption are not considered. 
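Two of the ingredients above are easy to make concrete: the green-red vegetation index (GRVI) of Motohka et al. (2010) is simple band arithmetic, and the irregular field observations of LAI can be gap-filled onto a regular 5-day grid by linear interpolation, as assumed for the Model A runs. The observation dates and values below are invented placeholders, not the Takayama records:

```python
import numpy as np

def grvi(green, red):
    """Green-red vegetation index: (green - red) / (green + red)."""
    green, red = np.asarray(green, float), np.asarray(red, float)
    return (green - red) / (green + red)

# Irregular field observations of LAI as (DOY, LAI) pairs -- invented values for illustration.
obs_doy = np.array([130, 148, 163, 185, 230, 280, 300])
obs_lai = np.array([0.0, 0.8, 3.5, 5.0, 5.2, 4.0, 1.0])

grid_doy = np.arange(130, 301, 5)                    # regular 5-day steps
lai_filled = np.interp(grid_doy, obs_doy, obs_lai)   # linear gap-filling between observations

print(grvi(green=0.08, red=0.04))                    # bright-green canopy gives positive GRVI
print(dict(zip(grid_doy[:5], np.round(lai_filled[:5], 2))))
```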
However, in the early spring after the snow melt, the branches are not covered by leaves, and the reflectance from those canopy components should be important.
Model B
To consider the effects of tree branches and trunks (in other words, the "bark" of the canopy), we modified SAIL by incorporating "bark layers" inserted between the leaf layers (Fig. 6). For the bark layers, the reflectance spectrum of the Q. crispula trunk (Noda et al. 2014) was used and the stem area index was fixed at 0.8 according to Nasahara et al. (2008). When seasonal changes in LAI and leaf optical properties were simulated with the modified SAIL model, the patterns of green and red reflectance were more realistic (Fig. 7c). At the time of leaf emergence, red and green reflectance values were lower and the overall patterns were close to the observed ones, with a peak of green reflectance (compare Figs. 5 and 7c). This result suggests that branches and trunks are important components determining radiation transfer in the canopy and the final reflectance in remote sensing. These findings support our hypothesis that leaf-level optical properties, canopy structure, and their phenology matter in canopy spectral reflectance. However, we also recognize that the estimated green reflectance peaked on DOY 150, which is earlier than the observed timing. The estimated peak coincided with the LAI reaching 1.0, which theoretically means that the ground is entirely covered by one layer of leaves.
Fig. 6 Description of the canopy model used to simulate the phenological patterns in canopy reflectance. "Parameter N" in Case 2 is a parameter representative of mesophyll structure for PROSPECT models
However, in a real forest, the ground is not completely covered by leaves even if the LAI has reached 1.0 at DOY 150 (see photograph II in Fig. 5). The peak in the real forest occurred around DOY 160, when the ground was covered completely by leaves (see photograph III in Fig. 5). This discrepancy is caused by the heterogeneous distribution of leaves in the canopy ('leaf clumping'). This is a good example showing that we need to consider all such ecophysiological and ecological backgrounds to properly understand and estimate what the spectral data indicate for canopy ecological processes (see also Muraoka and Koizumi 2009;Muraoka et al. 2010). Focusing on the 3-D architecture of a tree crown, which has species-specific characteristics due to leaf shape and branching pattern, is key to addressing the heterogeneity and diversity of canopy structure and function by remote sensing (e.g., Zellweger et al. 2019). Leuzinger and Körner (2007) measured the canopy surface temperature in a temperate forest in Switzerland, and showed that the difference between canopy leaf and air temperatures varies among species and that it is caused by the species-specific combination of canopy architecture and leaf traits (e.g., leaf shape, stomatal conductance). To appraise such 3-D structure of canopies, active remote sensing techniques, i.e., laser scanning (also known as light detection and ranging, LiDAR), have progressed in recent decades. Airborne laser scanning maps the 3-D structure of the canopy from above, and so-called terrestrial laser scanning (TLS) from below the canopy provides extremely detailed information, including understory vegetation (García et al. 2015;Hosoi and Omasa 2007;Omasa et al. 2007;Zhu et al. 2018).
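As a rough intuition for why the bark layers in Model B matter when LAI is low, the toy sketch below mixes leaf, bark, and ground reflectance with Beer-Lambert-type gap fractions. It is not SAIL or the modified model used above; the extinction coefficient, stem area index, and component reflectance values are illustrative assumptions:

```python
import numpy as np

def toy_canopy_reflectance(r_leaf, r_bark, r_ground, lai, sai=0.8, k=0.5):
    """Very crude area-weighted mixing of leaf, bark, and ground reflectance.
    Gap fractions follow a Beer-Lambert form with extinction coefficient k.
    A toy illustration of why bark matters at low LAI, not a radiative transfer model."""
    gap_leaf = np.exp(-k * lai)        # fraction of the view not intercepted by leaves
    gap_bark = np.exp(-k * sai)        # fraction also passing the bark layer
    w_leaf = 1.0 - gap_leaf
    w_bark = gap_leaf * (1.0 - gap_bark)
    w_ground = gap_leaf * gap_bark
    return w_leaf * r_leaf + w_bark * r_bark + w_ground * r_ground

# Green-band component reflectance values are placeholders.
for lai in (0.0, 1.0, 3.0, 5.0):
    print(lai, round(toy_canopy_reflectance(r_leaf=0.12, r_bark=0.25, r_ground=0.05, lai=lai), 3))
```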
Landscape to global scale Advancement of Earth observation satellites enables us to observe spatial scales from ecosystem to landscape, regional, and global scales; the strength of these satellites is their capability of repeated observations of the same locations on Earth (Cavender-Bares et al. 2020; Reed et al. 2003). The satellite data enables us to extend the knowledge from in situ observation data at a research-plot level to landscape to global scales. Cross-scale links help us to understand what the spectral information in the satellite imagery indicates for a given observed area (e.g., landscape). One of the challenges to scale-up the canopy-level in situ remote sensing to landscape-level satellite remote sensing observation for diverse forest ecosystems on a mountainous landscape is the consistency between optical information from these different platforms. In the Takayama site we have shown that spectral reflectance information is useful to scale-up the plot-level canopy ecological characteristics to landscape level, but caution is needed for atmospheric and topographic corrections in addition to ecological understanding in the spectral information as discussed above (Melnikova et al. 2018;Nagai et al. 2010). These careful validations would allow us to gain spatial information of ecosystem structure from satellite imagery. Figure 8 shows true-color images and an NDVI map of the mountainous landscape in which our Takayama site is located, observed by a RapidEye satellite in early spring (15 May 2010) and summer (19 July 2010). The distribution of NDVI values was heterogeneous, particularly in spring (Fig. 8c, e, f). If we know that this landscape consists of different types of vegetation-i.e., deciduous forests, evergreen forests, and croplands (rice paddy fields)-we could interpret the data as follows. The locations with low NDVI in spring (blue in Fig. 8c, e, f) and high NDVI in summer (red in Fig. 8d, g, h) are dominated by deciduous forests, reflecting their remarkable change in leaf area along the phenological phases. On the other hand, the locations with relatively high NDVI in both spring (light red areas in Fig. 8c, e, f) and summer are dominated by evergreen cedar plantations. Additional satellite data in between these two seasons would allow us to observe how the phenology of forests along altitudinal or latitudinal gradients changes (Nagai et al. 2016). Archived satellite data collected over many years enable us to analyze long-term changes in the vegetation response to climate change. For example, by analyzing the Advanced Very High Resolution Radiometer (AVHRR) NDVI data set, Stöckli and Vidale (2004) showed that the phenological trend of vegetation has shifted to earlier (− 0.54 days per year) and prolonged (0.96 days per year) growing periods in the past 20 years in Europe. On the basis of the NDVI data of AVHRR and MODerate resolution Imaging Spectroradiometer (MODIS), Wang et al. (2017) revealed that the weakening of summer monsoon circulation in the past three decades has affected the greening pattern in South Asia. As mentioned above, there is a growing demand to monitor and detect the effects of the on-going climate change on plant growth, vegetation dynamics, and ecosystem functions such as the carbon cycle in terms of CO 2 flux, primary production, and carbon sequestration at daily to yearly scales (Cias et al. 2014;Muraoka and Koizumi 2009). 
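The qualitative reading of the two-date NDVI maps in Fig. 8 can be phrased as a per-pixel rule: low spring NDVI with high summer NDVI suggests deciduous forest, while high NDVI in both seasons suggests evergreen plantations. The thresholds and arrays in the sketch below are invented for illustration and are not derived from the RapidEye data:

```python
import numpy as np

def classify(ndvi_spring, ndvi_summer, low=0.3, high=0.6):
    """Crude two-date rule of thumb; the thresholds are illustrative assumptions."""
    ndvi_spring = np.asarray(ndvi_spring, float)
    ndvi_summer = np.asarray(ndvi_summer, float)
    out = np.full(ndvi_spring.shape, "other", dtype=object)
    out[(ndvi_spring < low) & (ndvi_summer > high)] = "deciduous forest"
    out[(ndvi_spring > high) & (ndvi_summer > high)] = "evergreen forest"
    return out

spring = np.array([[0.15, 0.70], [0.25, 0.40]])
summer = np.array([[0.85, 0.80], [0.90, 0.45]])
print(classify(spring, summer))
```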
One challenge is to precisely measure photosynthetic 'activity' in a plant physiological sense at ecosystem to landscape scales, with special attention to the impacts of extreme climatic events (drought, heat stress, and unexpected frost) related to climate change (Reichstein et al. 2013). Another challenge is to precisely observe the temporal (phenological) change of ecosystem functions, including the carbon and water cycles, as they are fundamental to ecosystem services (Piao et al. 2019;Richardson et al. 2013;Tang et al. 2016). In general, two approaches can be used to apply ground-based observations to satellite data. One is to find a correlation between ecosystem phenomena, for example canopy phenology (leaf emergence, maturation and leaf fall) or CO 2 flux (gross and net primary production), and spectral information such as vegetation indices measured by satellites (e.g., Nagai et al. 2010;Xiao et al. 2004). The other approach is first to examine relationships between the ecosystem phenomena and in situ spectral data obtained at ground observation sites in detail, and then to validate the satellite-derived spectral data with the in situ spectral data for spatial upscaling. Muraoka et al. (2013) found dynamic relationships between five different kinds of vegetation indices (e.g., NDVI, EVI, chlorophyll index) measured on the tower and the daily maximum canopy photosynthetic rate (GPP max ) throughout the seasons at the Takayama site. They then applied the in situ EVI-GPP max relationship to EVI by Terra/MODIS to estimate the spatial and seasonal patterns of GPP in central Japan. But we also recognize that we still have challenges in remote sensing techniques to observe the dynamic ecosystem physiological functions in the same detail as ordinary plant ecophysiological studies in a changing environment.
Fig. 8 (caption fragment): ... 2010 (b, d, g, h). Magnified NDVI images are shown for a cropland area (e, g) and around the Takayama site (f, h)
Recent challenges of remote sensing observations for spatial and temporal dynamics of ecosystems
As discussed above, we need to consider several issues in order to scale up ground-based ecological and physiological knowledge to broader scales by satellite remote sensing. In this section we discuss the challenges by focusing on spectral features and time resolution of sensors, and then on the retrieval of biochemical information from the spectral data. Challenges related to the spatial scales of biodiversity and ecosystem observations are well discussed in other recent reviews (e.g., Anderson 2018;Muraoka et al. 2012;Pettorelli et al. 2018;Vihervaara et al. 2017).
Spectral features of sensors for detection of physiological processes
As ecosystem information relies on the spectra that can be measured by sensors (Gamon et al. 2019), spectral resolution is crucial for precise observation of ecophysiological characteristics such as photosynthetic capacity and activity. The satellite sensors widely used for vegetation remote sensing, such as Terra and Aqua MODIS and NOAA AVHRR, measure radiation in several broad wavelength bands. The broadband vegetation indices, e.g., NDVI and EVI, have been widely used in satellite remote sensing of the geographical distribution of terrestrial vegetation, LAI, and primary productivity (e.g., Wang et al. 2005). These vegetation indices indicate green biomass, which can be converted to a 'photosynthetic capacity' close to GPP max , as demonstrated by Muraoka et al. (2013) and mentioned above.
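The first of the two approaches above amounts to calibrating a regression between a tower-measured vegetation index and a flux-derived quantity and then applying it to the satellite index. The sketch below shows that generic workflow with invented numbers and an assumed linear form; Muraoka et al. (2013) examined several index-GPPmax relationships, so this is only the idea, not their actual fit:

```python
import numpy as np

# Invented tower observations: in situ EVI and daily maximum canopy photosynthesis.
evi_tower = np.array([0.15, 0.25, 0.40, 0.55, 0.60, 0.50, 0.30])
gpp_max   = np.array([ 2.0,  6.0, 14.0, 22.0, 25.0, 19.0,  8.0])   # e.g. umol CO2 m-2 s-1

a, b = np.polyfit(evi_tower, gpp_max, 1)          # calibrate GPPmax ~ a*EVI + b at the tower
print(f"GPPmax ~= {a:.1f} * EVI + {b:.1f}")

# Apply the tower-calibrated relation to satellite EVI pixels (placeholder values).
evi_satellite = np.array([0.20, 0.45, 0.58])
print(a * evi_satellite + b)
```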
In addition to these traditional global observations, there is a growing demand and possibility to measure the physiological responses of ecosystems to environmental change by satellite remote sensing (Rogers et al. 2017;Ustin et al. 2004). The development of satellite vegetation indices for monitoring photosynthetic activity would enable us to observe the photosynthetic responses to climate change and extreme events at landscape to regional scales. To achieve this through the advancement of remote sensing techniques or data analysis algorithms, it would be essential to first find appropriate spectral characteristics for monitoring photosynthesis. In general, ground observations use continuous spectra with very high resolution (e.g., Meroni et al. 2004), but very few hyperspectral satellite sensors are available because of difficulties in system design, data processing, and radiometric calibration (Qi et al. 2012). The difference in the spectra between in situ spectroradiometers and satellite sensors makes it challenging to directly extend ground-based findings to satellite data. For example, application of information obtained from leaf-level chlorophyll fluorescence measurements on the status of photosystem II electron transport activity or of heat dissipation via the xanthophyll cycle (Baker 2008) should be key for transferring knowledge of plant physiology to broad-scale measurements by satellite remote sensing. The Photochemical Reflectance Index (PRI; calculated from the reflectance at 531 and 570 nm) is well known as a sensitive optical index to detect changes in xanthophyll pigments in live leaves, and it can be used to characterize the diurnal xanthophyll cycle response ( 1997). Hikosaka and Noda (2019) have experimentally shown the feasibility of assessing the quantum yield of photochemistry and photosynthetic rate from the PRI and chlorophyll fluorescence at the individual-leaf scale. To monitor the physiological status and phenology of ecosystems, MODIS is expected to be the most suitable sensor because it has a very high temporal resolution (daily observations). However, because the original PRI bands are not available from MODIS, band 11 (526-536 nm) and band 12 (546-556 nm; Rahman et al. 2004) or band 1 (620-670 nm; Garbulsky et al. 2013;Goerner et al. 2011) have been used instead. While satellite remote sensing scientists expect to use this "MODIS PRI" to detect environmental stress on ecosystem-scale photosynthesis, Gamon et al. (2016) pointed out that "MODIS PRI" differs spectrally and functionally from the original PRI and is an indicator of the chlorophyll/carotenoid ratio. Recent advancement of satellite sensors with high spectral resolution has made it possible to perform global measurements of solar-induced chlorophyll fluorescence (SIF). The satellite remote sensing community expects SIF to indicate photosynthetic activity (Porcar-Castell et al. 2014), which should be influenced by solar radiation, temperature, and water availability in the same way as single-leaf photosynthesis. Although chlorophyll fluorescence is very weak under natural conditions, SIF can be detected passively in narrow dark lines of the solar and atmospheric spectrum in which irradiance is strongly reduced (the so-called Fraunhofer lines; Carter et al. 1990, 1996;Plascyk 1975). To obtain accurate SIF values, it is necessary to use a high-spectral-resolution sensor, which can measure Fraunhofer lines.
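Two of the quantities above can be written down compactly: PRI (and its MODIS surrogate) is a normalized difference of two narrow bands, and the simplest Fraunhofer-line retrieval of SIF is the classical Fraunhofer Line Discrimination (FLD) calculation comparing irradiance and upwelling radiance inside and just outside an absorption line. The numbers below are placeholders, and the functions are generic illustrations rather than the processing used by any mission named here:

```python
def pri(r531, r570):
    """Photochemical reflectance index from reflectance at 531 and 570 nm."""
    return (r531 - r570) / (r531 + r570)

def modis_pri(band11, band12):
    """'MODIS PRI' surrogate built from band 11 (526-536 nm) and band 12 (546-556 nm)."""
    return (band11 - band12) / (band11 + band12)

def sif_fld(e_out, e_in, l_out, l_in):
    """Standard FLD estimate of fluorescence.
    e_* : solar irradiance outside / inside the absorption line
    l_* : upwelling canopy radiance outside / inside the line
    """
    return (e_out * l_in - e_in * l_out) / (e_out - e_in)

# Placeholder values (arbitrary units) for illustration only.
print(pri(0.048, 0.052), modis_pri(0.047, 0.051))
print(sif_fld(e_out=100.0, e_in=20.0, l_out=10.0, l_in=2.5))
```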
Ground-based SIF measurements have been well characterized, and it has been shown that SIF is a good indicator of light use efficiency or photosynthetic production (Meroni et al. 2009). To extend this approach to the monitoring of continental vegetation and to map photosynthetic activity in large areas, experimental satellite missions for SIF observation have been proposed several times since the 1990s (Moya et al. 2004;Rascher et al. 2008). The first SIF observation at a global scale by a satellite was achieved by using spectral data from thermal and near-infrared sensor for carbon observation-Fourier transform spectrometer (TANSO-FTS) of Greenhouse gases Observing SATellite (GOSAT), launched in 2009 (Frankenberg et al. 2011;Joiner et al. 2011). The main mission of GOSAT is to measure atmospheric greenhouse gasses (CO 2 and CH 4 ), while band 1 of TANSO-FTS covers the overlapping wavelengths of the solar Fraunhofer lines and chlorophyll fluorescence with high spectral resolution and is thus suitable for SIF retrieval. Lee et al. (2013) have demonstrated that GOSAT SIF measurements over tropical forests show clear water stress signals at midday that are not well represented in traditional vegetation indices such as NDVI or EVI. Other satellite sensors for atmospheric monitoring are available to retrieve SIF, such as MetOp GOME-2 (Global Ozone Monitoring Experiment-2; Joiner et al. 2013), OCO-2 (Orbiting Carbon Observatory-2; Sun et al. 2017), and Sentinel 5-P TROPOMI (TROPOspheric Monitoring Instrument; Köehler et al. 2018). TANSO-FTS2 of GOSAT-2, the successor to GOSAT, is also available to retrieve SIF, and GOSAT-2 SIF has been already released as an official product. FLEX (FLuorescence EXplorer), a satellite aimed mainly at SIF observation, will be launched in 2024 by the European Space Agency. These currently available satellites and future sensors promise to advance the satellite remote sensing of ecosystem physiology such as photosynthesis at landscape, regional, and global scales. Time resolution for leaf and canopy phenology Time-series analysis of satellite data has been used to characterize vegetation dynamics such as succession (Hall et al. 1991), land use change (Hansen et al. 2013), response to environmental stress (AghaKouchak et al. 2015;Reichstein et al. 2007;Saigusa et al. 2010), and phenology (Cleland et al. 2007;Piao et al. 2019;Stöckli and Vidale 2004;Tang et al. 2016). However, in general, the temporal resolution of satellite data is coarse relative to the temporal scale of phenological events or short-term vegetation responses to changing environment. For example, detection of year-toyear changes in phenology caused by global warming would need at least a few days' interval considering the rapid growth in the early growing season (Zhang et al. 2009). A moderate-spatial-resolution polar-orbiting satellite sensor, like Terra MODIS, observes the same location once a day (at around 10:30 local time in Japan), but the daily satellite data are generally composited into a cloud-free image, which results in a coarse temporal resolution data with mixed information for one to two weeks (Stöckli and Vidale 2004;Zhang et al. 2009). However, since such satellite sensors provide well calibrated data with large coverage, it is very convenient for vegetation observation. To use the data for phenology monitoring effectively, combined satellite and ground-observation data has been analyzed (e.g., Nagai et al. 2014). 
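The compositing step mentioned above is often implemented as a maximum-value composite, in which the largest (presumably least cloud-affected) index value within each window is kept. The text only states that daily data are composited over one to two weeks, so the maximum-value rule and the window length below are assumptions for illustration:

```python
import numpy as np

def max_value_composite(doy, ndvi, window=8):
    """Collapse a daily NDVI series into per-window maxima (a common cloud-screening heuristic)."""
    doy, ndvi = np.asarray(doy), np.asarray(ndvi, float)
    out_doy, out_ndvi = [], []
    for start in range(int(doy.min()), int(doy.max()) + 1, window):
        sel = (doy >= start) & (doy < start + window)
        if sel.any():
            out_doy.append(start)
            out_ndvi.append(np.nanmax(ndvi[sel]))   # cloudy days give low NDVI and are discarded
    return np.array(out_doy), np.array(out_ndvi)

# Daily series with cloud-depressed values (invented numbers).
days = np.arange(120, 152)
vi = 0.6 + 0.1 * np.sin(days / 10.0)
vi[::5] = 0.2                                       # every 5th day is 'cloudy'
print(max_value_composite(days, vi))
```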
In some cases, to detect phenological events from MODIS data, a time series of vegetation indices is fitted to a simple sigmoid function and the dates of the events are estimated mathematically (Ahl et al. 2006;Zhang et al. 2003, 2004). Several researchers have tried to use a multi-channel imager onboard a geostationary satellite for vegetation remote sensing (Fensholt et al. 2006;Miura et al. 2019). Miura et al. (2019) successfully used the Himawari-8 data to detect phenological patterns of forest ecosystems in Japan and validated the results with in situ phenological observations from automated digital cameras provided by the Phenological Eyes Network. Since a geostationary satellite maintains the same position relative to Earth's surface, its sensor provides data for the same area at short time intervals, such as 15 min for Himawari-8. Of course, the cloud cover problem remains, but such satellite sensors would help us to observe the temporal changes of ecosystem structure and functions at short intervals. The Geostationary Carbon Cycle Observatory (GeoCarb), a satellite scheduled for launch in 2024, plans to observe SIF from a geostationary orbit (Moore et al. 2018), and will enable monitoring of photosynthetic activity at high temporal frequency.
Retrieval of ecophysiological characteristics from remotely sensed data
Retrieval of leaf biochemical components (e.g., chlorophyll, nitrogen and water content), LMA and canopy structure parameters (e.g., LAI) from remotely sensed data will enable the ecosystem and Earth system sciences to investigate the diversity of ecosystem functions along climatic gradients and environmental changes from landscape to global scales (Ito et al. 2015;Rogers et al. 2017). Inversion of physical models and empirical approaches are available to estimate the biochemical and structural parameters of the vegetation canopy. Inversion of a physical model, i.e., a radiative transfer model, is thought to be robust for estimating the leaf and canopy parameters because it only deals with physical processes which directly connect plant geometrical features and spectral dynamics. PROSAIL (Baret et al. 1992), the coupling of the SAIL and PROSPECT models, is one of the most widely used models (Berger et al. 2018;Jacquemoud et al. 2009). Although PROSAIL is a simple 1-D model, it demonstrates reasonable results. Bacour et al. (2002) compared canopy reflectance simulated by PROSAIL and three more complicated models with data observed by the POLarization and Directionality of the Earth's Reflectances (POLDER) satellite sensor and showed that PROSAIL agrees well with the other models in terms of the simulated reflectance and parameter effects. PROSAIL inversion has been applied not only to airborne data (e.g., Jay et al. 2017), but also to satellite data, including broad-band sensors such as MODIS (e.g., Zhang et al. 2005) and Landsat (e.g., Bayat et al. 2018). However, such physical models cannot estimate parameters which are not considered in the algorithm, such as leaf lignin and nitrogen. The empirical approach, which employs empirical regression equations, has also been used to estimate leaf biochemical components (e.g., nitrogen and lignin) from observed spectra (e.g., Peterson et al. 1988;Wessman et al. 1988;Yin 1992). For canopy reflectance data obtained by a sensor with high wavelength resolution, partial least squares regression (PLSR) has been suggested to be useful for estimating leaf properties.
In PLSR, the full reflectance spectrum is collapsed into a smaller set of independent variables, or factors, with the measured canopy nitrogen used directly during the spectral decomposition process. While PLSR has the potential to provide detailed leaf parameters, this method can only be applied to high-spectral-resolution data obtained by hyperspectral sensors on a tower, an airborne platform, or the Hyperion instrument of NASA's Earth Observing-1 (EO-1) satellite (Martine et al. 2008;Ollinger and Smith 2005).
Future perspectives
In this paper, we review how leaf-level optical properties are tightly coupled to leaf biochemical and anatomical structures, and how the canopy-scale spectral reflectance is driven by single-leaf optical properties and the geometrical structure of leaves and stems. If we could convert spectral data into plant physiological and ecological data such as photosynthesis and its phenology, satellite remote sensing could be further used for ecological research over broad spatial scales in changing environments. Although several problems exist in scaling up ecological and ecophysiological findings at single-leaf and canopy scales to the landscape scale, at which satellite observation is advantageous, we should be able to overcome these problems by accumulating experimental knowledge along biochemical, biophysical, and biogeographical theories as discussed above. The advancement of new satellite sensors that consider critical spectral bands for plant ecophysiology also helps us to apply knowledge of plant ecophysiology to fully use satellite data for ecosystem and biodiversity research. To further develop the satellite remote sensing of vegetation structure and functions, it is necessary to keep up to date with information both on ecophysiology provided by plant scientists and on satellite missions provided by space institutions.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
9,360.4
2021-05-10T00:00:00.000
[ "Environmental Science", "Mathematics" ]
Approximation of boundary control problems on curved domains The boundary control problems associated to a semilinear elliptic equation defined in a curved domain Ω are considered. The Dirichlet and Neumann cases are analyzed. To deal with the numerical analysis of these problems, the approximation of Ω by an appropriate domain Ω<inf>h</inf> (typically polygonal) is required. Here, we do not consider the numerical approximation of the control problems. Instead of it, we formulate the corresponding infinite dimensional control problems in Ω<inf>h</inf> and we study the influence of the replacement of Ω by Ω<inf>h</inf> on the solutions of the control problems. Our goal is to compare the optimal controls defined on Γ = δΩ with those defined on Γ<inf>h</inf> = δΩ<inf>h</inf> and to derive some error estimates. The use of a convenient parametrization of the boundary is needed for such estimates. The results for convex domains are given in [1], the results for nonconvex domains are included in a work in progress. Introduction. In this paper we study a Dirichlet control problem (P) defined on a curved domain Ω.To solve numerically this problem, usually it is necessary to approximate Ω by a new domain (typically polygonal) Ω h .Our goal is to analyze the effect of the domain change on the optimal control.More precisely, a new optimal control problem (P h ) in Ω h is defined.The convergence of global or local solutions of problems (P h ) to the corresponding local or global solutions of (P) is investigated for the parameter h tending to zero.We also derive some error estimates.We restrict our study to the case of a convex domain Ω ⊂ R 2 approximated by a polygonal domain Ω h , h being the length of the biggest edge of Ω h .A family of infinite dimensional control problems (P h ) defined in Ω h is considered and the solutions of (P h ) are compared with the solutions of (P).In this way, the influence of small changes in the domain on the solutions of the control problem is analyzed.The case of a Neumann control problem is studied in [6]. In this paper we do not perform the numerical analysis of the optimal control problems.We refer the reader to the related papers, [5] for the numerical discretization of a Dirichlet control problem in the case of a polygonal domain and [7] for the analysis in curved domains. Let us describe the content of the paper.In §2 control problem (P) is introduced and analyzed.In particular, the second order sufficient optimality conditions are established.In spite of the fact that the cost functional is not of class C 2 in L 2 (Γ), we prove that the standard sufficient optimality conditions imply that the control is a strict local minimum in the L 2 (Γ) norm.This is an improvement of the known results where the optimality is established in the L ∞ (Γ) norm.Approximations Ω h of Ω are defined in §3 along with control problem (P h ).In subsequent §4, the analysis is performed and paper is completed with the full proof of the new error estimates in §5.The second order sufficient optimality conditions are a crucial tool for the derivation of error estimates. Control Problem (P). The following control problem is considered in this paper (P) where the state y u associated to the control u is the solution of the Dirichlet problem (2.1) −∆y + a(x, y) = 0 in Ω, y = u on Γ. The following hypotheses are assumed in the whole paper.(A1) Ω is an open, convex and bounded domain in R 2 , with the boundary Γ of class C 2 .Moreover we assume that N > 0 and −∞ < α < β < +∞. 
(A2) L : Ω × R −→ R and a : Ω × R −→ R are Carathéodory functions of class C 2 with respect to the second variable, L(•, 0) ∈ L 1 (Ω), a(•, 0) ∈ L p(Ω), for some 2 ≤ p < +∞.Furthermore, for every M > 0 there exist a constant C L,M > 0 and a function ψ L,M ∈ L p(Ω) such that for almost all x ∈ Ω and all |y|, |y i | ≤ M , i = 1, 2, the following inequalities hold We also assume We say that an element y u ∈ L ∞ (Ω) is a solution of (2.1) if the following integral identity is fulfilled (2.4) where ∂ ν denotes the normal derivative on the boundary Γ.This is the classical definition of a weak solution by transposition.The following result proved by Casas and Raymond [5] is valid for any convex domain Ω.If the domain is not convex, then some smoothness of Γ is required, Γ of class C 1,1 is enough.Theorem 2.1.For every u ∈ L ∞ (Γ) the state equation (2.1) has a unique solution y u ∈ L ∞ (Ω) ∩ H 1/2 (Ω).Moreover, the following Lipschitz properties hold Finally, if u n ⇁ u weakly ⋆ in L ∞ (Γ), then y un → y u strongly in L r (Ω) for all r < +∞.Under the assumptions (A1) and (A2), it can be shown by standard arguments that problem (P) has at least one solution.Since (P) is not convex we cannot expect any uniqueness of solutions.Moreover, (P) may have some local solutions.We formulate the optimality conditions satisfied by such local solutions.To this end, we analyze the differentiability of the cost functional J. Under the assumption (A2), (2.6) where y u is the state associated to u and ϕ u ∈ H 2 (Ω) is the unique solution of the problem Furthermore, we have (2.8) where Using (2.6) we obtain the necessary optimality conditions for (P).Theorem 2.2.Let ū be a local minimum of (P).Then ū ∈ W 1−1/ p, p(Γ) and there exist elements ȳ ∈ W 1, p(Ω) and φ ∈ W 2, p(Ω) such that The proof of theorem is given in [5].In order to establish the second order optimality conditions we define the cone of critical directions Now we formulate the second order necessary and sufficient optimality conditions.Theorem 2.3.If ū is a local solution of (P), then J ′′ (ū)v 2 ≥ 0 holds for all v ∈ C ū. Conversely, if ū is an admissible control for problem (P) satisfying the first order optimality conditions given in Theorem 2.2 and the coercivity condition then there exist δ > 0 and ρ > 0 such that for all u such that α ≤ u ≤ β and u − ū L 2 (Γ) ≤ ρ. Proof.The necessary condition is easy to obtain.The inequality (2.15) is strong when compared with the corresponding inequality of [5].Indeed, here we claim that (2.14) implies that ū is a strict local minimum of (P) in the sense of the L 2 (Γ) topology.In [5] it is shown that condition (2.14) leads to the strict local optimality of ū in the sense of the L ∞ (Γ) topology.A more general result is proved in [2] for a distributed control problem, but in such a case once again only the local optimality in the sense of the L ∞ (Ω) topology is shown.Here we can improve the results because the control appears in a quadratic form within the cost functional.Let us see the precise arguments. We proceed by contradiction.Let us assume that there is no pair (δ, ρ), with ρ, δ > 0, such that (2.15) holds.Then for every integer k, there exists a feasible control of (P), Let us define (2.17) By taking a subsequence, if necessary, there exists v ∈ L 2 (Γ) such that v k ⇀ v weakly in L 2 (Γ).The proof is divided into three steps: first, we prove that v ∈ C ū, then we deduce that v = 0 and finally we get the contradiction. Step 2. v = 0. 
Using again (2.16) we obtain the last inequality being a consequence of (2.12) Once again we denote by y k and ϕ k the state and adjoint state evaluated for ū + . Also we define z k and z v as the elements of H 1/2 (Ω) satisfying . Now, recalling the expression of the second derivative of J given in (2.8) we get Passing to the limit in this expression and using (2.22) we obtain Step 3. Final Contradiction.Using two facts, v k ⇀ v = 0 and v k L 2 (Γ) = 1, we deduce from (2.22) and (2.25) the following contradiction We conclude this section with the following result that provides an equivalent formulation of (2.14), which is more useful for our purposes. Theorem 2.4.Let ū be a feasible control of problem (P) satisfying the first order optimality conditions (2.10)-(2.12).Then the condition (2.14) holds if and only if where Proof.Since C ū ⊂ C ϑ ū for any ϑ > 0, it is obvious that (2.27) implies (2.14).Let us prove the reciprocal implication.We proceed again by contradiction.We assume that (2.14) holds, but there is no pair of positive numbers (µ, ϑ) such that (2.27) is fulfilled.Then for every integer k there exists and element Dividing v k by its norm and denoting the quotient by v k again, and taking a subsequence if necessary, we have that (2.28) Arguing as in the proof of Theorem 2.3, we obtain that v satisfies (2.13).On the other hand, from the fact that v k ∈ C 1/k ū and denoting by Γ k the subset of Γ formed by those points This inequality and the fact that v satisfies (2.13) imply that v vanishes whenever But from (2.28) we deduce that Consequently we have that v ≡ 0. However, if we argue as in the proof of Theorem 2.3, we have that 0 < N ≤ lim inf k→∞ J ′′ (ū)v 2 k ≤ 0, which is a contradiction. Control Problem (P h ).Now we define Ω h .We follow the notation introduced in [6, Section 4].Given a set of points {x j } N (h) j=1 ⊂ Γ, we put where x N (h)+1 = x 1 .Γ h is the polygonal line defined by the nodes {x j } j=1 and Ω h is the polygon delimited by Γ h .Since Ω is convex, then Ω h ⊂ Ω.Now, for every 1 ≤ j ≤ N (h), we denote by x j x j+1 the arc of Γ delimited by the points x j and x j+1 .Let us define ψ j : [0, h j ] −→ x j x j+1 ⊂ Γ by where ν j represents the unit outward normal vector to Ω h on the boundary edge (x j , x j+1 ) and Now, we define we define the one-to-one mapping g h : Γ h −→ Γ in the following way For every point x ∈ Γ, ν(x) denotes the unit outward normal vector to Γ at the point x.By τ (x) is denoted the unit tangent vector to Γ at the point x such that {τ (x), ν(x)} is a direct reference system in R 2 .For each point x ∈ Γ h the corresponding reference system is denoted by {τ h (x), ν h (x)}.If x ∈ (x j , x j+1 ) then ν h (x) = ν j and τ h (x) = τ j .The following relations are proved in [6] (3.1) max{|τ (g and In the domain Ω h we define the problem (P h ) as follows where y h,u is the solution of the problem Theorem 2.1 can be applied to (3.5) to get the existence and uniqueness of a solution Moreover, inequalities (2.5) hold.(P h ) has at least one global solution and possibly there are some other local solutions of (P h ).For each local solution we have the first order optimality conditions analogous to the conditions in Theorem 2.2.Theorem 3.1.Let ūh be a local minimum of (P h ).Then ūh ∈ H 1/2 (Γ h ) and there exist elements ȳh ∈ H 1 (Ω h ) and φh ∈ H 2 (Ω h ) such that We observe that ūh is less regular than ū.The same is true for ȳh and φh with respect to ȳ and φ.The reason of the lost of regularity is the lack of regularity of Γ h .Γ is of class C 2 
and consequently we can deduce the W 2, p(Ω) regularity of φ (see, for instance, Grisvard [8]), which leads to the W 1−1/ p(Γ) regularity of ū and consequently to the W 1, p(Ω) regularity of ȳ.Using the results for polygonal domains of [8], we can establish W 2,p (Ω) regularity of φh for some 2 < p ≤ p (assuming p > 2), with p depending on the angles of Ω h .The point is that p → 2 if the maximal angle of Ω h tends to π.This is exactly the case for h → 0, therefore we cannot deduce the boundedness of { φh W 2,p (Ω h ) } h>0 for any p > 2. 4. Convergence Analysis.In this section we prove the convergence of the local or global solutions of (P h ) to the solutions of (P) with h → 0. To prove the convergence, first we establish the convergence of the solutions of the state and adjoint state equations. ) be the corresponding solutions of (2.1) and (3.5), respectively.Then there exists a constant C M > 0 independent of h such that Proof.Let us take From (2.5) and (3.2) we get Let us estimate φ h = y u − y h .By substraction of the equations satisfied by y u and y h and using the mean value theorem, we get (4.5) where w h = y h + θ h (y h,u h − y h ) and 0 < θ h < 1.Now we have Finally, by using the inequality (see Bramble and King [1, Lemma 1]) we conclude This inequality along with (4.4) proves (4.2).Now we proceed with the analysis of the adjoint state equation.Let ϕ u ∈ H 2 (Ω) and ϕ h,u h ∈ H 2 (Ω h ) be given as the solutions of the equations Then we have the following estimate.Theorem 4.2.Let (u, y u ) and (u h , y h,u h ) be as in Theorem 4.1.Let ϕ u ∈ H 2 (Ω) and ϕ h,u h ∈ H 2 (Ω h ) be the corresponding solutions of (4.7) and (4.8), respectively.Then there exists a constant C M > 0, independent of h, such that the following estimate holds (4.9) Proof.Let us define φ h = ϕ u − ϕ h,u h ∈ H 2 (Ω h ).From (4.7) and (4.8) we get (4.10) From assumption (A2), taking into account that y u and y h,u h are bounded and using (4.2), we get (see Kenig [10]) Let us estimate ϕ u in H 1 (Γ h ).The norm in H 1 (Γ h ) is given by , where ∂ τ h ϕ u (x) = ∇ϕ u (x) • τ h (x), τ h (x) being the unit tangent vector to Γ h at the point x; see §3.The estimate of the first term of the norm follows easily from (4.6) and the fact that Now the L 2 (Γ) norm of the tangential derivative is estimated.To this end we observe that ϕ u = 0 on Γ, therefore ∂ τ ϕ u = 0 on Γ as well.Thus, we also have This along with (4.6) and (3.1) leads to Finally, (4.9) follows from (4.11), (4.12) and (4.13).Corollary 4.3.Under the assumptions of Theorem 4.2, the following inequality holds for some see [?] and [10].From this inequality, Assumption (A2), estimates (4.2), (4.9), (4.10) and (4.11) we get We complete this section by proving that the family of problems (P h ) realizes a correct approximation of (P).More precisely we prove that the solutions of problems (P h ) converge to the solutions of (P).Reciprocally, we also prove that any strict local solution of (P) can be approximated by a sequence of local solutions of problems (P h ). Theorem 4.4.Let ūh be a solution of problem (P h ).Then {ū h • g −1 h } h>0 is a bounded family in H 1/2 (Γ).If ū is a weak limit for a subsequence, still denoted in the same way, ūh where ȳ and ȳh denote the solutions of (2.1) and (3.5) corresponding to ū and ūh , respectively. 
Proof.First of all we recall definition of norm Let us estimate each of two integrals.In Remark 3.2, we establish the boundedness of { ūh By the change of variables in the second integral of (4.15), in view of (3.4), we get Therefore, Now, we assume that x ∈ [x j , x j+1 ] and On the other hand, Analogously, we can prove that Finally using (4.18) we obtain Using this inequality in (4.17) we conclude that From (4.15), (4.16) and (4.19) it follows Therefore, there exists a subsequence and an element ū denote by ȳh the states associated to ūh and by ȳ the state associated to ū, we deduce from (4.2) that lim Hence, it is easy to prove that J h (ū h ) → J(ū).It remains to prove that ū is a solution of (P).Let us take any feasible control u for (P), then u • g h is also feasible for (P h ).Therefore, since ūh is a solution of (P h ), we obtain which completes the proof.Theorem 4.5.Let ū be a strict local minimum of (P), then there exists a family {ū h } such that each control ūh is a local minimum of (P h ) and ūh Proof.Let ε > 0 be such that ū is the unique global solution of problem Now, for every h we consider the problems It is obvious that ū • g h is a feasible control for each problem (P hε ), therefore there exists at least one solution u hε of (P hε ).Let us show that u hε • g −1 h ⇀ ū weakly in H 1/2 (Γ) with h → 0. Since , we can extract a subsequence, still denoted by the same symbol, and an element ũ the state associated to u hε and consider an extension of y hε to Ω, still denoted by y hε , such that . Therefore, by taking a subsequence, we can assume that We are going to prove that ỹ is the state associated to ũ.According to the definition given in §2, we have to prove that the following identity holds (4.20) For a given w ∈ H 2 (Ω) ∩ H 1 0 (Ω) we take As in the proof of Theorem 4.2 we have Hence Since y hε is the state associated to u hε we have In view of (4.21), this identity can be rewritten as follows (4.24) Now we want to pass to the limit with h → 0 in (4.24).Using the compactness of the imbedding H 1/2 (Ω) ⊂ L 2 (Ω) it is easy to pass to the limit in the first two integrals, which are also the first two integrals of (4.20).Let us consider the right-hand side term of (4.24).Applying (4.23) we get (4.25) Now from Lemma 4.6 below we deduce (4.26) Finally, combining (4.25) and (4.26) we get Thus, we show that (4.20) follows from (4.24) by the limit passage.Now, using that u hε • g −1 h ⇀ ũ weakly in L 2 (Γ), y hε → ỹ strongly in L 2 (Ω), {y hε } h>0 is bounded in L ∞ (Ω) and the fact that u hε is a solution of (P hε ) and ū • g −1 h is feasible for problems (P hε ) we obtain Since ū is the unique solution of (P ε ), the above inequality leads to ũ = ū and J h (u hε ) → J(ū), which implies This identity and the weak convergence imply the strong convergence u hε • g −1 h → ū in L 2 (Γ).First consequence of this strong convergence is that the constraint u • g −1 h − ū L 2 (Γ) ≤ ε is not active at the controls u hε for h small enough.Therefore, u hε is a local minimum of problem (P h ) for every h small enough.Since { u hε L 2 (Γ h ) } is bounded, then we can argue as in the proof of Theorem 4.4 and conclude that Lemma 4.6.Let w ∈ H 2 (Ω) and v ∈ L 2 (Γ), then there exists a constant C > 0 independent of w and v such that Proof.First, we observe that (3.3) implies that On the other hand, From this identity we get, in view of (3.1), (3.2) and (4.6), Now, (4.28) and (4.29) imply (4.27). Error Estimates. 
In this section we assume that ūh is a local minimum of (P h ) such that ūh • g −1 h converges weakly in H 1/2 (Γ) to a local minimum ū of (P) with h → 0; see Theorems 4.4 and 4.5.The goal of this section is to derive an estimate for ū − ūh • g −1 h L 2 (Γ) , which is established in the following theorem.Theorem 5.1.Let ū and ūh be as above and let us denote by ȳ, ȳh and φ, φh the states and adjoint states associated to ū and ūh respectively.Let us assume that the second order sufficient optimality condition (2.14) is fulfilled for ū.Then there exists a constant C, independent of h such that the following estimates hold Before proving this theorem we provide a preliminary result.The proof of Lemma 5.2 is inspired by [5, Lemma 7.2], however there are some important differences. Lemma 5.2.Let µ > 0 be taken from Theorem 2.4.Then there exists h 0 > 0 such that Proof.By applying the mean value theorem there is an intermediate element Let us take Taking a subsequence, if necessary, we can assume that v h ⇀ v weakly in L 2 (Γ).We show that v belongs to the critical cone C ū defined in §2.First of all, observe that v satisfies the sign condition (2.13) since every element v h satisfies the same condition.Let us prove that v(x) = 0 if N ū(x)−∂ ν φ(x) = 0. To this end it is enough to establish the limit passage Indeed, from (5.4) we deduce, in view of (3.8), that which proves the required property.Let us show (5.4).By the strong convergence ūh • g −1 h → ū in L 2 (Γ) combined with (4.14) and (3.2), we have On the other hand, from Lemma 4.6 we get (5.6) Finally, from (3.3) we obtain (5.7) Thus, (5.4) follows from (5.5), (5.6) Taking into account that v L 2 (Γ) ≤ 1, the above inequality leads to lim h→0 J ′′ (û h )v 2 h ≥ min{µ, N } > 0, which proves the existence of h 0 > 0 such that From this inequality, by the definition of v h and (5.3), we deduce (5.2), which completes the proof.
5,429.2
2010-03-01T00:00:00.000
[ "Mathematics" ]
King penguins adjust their fine-scale travelling and foraging behaviours to spatial and diel changes in feeding opportunities
Central place foragers such as pelagic seabirds often travel large distances to reach profitable foraging areas. King penguins (Aptenodytes patagonicus) are well known for their large-scale foraging movements to the productive Antarctic Polar Front, though their fine-scale travelling and foraging characteristics remain unclear. Here, we investigated the horizontal movements and foraging patterns of king penguins to understand their fine-scale movement decisions during distant foraging trips. We attached multi-channel data loggers that record depth, speed, tri-axis acceleration, tri-axis magnetism, and environmental temperature of the penguins and obtained data (n = 8 birds) on their horizontal movement rates from reconstructed dive paths and their feeding attempts estimated from rapid changes in swim speed. During transit toward main foraging areas, penguins increased the time spent on shallow travelling dives (< 50 m) at night and around midday, and increased the time spent on deep foraging dives (≥ 50 m) during crepuscular hours. The horizontal movement rates during deep dives were negatively correlated with maximum dive depths, suggesting that foraging at greater depths is associated with a decreased horizontal travelling speed. Penguins concentrated their foraging efforts (more deep dives and higher rates of feeding attempts) at twilight during transit, when prey may be more accessible due to diel vertical migration, while they travelled rapidly at night and midday when prey may be difficult to detect and access. Such behavioural adjustments correspond to a movement strategy adopted by avian deep divers to travel long distances while feeding on prey exhibiting diel vertical migration.
Introduction
Breeding seabirds are central place foragers dependent on patchily distributed marine resources (Ashmole 1963). Pelagic species often need to commute a large distance over the course of several days between their breeding sites and productive foraging areas. Many seabird species that breed in the Southern Ocean travel several hundred kilometers to mesoscale oceanic features (e.g. fronts and eddies) characterized by physical properties such as a large gradient in sea-surface temperature (SST) where productivity and prey availability increase (Weimerskirch 2007;Bost et al. 2009). During these trips, flying seabirds such as albatrosses and petrels can quickly travel a large distance (e.g. 300-500 km per day; Shaffer et al. 2003) at a low energetic cost (Weimerskirch 2007). Owing to their ability to travel quickly and efficiently, flying seabirds reduce the time spent travelling and the energetic cost of moving to and from productive foraging areas. In contrast, flightless seabirds such as penguins can travel only 20-120 km per day (e.g. Bost et al. 1997;Hull;Trathan et al. 2008) due to their slower mode of locomotion (i.e. swimming). Swimming is slower and incurs a higher energetic cost per unit travel distance (Davis et al. 1989;Green et al. 2002) than flying or gliding (Maina 2000). Therefore, how penguins balance time spent travelling and foraging is an important constraint during long-distance journeys to and from productive foraging areas. Larger penguin species can dive to depths of 100 m to > 500 m (Wilson 1995;Charrassin et al. 2002;Wienecke et al.
2007) during foraging dives in search of prey (Charrassin and Bost 2001;Bost et al. 2009). Therefore, penguins are expected to develop movement strategies for long-distance travel with behavioural adjustments that differ considerably from those of flying seabirds. The king penguin (Aptenodytes patagonicus) is the second-largest, and second-most deep-diving penguin, after the emperor penguin (A. forsteri). King penguins can dive to depths of over 360 m for foraging (Charrassin et al. 2002;Pütz and Cherel 2005;Shiomi et al. 2016;Proud et al. 2021) and travel more than 400 km between their breeding sites and foraging zones during summer (Bost et al. 1997;Pütz 2002). During their long-distance travels, king penguins rapidly move toward their main foraging areas such as oceanic fronts and eddies (Cotté et al. 2007;Trathan et al. 2008;Scheffer et al. 2010;Bost et al. 2015). Although the large-scale foraging movements of king penguins according to mesoscale oceanic features are well known (Bost et al. 2009), the fine-scale movement characteristics of their travelling and foraging behaviours during long-distance trips remain unclear, due to the technical challenges of directly linking underwater movement paths of penguins and prey captures. Thus, it has been poorly documented how prey accessibility affects the local movement patterns of king penguins during their long-distance trips. King penguins mainly forage for small mesopelagic fish, myctophids, in summer (Cherel and Ridoux 1992). Myctophid fish exhibit diel vertical migration, being distributed at deep depths (i.e. depth > 100 m) during the daytime, and at shallower depths during the nighttime when foraging on zooplankton (Zaselsky et al. 1985;Perissinotto and McQuaid 1992;Collins et al. 2008). As visual foragers, king penguins need sufficient light intensity to detect and pursue their prey underwater (Wilson et al. 1993;Pütz et al. 1998;Bost et al. 2002). The level of light intensity during the daytime, even at the great depths that king penguins forage, is greater than that at shallow depths during the nighttime . This supports the hypothesis of the ambient light level while foraging, explaining why king penguins primarily forage at depths > 100 m during the daytime, and less at shallower depths during the nighttime . Furthermore, foraging during twilight may be much more effective than during the daytime for penguins if there is sufficient light to detect and pursue their prey (Pütz and Cherel 2005). These crepuscular hours indeed correspond to the periods of myctophid vertical migration from the upper water column to greater depths during dawn, and vice versa during dusk; therefore, myctophids may be much more accessible in the shallower depths during the twilight than during the daytime. This study aimed to understand the fine-scale movement decisions of king penguins during their foraging trips in relation to feeding opportunities. We investigated their 3D dive paths and the timing of feeding attempts using multichannel data loggers, which allowed us to investigate the relationship between the availability of prey as indicated by feeding behaviour and finer-scale movement patterns. We hypothesized that during long-distance travel, king penguins continuously adjust their fine-scale travelling and foraging behaviours in response to the large-scale spatial distribution of their prey associated with sea temperature and diel changes in feeding opportunities. 
Specifically, we predicted that (i) penguins concentrate their foraging behaviours during twilight, when feeding opportunities are expected to be higher, and (ii) they focus on travelling behaviours during the middle of the day and at nighttime, when feeding opportunities are expected to be lower. Fieldwork This field study was performed at Possession Island (46°25′S, 51°45′E), Crozet Archipelago, South Indian Ocean between late January and early March 2011. Nine chick-rearing king penguins were gently captured using a hooked pole before their departure for foraging trips. To record their diving behaviours, each bird was equipped with a multi-channel data logger (W1000L-3MPD3GT, Little Leonardo Ltd., Tokyo, 166 mm in length, 26 mm in diameter, weighing 132 g in air, i.e. 1.2% of the mean body mass of equipped king penguins), using waterproof tape (Tesa tape, 4651; Tesa), stainless steel cables (4.5 mm in width, STB-360S; Hellermann Tyton), and instant glue (Loctite, 401; Henkel, Germany). The multi-channel data logger recorded swim speed (m·s−1), depth (m), ambient temperature (°C), and tri-axis magnetism (nT) at a rate of 1 Hz, and tri-axis acceleration (m·s−2) at a rate of 8 Hz for two birds (K6 and K9) and 16 Hz for all other birds. Two of the nine penguins (K3 and K5) were equipped with GPS data loggers (CatTrack, re-customized with a 1500 mAh lithium-iron phosphate battery and a deep-depth casing; final size was ca. 60 × 40 × 25 mm, 50 g in air), together with the multi-channel data loggers (the GPS data were not used in this study due to the short battery lifetime and limited available data). One of the nine penguins (K8) was equipped with an oesophageal temperature logger in addition to the multi-channel data logger. The temperature sensor of this logger was set in the oesophagus and the body of the logger was set in the stomach (see methods in Bost et al. 2007). The deployment procedure for each penguin was completed within an hour. Attaching data loggers can affect diving behaviours in king penguins (Ropert-Coudert et al. 2000a). In previous studies and in this study, the foraging trip duration of king penguins with loggers was longer than that of penguins without loggers (e.g. Bost et al. 1997; Charrassin et al. 2002). Nevertheless, the range and mean of diving depths in our penguins (1-366 m, 52 ± 74 m) were similar to those reported in a previous study (3-343 m, 55 ± 16 m, mean ± standard deviation) that used smaller loggers (e.g. 98.5 × 20 × 10 mm, 30 g: Pütz and Cherel 2005). Thus, we expect the effect of our data loggers on diving capacity to be relatively limited. After their foraging trips, birds were recaptured upon their return to the colony, and the data loggers were retrieved. Diving behaviour Data analysis was performed using IGOR Pro 8 (WaveMetrics, USA) with the program package Ethographer (Sakamoto et al. 2009). Submersions with a depth ≥ 1 m that lasted ≥ 30 s were considered dives. Deep dives were defined as dives of ≥ 50 m depth (Charrassin et al. 2002; Pütz et al. 1998; Ropert-Coudert et al. 2000a; Hanuise et al. 2013) and shallow dives were defined as dives of < 50 m depth. It is well known that the foraging movements of the king penguins at Crozet are associated with areas of relatively low SST (4-5 °C) in summer (Park et al. 1993, 1998; Bost et al. 1997). We, therefore, calculated the SST experienced by the birds as the mean ambient temperature recorded by the loggers at 1-10 m depths (Charrassin and Bost 2001).
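To make the dive-processing conventions above concrete, the following Python sketch illustrates how a 1 Hz depth and temperature record could be segmented into dives (submersions ≥ 1 m lasting ≥ 30 s), classified as deep (≥ 50 m) or shallow (< 50 m), and paired with an SST estimate taken as the mean ambient temperature at 1-10 m depth. This is only an illustrative reimplementation under stated assumptions, not the authors' IGOR Pro/Ethographer workflow; the function names, thresholds passed as defaults, and the synthetic data are hypothetical.

```python
import numpy as np
import pandas as pd

def extract_dives(depth, temperature, fs=1.0,
                  min_depth=1.0, min_duration=30.0, deep_threshold=50.0):
    """Segment a 1 Hz depth trace into dives and summarise each one."""
    submerged = depth >= min_depth
    # Pad with False so every submerged run has a detectable start and end.
    padded = np.concatenate(([False], submerged, [False]))
    changes = np.flatnonzero(np.diff(padded.astype(int)))
    starts, ends = changes[0::2], changes[1::2]          # run boundaries (end exclusive)
    rows = []
    for s, e in zip(starts, ends):
        duration = (e - s) / fs
        if duration < min_duration:
            continue                                      # too short to count as a dive
        d, t = depth[s:e], temperature[s:e]
        surface_layer = (d >= 1.0) & (d <= 10.0)          # SST proxy: mean temp at 1-10 m
        rows.append({
            "start_idx": s,
            "duration_s": duration,
            "max_depth_m": d.max(),
            "dive_class": "deep" if d.max() >= deep_threshold else "shallow",
            "sst_c": t[surface_layer].mean() if surface_layer.any() else np.nan,
        })
    return pd.DataFrame(rows)

# Minimal usage example with synthetic data (one shallow and one deep dive).
rng = np.random.default_rng(0)
depth = np.concatenate([np.zeros(60),
                        np.linspace(0, 30, 120), np.linspace(30, 0, 120),    # shallow dive
                        np.zeros(60),
                        np.linspace(0, 150, 300), np.linspace(150, 0, 300),  # deep dive
                        np.zeros(60)])
temperature = 5.0 + rng.normal(0, 0.1, depth.size) - 0.01 * depth
print(extract_dives(depth, temperature))
```

Running the example yields one "shallow" and one "deep" row, each with its duration, maximum depth, and SST estimate.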
Diving data were analyzed with respect to the time of day (night, dawn, daylight, dusk). The times for sunset and sunrise were downloaded from the Hydrographic and Oceanographic Department of Japan Coast Guard website (www1.kaiho.mlit.go.jp/KOHO/automail/sun_form3.html), which are determined as the instant the uppermost portion of the sun is at the horizon when viewed from the position of the breeding colony. During the experimental period (27 January-01 March 2011), sunrise and sunset times were 05:17-06:07 and 19:23-20:14, respectively, in local time (GMT + 4 h). Dawn and dusk were defined as the 1 h before sunrise and the 1 h after sunset, respectively. The period between dawn and dusk was defined as "daytime," while "nighttime" was the period between the end of dusk and the onset of dawn. "Twilight" refers to the combined periods of dawn and dusk. We used sunrise and sunset times at the breeding colony for the corresponding dates because we do not know the accurate locations of the birds at sea. The sunrise and sunset times at the breeding colony could deviate from those at the foraging locations of the study birds, but the deviations would be less than 20 min within a typical latitudinal movement range (up to 5° south of the breeding colony). Therefore, we considered that the use of sunrise and sunset times at the breeding colony would have a minimal impact on the main results of this study. Swim speeds were calculated from the rotations of a propeller (rev·s−1) located at the front part of the multi-channel logger. To convert the number of rotations into absolute speeds (m·s−1), we set a constant value for each bird using the calibration method described by Shiomi et al. (2008). First, using the number of propeller rotations per second and the angle of the logger's longitudinal axis (relative to the horizontal plane), which was calculated from gravitational acceleration (Sato et al. 2003), the 'simulated vertical movement rate' (the penguin's depth change per second) was obtained. Then, the simulated vertical movement rate at each second was summed from the start to the end of each dive to reconstruct the simulated depth profiles. Then, the attachment angle of the logger relative to the penguin's body axis was estimated so as to make the simulated depth at the end of the dive zero (see Fig. 3 in Sato et al. 2003). Finally, the simulated depth profiles were compared with the actual depth profiles measured by a pressure sensor. We then chose the optimal constant value for each dive so that the simulated depth profiles were consistent with the actual depth profiles (see Fig. 1 in Shiomi et al. 2008). The constant values obtained for all dives of each bird were averaged, and the average value was used to calculate actual swimming speeds for all dives. After converting propeller rotations into swim speed, the mean swim speed within a dive was calculated to identify dives with speed recording failures. Speed data from some dives were compromised due to propeller failure (e.g. temporary intrusion of materials such as algae or feathers). These dives (3.2% of all dives), with a mean swim speed below a threshold of 1.0 m·s−1, were excluded from further analyses. We chose this threshold by visually checking a plot of average swim speed against stroke rate within a dive (Shiomi et al. 2016). Feeding behaviour To estimate the timings of the feeding activities of penguins during dives, the following approach was used.
First, the oesophageal temperature data obtained from one king penguin (K8) were used to measure feeding activities. A decrease in oesophageal temperature was considered a reliable index for ingestion events since king penguins feed on ectothermic prey (Hanuise et al. 2010). We defined a decrease of 0.06 °C·s−1 in oesophageal temperature as a feeding event, as described in previous work. Next, a steep increase and decrease in swim speeds were considered to be related to feeding events such as the pursuit and capture of prey (Wilson et al. 2002; Shiomi et al. 2016; Brisson-Curadeau et al. 2021). Steep increases in swim speeds (up to 4.0 m·s−1) that interrupt cruising speed (approximately 2.0 m·s−1) were often observed in the swim speed profiles and were termed "dashes" (Ropert-Coudert et al. 2000b). To quantify the dashes in the swim speeds of the other instrumented penguins, swim speed, U, was converted into accumulated values of acceleration following Ropert-Coudert et al. (2000b), where U′t is the function describing the increase of the swim speed as a function of time (t). The threshold value on acceleration peaks (dashes) was determined based on the number of dashes per dive to match the number of feeding events estimated from oesophageal temperature changes, and a value of ≥ 0.68 m·s−2 (y = 0.93x + 0.06, R2 = 0.75, n = 1442 dives, Fig. S1) was defined as a "feeding attempt". This value was applied to all the individuals and used for further analysis of feeding behaviour. Fig. 1 Changes in sea-surface temperature (SST), travelling, and foraging characteristics for a king penguin (K6) during a foraging trip; parameters include SST, depth, horizontal movement rate, and the number of feeding attempts per day. We note that the number of oesophageal temperature drops may underestimate the actual number of prey captures, i.e. a single decrease in oesophageal temperature may reflect up to several prey captures when penguins catch small myctophid prey (less than 2 g) (see Hanuise et al. 2010). Feeding dives were defined as dives in which at least one feeding attempt occurred. The number of feeding attempts per minute submersed was calculated as a feeding attempt rate (n·min−1) for each feeding dive. The feeding attempt depth refers to the depth where each feeding attempt occurred. Assuming that breeding king penguins travel to reach distant foraging areas at the Polar Front (PF), we divided their foraging trips into two phases: a "low-feeding phase" representing the transit period and a "high-feeding phase" representing periods of intense feeding in the main foraging areas, based on the total number of feeding attempts per day. Based on rapid changes in swim speed, we estimated that king penguins made 3-665 feeding attempts daily (Figs. S2, S3), with a higher number of feeding attempts in the middle of their foraging trips (Figs. 1, S3). The frequency distribution of the total number of feeding attempts per day was bimodal (Fig. S2). This enabled the separation between the two trip phases. We defined 'high-feeding phases' as the phases when the birds made more than 300 feeding attempts per day and 'low-feeding phases' as the phases when the birds made less than 300 feeding attempts per day (Figs. 1, S2). However, one bird (K8) made 286 feeding attempts on day 8 and 400 and 665 feeding attempts on days 7 and 9, respectively (Fig. S3). We, therefore, considered day 8 of K8 as a high-feeding phase (Fig. S3). We did not analyze inward and outward low-feeding phases separately (Bost et al.
1997) because the logger data mostly covered outward but not inward low-feeding phases due to the limitation of battery and memory capacity (Fig. 1; Table 1). Horizontal movement The 3D dive paths were reconstructed from the following data: tri-axis magnetism, tri-axis acceleration, swim speed, and depth, using the dead-reckoning method (Johnson and Tyack 2003; Shiomi et al. 2008). A tri-axis acceleration sensor in the logger recorded the following two components of acceleration: (i) dynamic acceleration related to propulsive activities, and (ii) static acceleration derived from gravity related to posture changes. First, we used a low-pass filter with a threshold value of 0.69 Hz to extract only static accelerations from the tri-axis accelerations. The threshold value of the filter was determined by analyzing the power spectral density of the raw acceleration data. We then used a freeware macro available online to reconstruct the 3D dive paths (Shiomi et al. 2010). Using this macro, 3D dive paths were estimated by the following procedure (Johnson and Tyack 2003; Shiomi et al. 2008). Firstly, the pitch and roll angles of the penguins were calculated from the tri-axis static accelerations (Johnson and Tyack 2003; Shiomi et al. 2008). Then, the heading was calculated from the pitch, roll, and tri-axis magnetism as the angle between the vector of the horizontal component of the total geomagnetic intensity and that of the longitudinal axis of the animal. Headings relative to true north were obtained by adding the declination of the earth's magnetism at the breeding colony, − 49.6° (the International Geomagnetic Reference Field model; https://www.ngdc.noaa.gov/geomag/calculators/magcalc.shtml#declination). Finally, the 3D dive paths were reconstructed from the heading, swim speed, and depth recorded every second with a dead-reckoning method. From the reconstructed 3D dive paths, the horizontal straight-line distance from the start to the end points of each dive was calculated. The horizontal straight-line distance was divided by the dive duration to calculate the horizontal movement rate (m·s−1) for each dive, which was used as an index of travelling behaviour (Fig. S4). The horizontal movement rate indicates the speed of horizontal movement per unit time. High values indicate that the penguins move rapidly in the horizontal dimension. We also calculated path straightness as an index of the tortuosity of dive paths. The horizontal straight-line distance was divided by the cumulative horizontal distance travelled from the start to the end points of each dive (Fig. S4). Straightness values range from 0 to 1, with higher values indicating that the path is more linear (Benhamou 2004). Data recording of K8 started 96 h after deployment. Solid lines represent the time range when the bio-logging data were obtained for each trip. Broken lines represent the time range with no bio-logging data. Closed and open circles represent the start and the end of the trips, respectively. The bold lines represent the period of the high-feeding phase (≥ 300 feeding attempts per day). Dates are given in the yyyy/mm/dd format. Statistical analysis Statistical analysis was performed using R software (R Core Team 2020). A Brunner-Munzel test was used to compare the median dive depths in deep dives (≥ 50 m), feeding attempt rates, feeding attempt depths, and horizontal movement rates between the two trip phases (low-feeding phase and high-feeding phase) for each bird using the R package lawstat (Gastwirth et al. 2020).
This statistical test was selected because the distributions of behavioural parameters were not normally distributed. A generalized linear mixed model (GLMM) with Poisson error distribution was fitted to determine the effect of the mean SST per day on the number of daily feeding attempts. We selected Poisson error distribution because the dependent variable (number of feeding attempts per day) was count data. We determined the effect of maximum dive depth in deep dives on the horizontal movement rate using a linear mixed model (LMM). In this LMM model, we included the trip phase (low-feeding or high-feeding phase) as a categorical fixed factor and 'maximum dive depth × trip phase' as an interaction term. We also determined the effect of the number of feeding attempts per dive on the path straightness using LMM. In this model, we included trip phase as a categorical fixed factor, and 'number of feeding attempts per dive × trip phase' as an interaction term. We included BirdID as a random factor in all GLMM and LMM models. We used the glmer and lmer function, for GLMM and LMM, respectively, from the R package lme4 (Bates et al. 2015). We obtained the p-value of GLMM and LMM models using the glht function from the R package multicomp (Hothorn et al. 2008). In addition, we calculated marginal R 2 and conditional R 2 using r.squaredGLMM function from the R package MuMIn (Barton 2020). The marginal R 2 and conditional R 2 components are interpreted as the variance explained by the fixed effects only and the entire model, respectively (Nakagawa and Schielzeth 2013). Values are shown as the mean ± the standard error of the mean. Data recovery We recaptured the equipped birds and retrieved the loggers from 8 of the 9 birds after their foraging trips (trip duration: 15.7 ± 1.6 d, range 10.7-22.5 d). One bird (K7) did not leave the colony after deployment and hence foraging data could not be obtained. The logger data from the 7 birds covered 27-88% of the whole trip due to onboard memory and battery limitations of the loggers, while those from another bird (K6) covered a whole trip (Table 1). For two of the 8 birds (K2 and K8), tri-axis magnetism data were not available because of technical problems and thus 3D dive paths could not be calculated. Diving behaviour The median dive depths in deep dives (≥ 50 m) of king penguins did not differ clearly between the trip phases (Table 2). Among 5 birds, the median dive depths were significantly shallower in the high-feeding phase in 2 birds, while the opposite pattern (significantly shallower dives in low-feeding phases) was observed in 2 other birds, with no significant differences between trip phases in another bird ( Table 2). As previously reported, diving depths showed a diurnal rhythm (Figs. 2, 3), with only shallow dives (< 50 m) occurring during the nighttime, and both deep and shallow dives occurring during the daytime. Dive depths gradually increased at dawn and decreased at dusk (Fig. 2). Notably, the general diel pattern of deep and shallow dives differed between the two phases (Figs. 2, 3). In the low-feeding phase, deep dives occurred throughout the daytime but were often interspersed with shallow dives (Figs. 2a, 3a). Deep dives with some feeding attempts occurred intensively for several hours after sunrise and several hours before sunset (Figs. 2a, 3a) during this phase. In contrast, deep dives continuously occurred throughout the daytime in the high-feeding phase (Figs. 2b, 3b). 
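The 'Feeding behaviour' and 'Horizontal movement' procedures described in the Methods can be pictured with the short Python sketch below: candidate feeding attempts are taken as peaks in the rate of change of the 1 Hz swim speed that exceed the 0.68 m·s−2 threshold reported above, and each dead-reckoned dive path yields a horizontal movement rate (straight-line start-to-end distance divided by dive duration) and a straightness index (straight-line distance divided by cumulative horizontal distance). This is an illustrative sketch with synthetic data and hypothetical function names, not the authors' code.

```python
import numpy as np

def feeding_attempts(speed, fs=1.0, accel_threshold=0.68):
    """Count candidate feeding attempts as peaks in the speed derivative."""
    accel = np.gradient(speed, 1.0 / fs)          # m s^-2, rate of change of swim speed
    above = accel >= accel_threshold
    # Count rising edges so one sustained dash is counted once.
    return int(np.count_nonzero(np.diff(above.astype(int)) == 1) + (1 if above[0] else 0))

def horizontal_metrics(heading_rad, pitch_rad, speed, fs=1.0):
    """Dead-reckon a dive path; return (horizontal movement rate m/s, straightness 0-1)."""
    dt = 1.0 / fs
    step = speed * np.cos(pitch_rad) * dt         # horizontal displacement per sample
    dx = step * np.sin(heading_rad)               # east component
    dy = step * np.cos(heading_rad)               # north component
    x, y = np.cumsum(dx), np.cumsum(dy)
    straight = np.hypot(x[-1], y[-1])             # start-to-end straight-line distance
    cumulative = np.sum(np.hypot(dx, dy))         # total horizontal distance swum
    duration = speed.size * dt
    rate = straight / duration
    straightness = straight / cumulative if cumulative > 0 else np.nan
    return rate, straightness

# Usage with a synthetic 5-minute dive: mostly straight travel plus two "dashes".
fs, n = 1.0, 300
speed = np.full(n, 2.0); speed[100:104] = 3.8; speed[200:204] = 3.5
heading = np.deg2rad(np.full(n, 45.0) + np.cumsum(np.random.default_rng(1).normal(0, 1, n)))
pitch = np.deg2rad(np.concatenate([np.full(n // 2, -30.0), np.full(n - n // 2, 30.0)]))

print("feeding attempts:", feeding_attempts(speed, fs))
print("rate, straightness:", horizontal_metrics(heading, pitch, speed, fs))
```

The example reports two detected dashes and a movement rate lower than the cruising speed, as expected when a penguin descends and ascends at an angle rather than swimming horizontally.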
Feeding behaviour The daily number of feeding attempts estimated from swim speed data was higher in colder water masses (Fig. 1), with a significant effect of SST on feeding activity (GLMM: y = exp(−0.48x + 8.28), n = 57 days from 8 birds, p < 0.001, random effect = BirdID, Fig. S5). This indicated that penguins fed actively when they reached the low SST area (Fig. 1), presumably near or in the PF area. The median feeding attempt depths (depths of feeding attempts) did not differ clearly between the trip phases: the median feeding attempt depths of 2 of the 5 birds were shallower in the high-feeding phase than in the low-feeding phase, and deeper in the other birds (Table 2). In the high-feeding phase, the feeding attempt depths during all dives were concentrated at 100-140 m (Figs. 4a, S6). In contrast, in the low-feeding phase, the feeding attempt depths during deep dives showed more dispersed distributions (range of coefficient of variation: 30-45% in the low-feeding phase and 25-31% in the high-feeding phase, Figs. 4a, S6). In the high-feeding phases, the feeding attempt rates during feeding dives were higher during the daytime than during the nighttime (Fig. 5b). In contrast, in the low-feeding phase, the feeding attempt rates during feeding dives were lower during the middle of the daytime than during other daytime hours and during the nighttime (Fig. 5a). Fig. 2 Typical diel dive pattern of a king penguin (K6) in the low-feeding (a) and high-feeding (b) phases during a foraging trip. Unshaded, light gray, and dark gray zones indicate daytime, twilight, and nighttime, respectively. Orange circles indicate the occurrence of feeding attempts. Horizontal movement The median horizontal movement rates during all dives were higher in the low-feeding phase than in the high-feeding phase (Fig. 1; Table 2). Shallow dives showed a higher horizontal movement rate than deep dives (1.62 ± 0.0002 m·s−1 in the shallow dives vs. 0.98 ± 0.01 m·s−1 in the deep dives, p < 0.001). During deep dives, the horizontal movement rates decreased with an increase in maximum dive depths (LMM: y = −0.03x + 1.1, n = 3498 dives from six birds, marginal R2 = 0.35, conditional R2 = 0.46, p < 0.001, random effect = BirdID, Figs. 6a, S7). In addition, the horizontal movement rate in deep dives (≥ 50 m) for a given depth was higher in the low-feeding phase than in the high-feeding phase, and a significant 'maximum dive depth × trip phase' interaction effect was observed (p < 0.01, Fig. 6a). Path straightness in deep dives (≥ 50 m) decreased with an increase in the number of feeding attempts per dive (LMM: y = −0.04x + 0.83, n = 2714 dives from three birds, marginal R2 = 0.48, conditional R2 = 0.5, p < 0.001, random effect = BirdID, Figs. 7, S8). In addition, the path straightness in deep dives (≥ 50 m) for a given number of feeding attempts was higher in the low-feeding phase than in the high-feeding phase, and a significant 'number of feeding attempts × trip phase' interaction effect was observed (p < 0.001, Fig. 7). Discussion This study examined the fine-scale patterns in the travelling and foraging behaviours of a deep-diving avian predator based on the rapid changes in swim speed and horizontal movements estimated by 3D dive path reconstruction.
This novel approach shows that deep avian divers such as king penguins adjust their fine-scale travelling and feeding patterns in relation to the distribution of their prey at different spatial and temporal scales. In this section, we discuss how king penguins adjust their local travelling and foraging behaviours in relation to (1) the large-scale distribution of their prey associated with sea temperature changes, and (2) diel changes in feeding opportunities. Fig. 4 Comparison of the vertical distribution of feeding attempts between low-feeding and high-feeding phases of foraging trips. a Frequency distribution of the feeding attempt depths for a king penguin (K6) during the low-feeding (black) and high-feeding (red) phases. b Feeding attempt depth as a function of time of the day for a king penguin (K6) during the low-feeding (black coloured circle) and the high-feeding (red coloured circle) phases. Unshaded, light gray, and dark gray zones indicate daytime, twilight, and nighttime, respectively. Fig. 5 Mean feeding attempt rates and the standard error during feeding dives (dives with at least one feeding attempt) per hour for king penguins as a function of time of day during the low-feeding (a, n = 8 birds) and high-feeding (b, n = 5 birds) phases. Low-feeding phase vs. high-feeding phase of trips King penguins are well known to rely on prey resources present in distant but predictable frontal areas such as the PF area, which are 300-500 km away from their colonies (Bost et al. 2015). They feed mainly on myctophids, which are abundant in the PF area and distributed near the water surface at nighttime but at deeper depths during the daytime (Sabourenkov 1991; Pakhomov et al. 1994). The prey ingestion rate of king penguins, based on oesophageal temperature and dive profile records (i.e. 'wiggles'), is higher in the PF area (Bost et al. 2015). Our results confirmed a higher feeding activity in the middle of the trips, where the penguins experienced low SST, corresponding to the PF area (Table 2; Fig. 1). King penguins typically dive to and below thermocline depths, where myctophids are thought to be distributed during the daytime, and their dives become shallower as they travel toward the PF area as a response to the relatively shallower thermocline (Charrassin and Bost 2001). Feeding attempt depths are considered to reflect the depth distribution of available prey for a given dive (Fig. 4b). A higher concentration of feeding attempt depths in particular depth zones (100-140 m) in the high-feeding phase than in the low-feeding phase (Figs. 4, S6) suggests that myctophids might be more aggregated at these depths. Our new results reinforce the hypothesis that prey is more accessible and predictable for king penguins in the PF area (Bost et al. 2009). The travelling behaviour of king penguins appears to reflect the high prey availability in the PF area. Higher horizontal movement rates in the low-feeding phase than in the high-feeding phase (Table 2; Fig. 1) suggest that penguins swam rapidly in the horizontal dimension, favouring a rapid transit to the PF area. A pattern of decreasing path straightness with an increasing number of feeding attempts per dive (Figs. 7, S8) suggests that penguins performed area-restricted search, with increased search effort associated with prey encounters (Kareiva and Odell 1987). Higher path straightness adjusted for the number of feeding attempts per dive (Figs. 7, S8)
and higher horizontal movement rates during deep dives (≥ 50 m) for a given depth (Figs. 6a, S7) in the low-feeding phase than in the high-feeding phase suggest that deep dives in the low-feeding phase may be intended for both travelling and opportunistic foraging (Fig. 6b-e). In the high-feeding phase, relatively low horizontal movement rates and path straightness reflected the increased foraging activity in the profitable PF area. Lower horizontal movement rates are likely to be the result of steeper diving body angles and lower straightness of the dive paths of penguins (Fig. 6b-e). These results are in accordance with the previous findings that penguins use steeper diving body angles during the descent and ascent phases of their dives when they experience relatively high feeding success, even for comparable maximum dive depths (Sato et al. 2004; Hanuise et al. 2013). Thus, our results suggest that king penguins may shift their prey search behaviour in relation to the high prey availability in the more distant but profitable PF area. Fig. 6 Comparison of horizontal movements between low-feeding and high-feeding phases for a king penguin (K6). a Relationship between horizontal movement rates and maximum dive depth during deep dives (≥ 50 m) for a king penguin (K6). Deep dives during the low-feeding (black coloured circle) and high-feeding (red coloured circle) phases are shown separately, with linear regression lines (low-feeding phase: y = −0.006x + 1.5, high-feeding phase: y = −0.003x + 1.6). Statistical analysis was conducted with a linear mixed effect model (see 'Result'). Gray circles indicate shallow dives. b-e Examples of horizontal movement paths (b, c) and depth profiles (d, e) for dives during the low-feeding (b, d) and high-feeding (c, e) phases. Closed and open circles represent the start and end of the dives, respectively. Arrows indicate travelling direction. Orange circles indicate the occurrence of feeding attempts. Diel changes in foraging and travelling behaviour Myctophids perform diel vertical migration (Zaselsky et al. 1985; Perissinotto and McQuaid 1992; Collins et al. 2008), which results in diel changes in the feeding opportunities for visual foragers such as king penguins (Wilson et al. 1993; Bost et al. 2002). Higher feeding attempt rates around dawn and/or dusk than during the nighttime and around midday (Fig. 5) suggest that feeding opportunities are high for king penguins during twilight, when myctophids transit between deep depths and the near-surface water column (Pütz and Cherel 2005; Scheffer et al. 2010). During the nighttime, penguins made few feeding attempts, as has been previously reported (Pütz and Bost 1994; Pütz et al. 1998; Shiomi et al. 2016). This has been attributed to the low light intensity available during nighttime for foraging penguins (Wilson et al. 1993; Pütz et al. 1998; Bost et al. 2002). During the daytime, penguins made more feeding attempts at deep depths where myctophids are thought to be distributed (Pütz and Bost 1994; Bost et al. 2002). Lower feeding attempt rates during the daytime in the low-feeding phase than in the high-feeding phase (Fig. 5) probably reflect that the thermocline depths where myctophids are distributed may be deeper and more difficult for penguins to reach in the low-feeding phases, presumably before penguins reach the PF area (Charrassin and Bost 2001). Thus, feeding opportunities are likely to be higher during dawn and dusk because prey are accessible at shallower depths with sufficient light intensity for visual detection of prey (Piersma et al.
1988; Zimmer et al. 2008; Regular et al. 2010). Furthermore, during these periods, penguins spend more time on relatively deep dives (Figs. 2, 3), suggesting that king penguins concentrate their foraging efforts when feeding opportunities are high, such as around dawn and dusk. Overall, the travelling behaviour of king penguins appears to reflect the diel changes in feeding opportunities. At nighttime, penguins increased the time spent on shallow dives per hour (Fig. 3), suggesting that penguins may concentrate their travelling behaviour at nighttime, when feeding opportunities are relatively low due to the low light intensity for detecting their prey. During the daytime, penguins spent less time on shallow dives in the high-feeding phases (Fig. 3b), suggesting that they performed less travelling in the PF area (Cotté et al. 2007). In contrast, in the low-feeding phase, the time spent on shallow dives per hour tended to be high after dawn and during the middle of the daytime (Fig. 3a). This suggests that penguins may increase their travelling time when feeding opportunities are low (Fig. 5a). During deep dives (≥ 50 m), the horizontal movement rate decreased with increased dive depth (Figs. 6a, S7), indicating that foraging at greater depths is associated with a decreased horizontal travelling speed when penguins have only a limited time to prospect at greater depth. These results showed that king penguins travelling to and from the PF area made behavioural adjustments by increasing their travelling behaviour during periods of low feeding opportunities, such as during the nighttime and the middle of the daytime, due to the possible difficulty in detecting and accessing prey. Fig. 7 Relationship between path straightness and the number of feeding attempts per dive for a king penguin (K6). Dives during the low-feeding (black coloured circle) and high-feeding (red coloured circle) phases are shown separately, with linear regression lines (low-feeding phase: y = −0.04x + 0.88, high-feeding phase: y = −0.02x + 0.66). Statistical analysis was conducted with a general linear mixed effect model (see 'Result'). Conclusions This study provides new insights into the fine-scale patterns of travelling and foraging of an avian deep diver, the king penguin, during foraging trips using multiple types of bio-logging measurements. At the scale of a foraging trip, king penguins appear to modify their travelling and foraging behaviours in relation to the large-scale spatial distribution of their prey. Thus, king penguins maximise horizontal travel and opportunistic foraging on the way to their main foraging areas and then feed intensively near or in the PF area throughout the daytime. At the diel scale, king penguins travelling to and from the PF appear to adjust travelling and foraging in relation to the diel changes in feeding opportunities associated with the diel vertical migration of their prey. King penguins concentrate their foraging efforts during dawn and dusk, when feeding opportunities are likely to be high because prey is much more accessible at shallow depths. The behavioural adjustments reported here might be an important movement strategy adopted by king penguins to travel long distances while foraging on prey exhibiting diel vertical migration. Diving predators can search for their prey in the three dimensions of the water column when travelling toward profitable foraging areas (Bost et al. 2009).
Therefore, diving seabirds could take advantage of spatiotemporal changes in feeding opportunities underwater more easily than flying seabirds, which are surface feeders. Such abilities might compensate for the higher energy costs of swimming and may have shaped a characteristic movement strategy in diving seabirds. Concurrent measurements of feeding activities and fine-scale movements are a promising tool to better understand the dynamic movement decisions of long-ranging diving predators.
8,687
2023-01-24T00:00:00.000
[ "Environmental Science", "Biology" ]
FTIR and GCMS Analysis of Bioactive Phytocompounds in Methanolic Leaf Extract of Cassia alata The methanolic extract of the plant Cassia alata was prepared by using a Soxhlet apparatus. FTIR and GCMS analyses were done on this plant extract to find out the bioactive phytocompounds. The FTIR results of this plant extract showed 21 peaks, indicating the presence of functional groups such as sulfates, sulfonamides, sulfones, sulfonyl chlorides, sulfates, sulfonamides, alkanes, aromatics, aromatics, alkenes, esters, alkenes, ketenes, isocyanates, isothiocyanates, acetylene, nitrile, phosphine, phosphine, aldehyde, alkane, amide, alcohol and alcohol. The GCMS results showed 13 peaks. The retention times (RT) of these thirteen peaks indicate the presence of compounds such as 1-Butanol, 3-methyl-; 1,6-Anhydro-.beta.-D-glucopyranose (levoglucosan); 3-O-Methyl-d-glucose; Oxirane; 10-Methyl-E-11-tridecen-1-ol propionate; l-(+)-Ascorbic acid 2,6-dihexadecanoate; (R)-(-)-14-Methyl-8-hexadecyn-1-ol; Oleic Acid; Vitamin E acetate; and 1,2-Bis(trimethylsilyl)benzene. INTRODUCTION Plants play a vital role in our lives. In addition to this, they have been used to cure various human diseases, treat ailments, and improve the health of affected organs from time immemorial (Sofowora, 1993). About 80% of the world's population relies solely or largely on traditional remedies for their healthcare needs. Today, about 70,000 to 80,000 plant species are used for medicinal or aromatic purposes globally. This is because of some biologically active and naturally occurring phytochemicals present in the various parts of plants. Plants produce these chemical compounds as part of their normal metabolic activities to protect their own cells from environmental hazards such as pollution, stress, drought, UV exposure and pathogenic attack (Gibson et al., 1998; Mathai et al., 2000; Izhaki, 2002), which provide health benefits for humans beyond those attributed to macronutrients and micronutrients (Hasler and Blumberg, 1999). Even today, a large number of people in the developing countries use plants and plant-based preparations to cure various diseases through their inherited traditional knowledge, because they believe that herbal medicines are safer than synthetic medicines since the phytochemicals in the plant extract target the biochemical pathway (Zaidan et al., 2005). In addition to this, the side effects associated with synthetic drugs continue to make researchers look for natural remedies which are safe and effective (Gijtenbeek et al., 1999; Johnson and William, 2002). A single plant may contain a great number of bioactive phytocompounds, and a combination of plants even more. This complexity is one of the most important challenges for phytoscientists in identifying and sorting out which bioactive compound has the potential to cure which disease. Without screening the active compounds from a particular plant, a researcher cannot develop a new medicinal drug to cure a particular disease. Hence, in the present investigation, the plant Cassia alata has been selected to screen the possible bioactive compounds by the GCMS method.
Plant description Cassia alata belongs to the family Fabaceae, subfamily Caesalpinioideae, and is commonly known as King of the forest, emperor's candlesticks, candle bush, candelabra bush, Christmas, Ringworm Bush, Dadrughna, Dadmardan, Dadmari (Daad = Ringworm), Desay, Fleur, Impetigo bush, Ringworm tree, Candelabra bush, Guajava, Empress candle plant, Seven Golden Candlestick, and Christmas candle. It is a medium-growing, soft-wooded medicinal shrub reaching up to a height of 6-8 feet. This plant grows in open wastelands near watery places. It is one of the oldest known medicinal plants of Central America and is used worldwide, particularly in Asia-Pacific countries, where people use it for curing skin diseases, worms, fever, insect bites, ringworm, goiter, hookworm infestation, sexually transmitted diseases, constipation and other skin conditions, blemishes, scabies, and other fungal skin infections. Common names in various countries include Dadmurdan. Distribution This plant is mostly found in tropical and subtropical regions, i.e., India, Pakistan, Burma, Sri Lanka, the Philippines and most of the African countries. Habitats They grow in rather open vegetation such as roadsides, river banks, rainforest edges, lake shores, pond and ditch margins, open forest, orchards and around villages, at elevations up to 1,400 metres, occasionally 2,100 metres. It is a fast-growing but short-lived plant. This plant is mostly found in moister areas in the tropics. It is reported to tolerate a mean annual rainfall of 600-4,300 mm and average yearly temperatures of 15-30 °C. It grows well in well-drained, heavy or sandy, acid to slightly alkaline soil in a sunny position, as well as in wastelands and flood plains, and it is highly adaptable, tolerating both drought and waterlogged soils. Morphology This plant is an erect shrub, able to grow up to 3-4 m tall. The foliage contains pinnately compound leaves with 6-12 pairs of leaflets (30-60 cm long). Leaflets are oblong, smooth and thinly leathery (6-15 cm long, 3.5-7.5 cm wide) in appearance. They have a rounded tip with a slight indentation in the middle. The flowers are arranged in a vertical column and bloom from the base of the column. The inflorescence is of the raceme type and resembles a lit, yellow candle, because the flowers at the base are yellow, while the unopened flower buds at the top are covered by orange bracts. Fruits are in the form of winged pods with a dark purple to black colour, smooth and 4-sided. Each pod contains 50-60 flattened, triangular to squarish seeds. Medicinal uses In rural Tamil Nadu, this plant is used to treat various skin diseases caused by bacterial and fungal infections and by insect bites. Cassia alata is widely used as a traditional medicine in India and Southeast Asia (Reezal et al., 2002). This plant is reported to possess insecticidal, anti-inflammatory, hydragogue, sudorific, diuretic and pesticidal properties. Fresh leaf juice is used for ringworm, snakebite, scorpion bite, skin diseases, impetigo, syphilis sores, itching, mycosis (washerman's itch), herpes and eczema.
Roots, leaves and flowers of this plant possess many biological properties such as antibacterial, antifungal, anti-inflammatory, antitumor and expectorant activities, and are also useful in urinary tract problems (Quattrocchi, 2012). The leaves have been reported to be useful in the treatment of convulsions, gonorrhoea, heart failure, abdominal pains and oedema, and also as a purgative (Ogunti and Elujoba, 1993). Cassia alata has been reported to contain anthraquinones, and the methanol fractions were found to be active against Aspergillus flavus (Ogunti and Elujoba, 1993; Owoyale et al., 2005). MATERIALS AND METHOD Shade-dried leaves were ground well by using a mixer grinder to get a fine powder. This powder was stored in an airtight container for later usage. 25 grams of this powder was packed in the thimble of the Soxhlet extractor and methanol was loaded into the distillation flask. The leaf extract obtained in the flask was finally collected after the completion of the Soxhlet extraction. This methanol extract was taken for the FTIR and GCMS analysis. FTIR and GCMS Analysis FTIR analysis was done on the instrument FT/IR-6300 type, S.No-A021261024, BMDLABS Company, with standard light sources, using a TGS detector at a resolution of 4 cm−1. The GCMS analysis was done by GCMS-Perkin Elmer. The bioactive compounds of the methanol extract of leaves of the plant Cassia alata were traced out by an FTIR spectrophotometer (Thermo electron Scientific). A total of 21 peaks were obtained from the FTIR spectrum, while the GC-MS chromatogram of the methanol leaf extract of Cassia alata (Figure 1) showed 13 peaks, indicating the presence of thirteen compounds with retention times ranging between 2.72 and 34.54 (Figure 1). The active principles in the methanol leaf extract of Cassia alata were confirmed based on retention time (RT), molecular formula, molecular weight (MW) and structures. The phytochemical compounds with their retention time (RT), molecular formula, molecular weight (MW) and structures are presented in Table 1. The first compound identified, with the shortest retention time (4.181 min), was 1-Butanol, 3-methyl-, whereas 1,2-Bis(trimethylsilyl)benzene was the last compound, which took the longest retention time. The GC-MS analysis results of C. alata leaves are presented in Table 2. All thirteen compounds possess many biological properties. For instance, Oleic acid shows some beneficial effects on cancer, autoimmune and inflammatory diseases, besides its ability to facilitate wound healing, and it may improve the immune response associated with a more successful elimination of pathogens such as bacteria and fungi by interfering with many components of the immune system such as macrophages, lymphocytes and neutrophils (Sales-Campos et al., 2013). Vitamin E acetate, at retention time 14.709 (RT), is a potent antioxidant compound. It exhibits antioxidant activity by virtue of the phenolic hydrogen on the 2H-1-benzopyran-6-ol nucleus. It has four methyl groups on the 6-chromanol nucleus. The natural d form of alpha-tocopherol is more active than its synthetic dl-alpha-tocopherol racemic mixture. 10-Methyl-E-11-tridecen-1-ol propionate, obtained at retention time 10.607 (RT), was also identified in a GCMS study of Premna serratifolia by Vasantha and Maruthasalam (2015) and has no remarkable bioactivity.
l-(+)-Ascorbic acid 2,6-dihexadecanoate, observed at retention time 10.176 (RT), shows antioxidant, antiscorbutic, anti-inflammatory, antinociceptive, anti-mutagenic and wound-healing properties (Vasthi Gnana Rani and Murugaiah, 2015). The bioactive compound oxirane, observed at retention times 9.669, 9.783 and 9.841, exhibited bactericidal, fungicidal and sporicidal activities. It is also used as an effective antimicrobial agent to control a variety of microorganisms, including viruses, and is also used as a sterilizing agent (PubChem, 2004). 1,6-Anhydrohexopyranoses, observed at 7.672 min, have proven to be valuable synthons for the preparation of biologically important and structurally diverse products such as rifamycin S, indanomycin, thromboxane B2, (+)-biotin, tetrodotoxin, quinone, macrolide antibiotics and modified sugars. The phytocompound 1,2-Bis(trimethylsilyl)benzene, at retention times 15.651 and 16.055 min, has antioxidant, antimicrobial, anticancerous and antitumorous activity (Alok Prakash and Suneetha, 2014).
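The identification step used above, assigning each chromatogram peak to a compound on the basis of retention time together with molecular formula and molecular weight, can be pictured as a lookup against a reference table. The Python sketch below is purely illustrative: the reference entries, any retention-time values not quoted in the text, and the 0.1 min matching tolerance are assumptions, and this is not the instrument software actually used in the study.

```python
# Hypothetical illustration of matching GC-MS peaks to reference compounds by
# retention time (RT); oleic acid's RT and the tolerance are assumed values.
REFERENCE = [
    {"name": "1-Butanol, 3-methyl-", "rt_min": 4.181, "mw": 88.15},
    {"name": "l-(+)-Ascorbic acid 2,6-dihexadecanoate", "rt_min": 10.176, "mw": 652.9},
    {"name": "Oleic acid", "rt_min": 13.0, "mw": 282.47},
    {"name": "Vitamin E acetate", "rt_min": 14.709, "mw": 472.75},
    {"name": "1,2-Bis(trimethylsilyl)benzene", "rt_min": 16.055, "mw": 222.5},
]

def identify_peaks(observed_rts, reference=REFERENCE, tol_min=0.1):
    """Assign each observed retention time to the closest reference compound
    within tol_min minutes; unmatched peaks are labelled 'unknown'."""
    assignments = []
    for rt in observed_rts:
        best = min(reference, key=lambda entry: abs(entry["rt_min"] - rt))
        if abs(best["rt_min"] - rt) <= tol_min:
            assignments.append((rt, best["name"], best["mw"]))
        else:
            assignments.append((rt, "unknown", None))
    return assignments

for rt, name, mw in identify_peaks([4.18, 10.18, 14.71, 20.00]):
    print(f"RT {rt:6.2f} min -> {name} (MW {mw})")
```

In practice the instrument library match also uses the mass spectrum itself, so this tolerance-based lookup should be read only as a schematic of the table-building step.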
2,142
2018-03-25T00:00:00.000
[ "Chemistry", "Agricultural And Food Sciences" ]
“The moderating role of firm size and interest rate in capital structure of the firms: selected sample from sugar sector of Pakistan” The selection of financing is a top priority for businesses, particularly in short- and long-term investment decisions. Mixing debt and equity leads to decisions on the financial structure for businesses. This research analyzes the moderating role of company size and the interest rate in the capital structure over six years (2013–2018) for 29 listed Pakistani enterprises operating in the sugar market. This research employed static panel analysis and dynamic panel analysis with linear and nonlinear regression methods. The capital structure, measured by the debt to capital ratio (non-current liabilities plus current liabilities to capital), was the dependent variable. Independent variables were profitability, firm size, tangibility, Non-Debt Tax Shield, liquidity, exchange rate, and interest rate. INTRODUCTION The sugar industry of Pakistan contributes a significant portion of the overall economy. Sugarcane is Pakistan's fourth largest cultivated cash crop. As an agriculture-based industry, it provides employment for the rural landless population and greatly impacts the country's economy. There has been a renewed interest worldwide in identifying the factors affecting optimum capital structure decisions in manufacturing sectors. The main goal of enterprises is to maximize shareholders' wealth using mixed financing sources, including equity capital, retained profits, issuance of ordinary shares, preferred shares, and debt capital. Debt capital is issued by banks, individuals, financial institutions, and insurance firms. Borrowing companies may take advantage of the tax shield from debt resources if they have operating profits, but debt raises bankruptcy risks. The risk of bankruptcy involves direct and indirect costs. Indirect costs emerge due to shifts in corporate practices concerning long-term investments. Consequently, the potential advantages of leverage are reduced by bankruptcy costs, which make highly leveraged companies appear exceedingly risky. Modigliani and Miller (1958) claimed that a company's investment strategy should be focused solely on those factors which would improve the company's net worth or profitability. They also described a more sustainable capital structure and indicated that the relationship between leverage and firm value is negligible. Ibrahim and Lau (2019) studied the determinants of financial leverage and suggested that tangibility has a significant positive association with the debt ratio, while liquidity and profitability show a significant negative association. In the Pakistani context, which comprises a growing sugar sector, the key objectives of this study are to contribute to and extend the literature by exploring the relationship between macroeconomic factors and capital structure. This study is designed as follows. The following segment reviews the literature. Thereafter, the methodology and the proposed theoretical model are presented, the empirical results are analyzed, and a conclusion is drawn based on the findings. Modigliani and Miller (1958) clarified the capital structure value, although this assumption is only effective in perfect market conditions where all shareholders have free access to financial data, there are zero transaction costs, and there is no tax difference between profits and capital gains. Although several studies have been conducted on the determinants that define the capital structure, Sari and Sedana (2020) examined the relationship between profitability and capital structure. They revealed a clear positive
association between the variables of profitability and the capital structure of the samples taken. Chen and Duchin (2019) noted that operating leverage showed a statistically negative association between profitability and leverage. Operating leverage decreases optimal financial leverage and enhances profitability. They demonstrated these outcomes using the capital-labor ratio of US enterprises. An. Chakrabarti and Ah. Chakrabarti (2019) revealed a significant negative association between the debt ratio and profitability. Shah and Khan (2017) noted that the leverage ratio is inversely associated with the current ratio and profitability. However, the leverage ratio is favorably influenced by tangibility, firm size, and Non-Debt Tax Shield. The profitability effect is substantially weak, while the impact of tangibility, liquidity, Non-Debt Tax Shield, and size is highly significant. As suggested by Nasution, Siregar, and Panggabean (2017), tangible assets have a positive effect on the capital structure, while Non-Debt Tax Shield and profitability have a negative impact on the capital structure. Besides, these factors together have a major impact on the capital structure. Almendros and Mira (2016) revealed that financial distress has a significant and positive association with Non-Debt Tax Shield. Goh, Tai, Rasli, Tan, and Zakuan (2018) performed research on the capital structure and its factors in Malaysian firms from 2011 to 2014 and revealed that firms' Non-Debt Tax Shield and profitability are negatively related to firm debt. Lei (2020) also disclosed an important positive relationship between corporate capital structure and Non-Debt Tax Shield. LITERATURE REVIEW Vo (2017) suggested that the coefficients are significant and negative for short-term firm leverage. According to Eysimkele and Koori (2019), a study of debt financing and efficiency of the agricultural companies listed on the Nairobi Securities Exchange, Kenya, revealed a negative relationship between long-term debt and profits, while the relationship was stable for short- and medium-term debt. A further negative association is also observed for size, liquidity, and short-term debt. Ibrahim (2017) provides evidence that liquidity, size, profitability, and leverage have a significant negative impact on firm value. Céspedes, Chang, and Velasco (2017) suggested that the real exchange rate could affect credit constraints and, in turn, the leverage ratio. As per the study, uncertainty in the exchange rate influences foreign trade in the long run and seems to have no impact in the short term (Nguyen & Do, 2020). Submitter, Sari, Siska, and Sulastri (2019) studied the moderating effect of size and revealed that size offers a moderating influence on the link between profitability, tangibility, liquidity, and capital structure efficiency, and this moderation is significant in large corporations. L. Chen and S. Chen (2011) suggested that firm size is the moderator variable and affects the relationship between leverage and profitability. The moderating effect occurs in the first stage. Mirza (2015) noted that firm size positively affects firm leverage. Muigai and Muriithi (2017) studied the capital structure and indicated that firm size has a major moderating impact on the combination of financial instability and corporate capital structure.
Al-Hunnayan (2020) found that leverage relates positively to the company's size and is negatively linked to its competitiveness and tangibility. Li, Krause, Qin, Zhang, Zhu, Lin, and Xu (2018) examined interest rate regulations and earnings transparency. The findings of the study indicate that transparency of earnings increases firm leverage, and the additional analysis indicated that such a shock operates by raising the cost of debt financing, although information disclosure can reduce the effect of the interest rate on the capital structure. Guo and Zhao (2017) examined the capital structure determinants and showed that size and tangibility are positively related to them, while Non-Debt Tax Shield and profitability have a negative impact on the determinants. Yazdanfar, Öhman, and Homayoun (2019) noted that profitability, tangibility, size, and financial crises explained the changes in the debt ratio. Rao, Khursheed, and Mustafa (2020) also explained that tangibility is significant for borrowing, while firm size is negatively associated with the debt ratio. Iqbal and Usman (2018) suggested that a high amount of debt and high interest rates decrease equity value. Leland (1994) examined capital structure and debt values and revealed that the debt ratio is explicitly linked with the interest rate. Staking and Babbel (1995) focused on studying the role of capital structure and the interest rate and noted that the interest rate and debt have opposite effects. Bokpin (2009) examined the effect of macroeconomic variables on capital structure using a panel data seemingly unrelated regression approach for 34 emerging market countries. He indicated that the interest rate encourages businesses to replace long-term debt with short-term debt, although a significant result was not obtained. HYPOTHESES OF THE STUDY Based on the previously discussed aims, the following hypotheses concerning the sugar sector are described: H 1 : There is a positive relationship between debt to capital ratio and profitability of the Pakistani sugar sector. H 2 : The interest rate has a significant moderating influence on the relationship between debt to capital ratio and profitability of sugar firms. H 3 : The firm size has a significant moderating influence on the relationship between debt to capital ratio and profitability of sugar firms. H 4 : There is a positive correlation between debt to capital ratio and liquidity of the Pakistani sugar sector. H 5 : There is a significant relationship between debt to capital ratio and the Non-Debt Tax Shield of the Pakistani sugar sector. H 6 : There is a positive correlation between the debt to capital ratio and the exchange rate of the Pakistani sugar sector. METHODOLOGY This section of the study describes the analytical techniques for examining patterns, variables, the development of research assumptions, and the interdependence of the interest rate and firm size with the capital structure. Data and sample The study sample included 29 registered Pakistani businesses working in the sugar sector. First, a single sector (sugar) was taken to avoid spurious findings in some situations, such as the impact of the interest rate on the firms' capital structure formation. The major focus of the study here is the moderating effect of firm size and the interest rate on capital structure, the net effect on profitability and tangibility, and the effect of macroeconomic variables (exchange rate and interest rate) on the debt to capital ratio. They tend to be influenced by these factors.
Tools and techniques For assessing the impact of the interest rate and firm size as moderators of the debt to capital ratio, the mean, standard deviation, and coefficient of variation are used. The coefficient of correlation is applied to obtain the association between firm size and the debt to capital ratio and between the interest rate and the debt to capital ratio. In the case of a static panel, to obtain robust standard errors, a PCSE (panel-corrected standard errors) technique is used, which covers the autocorrelation problem and the heteroscedasticity problem after applying the AR(1) correction. During the analysis with linear and nonlinear regression, the "small" option is used in the system GMM regression to report t-test results instead of Z-values. For robustness, PCSE also helps manage the heteroscedasticity and autocorrelation consistency (HAC) problem. The no-diff Sargan option is used to prevent the reporting of the difference-in-Sargan statistics. An orthogonal option is used to apply the orthogonal-deviations transform rather than the first difference. Variables The variables counted in the investigation are described below and considered throughout the analysis. Empirical model The paper explores how the variables impact the company's debt to capital ratio (DCR) using panel data analysis of cross-sectional time-series data for 2013-2018. DCR will be used as a response variable with a combination of explanatory variables; hence, DCR can be expressed as follows: DCR = f(Profitability, Size, Tangibility, Non-Debt Tax Shield, Liquidity, Exchange rate, Interest rate). (1) Static panel model A simple linear regression equation is as follows: DCRit = β0 + β1 Profitabilityit + β2 Sizeit + β3 Tangibilityit + β4 NDTSit + β5 Liquidityit + β6 ExRt + β7 Iratet + εit. (2) Static linear models are presented in the subsequent empirical equations (3) and (4), which add the moderating interaction terms (Size × Profitability and Irate × Profitability, respectively), where the intercept is allowed to differ for every firm i and εit is the idiosyncratic error term for firm i in year t. Dynamic panel model Many business, banking, economics, and finance matters are dynamic in character and use panel data arrangements to accommodate such adjustments. It is essential to allow for dynamics in the underlying process for the consistent estimation of the other parameters. The dynamic connections are described by the presence of a lagged dependent variable, DCRi,t−1, among the regressors (Bowsher, 2002). Roodman (2009) explained that the source of these difficulties is instrument proliferation, and an answer is to reduce the dimensionality of the instrument set. Blundell and Bond (1998) and Alonso-Borrego and Arellano (1999) show that if the dependent and explanatory variables are persistent over time, or almost behave as a random walk, the lagged levels of these variables are weak instruments for the regression in differences (Nyblom, 1989). This occurs either when the autoregressive parameter approaches unity or when the variance of the individual effects rises relative to the variance of the idiosyncratic error. Therefore, to reduce the potential error and limitations related to difference estimators, Blundell and Bond (1998) proposed a system GMM method that combines the regression in differences with the regression in levels. In the regression in differences, lagged levels serve as instruments, while the instruments for the regression in levels are the lagged (transformed) differences, and the reliability of the GMM estimation is contingent on two key diagnostic tests.
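As an illustration of the static panel specification with moderating terms, the following Python sketch applies the within (fixed-effects) transformation to a synthetic firm-year panel and regresses the demeaned debt to capital ratio on the demeaned regressors, including Size × Profitability and Interest rate × Profitability interaction terms corresponding to the moderation hypotheses. It is a minimal sketch under stated assumptions (synthetic data, hypothetical variable names and interest-rate values, simple heteroscedasticity-robust standard errors), not the PCSE or system GMM estimation actually reported in the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)

# Synthetic firm-year panel: 29 firms x 6 years (2013-2018), illustrative only.
firms, years = [f"F{i:02d}" for i in range(1, 30)], list(range(2013, 2019))
idx = pd.MultiIndex.from_product([firms, years], names=["firm", "year"])
df = pd.DataFrame({
    "profitability": rng.normal(0.08, 0.05, len(idx)),
    "size": rng.normal(8.0, 1.0, len(idx)),
    "tangibility": rng.uniform(0.2, 0.8, len(idx)),
    "ndts": rng.uniform(0.01, 0.08, len(idx)),
    "liquidity": rng.uniform(0.5, 2.5, len(idx)),
    "irate": np.tile(np.array([9.5, 10.0, 6.5, 6.1, 6.0, 7.5]), len(firms)),   # illustrative KIBOR path
    "exr": np.tile(np.array([101.0, 101.0, 104.0, 104.5, 110.0, 121.5]), len(firms)),
}, index=idx)
df["dcr"] = 0.5 - 0.8 * df["profitability"] + 0.02 * df["irate"] + rng.normal(0, 0.05, len(idx))

# Moderation (interaction) terms corresponding to hypotheses H2 and H3.
df["size_x_prof"] = df["size"] * df["profitability"]
df["irate_x_prof"] = df["irate"] * df["profitability"]

# Within (fixed-effects) transformation: demean every variable by firm.
cols = ["dcr", "profitability", "size", "tangibility", "ndts",
        "liquidity", "exr", "irate", "size_x_prof", "irate_x_prof"]
demeaned = df[cols] - df[cols].groupby(level="firm").transform("mean")

y = demeaned["dcr"]
X = sm.add_constant(demeaned.drop(columns="dcr"))
fe_model = sm.OLS(y, X).fit(cov_type="HC1")   # heteroscedasticity-robust standard errors
print(fe_model.summary())
```

The within transformation removes firm-specific intercepts, so the reported coefficients correspond to the fixed-effects column of a regression table such as the paper's Table 4.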
Correlations The correlation analysis results are presented in Table 3, where the debt to capital ratio is the dependent variable and the independent variables are as follows: profitability, size, tangibility, NDTS, liquidity, exchange rate, and interest rate. Exploring the correlation between DCR and profitability, profitability has a positive and significant association with DCR, while liquidity has a significant and negative association with DCR. Overall, the variables are significantly correlated with DCR. Tangibility, NDTS, and the interest rate are positively correlated with DCR, while the exchange rate, liquidity, and size are negatively and significantly correlated (Singla, 2020). The initial results for the different independent variables are reported in the first column for Pooled Ordinary Least Squares (OLS), in the second column for Random Effects (RE), and then, in the third column, for the Fixed Effects (FE) regressions at the second stage. Techniques for robust standard errors are used because the autocorrelation parameter is high and the standard errors are larger than for a model without serial correlation, which is likely when serial correlation is present. Column 4 (Hambuckers & Ulm, 2020) makes a case against estimating panel-specific AR parameters and instead uses one autocorrelation (AR) parameter for all panels. Outcomes from the two-step system GMM regression are included in the last column. The coefficient of determination, known as adjusted R-squared, suggests that the explanatory variables explain the statistical models well and that the models fit the data, and there are no multicollinearity problems in the sample data, as indicated by the variance inflation factor (VIF) values. Profitability, size, NDTS, liquidity, and ExR have a negative influence on the debt to capital ratio, while tangibility and the interest rate have a positive effect on the debt to capital ratio in the case of the fixed-effects model. The problem of serial correlation, reported as significant through the Wooldridge test together with the heteroscedasticity test, was then adjusted with PCSE in the static panel data, and it is reported that profitability and NDTS changed their signs from negative to positive. This shows that PCSE effectively covers the problem of serial correlation and heteroscedasticity. In the case of system GMM, the profitability coefficient changes sign and becomes positive, which means that, after applying system GMM, one can infer a positive influence of profitability on the debt to capital ratio.
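Because the text reports checking multicollinearity through pairwise correlations and variance inflation factors (with VIF values below 10 read as unproblematic), the short sketch below shows one way such a check could be run in Python with statsmodels. The regressor names and the synthetic data are assumptions; this is an illustration of the diagnostic, not the authors' computation.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

rng = np.random.default_rng(7)
n = 174   # 29 firms x 6 years, illustrative only

X = pd.DataFrame({
    "profitability": rng.normal(0.08, 0.05, n),
    "size": rng.normal(8.0, 1.0, n),
    "tangibility": rng.uniform(0.2, 0.8, n),
    "ndts": rng.uniform(0.01, 0.08, n),
    "liquidity": rng.uniform(0.5, 2.5, n),
    "exr": rng.normal(108.0, 7.0, n),
    "irate": rng.normal(7.5, 1.5, n),
})

# Pairwise correlations among regressors (high values flag potential collinearity).
print(X.corr().round(2))

# Variance inflation factors; values below 10 are usually read as "no serious
# multicollinearity", matching the rule of thumb used in the paper.
Xc = add_constant(X)
vif = pd.Series(
    [variance_inflation_factor(Xc.values, i) for i in range(1, Xc.shape[1])],
    index=X.columns, name="VIF")
print(vif.round(2))
```

With the independent draws used here all VIF values stay close to 1; real firm data would show higher values wherever regressors overlap in information.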
RESULTS AND DISCUSSION
The OLS model uses profitability, size, tangibility, NDTS, liquidity, the exchange rate, and the interest rate to explain the disparity in the debt to capital ratio. The fixed-effects model, which is the preferred specification, revealed that profitability, size, and NDTS have negative effects, while tangibility and the interest rate have significant positive effects on the debt to capital ratio. PCSE is a good technique to overcome the problems of heteroscedasticity and serial correlation (Bokpin, 2009). The regression findings, with their adjusted R-squared values, show that for all models the specified independent variables meaningfully explain the variance in debt to capital ratios (Mulyadi & Sihabudin, 2020). AR(1) and AR(2) are insignificant, and the Sargan test also has a consistent value. System GMM is therefore the best fit for the selected sample data when drawing inferences (Zhang & Wang, 2020). The model is tested using the Sargan/Hansen method for over-identifying restrictions (Chatterjee, 2020). The AR(1) and AR(2) estimates were both insignificant. The Sargan test results were insignificant, suggesting that the null hypothesis of jointly valid instrumental variables is not rejected (Ma & Fu, 2020).

CONCLUSION
Researchers have conducted several studies to determine what defines a firm's capital structure. Similarly, this study examined the moderating effect of firm size and the interest rate on the firm's capital structure using panel data from the sugar sector of Pakistan, adopting both static and dynamic panel data approaches. Panel data models are expected to face serial correlation, heteroscedasticity, and explanatory-variable endogeneity challenges. In this regard, PCSE is used for the static panels, while for the dynamic panel models GMM estimation yields highly accurate regression results and is widely applied in finance research. The results showed that firm size and the interest rate have a strong and negative effect on capital structure. Due to the high interest rates offered by commercial banks, large firms have sufficient relationships with customers, so they can manage their funds, loans, and capital structure ratios in the firm's best interest. Higher short-term borrowing allows firms to accumulate more funds because it lowers liquidity risk, and the interest rate is found to moderate the effect of liquidity, so firms can arrange their funds accordingly. The Non-Debt Tax Shield is adversely linked to corporate debt ratios, and a higher Non-Debt Tax Shield is followed by lower levels of debt, thereby creating a substitution effect on corporate capital structure; the study findings thus affirm the fundamental hypothesis about the effect of the Non-Debt Tax Shield. A favorable correlation is found between the debt to capital ratio and tangibility, where firms raise debt to purchase tangible assets. The sample data from Pakistan were subjected to a correlation test, which indicates no high correlations between the independent variables and therefore no multicollinearity problem; this was further checked with the VIF command, whose values are below 10, confirming that there is no multicollinearity in the model. The paper indicates that several influences, including size, the interest rate, profitability, and the liquidity position, affect a company's debt to capital ratio. Managers should consider the interest rate, the proportion of total assets to debts, and other considerations in their debt financing decisions.
Table 1 . Empirical literature review Author(s) Sample Dependent variable(s) Independent variable(s) Empirical methodology Several abbreviations were used to save space in creating a table of studies in the literature.PA = profitability, TB = tangibility, NDTS = Non-Debt Tax Shield, LQ = liquidity, REER = exchange rate.The positive sign (+) in the table indicates a positive association here between variables and the response variable, whereas the negative sign shows a negative relationship between the dependent variable(s) and the variables.The IS abbreviation (Insignificant) regarding debt management and other decisions regarding capital structure, which can fluctuate around different manufacturing sectors.All selected firms are listed on the Karachi Stock Exchange (KSE).The selected sample describes six years from 2013 to 2018, and the data were collected from the State Bank of Pakistan Department of Statistics. Table 4 . Linear regression model Standard errors in parentheses *** p < 0.01, ** p < 0.05, * p < 0.1.The dependent variable is DCR representing debt to capital ratio, profitability means firm financial performance, measured by net profit before tax / total assets; size represents the log of total assets of the firms; tangibility represents fixed assets after depreciation / total assets; Non-Debt Tax Shield (NDTS) represents the output of depreciation expenses of fixed assets/total assets; liquidity represents a firm's liquid position, measured by the ratio between current assets to current liability; EeR represents a Pakistani rupee vs. USD exchange rate real effective exchange rate (REER); Irate is the interest rate (KIBOR) offered by commercial bank calculated by State Bank of Pakistan and beta represents a firm's systematic risk.The numbers presented in Table3for each variable are coefficients.Column 3 shows the main effect of DCR; column 4 tests PCSE for the interaction effect of size and Irate; column 5 shows the main effect of the two-step system GMM. Notes: Table 5 . 
(Youn, Hua, & Lee, 2015)interaction method is applied to check the moderator effect of interest rate and firm size on the debt to capital ratio(Youn, Hua, & Lee, 2015).One found understanding of interactions in a nonlinear model is more complicated than in a linear model, where the interaction term marginal effect is approximately equal to the interaction term coefficient.As emphasized in Ai and Norton (2003), the model is nonlinear; the interaction effect cannot be re-evaluated simply by looking at the symbol, significance, or statistical relevance of the interaction term coefficient.The interaction effect may have different signs with different covariate values, and therefore the sign does not necessarily indicate the interaction effect.The interaction term is included in the model.Irate*Size is expected to capture the joint effects of firm size with interest rate and debt to capital ratio.Its alpha value is compared to the linear model, and here some explanatory variable coefficient value also gets changed, for example, size has a negative value in the linear model, but in the nonlinear, it gets rotate its position become positive.Similarly, the coefficient value of liquidity and exchange rate has changed very severely.Through empirical analysis about the selected sample, it was found that interest rate with firm size have an interaction effect with debt to capital ratio.It was observed from the outputs, and it infers abnormal variation in the coefficient value of different variables, which approves the moderate effect. liability; EeR represents a Pakistani rupee vs. USD exchange rate real effective exchange rate (REER); Irate is the interest rate (KIBOR) offered by commercial bank calculated by State Bank of Pakistan and beta represents a firm's systematic risk.The numbers presented in Table4for each variable are coefficients.Column 3 shows the main effect of DCR; column 4 tests PCSE the interaction effect of size and Irate; column 5 shows the main effect of two-step system GMM.
4,809
2020-12-15T00:00:00.000
[ "Economics", "Business" ]
A NEW CHAIN RATIO ESTIMATOR USING INFORMATION ON AUXILIARY ATTRIBUTE Abstract: In this paper, we develop to ratio estimator suggested by Naik-Gupta [J. Indian Soc. Agric. Stat., 48 (2), 151-158] [1] and obtain its MSE equation. We prove that the proposed chain ratio estimator is more efficient than the Naik-Gupta estimator under certain conditions. In addition, this theoretical result is supported by an application with original data sets. Introduction The Naik and Gupta estimator for the population mean of the variate of study, which make use of information regarding the population proportion possessing certain attribute, is defined by where it is assumed that the population proportion of the form of attribute is known. Let be th characteristic of the population and is the case of possessing certain attributes. If th unit has the desired characteristic, it takes the value 1, if not then the value 0. That is; Let and be the the total count of the units that possess certain attribute in population and sample, respectively.And and shows the ratio of these units, respectively. The MSE of the Naik and Gupta estimator is where, ; N is the number of units in the population; is the population ratio; is the population variance of the form of attribute and is the population variance of the study variable [1]. The Proposed Chain Estimator Following Kadılar and Cingi (2003) [2], We propose a chain estimator using information about population proportion possessing certain attributes.When in (1.1) is replaced with , the proposed chain estimator is obtained as We can re-write (2.1) using (1.1) as, where is real numbers.MSE of this estimator can be found using Taylor series method defined as; where, and [3].Where, .and denote the population of variances of the study variable and unit ratios possessing certain attributes, respectively.denotes the population covariance between units ratio possessing certain attributes and study variable. According to this definition, we obtain d for this estimator as follows; We obtain the MSE equation of this estimator using (2.3) as follows; where, , , , . We can have the optimal values of (2.4) by following equations: where . We can obtain minimum MSE of the proposed chain estimator using the optimal equations of in (2.5). Efficiency Comparisons In this section, we compare the MSE of the proposed chain estimator, given in (2.2), with the MSE of the Naik-Gupta estimators, given in (1.1).We have the condition; For Populations 1 and 2, We take the sample sizes as and using simple random sampling [6] .The MSE of the Naik-Gupta and proposed chain estimators are computed as given in (1.2) and (2.6), respectively, and these estimators are compared to each other with respect to their MSE values. In tables 1 and 2, There are the statistics about the population for data 1, data 2 sets.Note that the correlations between the variate are 0.766 and 0.878, respectively.; and Thus, the condition mentioned in section 3 is satisfied for Population 1 and 2 data sets. Conclusion We have analyzed the proposed chain estimator and obtained its MSE equation.According to the theoretical discussion in Section 3 and the results of the numerical examples, we infer that the proposed chain estimator are more efficient than the Naik-Gupta ratio estimator.In forthcoming studies, we hope to adapt the proposed chain estimators in stratified random sampling. 
When condition (3.1) or (3.2) is satisfied, the proposed chain estimator given in (2.2) is more efficient than the Naik-Gupta estimator given in (1.1). Table 2: Population 2 Data Statistics
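The efficiency comparison of Section 3 can also be illustrated with a small Monte Carlo sketch. It assumes the familiar attribute-based ratio form ȳ(P/p) for the Naik-Gupta-type estimator (the paper's exact expressions are not reproduced here) and uses a synthetic population, so the numbers are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical finite population: study variable y correlated with a 0/1 attribute phi.
N, n, reps = 2000, 200, 5000
phi = rng.binomial(1, 0.4, size=N)
y = 5 + 3 * phi + rng.normal(0, 1, size=N)
Y_bar, P = y.mean(), phi.mean()

errs_mean, errs_ratio = [], []
for _ in range(reps):
    s = rng.choice(N, size=n, replace=False)        # simple random sampling without replacement
    y_bar, p = y[s].mean(), phi[s].mean()
    errs_mean.append(y_bar - Y_bar)
    if p > 0:                                       # attribute-based ratio estimator, assumed form y_bar * (P / p)
        errs_ratio.append(y_bar * P / p - Y_bar)

print("Empirical MSE, sample mean       :", np.mean(np.square(errs_mean)))
print("Empirical MSE, attribute ratio   :", np.mean(np.square(errs_ratio)))
```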
812
2018-08-07T00:00:00.000
[ "Mathematics" ]
Development of a valid and reliable software customization model for SaaS quality through iterative method: perspectives from academia Despite the benefits of standardization, the customization of Software as a Service (SaaS) application is also essential because of the many unique requirements of customers. This study, therefore, focuses on the development of a valid and reliable software customization model for SaaS quality that consists of (1) generic software customization types and a list of common practices for each customization type in the SaaS multi-tenant context, and (2) key quality attributes of SaaS applications associated with customization. The study was divided into three phases: the conceptualization of the model, analysis of its validity using SaaS academic-derived expertise, and evaluation of its reliability by submitting it to an internal consistency reliability test conducted by software-engineer researchers. The model was initially devised based on six customization approaches, 46 customization practices, and 13 quality attributes in the SaaS multi-tenant context. Subsequently, its content was validated over two rounds of testing after which one approach and 14 practices were removed and 20 practices were reformulated. The internal consistency reliability study was thereafter conducted by 34 software engineer researchers. All constructs of the content-validated model were found to be reliable in this study. The final version of the model consists of 6 constructs and 44 items. These six constructs and their associated items are as follows: (1) Configuration (eight items), (2) Composition (four items), (3) Extension (six items), 4) Integration (eight items), (5) Modification (five items), and (6) SaaS quality (13 items). The results of the study may contribute to enhancing the capability of empirically analyzing the impact of software customization on SaaS quality by benefiting from all resultant constructs and items. 4. Tenants can create customization based on templates. 5. Tenants can select their own workflow templates and items relating to SaaS application templates from the template repository. 6. A set of components are provided in the application template which facilitates a variety of tenant needs. By making a choice from the relevant component set, tenants can personalize each customization point. 7. When a tenant wishes to subscribe to the SaaS application, the capabilities of each feature within the system are analyzed to determine whether they ought to be assimilated within the application. 8. Configuration can manage incongruities by permitting the client to establish set pre-defined parameters and options within the context of the runtime. 9. The configuration of the SaaS application involves disabling or excluding some features of the application. Composition refers to techniques and solutions that bring together a distinct collection of pre-defined application components that jointly amount to a custom solution. Please indicate to what relevancy you feel these statements represent the composition approach in the SaaS Multi-Tenant context. 2. Composing different collaboration components is done according to the runtime of the SaaS application. 3. The composition of SaaS components takes into account the subset of components. 4. The composition approach supports the decomposition of SaaS components. 5. Performing the composition of SaaS application components considers the relationships and dependencies between these components. 
SaaS Extension Extension refers to techniques and solutions that that stretch the functionality of the application by implanting the custom code in pre-defined places of application's code. Please indicate to what relevancy you feel these statements represent the extension approach in the SaaS Multi-Tenant context. 2. The SaaS application provides a set of extension points which permit a customized service to be plugged in at virtually points in the application. 3. Extending an existing object can happen at SaaS application runtime. The SaaS service provider supplies an open platform and an API, which allows developers to inject custom codes into business object layers. 5. These extension points can either be replacements for existing objects or extensions to them. 6. An extension may be private to an individual tenant or shared by multiple tenants. SaaS Integration Integration refers to techniques and solutions that Implement third-party components designed to work with the application. Please indicate to what relevancy you feel these statements represent the integration approach in the SaaS Multi-Tenant context. SaaS Modification Modification refers to techniques and solutions that alter the application design and other functional requirements of the application by way of alterations implemented on the source code. Please indicate to what relevancy you feel these statements represent the modification approach in the SaaS Multi-Tenant context. Part 3: SaaS Quality Based on the definition provided for each quality attribute, please indicate to what relevancy you feel these statements represent the quality attributes of SaaS application that play an important role in customization. 13. Response time: There is defined time limit which is adhered to between a service request and a service response. Comments and Suggestions If there are any other statements, or further comments regarding the customization approaches or the quality attributes of SaaS applications that you think is needed and have not reflected in this survey, please add your remarks in the space provided below. SaaS Configuration Configuration refers to techniques and solutions that offer a pre-defined setting for the alteration of application functions within the pre-defined scope. Please indicate to what relevancy you feel these statements represent the configuration approach in the SaaS Multi-Tenant context. 4. Tenants can create customization based on templates. 5. Tenants can select their desired workflow templates and items relating to SaaS application templates from the template repository. 6. When a tenant wishes to subscribe to the SaaS application, the capabilities of each feature within the system are analyzed to determine whether they ought to be assimilated within the application. 7. All Configurations established by the tenants have to be within the context of the runtime of the application. 8. An option of disabling or excluding some features of the SaaS application should be provided with the isolation effect across the tenants. SaaS Composition Composition refers to techniques and solutions that bring together a distinct collection of pre-defined application components that jointly amount to a custom solution. Please indicate to what relevancy you feel these statements represent the composition approach in the SaaS Multi-Tenant context. 
SaaS Extension Extension refers to techniques and solutions that expand the functionality of the application by inserting the custom code in pre-defined places of application's code. Please indicate to what relevancy you feel these statements represent the extension approach in the SaaS Multi-Tenant context. and an API, which allows developers to inject custom codes into business object layers. 5. These injected codes can either be replacements for existing objects or extensions to them. 6. An extension may be private to an individual tenant or shared by multiple tenants. SaaS Integration Integration refers to techniques and solutions that implement third-party components designed to work with the application. Please indicate to what relevancy you feel these statements represent the integration approach in the SaaS Multi-Tenant context. SaaS Modification Modification refers to techniques and solutions that alter the application design and other functional requirements of the application by means of alterations implemented to the source code. Please indicate to what relevancy you feel these statements represent the modification approach in the SaaS Multi-Tenant context. 13. Response time: SaaS application adheres to a defined time limit between service request and service response. Comments and Suggestions If there are any other statements, or further comments regarding the customization approaches or the quality attributes of SaaS applications that you think is needed and have not reflected in this survey, please add your remarks in the space provided below. Part 1: Demographics Please mark your response for each of the following questions: Part 2: SaaS Customization Approaches The following statements describe the different customization approaches that may impact the quality of SaaS applications. The scales below represent opinions of equivalent weight and strength. You should select the responses which most closely correspond to your views. For each question, you must choose one scale ONLY. SaaS Configuration Configuration refers to techniques and solutions that offer a pre-defined setting for the alteration of application functions within the pre-defined scope. Please indicate the extent to which, in your view, the following statements represent the configuration approach in the SaaS Multi-Tenant context. Questions Strongly Disagree Disagree Neither Nor Agree Strongly agree 1. Configuration typically maintains diversity by establishing pre-defined parameters, options, and components, and treats each tenant individually. 2. Each tenant can configure the application in a standalone way by employing techniques to modify the functions of applications within established limits. 3. SaaS providers have to develop and capture sets of services and plugins, from which tenants can make selections and perform configurations. 4. Tenants can create customization based on templates. 5. Tenants can select their desired workflow templates and items relating to SaaS application templates from the template repository. 6. When a tenant wishes to subscribe to the SaaS application, the capabilities of each feature within the system are analyzed to determine whether they ought to be assimilated within the application. 7. All Configurations established by the tenants have to be within the context of the runtime of the application. 8. An option of disabling or excluding some features of the SaaS application should be provided with the isolation effect across the tenants. 
SaaS Composition Composition refers to techniques and solutions that bring together a distinct collection of pre-defined application components that jointly amount to a custom solution. Please indicate the extent to which you feel the following statements represent the composition approach in the SaaS Multi-Tenant context. SaaS Extension Extension refers to techniques and solutions that expand the functionality of the application by inserting the custom code in pre-defined places of application's code. Please indicate the extent to which, in your view, you feel the following statements represent the extension approach in the SaaS Multi-Tenant context. Questions Strongly Disagree Disagree Neither Nor Agree Strongly agree 1. The SaaS application is extended by adding custom code to extend the application through custom functionality. 2. The SaaS application provides a set of extension points which permit a customized service to be plugged in at virtual points in the application. 3. Injecting custom code into SaaS application has to be supported at the run time of the application. Strongly Disagree Disagree Neither Nor Agree Strongly agree 4. The SaaS service provider supplies an open platform and an API, which allows developers to inject custom codes into business object layers. 5. These injected codes can either be replacements for existing objects or extensions to them. 6. An extension may be private to an individual tenant or shared by multiple tenants. SaaS Integration Integration refers to techniques and solutions that implement third-party components designed to work with the application. Please indicate the extent to which you feel the following statements represent the integration approach in SaaS Multi-Tenant context. SaaS Modification Modification refers to techniques and solutions that alter the application design and other functional requirements of the application by means of alterations implemented to the source code. Please indicate the extent to which you feel the following statements represent the modification approach in the SaaS Multi-Tenant context. Part 3: SaaS Quality Based on the definition provided for each quality attribute, please indicate the extent to which you feel each quality attribute plays an important role in customization. Questions Strongly Disagree Disagree Neither Nor Agree Strongly agree 1. Multi-tenancy: SaaS services can support instances of simultaneous access by multiple users for multiple tenants. 2. Scalability: SaaS providers can manage growth or decline in the level of services. 3. Availability: SaaS services can function within a specific time to satisfy users' needs. 4. Reliability: SaaS application maintains operating and functioning under given conditions without failure within a given time period. 5. Maintainability: Modifications to the application are made by SaaS provider to retain it in the condition of good repair. 6. Security: the effectiveness of SaaS provider's controls on service data, access to the services, and the physical facilities from which service are provided. 7. Usability: the ease with which SaaS application can be used to achieve tenant-specific-goal. 8. Interoperability: SaaS service can easily interact with other services from the same SaaS provider or other providers. 9. Efficiency: SaaS services effectively utilize resources to perform their functions. 13. Response time: SaaS application adheres to a defined time limit between service request and service response.
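The internal consistency reliability test mentioned in the study (Cronbach's alpha computed over the items of each construct) can be sketched as follows. The 34 synthetic respondents and the item names are placeholders, so the resulting value is meaningless except as a demonstration of the computation.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for one construct; rows = respondents, columns = Likert items."""
    items = items.dropna()
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Purely synthetic 1-5 responses standing in for the 34 software-engineer researchers
# answering the eight Configuration items (column names are hypothetical).
rng = np.random.default_rng(1)
config_items = pd.DataFrame(rng.integers(1, 6, size=(34, 8)),
                            columns=[f"config_{i}" for i in range(1, 9)])
print(round(cronbach_alpha(config_items), 3))
```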
2,854.6
2020-09-21T00:00:00.000
[ "Computer Science" ]
The Relationship between Chemokine Ligand 3 and Allergic Rhinitis Abstract Introduction Allergic rhinitis (AR) is a symptomatic condition of the nose induced by an immunoglobulin E (IgE)-mediated inflammatory reactions after exposure of the nasal mucosa to an allergen. Chemokines play an important role in both the immediate and late phases of the allergic inflammatory process, inducing the activation and migration of immune system cells, including mast cells and eosinophils, into target organs, and activating macrophages to trigger B-cells to synthesize allergen specific-IgE [1]. Chemokine ligand 3 (CCL3, a macrophage inflammatory protein 1α) is produced by a variety of cells, including lymphocytes, resident and recruited monocytes and macrophages, fibroblasts, and epithelial cells. CCL3 is a potent chemokine that binds to and activates lymphocytes and monocytes through its receptors chemokine receptor (CCR)1 and CCR5. CCL3 synthesis is important in maintaining the recruitment of inflammatory cells during episodes of inflammation, as well as activating eosinophils and T-cells and regulating Ig production [2,3]. CCL3 constitutes a second signal for mast cell degranulation, acting as a direct costimulatory signal via CCR1. CCL3 is, therefore, essential for inducing acute-phase AR, making this chemokine critical for mast cell activation [4]. Several studies in humans have investigated the role of CCL3 in respiratory allergies. A pharmacological study investigating the role of the antihistamine fexofenadine found that nasal antigen challenge increased peak levels of CCL3 in nasal secretions, from 8.1 ± 1.8 ng to 11.2 ± 1.7 ng (P = 0.05) [5]. Levels of CCL3 increased observed six to eight hours after challenge with the nasal allergen Timothy grass pollen and decreased after treatment with fluticasone propionate, an inhibitor of Th2 cytokine synthesis [6]. The levels of CCL3 were significantly higher in patients with seasonal allergic rhinitis than in patients with perennial allergic rhinitis and healthy controls [3]. Moreover, serum concentrations of CCL3 in subjects allergic to ragweed were reduced outside the pollen season [7]. High levels of CCL3, along with high levels of the chemokines eotaxin, RANTES (Regulated upon Activation, Normal T-cell Expressed and Presumably Secreted), and monocyte chemoattractant protein-1 were observed during late responses to grass pollen [8]. Also, high levels of CCL3 messenger RNA were induced in mouse macrophage RAW 264.7 cells by birch and oak dusts, which have been associated with the development of allergic diseases [9]. The present study compared serum concentrations of CCL3 in patients with AR and healthy controls and correlated these concentrations with various aspects of respiratory allergies. Materials And Methods This prospective study was approved by the local medical committee of Santa Maria Clinics and Laboratories (formerly Anima Medical Center Bucharest) and the Ethics Committee for Research of Carol Davila University of Medicine and Pharmacy Bucharest. All participants provided written informed consent before enrollment. This study enrolled 39 participants, 24 patients with AR and 15 healthy controls, all aged >18 years. AR was diagnosed using the 2019 updated Allergic Rhinitis and its Impact on Asthma criteria [10]. Twenty patients were diagnosed as having seasonal allergic rhinitis (SAR) and four perennial allergic rhinitis (PAR). 
Seasonal allergies usually occur during the spring and fall season in response to outdoor allergens like pollens. Perennial allergies can occur year round in response to indoor allergens, like dust mites, cockroaches and pet dander. Allergic sensitization was assessed according to the European Academy of Allergy and Clinical Immunology guide for the investigation of respiratory allergies [11]. All participants were subjected to skin prick tests (Lofarma, Milan, Italy) for 22 aeroallergens considered relevant to Europe, including Romania. These included dust mites (Dermatophagoides farinae and D. pteronyssinus), cockroaches (Blatella germanica), cat and dog danders, molds (Alternaria, Aspergillus, Penicillium and Cladosporium), and pollens of grasses, cereals, weeds (Ambrosia, Artemisia, Parietaria, Plantago lanceolata, and Helianthus annuus) and trees (Betulaceae, Oleaceae, Platanus occidentalis, Cupressaceae, Fagaceae, and Salicaceae). All tests were performed by an allergist, with a positive result defined as a wheal diameter ≥3 mm with surrounding erythema. All tests included histamine dihydrochloride (1 mg/ml) as a positive control and allergen-free saline solution as a negative control. The severity of AR was evaluated using the Total Nasal Symptom Score (TNSS), which is composed of at least three of the following four nasal symptoms: rhinorrhea, nasal congestion, nasal itching, and sneezing, with each having three levels of severity (mild, moderate and severe) [12]. Other inflammatory conditions were excluded by serologic analysis, including total blood count with differential; concentrations of rheumatoid factor, fibrinogen, and C-reactive protein; and erythrocyte sedimentation rate. Only participants with normal values were included in the study. All participants were examined by an otorhinolaryngologist using a flexible nasal fibroscope, with nasal mucosal hypertrophy (NMH) considered a sign of chronic inflammation. CCL3 concentrations in blood samples were measured at the Immunology Laboratory of the Cantacuzino National Institute for Military Medical Research and Development Bucharest, using Human Multianalyte Profiling Base Kit A (R&D Systems Inc., Minneapolis, MN). The measurement of CCL3 levels in serum samples was realized in our study because in our facilities (clinics and laboratory) only for blood products were possible sampling, transport, storage and analysis of the tests. The detection limit was set to 3.2 pg/ml according to manufacturer's instructions. Plates were read on a Luminex 200 platform (Luminex Corporation, Austin, TX), and data were processed with Luminex 200 IS 2.3 Star Station software (Applied Cytometry, Plano, TX) [13]. Concomitantly we have measured the serum levels of other chemokines (MCP-1/CCL2 (monocyte chemoattractant protein-1), IP-10/CXCL10 (interferon gamma-induced protein 10), ENA-78/CXCL5 (epithelial-derived neutrophil-activating peptide 78)) in order to obtain a wider image of the role of different subfamilies of chemokines in the pathogenesis of AR. We did not reach a statistical significance. All statistical analyses were performed using IBM SPSS Statistics for Windows, Version 20.0. (IBM Corp., Armonk, NY) [11]. Because the groups were unbalanced, they were compared by Mann-Whitney U test groups with the Monte Carlo simulation technique and Student's t-tests (for equal variances not assumed). The correlation of variables was assessed by Pearson's or Spearman's correlation test, as appropriate. 
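For illustration, the group comparison and correlation analyses described above map onto standard SciPy calls; the values below are synthetic stand-ins for the measured CCL3 concentrations and TNSS scores, not the study data.

```python
import numpy as np
from scipy.stats import mannwhitneyu, spearmanr

rng = np.random.default_rng(0)

# Hypothetical serum CCL3 values (pg/ml); the real measurements came from the Luminex assay.
ccl3_patients = rng.lognormal(mean=3.0, sigma=0.4, size=24)
ccl3_controls = rng.lognormal(mean=2.6, sigma=0.4, size=15)
tnss_patients = rng.integers(2, 12, size=24)

u, p_group = mannwhitneyu(ccl3_patients, ccl3_controls, alternative="two-sided")
rho, p_corr = spearmanr(ccl3_patients, tnss_patients)

print(f"Mann-Whitney U = {u:.1f}, p = {p_group:.3f}")
print(f"Spearman rho  = {rho:.2f}, p = {p_corr:.3f}")
```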
The 95% confidence intervals of all data were reported, and p-values < 0.05 were defined as statistically significant.

Results
The baseline demographic and clinical characteristics of the study subjects are detailed in Table 1.

FIGURE 1: CCL3 values in patients versus controls

CCL3 concentrations were significantly associated with NMH (p = 0.003 by Mann-Whitney U test) and correlated with polysensitization (two or more positive skin prick tests per patient; r = 0.325, p = 0.046) and seasonal allergic rhinitis (r = 0.482, p = 0.002; Table 4). The observed correlation between NMH and seasonal allergic rhinitis might be explained by the fact that, although SAR has a temporal clinical picture, a subclinical inflammation is present at the cellular level throughout the year and becomes active when allergens come into direct contact with local mast cells. CCL3 was also associated with sensitization to grass pollen (p = 0.040) and with patient age (r = 0.495, p = 0.012).

Discussion
Our finding that serum CCL3 concentrations are higher in patients with AR than in healthy controls is in good agreement with CCL3 concentrations in nasal secretions [3]. The latter results are likely closer to real situations, in that chemokine concentrations were measured directly at the site of the allergic reaction rather than in blood samples. Serum CCL3 concentrations may have been affected by systemic inflammatory conditions. To minimize this possibility, we measured important markers of systemic inflammation, including total blood count with differential; concentrations of rheumatoid factor, fibrinogen, and C-reactive protein; and erythrocyte sedimentation rate, and excluded all patients and controls with abnormal results. Measurement of CCL3 and other chemokine levels in nasal secretions is, by far, the most sensitive and specific method to evaluate their presence in allergic rhinitis. However, it demands specialized staff (nurses, physicians (otorhinolaryngologists), and laboratory personnel) and is limited to university/research facilities. Measurement of chemokines in serum samples may be performed in many ordinary laboratories around the globe; this technique is affected by the fact that levels of chemokines might be influenced by other systemic conditions. The association between CCL3 concentration and NMH confirms that CCL3 plays a role in the pathogenesis of chronic inflammation, including respiratory allergies [2,3]. Moreover, our finding of an association between CCL3 and grass pollen allergy is in agreement with previous results [6,9]. CCL3 concentration appears to be associated with pollen sensitization, but this observation requires confirmation in future studies [3,6,7]. To our knowledge, however, no previous study has reported a correlation between CCL3 concentration and polysensitization to aeroallergens, providing further evidence of the role of CCL3 in the pathogenesis of AR. Several studies have suggested a role for CCL3 in allergen sensitization [3,6,7]. The CCL3 gene may be a selective end-point marker in CD34+-progenitor-derived dendritic cells exposed to contact allergens and irritants, enabling these cells to differentiate between chemical sensitizers and irritants [14]. We also observed a correlation between CCL3 concentration and TNSS (r = 0.495, p = 0.001). The mean TNSS score in the group of 24 patients was 7.0 ± 3.83, indicating moderate disease. This may indirectly suggest that CCL3 plays a role in the pathogenesis of AR. Our study has two major limitations.
First, the study population was relatively small, which might have affected the statistical power of the study. Secondly, while we tried to eliminate all systemic inflammatory conditions that might influence the levels of CCL3 by using the most important inflammatory markers and keeping only participants with healthy values, we could not exclude a possible infection or inflammation in one or more patients, which constitutes the second major limitation of our study. Conclusions Our study intended to evaluate the role of CCL3 in the pathogenesis of AR. CCL3 presented significant differences between patients with AR and healthy controls. We have shown that the measurement of the levels of CCL3 in blood samples might represent an alternative approach of evaluating chemokines in allergic rhinitis in a non-academic outpatient facility when nasal collection of probes is not feasible. CCL3 also correlated with some aspects of respiratory allergies, such as polysensitization, seasonal allergies, and nasal mucosal hypertrophy. These findings indicate that CCL3 may play a role in AR. Our study implicated a small number of patients and evaluated CCL3 only from blood samples, and therefore, our results should be interpreted with caution. Future studies including a larger number of participants and samples from both the nose and blood may be necessary to confirm the association between CCL3 and AR. Additional Information Disclosures Human subjects: Consent was obtained by all participants in this study. Avizul Comisiei de Etica a Cercetarii Stiintifice UMF Carol Davila Bucharest Cod: PO-35-F-03 issued approval 103/05.12.2016. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
2,602.6
2020-04-01T00:00:00.000
[ "Medicine", "Biology" ]
A novel explainable machine learning approach for EEG-based brain-computer interface systems
Electroencephalographic (EEG) recordings can be of great help in decoding the open/close hand's motion preparation. To this end, cortical EEG source signals in the motor cortex (evaluated in the 1-s window preceding movement onset) are extracted by solving the inverse problem through beamforming. EEG source epochs are used as source-time map inputs to a custom deep convolutional neural network (CNN) that is trained to perform two-way classification tasks: pre-hand close (HC) versus resting state (RE) and pre-hand open (HO) versus RE. The developed deep CNN works well (accuracy rates up to 89.65 ± 5.29% for HC versus RE and 90.50 ± 5.35% for HO versus RE), but the core of the present study was to explore the interpretability of the deep CNN to provide further insights into the activation mechanism of cortical sources during the preparation of hands' sub-movements. Specifically, occlusion sensitivity analysis was carried out to investigate which cortical areas are more relevant in the classification procedure. Experimental results show a recurrent trend of spatial cortical activation across subjects. In particular, the central region (close to the longitudinal fissure) and the right temporal zone of the premotor cortex, together with the primary motor cortex, appear to be primarily involved. Such findings encourage an in-depth study of the cortical areas that seem to play a key role in hand open/close preparation.

Introduction
Scalp electroencephalography (EEG) is a noninvasive technique that collects the electrical fields produced by the brain and indirectly reveals its underlying activity [40]. EEG is the gold-standard diagnostic technique for several neurological diseases, as well as in neuroscience and cognitive research [9,13,20,21,54]. EEG is commonly exploited in brain-computer interface (BCI) systems, whose ultimate goal is to allow the brain to directly communicate with an external device by decoding a subject's intentions from EEG signals and converting them into a set of suitable commands [31,44]. EEG is relatively affordable, widely spread, easy to use, and generally well tolerated by subjects. Unfortunately, EEG entails some nontrivial limitations: (1) a poor signal-to-noise ratio (SNR), which causes the brain waves of interest to be corrupted by multiple sources of noise called artifacts [36]; (2) EEG recordings are non-stationary signals, and thus their statistical characteristics vary across time [58]; (3) poor spatial resolution caused by volume conduction effects [34]; (4) high inter-subject variability, which limits the ability of a classifier trained over a cohort of subjects to generalize well across subjects [45]. One of the greatest potentials of Deep Learning (DL) is the ability to generalize well even in the presence of complex inputs [28].
In the context of the EEG analysis, this results in the possibility of identifying patterns relevant to classification even in presence of additional irrelevant waves in the EEGs. However, the main limitation in the application of DL to EEG processing lays in the relatively small number of samples generally available in EEG databases, as compared to the number of samples that are typically available in databases meant for computer vision or natural language processing (NLP) applications, which made DL so powerful in such fields [7,59]. The DL has been applied so far to EEG mainly in the following fields: motor imagery (MI) (22%), mental workload (16%), emotion recognition (16%), seizure detection (14%), event related potential detection (10%), sleep stage scoring (9%) and other studies (13%) [8]. MI is the topic more related to the focus of the present study: motor preparation. BCI systems based on MI generally require the user to imagine performing a given movement, in a sustained way, in order to allow the system to learn how to classify the imagined movement with good accuracy [32]. However, sustained motor imagery is not natural neither comfortable for the user; moreover, it requires an intensive training and causes a delay between the onset of imagination and the time the desired control is issued [38]. Conversely, in motor preparation investigation, the subject performs or attempts to perform the movement and the behavior of EEG signals collected before motion onset/attempt is investigated to predict the intended movement [52]. Decoding the preparation of the movement, whether it is actually implemented or just attempted (for example, in case the subject has a motor disability hindering motor implementation), would be far more natural and immediately decodable [42]. Furthermore, the mechanisms of motor preparation are still not clear to scientists. Previous studies showed that premotor cortex is activated contralaterally during motor preparation, which was observed by fMRI/NIRS as well as in EEG signals [56], however it is not clear whether and how the different sub-areas of premotor cortex work together to develop motion planning. For the all the aforementioned motivations, it is reasonable to consider motor preparation a key field of research and, in line with such consideration, motor preparation of different sub-movements of upper limbs was investigated in [35]. Frames of EEG signals preceding motion's onset were compared with frames of EEGs collected in absence of any motion planning (resting) [35]. Mammone et al. [35] reached an accuracy of 90:30 AE 5:6% in pre-movement versus resting discrimination and of 62:47 AE 6:7% in the discrimination of the preparation of different sub-movements. A deep convolutional neural network (CNN) was designed and trained through stratified time-frequency maps of 210 EEG source locations in the premotor and primary motor cortex. However, no interpretation of the achieved results was provided. In the present work, we aim at achieving good accuracy in motor planning classification of hands' movements, by designing a deep CNN to be trained over single channel images (time vs. sources), and also to provide an intrpretation of the achieved results in terms of involvement of the different cortical areas. As artificial neural networks act as a black-box, an explainable machine learning (EML) approach is here proposed to interpret the achieved results by exploring the behavior of the trained network [37]. 
In summary, the aim of the present work is twofold: (1) To design a novel deep CNN that, by processing EEG source signals in the motor cortex, is able to discriminate the phases of preparation of hands' movements (open/close) from resting (no movement planning); (2) To explain the achieved results by means of EML, in order to assess which EEG sources (i.e., which cortical locations) play a decisive role in the classification of hand's motor preparation phases. The final aim is to find out possible areas in the motor cortex that are mainly involved in planning hands' movements. In fact, while it is well known which areas are most involved in the implementation of movements of the different parts of the body [5], it is indeed known that the activation of the movement is triggered by relatively well localized areas in the primary motor cortex, it is not well known whether and how motor planning is spatially organized. EML could yield a significant contribution in this field [11]. To this end, a deep CNN was designed and trained to discriminate hand's opening (HO) and hand's closing (HC) motion preparation phases from resting (RE) phases. The present analysis is focused on the classification of HC versus RE and HO versus RE. The training database was constructed by processing EEG signals collected from 15 subjects recruited within a BCI study conducted by Ofner et al. [41]. The paradigm introduced in [41] provided that the subject performed cue-based movements starting from a neutral rest position. The developed CNN receives as input epochs of EEG source signals in the time interval of 1 s preceding motion onset (named time-source maps herein). Such source signals were estimated by solving the inverse problem starting from EEG scalp signals. Source locations belonging to the motor cortex are then included in the analysis. The developed CNN was able to discriminate premov (HC or HO) versus RE with an average accuracy of 90%. An occlusion sensitivity analysis was subsequently carried out by passing the time-source maps as input to the network to evaluate which sources (i.e., cortical locations) are estimated to be more relevant in the classification of HC/HO from RE. We could observe a recurrent spatial pattern across subjects that show greater activation of the left part of the motor cortex in the central area, close to the longitudinal fissures between the two hemispheres, together with the extreme right part of the motor cortex belonging to the temporal lobe. The paper is organized as follows: In Sect. 2 the proposed method is presented. The preprocessing steps including beamforming technique and the cortical sources extraction are also described. Section 3 shows the proposed deep CNN for the pre-movements tasks classification; whereas, Sect. 4 introduces the salient cortical source recovery procedure by means of EML (i.e., occlusion sensitivity analysis). In Sect. 5 experimental results are reported. Section 6 discusses the achieved findings and Sect. 7 concludes the paper. Methodology The proposed methodology is shown in Fig. 1. It includes the following processing modules: Extraction of premotor EEG epochs In the present research, a publicly available database of EEG recordings co-registered with signals collected from motion sensors [41] was used to construct the train and test dataset. The database can be found at http://bnci-horizon-2020.eu/database/data-sets together with detailed information about channel layout, recording settings and paradigm description. 
The study involved 15 healthy subjects (aged 27 ± 5 years, nine of them female). EEG signals were acquired by Ofner et al. [41] by means of 61 active EEG electrodes and four 16-channel amplifiers (g.tec medical engineering GmbH, Austria). The right mastoid channel was used as the reference and AFz was set as the ground channel. EEG signals were band-pass-filtered between 0.01 Hz and 200 Hz (eighth-order Chebyshev filter), notch filtered at 50 Hz, and sampled at 512 Hz. The database consists of a motor execution part and a motor imagery part. Since the goal of the present study was to investigate motor preparation, the first part was included in the analysis. During the experiment, subjects remained seated on a comfortable chair and an anti-gravity exoskeleton (Hocoma, Switzerland) supported their right arm. The paradigm consisted of executing cue-based movements of the right upper limb starting from a neutral position (lower arm extended to 120 degrees and in a neutral rotation, hand half open) [41]. The experiment consisted of 10 runs; every run included 6 trials, and each trial included one hand open (HO), one hand close (HC), and one rest (RE) cue. The timeline of the paradigm can be summarized as follows: at second 0, a fixation cross appeared on a computer screen positioned in front of the subject, to attract her/his gaze and limit eye movements. At second 2, the cue of the task to be performed (HC/HO/RE) appeared on the computer screen. After task execution, the subject moved her/his hand back to the starting neutral position. In order to train a neural network to decode motor preparation phases from EEG signals, a dataset of EEG epochs preceding motion onset was necessary. To this purpose, the onset of movement was estimated by processing motion data collected by glove sensors. Specifically, the onset of movement was detected by processing the signals recorded from motion sensors embedded in the glove, following the procedure described in [41]. The marked onset timing was manually checked for all of the 1800 pre-motion epochs under examination, and the frames (epochs) of EEG signals preceding the marked onset were extracted accordingly. Such epochs were included in the analysis together with a balanced set of resting EEG epochs. Specifically, 900 EEG epochs (derived from 10 runs × 6 trials × 15 subjects) per movement class (hand open/hand close) were taken into account. In order to generate a balanced dataset, a comparable number of resting state EEG epochs was extracted. In the end, 2700 EEG epochs (derived from 10 runs × 6 trials × 15 subjects × 3 classes) were extracted from the EEG recordings and included in the dataset. As regards the choice of the length of the frame preceding motion onset, it was set at 1 s after taking into account the typical timeline of motor related cortical potentials (MRCP), which are brain waves that arise together with movements' preparation and initiation [35,47]. Inverse problem solution and extraction of the cortical EEG sources It is known that the EEG has a very good temporal resolution but a poor spatial resolution, due to volume conduction effects [14,16,40]. Inverse problem solution is a possible way to deal with such effects. In the proposed methodology, EEG signals are used to reconstruct a set of source signals, where every source signal represents the contribution of a source location (current dipole) located in the cortex [14,16].
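A minimal sketch of the pre-movement epoch extraction described above (1-s windows ending at each detected movement onset, 512 Hz sampling). Array shapes and onset times are illustrative, and the real onsets come from the glove-sensor procedure of [41].

```python
import numpy as np

FS = 512                 # sampling rate (Hz)
PRE_SAMPLES = FS         # 1-s window preceding movement onset

def extract_premovement_epochs(eeg: np.ndarray, onsets_s: list[float]) -> np.ndarray:
    """Cut 1-s epochs ending at each movement onset.

    eeg      : (n_channels, n_samples) continuous recording (61 x T here)
    onsets_s : onset times in seconds, e.g. detected from the glove sensors
    returns  : (n_epochs, n_channels, PRE_SAMPLES)
    """
    epochs = []
    for t in onsets_s:
        onset = int(round(t * FS))
        if onset >= PRE_SAMPLES:                       # skip onsets too close to the recording start
            epochs.append(eeg[:, onset - PRE_SAMPLES:onset])
    return np.stack(epochs)

# Hypothetical usage with random data standing in for a real 61-channel recording.
eeg = np.random.randn(61, 10 * 60 * FS)                # 10 minutes of EEG
epochs = extract_premovement_epochs(eeg, [12.3, 47.8, 95.1])
print(epochs.shape)                                    # (3, 61, 512)
```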
Solving the "inverse problem" means reconstructing the source locations' contributions to the overall EEG signals collected at the scalp. EEGs can be hypothesized to be the projection of the sources' contributions from cortical locations to scalp sensors through a "forward model" [10]. Such a forward model takes into account the structural and conductive properties of brain tissues. In the frequency range of EEG signals, the quasi-static approximation of Maxwell's equations can be assumed, hence the forward model becomes linear [17] and can be formulated as x(t) = Σ_{r=1}^{Ns} L_r q_r(t), where q_r(t) is the 3-dimensional directed current dipole associated with cortical location r (with r = 1, ..., Ns, and Ns the number of possible source locations in the cortex), and L is known as the "lead field" matrix, which represents the head model that projects the current dipoles q_r(t) into the scalp potential x(t) [17]. The number of sources Ns is typically larger than the number of channels Nc, thus estimating q_r(t) from x(t) is inherently an ill-posed problem. The adopted head model consists of 2000 cortical locations (Ns = 2000), whereas the number of scalp channels of the EEG recordings analysed in the present work is Nc = 61. In this work, the New York Head (NYH) forward model, developed by Haufe et al. [18], was adopted. Such a head model is based on the popular ICBM152 anatomy, a nonlinear average of T1-weighted structural MR images collected from 152 adults. By solving the inverse problem, cortical current dipoles q_r(t) are estimated starting from the recorded EEG signals x(t) and from the lead field matrix L. Several inverse problem solution approaches can be found in the literature on EEG source imaging: minimum-norm solutions, beamformers, and dipole modeling [14,49]. Beamforming solves the inverse problem by maximizing the contribution of a given source location while suppressing contributions from the other ones, and was proved very effective in BCI applications by Grosse-Wentrup et al. [15]. The premotor and primary motor cortex are considered crucial in movement planning and execution [19,22]. Such regions fall in Brodmann's Areas 4 and 6 of the brain [3]. Each one of the 2000 available source locations was associated with the corresponding Brodmann Area through its Montreal Neurological Institute (MNI) stereotaxic coordinates. The MNI coordinates of every source location were known; they were first converted into Talairach coordinates [27] and then matched with Talairach Atlas labels [23], in order to obtain the corresponding Brodmann area of every source location. In the end, 210 locations belonging to Brodmann areas 4 and 6 were selected out of the 2000 available ones. 3 Deep learning-based system for pre-movement tasks classification Convolutional neural network The CNN is a well-known deep learning model widely used especially in computer vision [4,30,48,50], image classification [2,6,33] and pattern recognition [12,29,55]. It is composed of subsequent layers of convolution, activation, and pooling, followed by a multilayer fully connected neural network for classification purposes [26]. The convolutional layer includes a bank of J filters used to estimate the dot product (i.e., the convolution operation) with the input map T sized t1 × t2. More specifically, each filter (sized j1 × j2) performs the convolution with the selected local area and sweeps over the input representation with a specific stride using the same values of weights.
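As a rough illustration of the source-reconstruction step discussed above, the following sketch implements a generic unit-gain LCMV beamformer for fixed-orientation sources. It is a simplification (the actual pipeline, head-model handling, and dipole orientations in the paper may differ), and the lead-field and EEG arrays are random placeholders.

```python
import numpy as np

def lcmv_source_signals(eeg: np.ndarray, leadfield: np.ndarray, reg: float = 0.05) -> np.ndarray:
    """Schematic LCMV beamformer for fixed-orientation sources.

    eeg       : (n_channels, n_samples) scalp signals x(t)
    leadfield : (n_channels, n_sources) one lead-field column per cortical location
    returns   : (n_sources, n_samples) estimated source time courses q_r(t)
    """
    C = np.cov(eeg)                                              # sensor covariance
    C += reg * np.trace(C) / C.shape[0] * np.eye(C.shape[0])     # Tikhonov regularisation
    Cinv = np.linalg.inv(C)

    sources = np.empty((leadfield.shape[1], eeg.shape[1]))
    for r in range(leadfield.shape[1]):
        l = leadfield[:, r:r + 1]                                # (n_channels, 1)
        w = Cinv @ l / (l.T @ Cinv @ l)                          # unit-gain LCMV weights for source r
        sources[r] = (w.T @ eeg).ravel()
    return sources

# Hypothetical dimensions matching the text: 61 channels, 210 motor-cortex locations.
eeg = np.random.randn(61, 512)
L = np.random.randn(61, 210)
q = lcmv_source_signals(eeg, L)
print(q.shape)                                                   # (210, 512)
```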
This operation results in the so-called feature maps Z of size z1 × z2, with z1 = (t1 − j1 + 2p)/s + 1 and z2 = (t2 − j2 + 2p)/s + 1, where p is the zero-padding parameter and s is the stride. The activation layer introduces nonlinearity into the model. Specifically, here, the Rectified Linear Unit (ReLU) is employed for its ability to achieve good generalization and training time [39]. The pooling layer performs a downsampling operation on the feature maps resulting from the previous layer. The max pooling operation is used for its good translation-invariant properties [46]. It has a filter sized j1 × j2 that scans the input feature map with stride s. This operation outputs a reduced map sized t1' × t2', with t1' = (t1 − j1)/s + 1 and t2' = (t2 − j2)/s + 1. The CNN ends with a standard feed-forward MLP with a softmax output function used for performing the discrimination tasks. Design of the deep CNN The proposed CNN is developed to accept as input time-source maps (i.e., EEG source epochs) sized t1 × t2, where t1 = 210 represents the number of sources taken into account in this study, and t2 = 512 represents the number of samples included in the 1-s temporal epoch before movement onset. The deep learning model consists of three stacked modules of convolutional (conv_i, with i = 1, 2, 3), ReLU, and max pooling (mpool_i) layers, followed by a common MLP for performing the two-way classifications: HC versus RE and HO versus RE. Figure 2 shows the resulting architecture. Learning parameters set-up The Adaptive Moment (Adam) optimization procedure [25] was used to train the proposed deep CNN (Fig. 2), using a mini-batch size of 28. Training options were set up by using the practical recommendations reported in [1,25], specifically: learning rate α = 10^-2, first moment decay rate β1 = 0.9, and second moment decay rate β2 = 0.999. 4 Explainable deep learning system: salient cortical source recovery Occlusion sensitivity analysis Occlusion analysis has been widely used in image classification to show the sensitivity of a pre-trained CNN to different areas of an input image [57]. It consists of systematically occluding different patches of the input data with a grey mask and estimating the related effect on the network output. For each mask location, the discrimination is performed using a pre-trained CNN and estimating the change in classification score for a specific class relative to the initial prediction (input without occlusion). Such changes in classification result in the so-called heatmap or saliency map H, with a coloration ranging from blue to red and with the same dimension as the input. This representation reveals which area of the image is the most essential for the classification. Specifically, red corresponds to higher values and consequently represents the most significant area contributing to the identification of the specified class: when this region is occluded, the classification performance decreases. Blue corresponds to lower values and represents the areas not relevant to the discrimination task. In this study, the occlusion technique is applied to recover the cortical sources that are activated during the (open/close) hand's movement preparation. Given a subject under analysis, the e-th EEG source epoch (sized 210 × 512) is repeatedly occluded with a 42 × 256 pixel grey mask that moves across the input data with vertical and horizontal strides of 21 and 51, respectively. It is worth noting that the dimension and stride of the mask have been set up empirically, after several experiments. For each position of the mask, the 2-way discrimination task (i.e., HC vs. RE or HO vs. RE) is performed by using the proposed pre-trained CNN (Sect. 3.1).
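A possible PyTorch sketch of the network and training settings described in Sects. 3.2-3.3. The number and size of the filters are not specified in the text, so the values below are placeholders chosen only to make the example runnable; the three conv-ReLU-pool modules, the two-hidden-layer MLP, the Adam settings, and the mini-batch size follow the description above.

```python
import torch
import torch.nn as nn

class PreMovementCNN(nn.Module):
    """Sketch: 3 x (conv -> ReLU -> max-pool) + two-hidden-layer MLP on 1 x 210 x 512 maps."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 26 * 64, 64), nn.ReLU(),   # 210 x 512 -> 26 x 64 after three 2x2 poolings
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, n_classes),                 # softmax is applied inside the loss
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = PreMovementCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2, betas=(0.9, 0.999))  # settings quoted in the text
criterion = nn.CrossEntropyLoss()

batch = torch.randn(28, 1, 210, 512)                  # mini-batch size 28, as reported
loss = criterion(model(batch), torch.randint(0, 2, (28,)))
loss.backward()
optimizer.step()
```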
Fig. 2 (caption): Architecture of the proposed deep CNN. It consists of three convolutional layers (each followed by a ReLU nonlinearity) and three max pooling layers. The network ends with a two-hidden-layer MLP employed to perform the two-way classifications: HC versus RE and HO versus RE. The output of the occlusion procedure is a heatmap sized 210 × 512. As an example, Fig. 3a shows the input map (i.e., an EEG source epoch) when a portion is occluded by a grey mask, whereas Fig. 3b reports the resulting saliency map. In this case, as can be seen, the red area, roughly corresponding to sources 130-180 in the temporal window 0.7-0.9 s, denotes the most relevant zone in the classification process. Further considerations and analyses are reported in Sect. 5.2. Saliency map segmentation through k-means. In order to provide a deeper understanding of which subareas of the motor cortex gave the largest contribution to decoding movement planning, a segmentation of the saliency maps was necessary to extract the high-saliency zones automatically. To this end, the k-means clustering algorithm was applied to partition each saliency map into k = 10 clusters. K-means is a widely applied clustering algorithm [53]. Its aim is to gather data points into a given number of clusters by following an iterative four-step procedure: 1. the initial cluster centers are set randomly; 2. data points are assigned to the nearest cluster by estimating the Euclidean distance between each data point and the cluster centers, so that the clusters are redefined; 3. the cluster centers are updated; 4. the procedure returns to step 2 and steps 2-4 are repeated until the cluster centers no longer change or the specified number of iterations is reached. After applying k-means to the saliency maps, the cluster associated with the highest saliency values was detected, and the corresponding highly salient sources were extracted and mapped onto the cortex as red dots. Classification performance on the pre-movement tasks. The classification performance of the proposed deep CNN was evaluated using standard metrics: accuracy = (TP + TN)/(TP + TN + FP + FN), recall = TP/(TP + FN), precision = TP/(TP + FP) and F1-score = 2 × precision × recall/(precision + recall), where TP and TN are true positives and negatives, respectively, whereas FP and FN are false positives and negatives, respectively [43]. A cross-validation technique was applied (with k = 10); in particular, the train set included 70% of the data (i.e., EEG source epochs) and the test set the remaining 30%. Table 1 reports the results of the HC versus RE classification. Remarkable discrimination values were observed in all subjects, with average recall, precision, F1-score and accuracy of 89.14 ± 7.24%, 91.19 ± 7.88%, 89.69 ± 4.98% and 89.65 ± 5.29%, respectively. It is worth noting that the highest individual classification performance was achieved by Sb 08, with an accuracy of 98.02 ± 2.10%, F1-score of 97.94 ± 2.21%, recall of 96.03 ± 4.20% and precision of 100%, while the lowest individual classification performance was achieved by Sb 07. However, also in this case high discrimination scores were observed, but with a higher standard deviation: accuracy of 79.76 ± 10.11%, F1-score of 79.86 ± 9.60%, recall of 80.16 ± 12.77% and precision of 80.83 ± 12.36%. Table 2 reports the results of the HO versus RE classification. Also in this scenario very good performance was observed (average recall of 89.31 ± 8.02%, average precision of 93.04 ± 7.66%, average F1-score of 90.41 ± 5.32% and average accuracy of 90.50 ± 5.35%).
Notably, Sb 08 and Sb 02 achieved the best performances in terms of accuracy and F1-score, while Sb 07 reported the worst individual classification performance, with an accuracy value of 75.79 ± 9.72% and an F1-score of 76.95 ± 5.87%. Hence, the proposed deep CNN reported very good performance for both the HC versus RE and HO versus RE discrimination tasks. This result was also confirmed by the analysis of the Precision-Recall Curve (PRC, shown in Fig. 4) and in particular by measuring the corresponding Area Under the PRC. Specifically, the average AUPRC over all subjects was estimated, reporting AUPRC rates up to 89.36 ± 9.02% and 90.1 ± 8.93% for the HC versus RE and HO versus RE classification tasks, respectively. Salient cortical source locations recovery. Occlusion sensitivity analysis was performed by using the proposed pre-trained deep CNN to estimate the saliency of every EEG source. Specifically, for each subject, an averaged saliency map was estimated by averaging the saliency maps corresponding to the HC/HO EEG epochs correctly classified during the testing procedure, herein denoted as H̄^tk_Sbi, where tk represents the pre-movement task (i.e., HC/HO) and Sb i is the subject under analysis (with i = 1, 2, ..., 15). As an example, Fig. 5a illustrates the average saliency representation of Subject 08 while preparing to perform hand closing (H̄^HC_Sb08). Saliency is encoded with a coloration going from blue (low saliency) to red (high saliency). Highly salient EEG source locations can be recovered by detecting the EEG sources associated with red areas. Notably, red areas denote that the classification score decreases when the corresponding local regions of the input are hidden by the mask, which means that the occluded area is relevant to classification. In the example map shown in Fig. 5a, the area located around 0.85 s and approximately associated with the EEG sources ranging from 70 to 170 is colored in red; hence, it was relevant to the decision making. Average saliency maps were then segmented as described in Sect. 4.2. Figure 5b depicts the clustered saliency map of Subject 08. Notably, the red area represents the cluster with the highest saliency and refers to the most relevant EEG sources, which were then mapped onto the cortex (red dots in Fig. 5c). In the example shown in Fig. 5c, EEG sources located in the left central zone (close to the longitudinal fissure) and in the right temporal zone contributed the most to decoding hand closing motor planning. Following the aforementioned procedure, salient source locations in HC and HO motor preparation were estimated for every subject and are shown in Fig. 6. It is worth noting that a recurrent spatial pattern of cortical activation (left central zone close to the longitudinal fissure and right temporal zone) occurred similarly during HC and HO motor preparation. Such a pattern occurred in 10 out of 15 subjects during hand closing preparation and in 11 out of 15 subjects during hand opening preparation. Nine out of 15 subjects exhibited the same spatial pattern in HC as well as in HO motor planning. Discussion. The present research aims at exploring the interpretability and explainability of the proposed DL-based system in order to provide further insight into the hidden mechanisms of cortical source activation when the brain is preparing hand open/close movements.
To this end, the dataset [41], composed of EEG signals recorded from 15 subjects who performed several repetitions of hand opening and closing, always starting from a common neutral resting position, was analyzed. EEG signals recorded during the resting condition (i.e., no motion planning) were also analyzed. First, a dataset of EEG epochs covering the 1 s preceding motion onset was constructed. The onset of motion was determined through the signals collected by motion sensors embedded in a glove that the participant had worn throughout the experiment. Beamforming was then applied to the EEG epochs to solve the inverse problem and reconstruct the electrical sources at the cortical locations belonging to the primary motor and premotor cortex (i.e., 210 cortical locations, as described in Sect. 2.2). Next, such premotor EEG source epochs (1 s width) were used as input to a customized deep CNN to perform the following binary classifications: HC versus RE and HO versus RE, reporting very good discrimination performance, with average accuracy rates up to 89.65 ± 5.29% and 90.50 ± 5.35%, respectively. Hence, the temporal trend of the electrical sources in the motor cortex allows, in principle, motion planning to be discriminated from resting phases. However, since the ultimate goal of the present study was to provide an in-depth understanding of which cortical locations contributed the most to the discrimination of HC/HO motion planning from the resting phase in the 1 s frame preceding the execution of the movement, an occlusion sensitivity analysis was proposed. Specifically, after training and testing the proposed CNN, EEG time-source epochs were systematically occluded with a grey mask and used as input to the pre-trained deep CNN, producing the so-called heatmaps, or saliency maps. This technique made it possible to highlight which areas of the input map (i.e., EEG time-source epochs) were relevant to the decision-making process. In order to detect the high-saliency regions in each map, the k-means clustering technique was applied and the high-saliency cluster was identified as described in Sect. 5.2. By detecting the high-saliency areas in the time-source maps, the corresponding highly relevant source locations could be pinpointed. The most relevant sources were then mapped onto the cortical surface and represented as red dots. As can be seen in Fig. 6, a recurring pattern can be detected in each subject. Specifically, the cortical sources located in the central area of the motor cortex (close to the longitudinal fissure) and in the temporal zone of the right motor cortex proved highly relevant during HC/HO movement planning. It is to be noted that, to date, it is still not well known whether and how motion planning is spatially organized over the motor cortex. A contralateral involvement of the premotor cortex in motion planning was reported in the literature [56], but further details about the involvement of sub-areas are still to be investigated. Hence, our findings may shed new light on motor preparation and suggest that the aforementioned motor cortical regions (i.e., central and right temporal) are the most involved in HC/HO sub-movement preparation. It is also worth noting that intra-subject differences can be observed. For example, in Subjects 01 and 07 (Fig. 6), only the right-central subregion resulted highly relevant to HC detection, whereas the left-central and also the right-temporal subregions appear involved in HO detection.
To the best of our knowledge, this is the first attempt to study motor preparation through explainable machine learning. Furthermore, this is the first work that attempts to detect the subareas of the motor cortex that are most salient to the preparation of open/close hand movements. Recurrent spatial patterns of cortical activation could be detected across subjects, namely, the central area close to the longitudinal fissure and the right temporal area of the premotor and primary motor cortex. However, the proposed methodology has some limitations. First, the number of EEG channels used in this study was 61; we think that using a higher number of electrodes would have a positive impact on the inverse problem solution, leading to a more accurate cortical source reconstruction. Second, the number of EEG epochs used to train the proposed CNN was limited: overall, each class (HC, HO, RE) included only 60 EEG epochs. Third, movement onset was marked by processing the data collected by the motion sensors embedded in the glove that the participant wore during the experiment. Motion data collected through the glove are smooth and do not allow the onset to be detected instantaneously, which means the epochs used for training may have captured the first milliseconds of motion execution, causing, in principle, the similar activation patterns visible in HC and HO (Fig. 6). For the aforementioned reasons, in the future we intend not only to enroll a larger cohort of subjects and record high-density EEG (128-256 channels) but also, for more precise motion onset detection, to co-register EEG with electromyography (EMG). In addition, the analysis of EEG data is always a spatio-temporal process that is first related to the spiking activities of cortical circuits, i.e., of individual neurons cooperating in a task. This process also has a superimposed spectral component, which is addressed in the global approach proposed in [24], the NeuCube computational architecture based on brain-inspired principles. The concept of spiking neural networks (SNN) is at the basis of this complex model, which allows for ongoing learning and classification over time [23,51]. NeuCube allows deep unsupervised learning of spatiotemporal spike sequences in a scalable 3D SNN reservoir. This can be relevant for adaptation to new data, possibly in real time, which is one of the future objectives for the architecture proposed here as well. Conclusion. In this paper, we proposed a novel deep CNN capable of classifying time-source maps (i.e., EEG source epochs) related to the hand sub-movement (open/close) phase versus the resting state, achieving remarkable results, namely average accuracies of 89.65 ± 5.29% and 90.50 ± 5.35% in the HC versus RE and HO versus RE discrimination tasks, respectively. Furthermore, in order to investigate which cortical sources contributed most to the classification of the hand's motor preparation phase, explainable machine learning (EML) was applied. Occlusion sensitivity analysis made it possible to produce suitable saliency maps, from which the most relevant areas of the input were identified. The highest-saliency region was detected through the k-means clustering technique and the enclosed cortical sources were mapped onto the cortical surface. Experimental results mainly showed that the central and right-temporal cortical sub-regions are activated while the subject is planning hand movements (i.e., HC/HO). It is to be noted that the cortical activation rules that govern motion planning are still not well known.
Hence, on the basis of the achievements reported here, we believe that the proposed approach may be considered an interesting breakthrough for BCI applications. Funding. This work was co-funded by the European Commission, the European Social Fund and the Calabria Region (code: C39B18000080002). The authors are solely responsible for this publication; the European Commission and the Calabria Region decline any responsibility for the use that may be made of the information contained herein. This work was also supported by the UK Engineering and Physical Sciences Research Council (EPSRC) (EP/M026981/1, EP/T021063/1, EP/T024917/1).
7,765.8
2021-03-06T00:00:00.000
[ "Computer Science" ]
Remote analysis of sputum smears for mycobacterium tuberculosis quantification using digital crowdsourcing Worldwide, TB is one of the top 10 causes of death and the leading cause from a single infectious agent. Although the development and roll out of Xpert MTB/RIF has recently become a major breakthrough in the field of TB diagnosis, smear microscopy remains the most widely used method for TB diagnosis, especially in low- and middle-income countries. This research tests the feasibility of a crowdsourced approach to tuberculosis image analysis. In particular, we investigated whether anonymous volunteers with no prior experience would be able to count acid-fast bacilli in digitized images of sputum smears by playing an online game. Following this approach 1790 people identified the acid-fast bacilli present in 60 digitized images, the best overall performance was obtained with a specific number of combined analysis from different players and the performance was evaluated with the F1 score, sensitivity and positive predictive value, reaching values of 0.933, 0.968 and 0.91, respectively. Introduction Tuberculosis (TB) is a leading cause of morbidity and mortality worldwide. Although the development of Xpert MTB/RIF has recently become a major breakthrough, smear microscopy remains the most widely used method for TB diagnosis, especially in low-and middle- income countries [1]. Given its low sensitivity, the World Health Organization (WHO) recommends that three sputum specimens should be examined for each TB presumptive case. Furthermore, in clinical practice, 100 high-power fields need to be examined in order to classify a smear as negative. Acid fast bacilli (AFB) smear reading requires a skilled microscopist and considering the lab workload associated with smear reading, a microscopist can only examine an average of 20-25 smears/day [2]. In addition, smear reading is subject to human error and prone to considerable interobserver variability [3]. Novel approaches, such as automated image analysis through convolutional neural networks, have recently shown promising results performing microscopy tasks as diagnosis of malaria in thick blood smears, tuberculosis in sputum samples, and intestinal parasite eggs in stool samples [4]. Detecting acid fast bacilli in sputum smear samples is a challenge that has been addressed before. In 2008, M.G. Costa et al. [5] published a method based on global adaptive threshold applied to Red and Green color channels of conventional microscopy images, obtaining a sensitivity of 76.7%. In 2018, Kant et al. [6] developed a system based on convolutional neural networks that achieved a recall of 83.8% and a precision of 67.6%. The same year, R.O. Panicker et al. [7] proposed a method that performs detection of tuberculosis bacilli by image binarization and subsequent classification of detected regions using a convolutional neural network obtaining a precision of 78.4%, a recall of 97.1% and a F1 score of 86.8%. To the best of our knowledge, this is the first crowdsourced approach to detect acid fast bacilli in sputum smears samples. Crowdsourcing methodologies leveraging the contributions of citizen scientists connected via the Internet have shown utility to solve biomedical challenges involving "big data" analysis that cannot be entirely automated [8]. The "gamification" of crowdsourced tasks untaps a resource for scientific research such as biomedical image analysis [9,10]. 
In this context, we aimed to evaluate the feasibility of a crowdsourced approach to sputum smear microscopy analysis for the diagnosis of tuberculosis. The gaming platform TuberSpot (www.tuberspot.org) is an online game for mobile and PC launched on 24 March 2015. TuberSpot players score points by correctly identifying M. tuberculosis bacilli in digitized sputum slide fields of view (FOVs) with Ziehl-Neelsen stain (Fig 1). Gamers analyse several field images (FOVs) during each game. A backend server randomly distributes the different FOVs to the players in real time. Once the game starts, the player sees a FOV on the screen and, within a limited time, has to click on the places where bacilli are believed to be present. Once all bacilli are found, players pass to the next level. We have digitally introduced one synthetic bacillus (fake) into each of the negative FOVs; it cannot be distinguished from a real one, ensuring that enough time is spent on the FOV even if originally there was no bacillus in it, and allowing the introduction of negative FOVs into the game. At the beginning of the game, there is a short tutorial showing what a bacillus looks like. Dataset. The game database consists of 60 digitized FOVs from anonymous samples: 20 images of fields without any bacilli, 20 images with 1-10 bacilli and 20 images with 10-40 bacilli. Digitized smears were provided by the Centro de Investigação em Saúde de Manhiça (Mozambique) and Hospital Clínico San Carlos (Spain). The 60 images come from all types of sputum smear examination reports (negative, scanty, +1, +2, +3). Digitization of the samples was performed with a smartphone (Sony Xperia Z2) attached to the microscope eyepiece by an adapter (Celestron Universal Digiscoping Adapter). A gold standard for each FOV was determined by three different expert microscopists, reporting the position and number of bacilli. Crowdsourcing scheme. Collective detection is defined as the number of bacilli found in a single FOV based on the combination of the gameplays from different players over the same FOV. In order to exploit the redundant information produced by multiple independent players over the same FOV, an algorithm was implemented considering that there is a bacillus in a certain area of the FOV if enough individual players in a larger group have clicked ("voted") on that area of the same FOV [10]; a minimal sketch of this voting scheme is given further below. Taking into account that players do not click on exactly the same pixel of the image, we applied a clustering strategy: each point was clustered with the closest neighboring point if the distance between the two points was shorter than the typical size of a bacillus. To classify a point in the FOV as a bacillus, a given number of players must agree: this number is denominated the Quorum (Q). Group sizes (GS) from 1 to 30 gameplays, and quorums from 1 to the maximum number of gameplays, were tested to maximize the performance on the whole test dataset. The performance of the collective detection algorithm was evaluated for each quorum (Q) and each player group size (GS) with respect to the gold standard by measuring the positive predictive value (precision) (1), the sensitivity (recall, or true positive rate, TPR) (2), the F1 score (3) and the specificity (true negative rate, TNR) (4). Collective detections with a given Quorum and Group Size were considered true positives (tp) if the distance from the positive cluster to a gold standard detection was shorter than the typical size of a bacillus.
Accordingly, collective detections at a distance greater than the typical bacillus size from any reference bacillus are considered false positives (fp). All the bacilli that were not collectively detected were considered false negatives (fn). To calculate the true negatives (tn) we measured the number of equivalent bacilli in the area of the field of view where no bacilli were identified either by the experts or by the collective detections. To this end, the area of a bacillus and its immediate surroundings (bacillus area) is used to divide up the area of the FOV free of bacilli and collective detections. The metrics are defined as: positive predictive value (precision) = tp/(tp + fp) (1); sensitivity = tp/(tp + fn) (2); F1 score = 2 × precision × sensitivity/(precision + sensitivity) (3); specificity = tn/(tn + fp) (4). Where: tp = a bacillus detected by at least Q out of GS players; fp = a point that is not a bacillus, voted by Q out of GS players; fn = a bacillus that is not "voted" by Q out of GS players; tn = the number of bacillus-equivalent areas in a FOV where bacilli are not present and not voted by Q out of GS players. Additionally, Cohen's kappa was computed to assess the agreement between the collective assessment and the reference gold standard. Fig 2 shows an example of a field of view containing 18 bacilli (red mask); green and red crosses are points that the 20 players clicked on during gameplay. In the first image, with Q = 2, 2 people out of 20 had to agree by clicking in the same area in order for that area to be considered a bacillus; with that setting the result would be 18 true positives and 3 false positives. In the second image, with Q = 20, taking into account that the group size is 20, all the players of the group had to click on an area for it to be considered a bacillus; the result for that experiment is 9 true positives and 9 false positives. Experimental setup. The images for this experiment were uploaded to the TuberSpot online platform in April 2017 and the analysis was performed in February 2018. During that period, 1790 players analyzed the digitized FOVs, reaching a total of 14,749 individual FOV analyses. The players were not given a specific number of FOVs to analyze as a task to complete; every FOV they analysed was considered independently of the number of images they played. The performance of the collective detection was evaluated for group sizes from 1 to 30 gameplays, considering a gameplay as a FOV analyzed by a single player. We analyzed 160 random combinations of gameplays for each group size and each one of the 60 images. Based on that analysis, we identified the minimum number of players that provides the highest F1 score over our whole test dataset. The collective detection based on the optimal group size has been evaluated with a confusion matrix with three classes relevant for diagnostic purposes of the FOV: no AFB (or a fake), between 1 and 10 AFB, and more than 10 AFB. Results and discussion. The best overall performance was obtained for the combination of 29 gameplays and quorum 18, with a mean F1 score of 0.933, a sensitivity of 0.968, a positive predictive value of 0.916, a specificity of 0.998 and a kappa statistic of 0.927 in comparison to expert microscopists. A very competitive result considering a smaller group size is achieved for group size 8 and quorum 5: for this combination the F1 score is 0.917, with a sensitivity of 0.963, a positive predictive value of 0.893, a specificity of 0.998 and a kappa statistic of 0.905 (Fig 3).
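As a concrete illustration of the collective-detection scheme described above, the following is a minimal sketch, not the authors' implementation. It greedily clusters click points that fall within a typical bacillus size of an existing cluster and keeps a cluster as a detection when at least Q distinct players voted for it; the function name and the simple incremental clustering rule are assumptions made for illustration.

```python
import math

def collective_detections(clicks, quorum, bacillus_size):
    """Combine clicks from several players over one FOV.

    clicks        : list of (player_id, x, y) tuples
    quorum        : minimum number of distinct players per detection (Q)
    bacillus_size : typical bacillus size in pixels (clustering distance)
    Returns the centroids of clusters voted by at least Q players.
    """
    clusters = []  # each cluster: {"points": [(x, y), ...], "players": set()}
    for player, x, y in clicks:
        target = None
        for c in clusters:
            cx = sum(p[0] for p in c["points"]) / len(c["points"])
            cy = sum(p[1] for p in c["points"]) / len(c["points"])
            if math.hypot(x - cx, y - cy) < bacillus_size:
                target = c
                break
        if target is None:
            target = {"points": [], "players": set()}
            clusters.append(target)
        target["points"].append((x, y))
        target["players"].add(player)

    detections = []
    for c in clusters:
        if len(c["players"]) >= quorum:            # enough independent votes
            cx = sum(p[0] for p in c["points"]) / len(c["points"])
            cy = sum(p[1] for p in c["points"]) / len(c["points"])
            detections.append((cx, cy))
    return detections
```

With GS = 8 and Q = 5, for example, a location is reported as a bacillus only if at least 5 of the 8 players clicked within a bacillus-sized neighbourhood of it.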
According to the guideline for sputum examination for tuberculosis by direct microscopy in low income countries proposed by the IUATLD [2], there are three different classifications regarding the AFB counts per FOV: no AFB, 1-10 AFB and >10 AFB. Based on that classification per FOV and the quantity of FOV per smear sample with a specific classification, the severity of the disease is determined. In order to test our methodology following this guideline, we classified the FOV analyzed for the combination of 8 gameplays with quorum of 5 (Table 1). This analysis shows that it is possible to identify and count M. tuberculosis AFB in digitized sputum smears based on the data produced by a number of non-expert on-line volunteers playing a video game over the same FOVs. Results from the collective detection with high accuracy for a group size of 29 players and a quorum of 18 against expert microscopists as gold standard, and a very competitive result for a smaller number of gameplays, group size 8 and quorum 5. According to the TB reporting IUATLD guideline it is necessary to count the AFB present on many FOVs in order to report the grade of infection of the patient. Therefore, further experiments with our system should be done to evaluate the performance following the entire diagnostic protocol for which specificity and sensitivity should be reported. Broadly, this research has defined the design criteria for a real-time remote analysis system for performing routine M. tuberculosis quantification that could be applied in endemic settings, characterized by a lack of expert microscopists [10]. Current trends and recommendations for TB diagnostics are shifting from microscopy confirmation towards molecular methods such as Xpert, Turenat etc. . . However, according to the latest Global Tuberculosis Report by WHO [11] rapid molecular tests were only used in 33% of the total people newly diagnosed in 2020 due to lack of accessibility. In this context, the proposed concept could still have potential in those settings where AFB is the main diagnostic tool in place and could overcome some of the limitations associated with AFB reading (mainly single operator dependent reading, lack of training of readers in many settings and time consumption in already overburdened lab technologists). As limitations, our solution could not be useful for TB diagnosis in vulnerable groups such as people living with HIV and children given the low sensitivity of sputum smear tests in these groups. On the other hand, in remote high burden settings, mobile phone and SMS-based technologies are emergent enabling tools to increase the rate of case detection by improving the efficacy of specimen collection and reporting results of acid-fast bacilli (AFB) microscopy, or to improve reporting and management for rapid diagnostic testing of HIV and malaria [12,13]. Such system could be operationalized obtaining images from a mobile microscope system [14,15] and the distribution of images through the internet via the mobile network, provided that there is connectivity which might not be the case in remote and rural environments. Moreover, it would be necessary to include a step to identify color blindness, or develop alternative representations for wider accessibility, as the analysis could be altered if this disease is considered. 
This concept has been piloted under relevant operational conditions, including acquisition of the data through a mobile phone adapted to a conventional microscope, transmission of the data to the game through the mobile phone data connection, and reception of the assessment of the uploaded FOV from the online game after a given time. The system was tested during a specific campaign taking place over a few days at the Centro de Investigação em Saúde de Manhiça (CISM) in Mozambique, showing an assessment turnaround per case of 15-30 min after the image was uploaded into the game, with around 100 simultaneous players. This technological set-up has the potential to be converted into a remote diagnosis platform connecting volunteer players, microscopists and specifically trained remote digital workers (micro-workers) who could get an incentive for their assessments, generating flexible labor posts in a scheme similar to Amazon Mechanical Turk. (Table 1 caption: Confusion matrix of the reference number of M. tuberculosis per image (rows) vs. the number of M. tuberculosis found by players (columns). Results shown in this matrix were achieved for a collective detection made by groups of 8 people with a quorum of 5; 160 random groups of 8 players were used for each one of the FOVs. Numbers in the confusion matrix represent the percentage of the gameplays analyzed that were classified as a negative FOV (with zero bacilli), as a FOV with a fake bacillus, as a FOV with 1 to 10 bacilli or as a FOV with more than 10 bacilli.) Players would be rated on their level of expertise and their performance within the game. Representative FOVs (negative FOVs, FOVs with a large number of bacilli and FOVs with a medium number of bacilli) from crowdsourced samples assessed by less experienced players would be sent to expert microscopists for diagnostic confirmation, integrating the results of the crowdsourcing system. Another possible way to exploit the diagnostic utility of this technology would be to prioritize reading for human experts based on the sample classification performed by the players. The samples in which the players identified the highest number of bacilli would be sent first to expert microscopists to obtain a confirmation as soon as possible, using the crowdsourcing system report and the collective detections on the images to facilitate the process. Subsequently, the cases with fewer detected bacilli, and especially the challenging cases with few or no identified bacilli, would be sent to experts for an additional detailed reading and confirmation. For both scenarios, specific studies would be necessary to assess the speed with which digital workers and players respond, as well as the stability of the network required to ensure that diagnoses reach remote areas rapidly. Certification of the system and the workflows should follow these studies before they could be integrated into real clinical settings. Furthermore, there is relevant potential in incorporating this type of strategy into External Quality Assurance schemes, validating trainees' performance in a friendlier way over a certain period of time and increasing the length of the training and the digital game levels according to the complexity of the images. Although fluorescence microscopy has better accuracy, in the settings where this technology would be useful that technique is not widely available.
Comparing the results obtained through this crowdsourced approach to the ones obtained with deep learning techniques [4][5][6][7], we believe crowdsourcing methodologies can provide added value to traditional image-based diagnostics. Additionally, as recently published for helminthiasis samples [16], this type of systems can produce expert level labelled data that can be used to train artificial intelligence systems and contribute to the definition of new digital diagnostic methodologies that combine artificial intelligence systems [17,18] and human intelligence. Lastly, this approach might also be very relevant for educational purposes and a powerful tool for advocacy, especially among young people. This has been proven during the past years, more than 5000 children and young people in Spain have participated in workshops with videogames for global health.
4,043.2
2022-05-19T00:00:00.000
[ "Medicine", "Computer Science", "Environmental Science" ]
Switchable and Reversible Superhydrophobic Surfaces: Part One. In this chapter, most of the methods used in the literature to prepare switchable and reversible superhydrophobic surfaces are described. Inspired by Nature, it is possible to induce the Cassie-Baxter-to-Wenzel transition using different external stimuli such as light, temperature, pH, ion exchange, voltage, magnetic field, mechanical stress, plasma, ultrasonication, solvent, gas or guest molecules. Such properties are extremely important for various applications, but especially for controllable oil/water separation membranes, oil-absorbing materials and water harvesting systems. Introduction. Superhydrophobic surfaces are characterized by a water apparent contact angle (θ_w) above 150° and ultra-low water adhesion or hysteresis (H). Obtaining superhydrophobic surfaces is crucial from a theoretical point of view and also for various applications such as self-cleaning windows and textiles, antifingerprint or antireflective coatings for optical instruments and mobile phones, liquid transportation, separation membranes, and the control of cell and bacterial adhesion. In Nature, many plants and animals have superhydrophobic properties [1]. These surface properties are extremely important, for example, for surviving predators or hostile and arid environments. One can cite the famous lotus leaves with their self-cleaning properties, and also other plants and animals able to slide on the water surface, to see in foggy environments, to walk on vertical substrates, to breathe underwater or to swim very rapidly (Figure 1) [2-14]. For practical applications, it is often necessary to have "robust" superhydrophobic properties, which is possible by combining appropriate surface structures and low-surface-energy materials. Indeed, robust superhydrophobic surfaces are obtained if the surface is able to stabilize the Cassie-Baxter state. Using an external pressure, it is possible to induce the Cassie-Baxter-to-Wenzel transition, but the transition is irreversible. Hence, in order to induce a reversible Cassie-Baxter-to-Wenzel transition, external stimuli are often used. In this chapter, most of the methods used in the literature to obtain switchable and reversible superhydrophobic surfaces are summarized. (Figure 1 credits: reprinted with permission from the American Chemical Society, USA. B: Strelitzia reginae leaves, Ref. [5], Copyright 2012. C: rose petals, Ref. [9], Copyright 2008. D: springtails, Ref. [8], Copyright 2013. E: insect and animal foot, Ref. [13], Copyright 2009. F: Juncus pith, Ref. [14], Copyright 2017.) Indeed, different external stimuli can be used, such as light, temperature, magnetic field, mechanical stress or ion exchange. Such materials are widely used for applications in controllable oil/water separation membranes and water harvesting. One of the main applications is membranes with controllable wettability for oil/water separation, which is extremely important for finding solutions to oil tanker spills. Another application is their use in car or building windows in order to see clearly even when it is raining.
Water is also unwanted in building materials because it has a high thermal conductivity, so methods to remove water quickly are highly sought after. Water harvesting is another important application, and systems able to control water wettability are extremely promising, especially in hot and arid environments. Theoretical part. Both the surface energy (γ_SV) and the presence of surface roughness are key parameters for reaching superhydrophobic properties. As reported by Young, the contact angle of a "smooth" substrate is governed by three surface tensions following the relation cos θ_Y = (γ_SV − γ_SL)/γ_LV, where γ_SV, γ_SL and γ_LV are the surface tensions at the solid-vapor, solid-liquid and liquid-vapor interfaces, respectively [15]. However, the presence of surface roughness is fundamental to reach contact angles above 150°, as reported by Wenzel and Cassie-Baxter [16,17] (Figure 2). These two equations take into account the effect of surface roughness, contrary to the Young equation, but are also related to it. When the water droplet follows the Wenzel regime, it penetrates all the surface roughness, leading to a full solid-liquid interface amplified by the roughness parameter, following the equation cos θ = r cos θ_Y (r is the roughness parameter) [16]. Hence, the adhesion of the water droplet is high because the roughness parameter increases the solid-liquid interface. Moreover, it is possible to reach contact angles above 150°, but only using intrinsically hydrophobic materials (θ_Y > 90°). However, it is now accepted that it is possible to obtain superhydrophobic and even superoleophobic properties using intrinsically hydrophilic and oleophilic materials, respectively. This is possible only if air is present inside the surface roughness, as reported by Cassie-Baxter [17]. The Cassie-Baxter equation has to be applied when there is air trapped inside the surface roughness between the water droplet and the surface. The Cassie-Baxter equation is cos θ = r_f f cos θ_Y + f − 1, where r_f is the roughness ratio of the substrate wetted by the liquid, f the solid fraction and (1 − f) the air fraction. Moreover, with the Cassie-Baxter state, it is possible to obtain superhydrophobic properties with ultra-low adhesion if the air fraction between the water droplet and the surface is very large. The Wenzel and Cassie-Baxter regimes are two extreme states, and it is possible to induce the Cassie-Baxter-to-Wenzel wetting transition by applying an external pressure. Indeed, the Cassie-Baxter state is metastable, and it is possible to switch from the Cassie-Baxter to the Wenzel state by supplying sufficient energy. "Robust" superhydrophobic surfaces are surfaces that can repel water even if a high pressure is applied [18,19]. This is the case for lotus leaves, which remain superhydrophobic even during rainfall. It was also shown that the presence of re-entrant surface structures often increases the surface robustness [20-23]. However, the Cassie-Baxter-to-Wenzel wetting transition induced by an external pressure is irreversible because the forces required for dewetting are too strong [24,25]. In this review, it will be shown how, by supplying other forms of energy to the system, it is possible to obtain a reversible Cassie-Baxter-to-Wenzel wetting transition and, for example, reversible superhydrophobic-to-superhydrophilic properties.
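As a quick numerical illustration of the Wenzel and Cassie-Baxter equations above, the short sketch below computes the apparent contact angle predicted by each regime. The Young angle, roughness parameter, wetted-area roughness ratio and solid fraction used here are arbitrary illustrative values, not data from any of the cited studies.

```python
import math

def wenzel_angle(theta_y_deg, r):
    """Apparent contact angle from the Wenzel equation: cos(theta) = r*cos(theta_Y)."""
    c = r * math.cos(math.radians(theta_y_deg))
    c = max(-1.0, min(1.0, c))          # clamp: a cosine cannot exceed 1 in magnitude
    return math.degrees(math.acos(c))

def cassie_baxter_angle(theta_y_deg, r_f, f):
    """Apparent contact angle from cos(theta) = r_f*f*cos(theta_Y) + f - 1."""
    c = r_f * f * math.cos(math.radians(theta_y_deg)) + f - 1.0
    c = max(-1.0, min(1.0, c))
    return math.degrees(math.acos(c))

# Illustrative values: an intrinsically hydrophobic material (theta_Y = 110 deg),
# roughness parameter r = 1.8, wetted-area roughness r_f = 1.1, solid fraction f = 0.1.
print(wenzel_angle(110, 1.8))              # ~128 deg: roughness amplifies hydrophobicity
print(cassie_baxter_angle(110, 1.1, 0.1))  # ~160 deg: trapped air gives superhydrophobicity
```

With these values, the Wenzel regime amplifies an intrinsic 110° angle to about 128°, whereas a Cassie-Baxter state with a 90% air fraction exceeds 150°, illustrating why stabilizing trapped air is the route to superhydrophobicity.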
Indeed, it is possible to obtain reversible superhydrophobic properties if this energy modifies the surface energy (γ SV ) and/or the surface roughness. Most of the extern stimuli used in the literature to obtain reversible superhydrophobic properties will be reviewed. Reversible superhydrophobic surfaces The surface energy and surface morphology are two main key parameters governing surface wettability. Extern stimuli are very interesting approaches to induce a change in surface energy and/or surface morphology and lead to a transition from hydrophobic/superhydrophobic to hydrophilic/superhydrophilic. The stimuli used in the literature will be described in order to induce reversible changes in surface wettability ( Figure 3). UV light Light is one of the major extern stimuli used in the literature because of the easiness of utilization and high changes in surface wettability [26]. Various photosensitive inorganic oxides and organic polymers can undergo transitions from hydrophobic/superhydrophobic to hydrophilic/superhydrophilic after UV light irradiation and come back to the original state after storing in dark or exposing to visible light (VIS). This transition is often reversible during many cycles. Inorganic materials Among the photosensitive inorganic oxides, TiO 2 and ZnO are the most studied semiconductors. TiO 2 films are now largely used as steamtight and self-cleaning windows for their intrinsic photocatalytic properties and photo-induced hydrophilicity. Indeed, as shown in Figure 4, the presence of UV irradiation induces the formation of photoexcited electrons, which can reduce O 2 to generate superoxide radicals ( • O 2 À ) or hydroperoxyl radicals (HO 2 • ). These reactive oxygen species are able to convert organic pollutants into CO 2 and water and as a consequence clean the surface [27,28]. In 1997, Watanabe et al. [29,30] showed that the water contact angle (θ w ) of polycrystalline anatase TiO 2 was 72 AE 1 and that their wettability properties could reversely change after UV light irradiation. Indeed, as shown in Figure 4, the surface of TiO 2 consists of oxygen bridges and UV irradiation creates oxygen vacancies converting Ti 4+ into Ti 3+ . These defects can then react with water forming hydrophilic group at the surface and as a consequence increase the surface hydrophilicity. Then, the wettability conversion was observed on both polycrystalline/monocrystalline anatase and rutile [31,32]. Many works were dedicated to the modulation of the wettability of TiO 2 films [33,34]. Among these works, low surface energy coatings were used to enhance the surface hydrophobicity. For example, the contact angle of colloidal crystal of TiO 2 films modified by fluoroalkylsilanes (FAS) was 100 [35]. Then, as observed in Nature, a huge attention was dedicated to the increase in surface roughness of TiO 2 films in order to obtain superhydrophobic properties (θ w > 150 and low water adhesion). For example, based on Al 2 O 3 colloids with flower-like morphology, rough colloidal TiO 2 films modified by fluoroalkylsilanes (FAS) was prepared in 2000 [36]. The combination of surface microstructures with low surface energy materials allowed reaching superhydrophobic properties with θ w > 150 . After exposure to UV light irradiation, the films became superhydrophilic with θ w < 5 . Vertically aligned TiO 2 nanotubes were also reported by anodization of Ti substrates in the presence of F À [38][39][40]. 
The tube diameter and length were 175 nm and 3.3 μm, while the density of TiO 2 the nanotubes was 2.3  10 7 tubes mm À2 . After modification with a fluoroalkylsilane, the substrates displayed superhydrophobic properties with low water adhesion before UV irradiation and parahydrophobic with high water adhesion after UV irradiation. Moreover, the substrates could reversely switch from non-sticky to sticky by UV irradiation and heat annealing. Other authors also report the possible switching from highly hydrophobic and superhydrophilic using N-doped TiO 2 nanotubes but without low surface energy materials ( Figure 5) [39]. Superhydrophobic TiO 2 surfaces with nanostrawberry-like morphology were also reported using a seeding growth process [41]. Now, TiO 2 -based superhydrophobic surfaces with reversibility are largely used for the conception of smart surfaces and other functional materials. However, some rough morphologies lead to a severe dispersion of the light if their roughness is higher than the wavelength of the light and as a consequence to a loss in transparency. Hence, a promising strategy is the use of surfaces with low surface roughness [42]. In order to obtain an easy and reproducible method, Fujishima et al. used a CF 4 plasma etching to reach microstructured TiO 2 -based superhydrophobic properties [43,44] after coating with octadodecylphosphonic acid (ODP). After an etching time of 30s, surfaces with θ w > 165 were obtained with reversible conversion by UV irradiation. Ti substrates with switchable and reversible wettability from underwater superoleophobic to superoleophilic were also obtained by femtolaser laser treatment. The substrates are excellent candidates for separating oil/water mixtures [45]. TiO 2 nanoparticles were also deposited on microstructured surfaces in order to enhance the surface properties [45][46][47][48][49][50][51][52][53][54]. For example, Franssila et al. used substrates with microscale overhang pillars before depositing TiO 2 nanoparticles by atomic layer deposition [46,47]. Depending on the UV irradiation time, the surfaces could switch from superhydrophobic to parahydrophobic (1 min), hydrophilic (5 min) or superhydrophilic (10 min). TiO 2 nanoparticles were also deposited on pre-patterned substrates such as paper, membranes or sponges in order to induce different special wettabilities [51][52][53][54]. ZnO is another extremely important photosensitive semiconductor for its intrinsic optical, electronic and acoustic properties, reacting similarly to TiO 2 [55,56]. Here, also many works were dedicated to induce ZnO structures with high roughness . Jiang et al. reported the obtaining of ZnO nanorod arrays using hydrothermal processes ( Figure 6). Their diameter and length were 50-150 nm and 1.2 μm, respectively. The surfaces displayed switchable and reversible properties from superhydrophobic (θ w = 161.2 ) to superhydrophilic by alternating UV light irradiation and dark storage [57]. These kinds of materials could also be used as memristors controllable with the illumination direction [60]. Another application is the preparation of controllable membranes for oil/water separation with specific wetting properties. For example, Jiang et al. developed switchable and reversible superhydrophobic-superhydrophilic and underwater superoleophobic properties by growth of ZnO nanorods on stainless steel meshes ( Figure 7). 
More precisely, the meshes were both superhydrophobic and underwater superoleophilic but became both superhydrophilic and underwater superoleophobic after UV light irradiation [61]. Various other oxides, including WO 3 , V 2 O 5 , SnO 2 , CuO, Fe 2 O 3 , In 2 O 3 , SiC and GaN, were used to reversibly change the surface wettability from superhydrophobic to superhydrophilic by alternating UV light irradiation and dark storage or heat treatment [80][81][82][83][84][85][86][87][88][89][90][91][92]. For example, Wang et al. showed that the protein adsorption and cell adhesion on GaN nanowires can be modulated by UV irradiation because the surface wettability changes from superhydrophobic to superhydrophilic. It was also sometimes necessary to add a hydrophobic molecule to enhance the surface hydrophobicity and the UV treatment is often able to remove this molecule [93][94][95][96][97]. For example, Bi 2 O 3 hyperbranched dendritic structures were superhydrophobic but only after immersion in stearic acid solution [96]. Then, the UV irradiation was able to remove stearic acid and the surface became superhydrophilic. However, to obtain superhydrophobic properties again, it was necessary to add stearic acid again. Similarly, carbon-based materials, including carbon nanotubes and graphene films, were also found to change from superhydrophobic to superhydrophilic by UV light irradiation and dark storage [98][99][100][101]. Here, the authors proposed that UV irradiation allows to change the absorbed O 2 molecules into hydrophilic groups such as hydroxyl ones [98]. Moreover, various inorganic oxides (such ZnO, p-Si, Al 2 O 3 , SrTiO 3 , Sn, ZnS, CuO, Ag 2 O and Cr 2 O 3 ) were found to be also sensitive to X-ray with reversible wettability [102]. reported that the trans isomer is more hydrophobic because it has a smaller dipole moment and a low surface energy, in comparison to the cis isomer. Indeed, the benzene substituent is more present at the extreme surface in the trans isomer. However, the changes in θ w on smooth substrates are lower than 10 after UV irradiation [122]. In 2005, Jiang et al. prepared a rough micro-patterned silicon substrate by photolithography and deposited on it a monolayer of azobenzene [121]. They showed that the difference in θ w between the trans and cis isomer is highly depending on the spacing between the pillars ( Figure 9). The highest θ w difference was obtained for a spacing of 40 μm, for which a change from 152.6 to 78.3 was observed after UV light irradiation. Hence, the maximal difference observed was 66.3 . In order to enhance the surface properties, hydrophobic substituents such as CF 3 were grafted on the benzene ring of azobenzene groups [123]. When azobenzene is in the trans form, the CF 3 groups are at the extreme surface and the surface is expected to be more hydrophobic than without CF 3 groups. Combining fluorinated azobenzene with high roughness, Cho et al. were the first to show the possibility reversibly switch from superhydrophobic to superhydrophilic during the trans/cis transition [124,125]. Using a layer-by-layer strategy alternating poly (allylamine hydrochloride) (PAH) and SiO 2 nanoparticles to obtain rough surfaces, the azobenzene substituents were grafted during the last step. Even if the UV irradiation induced a small θ w difference (5 ) for the smooth substrate, the increase in roughness induces a huge θ w difference up to 147 for nine deposition cycles. Similar results were obtained using coreshell Fe 3 O 4 @SiO 2 nanoparticles [126]. 
The surface hydrophobicity could be easily controlled with the UV or visible light illumination time ( Figure 10). These materials could also be used to selectively induce water permeation inside membranes. Other works showed the possibility to modify cotton and paper substrates with these kinds of photosensitive polymers [127][128][129]. Using polyhedral oligomeric silsesquioxane (POSS) and fluorinated azobenzene, Gao et al. reported the possibility to obtain cotton fabrics with switchable from superhydrophobic/superoleophobic to highly hydrophobic/oleophobic [128,129]. Indeed, many works were dedicated to the switching from superhydrophobic (low adhesion) to parahydrophobic (high adhesion) after UV irradiation. In order to achieve these properties, many strategies were employed in the literature [130][131][132][133][134][135]. For example, Xu et al. used an organotellurium-mediated controlled radical polymerization (TERP) in order to achieve polymers with micro/nanostructures [130][131][132]. Hu et al. used SiO 2 nanoparticles and polydopamine in order to graft the azobenzene moieties on SiO 2 nanoparticles [131]. By contrast, other groups deposited azobenzene-based materials on pre-structured surfaces [133][134][135]. For example, Rühe et al. deposited the azobenzene moieties on Si nanograss obtained by etching Si substrates with C 4 F 8 , SF 6 and O 2 . The surfaces could switch from low adhesion to completely sticky after UV irradiation [133]. Yu et al. used micro and nanostructures substrates obtained by photolithography and etching before depositing the azobenzene moieties [134]. The authors measured an adhesion force of 60.6 AE 12.3 μN and 80.8 AE 4.9 μN before and after UV illumination, respectively. Liu et al. used anodized aluminum substrates with a "building blocks" morphology. After coating with a PDMS polymer grafted with azobenzene moieties, the substrates displayed switchable wettability from superhydrophobic (low adhesion: 6.2 μN for the trans isomer) to parahydrophobic (high adhesion: 44.8 μN for the cis isomer) properties after UV irradiation [135]. Diarylethene derivatives were found to be another excellent choice for light-sensitive switchable wettability ( Figure 11). In this case, the light induces a change in the chemical structure from open-ring isomer to closed ring isomer. Uchida et al. reported the unique behavior of this molecule. Upon UV light irradiation, the film became superhydrophobic with θ w = 163 due to the formation of microfibrils of diameter around 1 μm [136,137]. Upon visible light irradiation, the surface again became flat with θ w = 120 . The chemical structure of the diarylethene can also be changed in order to modify the microcrystalline structures. For example, the surface morphology could be modified by sulfonation of the thiophene rings [138]. Different substituents were also introduced to change the material crystallinity [139][140][141]. The authors demonstrated that in order to obtain superhydrophobic properties with θ w > 170 , it is preferable to form densely submicrometer sized needle-shaped crystals [139]. For that, it is important that the eutectic temperature of the two isomers of the diarylethene is above that the temperature of formation. Otherwise, large crystals are formed ( Figure 12). Spiropyran is another kind of photochromic organic moiety with wetting properties sensitive to light. Its closed form is apolar and hydrophobic, whereas its open form is polar and hydrophilic ( Figure 13). 
These two forms can be reversely switched by UV and visible light irradiation [142][143][144]. In order to obtain superhydrophobic, spiropyran-based molecules can be deposited on rough surface [145][146][147][148]. For example, the deposition on Si nanograss gave rise to superhydrophobic properties. Moreover, the authors observed a change from superhydrophobic (low adhesion) to parahydrophobic (high adhesion) properties upon UV light irradiation [145]. Smirnov et al. also reported the possible control of water into a nanoporous aluminum membrane containing a spiropyran moiety using light [147]. Here, the photosensitive membrane acts as a burst valve, allowing the transport of water and ions across the membrane. Lu et al. also reported the formation of melamine-formaldehyde sponge with spiropyran moiety for oil recovery. The sponge was able to control oil absorption and desorption under light illumination [148]. Coumarin was also used in the literature to change the surface wettability. Here, the UV light induces the dimerization of coumarin as shown in Figure 14. Hampp et al. deposited a selfassembled monolayer (SAM) with coumarin moieties [149]. They observed a change of θ w from 70 to 55 . Xu et al. grafted coumarin on SiO 2 nanoparticles [150]. The authors observed in a change of the surface morphology from random nanoparticle aggregates to rings accompanied with a change of θ w from 102 to 163 . Temperature The reversibility of surface wettability by thermal treatment has given rise to a huge interest during the last years [151,152]. Poly(N-isopropylacrylamide) (PNIPAAm) has been extensively used as an example polymer with thermal response, which has a low critical solution temperature (LCST) of around 32-33 C [151]. On smooth substrate, the θ w of modified PNNIPAAm can changed from hydrophilic to hydrophobic when the temperature is over LCST, resulting from competition between intra-and intermolecular interactions, as shown in Figure 15. By grafting the polymer on rough silicon surface obtained by etching, the surface wettability could be changed from superhydrophilic to superhydrophobic with θ w = 149.3 when the temperature changed from 25 to 40 C. Other works also reported this possibility using different strategies [153][154][155][156]. PNIPAAm/PS and PNIPAAm/poly(L-lactide) (PLLA) nanocomposites were also produced by electrospinning [157,158]. Depending on the concentration of the constituents, the surface morphology could be changed from beads to long nanofibers. At high concentration of PNIPAAm, the surface could change from superhydrophilic to superhydrophobic when the temperature changed from 20 to 50 C. It was also shown that the response time to switch is depending on the size of the fibers [159]. When the diameter of the fiber was small (around 380-1500 nm), the response time was 4-5 s [160]. Other nanocomposites were also reported with this technique. PNIPAAm/PS blends were used to obtain densely packed nanocupules of 284 nm diameter and 31 nm wall thickness using an anodized aluminum oxide (AAO) template ( Figure 16) [161]. Here, the surface could switch by changing the temperature from parahydrophobic (high adhesion) to superhydrophobic (low adhesion) with a difference in adhesion force of around 20 μN. PNIPAAm was also polymerized on an elastic polyurethane (PU) microfibrous membrane by free radical polymerization [162]. The membrane could be used for controllable oil/water separation. 
At 25 °C, the membrane was underwater superoleophobic, while at 45 °C the membrane was underwater superoleophilic (Figure 17: PU membrane grafted with PNIPAAm to induce a reversible change from underwater superoleophobic to underwater oleophilic by heating and cooling; Ref. [162], Copyright 2016, reprinted with permission from the American Chemical Society, USA). Xin et al. reported the preparation of PNIPAAm-cotton fabrics able to collect different amounts of water from fog [163]. At room temperature, the cotton showed a water uptake of 340%, while at 40 °C the uptake was only 24%. Such materials are extremely interesting for water harvesting systems. Microfluidic thermosensitive valves were also prepared [164,165]. After coating with PNIPAAm, the valve was hydrophilic at room temperature and allowed the flow (open state), while at 70 °C, the valve was superhydrophobic and stopped the water flow (closed state). Using a similar idea, an "ON-OFF" switchable enzymatic biofuel cell was reported [166]. Here, gold nanoparticle-protected glucose oxidase and laccase were entrapped in PNIPAAm chains. At room temperature, the fuels and the mediator could access the catalytic centers of the enzymes ("ON" state), while at 50 °C the transport of the reactants was blocked ("OFF" state). Poly(ε-caprolactone) (PCL) was also tested as a thermosensitive polymer with a transition from a crystalline phase to an amorphous phase (Figure 18) [167]. Jiang et al. showed that PCL 10000 is an ideal material. For a smooth surface, the θw of PCL 10000 was 88.1° at room temperature because the polymer chains are frozen by crystallization. However, at 60 °C, θw was 60.8° because water can induce the reorientation of the hydrophobic/hydrophilic groups. Moreover, by depositing this polymer on a rough substrate composed of arrays of square pillars (10 μm × 10 μm in width, 30 μm in height), a change from superhydrophobic to superhydrophilic was observed after heat treatment. The best properties were obtained with a groove spacing of 40 μm. SiO2/PCL and carbon nanotube/PCL nanocomposites were also used in the literature [168,169]. For example, using carbon nanotubes, the possibility to switch from hydrophobic to hydrophilic or from superhydrophobic (low water adhesion) to parahydrophobic (high water adhesion) was reported, depending on the PCL concentration. Liquid crystalline polymers also show thermosensitivity, since temperature induces a reversible change from the liquid crystalline to the isotropic phase. After grafting liquid crystalline segments (butyloxy biphenylcarbonitrile) on a smooth PDMS elastomer, the authors observed a change of θw from 92.4° to 89.3° due to a change of the polymer from smectic A to isotropic [170]. The same polymer was also used to cover rough substrates composed of arrays of square pillars (10 μm × 10 μm in width, 30 μm in height). A huge influence of the groove spacing was observed. Interestingly, a change from superhydrophobic (low water adhesion) to parahydrophobic (high water adhesion) was observed for a groove spacing of 15 μm. Liquid crystalline elastomers were also prepared using a side-on liquid crystalline monomer, 4″-acryloyloxybutyl 2,5-di(4′-butyloxybenzoyloxy)benzoate [171]. Here, a change from nematic to isotropic was observed at temperatures up to 70 °C depending on the polymer used.
By depositing the polymer on a smooth substrate, a change in θw of only 3° was observed, while on rough substrates composed of arrays of cylindrical pillars (3 μm in diameter, 6 μm in height, 1.5 μm in spacing), a change from 127° to 86° was measured. Various inorganic materials also showed a thermal response. Shirtcliffe et al. studied the wettability of porous SiO2 foams obtained by a sol-gel process from methyltriethoxysilane (MTEOS) [172,173]. The resulting materials displayed switchable wettability from superhydrophobic to superhydrophilic (a Cassie-Baxter-to-Wenzel transition; the corresponding wetting relations are recalled after this paragraph) when they were heated at 400 °C. To become hydrophilic, the surface must become more polar. The authors think that this could occur by the formation of new groups or by a change in the relative abundances of apolar methyl groups and polar silica species. Sol-gel foams were also prepared using varying proportions of phenyltriethoxysilane (PhTEOS) and TEOS. The temperature at which switching occurred increased when larger fractions of PhTEOS were used, and vice versa. SiO2 suspensions, made from SiO2 nanoparticles hydrophobically modified with chlorotrimethylsilane and vinyl-terminated PDMS, were deposited by spraying [174]. The resulting substrate could reversibly switch from superhydrophobic to hydrophobic after cooling to a very low temperature (−15 °C). Here, the authors attributed this behavior to water vapor condensation on the surface. When the subfreezing film was placed in an ambient environment, the humidity in the air condensed onto the subfreezing surface and increased the surface hydrophilicity. Otherwise, inorganic materials could also be coated with a hydrophobic material in order to achieve superhydrophobic properties [175-179]. Here, a heat treatment could induce the desorption of the hydrophobic material and switch the surface from superhydrophobic to superhydrophilic. However, these kinds of materials are reversible only after surface remodification with the hydrophobic material. pH Materials containing acid or basic functional groups, such as carboxylic acids or amines, can be used to induce switchable properties by changing the pH [180,181]. For example, at low pH the COOH group is protonated, while at high pH it is deprotonated (COO−), with a much higher hydrophilicity [182]. Zhang et al. modified rough gold substrates with micro/nanostructures by self-assembly of different thiols. They used the dendron thiol 2-(11-mercaptoundecanamido)benzoic acid (MUABA) [183] or a mixed solution of HS(CH2)9CH3 and HS(CH2)10COOH [184]. Depending on the surface roughness and the pH, it was possible to obtain surfaces switchable from superhydrophobic to superhydrophilic. Using a mixed solution of HS(CH2)9CH3 and HS(CH2)10COOH, the wetting properties were highly dependent on the percentage of each constituent [185,186]. Using 40 mol% of HS(CH2)10COOH, the surface could change from superhydrophobic (θw = 154°) to superhydrophilic (θw ≈ 0°) as the pH increased. A mixed solution of HS(CH2)9CH3 and HS(CH2)10COOH was also used on rough mesh substrates [187-191]. Cu(OH)2 nanoneedles were grown on copper meshes by anodization in KOH solution or by immersion in (NH4)2S2O8 and NaOH (Figure 19) [187-189]. After surface modification with the mixed solution of HS(CH2)9CH3 and HS(CH2)10COOH, the best properties were obtained with 60 mol% of HS(CH2)10COOH. The best properties were also obtained for a mesh pore size of 58 μm.
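As a point of reference for the Cassie-Baxter-to-Wenzel transition mentioned above, the two classical wetting relations can be recalled; this is the standard textbook form, with generic symbols (θY, r, f) that are not taken from the review itself:

\cos\theta_{\mathrm{W}} = r\,\cos\theta_{\mathrm{Y}} \quad \text{(Wenzel)}, \qquad \cos\theta_{\mathrm{CB}} = f\,(\cos\theta_{\mathrm{Y}} + 1) - 1 \quad \text{(Cassie-Baxter)},

where θY is the Young contact angle on the ideally smooth surface, r ≥ 1 is the roughness ratio (true surface area divided by projected area) and f is the fraction of the projected area on which the liquid actually touches the solid. On this reading, heating the MTEOS foams makes the surface chemistry more polar (lower θY), which favors the collapse of the air pockets underpinning the Cassie-Baxter state and a transition toward the fully wetted Wenzel state.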
Returning to the pH-responsive meshes, the authors showed that the pressure that the meshes can support depends on the mesh geometry and pore size, the formation of surface structures on the meshes (nanoneedles) and the surface energy, which here changes with the pH [187-190]. For acidic and neutral water, the meshes were superhydrophobic and underwater superoleophilic. For basic water, the meshes were superhydrophilic and underwater superoleophobic. Here, both immiscible oil/water mixtures and oil-in-water emulsions could be separated on demand, by changing the water pH, with high efficiency and high flux. pH-responsive fabrics were also reported after growth of Ag structures and surface modification with a mixed solution of HS(CH2)9CH3 and HS(CH2)10COOH [192]. The change of wettability of DNA nanodevices was also studied [193]. DNA molecules modified with fluorinated hydrophobic groups were fixed to gold substrates as a SAM. The conformation of the DNA molecules on the substrate could change with the pH. The substrate was superhydrophilic at low pH and superhydrophobic at high pH. Various polymers with pH-sensitive groups were also used in the literature. Polymers with carboxylic groups were reported [194-199]. In 2006, Jiang et al. deposited colloidal crystal films made of poly(styrene-methyl methacrylate-acrylic acid) via a batch emulsion polymerization in the presence of sodium dodecylbenzenesulfonate (SDBS) (Figure 20) [194]. At pH 6, the carboxylic groups are in the protonated state (COOH) and can form hydrogen bonds with the SO3− groups of SDBS. As a consequence, the hydrophobic tails of the SDBS point toward the air and the surface was superhydrophobic (θw = 150.4°). At high pH (pH = 12), the COOH groups are deprotonated (COO−), suppressing the hydrogen bonds. Here, the surface was superhydrophilic due to the presence of both COO− and SO3−. Figure 19. Cu(OH)2 nanoneedles grown on copper meshes. The resulting meshes could switch from superhydrophobic and underwater superoleophilic to superhydrophilic and underwater superoleophobic by changing the pH. Ref. [188], Copyright 2015. Reprinted with permission from the American Chemical Society, USA. Orthophosphoric acid derivatives (ROPO3H2) were also studied. These acids are diacids with a pKa1 between 1 and 2 and a pKa2 between 6 and 7 (Figure 21), so that three different species are present depending on the pH [200-202]; the standard acid-base relation governing this speciation is recalled after this paragraph. Poly(methacryloyl ethylene phosphate) (PMEP) brushes were used. At pH > 8, the phosphate groups are deprotonated and the electrostatic repulsions between the charged polymer chains lead to a swollen, highly hydrophilic state, while at pH < 2, the brushes are protonated and in a collapsed state. In order to introduce basic groups, amino groups were also widely used in the literature, following different strategies [203-205]. Liu et al. used a triblock copolymer: one block with a hydrophobic group, one block with a pH-sensitive amino group and another one with a functional group for grafting onto SiO2 nanoparticles [203]. The material could be dip-coated on different substrates such as cotton fabric, filter paper and PU foam and could be used for pH-responsive oil/water separation membranes. Among the basic groups, pyridine was also reported. Wang et al. reported the grafting of block copolymer brushes of poly(4-vinylpyridine-block-dimethylsiloxane) (P4VP-b-PDMS) on SiO2 nanoparticles [206].
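As a side note, the pH dependence of the speciation invoked above follows a textbook relation (not taken from the cited works), the Henderson-Hasselbalch equation:

\mathrm{pH} = \mathrm{p}K_{\mathrm{a}} + \log_{10}\frac{[\mathrm{A}^-]}{[\mathrm{HA}]},

so each acidic proton of ROPO3H2 is half dissociated when the pH equals the corresponding pKa: below pKa1 the neutral diacid dominates, between pKa1 and pKa2 the monoanion dominates, and above pKa2 the dianion dominates. This is what drives the swelling (charged, hydrophilic) and collapse (neutral, more hydrophobic) of the PMEP brushes described above.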
After casting the P4VP-b-PDMS particle suspension on non-woven cellulose textiles and PU sponges, the resulting materials displayed superhydrophobic and underwater superoleophilic properties at pH 6.5, and superhydrophilic and underwater superoleophobic properties at pH 2.0. Such materials could also be used for controlling the separation of oil/water mixtures by changing the pH. Graphene foams with switchable oil wettability were also reported by grafting block copolymer brushes of poly(2-vinylpyridine-block-hexadecyl acrylate) (P2VP-b-PHA) [207]. By contrast, other authors chose to graft the polymer directly on substrates [208,209]. Luo et al. also reported the fabrication of a fiber membrane by electrospinning the block copolymer poly(4-vinylpyridine-block-methyl methacrylate) (P4VP-b-PMMA) on stainless steel meshes (Figure 22) [210]. Using oil/water mixtures, oil can selectively pass through the membrane at pH 3, while at pH 7 water passes selectively. Finally, other authors used block copolymers with both acid and amino groups [211,212]. For example, Zhou et al. showed that using these kinds of polymers it is possible to control the slip length of fluids by changing the pH. Voltage The main advantage of using electrical sensitivity as an external stimulus is the rapidity of implementation [213,214]. Among the most used materials, conducting polymers are extremely interesting because they can exist in different doping states. The neutral dedoped state is uncharged, while the doped states are charged (Figure 23). Moreover, in their doped states, conducting polymers incorporate doping agents (most of the time counter-anions) in order to neutralize the charges present along the polymer backbone. Different doping anions were introduced, and the authors observed that all the anions induced a decrease of θw. The highest decrease (from 105.9° to 76.7°) was observed with SO4 2− anions. In order to enhance the wettability difference between the reduced and the oxidized state, the authors also deposited micro-patterned substrates. Then, they observed a much higher decrease, from 147.4° to 62.2°. Otherwise, various other techniques can be used to prepare structured conducting polymer films. Among them, electropolymerization in an electrochemical cell allows, in a single step, the polymerization, the deposition of the conducting polymer film and the formation of structured films. The surface structures are highly dependent on the electrochemical parameters (deposition method, time, solvent, electrolyte…) and on the monomer used [216-222]. For example, superhydrophobic rough polypyrrole films were reported by electropolymerization of pyrrole by galvanostatic deposition (constant current of 0.25 mA cm−2) in the presence of highly hydrophobic perfluorooctanesulfonate (C8F17SO3−) doping ions and also FeCl3 in order to combine chemical polymerization and electropolymerization [162]. Here, the surface structures consisted of submicron particles (1-3 μm) forming a porous film. The surface could easily and reversibly switch from superhydrophobic to superhydrophilic by oxidation/reduction using different voltages. Moreover, Chang et al. reported a faster electrical process (3 s) and also eliminated the need to immerse the substrate within an electrolyte [165]. Jiang et al. also reported that the oil adhesion can be controlled during the doping/dedoping process [222]. Other monomers were also studied [223-228].
Yan et al. reported the use of aniline to produce helical polyaniline fibers in an aqueous electrolyte and in the presence of perfluorooctanesulfonic acid by galvanostatic deposition (constant current of 0.2 mA cm−2) [223]. Polyaniline is an interesting polymer because different chemical forms can be produced, depending also on the pH. In the presence of tetraethylammonium perfluorooctanesulfonate, the authors reported the possible switching from superhydrophobic (emeraldine salt form) to superhydrophilic (leucoemeraldine base form) by changing the voltage. Poly(3,4-ethylenedioxythiophene) (PEDOT) was also used (Figure 25) [224]. Here, two different fluorinated electrolytes were chosen: tetrabutylammonium nonafluorobutanesulfonate (Bu4NC4F9SO3) and tetrabutylammonium heptadecafluorooctanesulfonate (Bu4NC8F17SO3). The electropolymerization was performed in acetonitrile at constant potential. Porous films were obtained and the surface morphology was highly dependent on the electrolyte. Superhydrophobic properties were obtained with Bu4NC8F17SO3 and using a deposition charge (Qs) of 300 mC cm−2 [226,227]. Lu first prepared a porous PEDOT film on which a second polymer film was electrodeposited by cyclic voltammetry. Using poly(3-methylthiophene), a surface reversibly switchable from superhydrophobic to superhydrophilic was obtained after doping/dedoping in the presence of ClO4− anions [170]. By contrast, using poly(3-hexylthiophene), the surface could switch from superhydrophobic to parahydrophobic (high water adhesion) [227]. The surfaces could also induce switchable cell adsorption [228]. Advincula first created polystyrene colloidal crystals in hexagonal packing, on which a polythiophene film with short alkyl chains was electrodeposited by cyclic voltammetry [229]. The surface could switch from superhydrophobic to highly hydrophilic. Here, the protein and bacterial cell adsorption could also be switched at the same time [230]. Otherwise, different strategies were employed to create nanostructured conducting polymers in solution. For that, polyaniline is a material of choice due to the presence of amine groups that can induce self-assembly through hydrogen bonding [231-237]. Jiang et al. reported in-situ polymerization on fabrics in the presence of perfluorosebacic acid (HOOC-C8F16-COOH) and FeCl3, as dopant and oxidant, respectively [232]. Nanoparticles were formed on the fabrics. The resulting fabrics could switch from superhydrophobic to superhydrophilic by doping/dedoping, and the dedoping could be performed in the presence of NH3 gas. Fabrics with wettability switchable from superoleophobic to superoleophilic were also reported using perfluorooctanoic acid [233,234]. In order to prepare membranes with selective responsivity for oil/water separation, stainless steel meshes were coated with root-like polyaniline nanofibers fabricated by emulsion polymerization [235]. The meshes could switch from superhydrophobic to superhydrophilic at different voltages. Metal ions and organic molecules sensitive to redox reactions can also be used to switch the surface wettability by voltage [238,239]. For example, Ag+-biphenyldithiol (BPDT) SAMs could be converted to Ag0-BPDT by applying a potential difference [238,241]. The reorientation of the polyelectrolyte conformation is another phenomenon induced by an electric potential [242,243].
Choi et al. observed that a SAM of 16-mercaptohexadecanoic acid (MHA) deposited on a gold substrate could undergo a transition from a straight conformation to a curved one upon application of an electric potential. The molecules in the straight conformation are hydrophilic due to the presence of carboxylate ions, while those in the curved conformation are hydrophobic due to the exposed hydrophobic chains. Electrowetting is another method allowing the control of the surface wettability by applying an external electric field. In this process, a water droplet is placed on a superhydrophobic surface coated with an insulating layer. Applying the electric field induces an accumulation of charges and decreases the solid-liquid interfacial tension (γSL) and, as a consequence, the surface hydrophobicity, as shown in Figure 26 [244,245] (the corresponding Young-Lippmann relation is recalled at the end of this section). In 2004, Krupenkin et al. studied the electrowetting of superhydrophobic substrates prepared by modifying nanostructured silicon substrates with a low surface energy material [246]. Using electrowetting, they could change the surface wettability from superhydrophobic to superhydrophilic. Vertically aligned superhydrophobic carbon nanofibers and ZnO nanorods were also widely used in the literature to induce a switch from superhydrophobic to hydrophilic or superhydrophilic [247-251]. Boukherroub et al. reported that reversible electrowetting could be obtained on silicon nanowires with double nanotextures (lengths of 10 and 30 μm) [252-255]. They found a relationship between the resistance to drop-impact impalement and to electrowetting-induced impalement (Figure 27) [254]. The thresholds for drop impact and electrowetting irreversibility increase, and the contact angle hysteresis decreases, when the length and the density of the nanowires increase. Other mechanisms for reversible electrowetting were also reported in the literature [256,257]. Otherwise, electrowetting could also be used to control protein adsorption or to accelerate reactions by mixing liquid droplets [258,259].
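For completeness, the voltage dependence sketched in Figure 26 is usually quantified by the Young-Lippmann relation; the form below is the standard textbook expression with generic symbols, not notation taken from the review:

\cos\theta(V) = \cos\theta_0 + \frac{\varepsilon_0\,\varepsilon_r}{2\,d\,\gamma_{LV}}\,V^2,

where θ0 is the contact angle at zero bias, V the applied voltage, d and εr the thickness and relative permittivity of the insulating layer, ε0 the vacuum permittivity and γLV the liquid-vapor surface tension. Since the correction term is non-negative, increasing |V| always lowers the apparent contact angle (until saturation sets in), which is the behavior exploited in the electrowetting experiments cited above.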
9,119.6
2018-03-28T00:00:00.000
[ "Materials Science" ]
Pulsating strings on (AdS3 × S3)ϰ We derive the energy, as a function of the adiabatic invariant oscillation number, of pulsating strings oscillating in S2ϰ. We find similar solutions for strings oscillating in the deformed AdS3. Furthermore, we generalize the result to oscillating strings in anti-de Sitter space in the presence of an extra angular momentum in (AdS3 × S1)ϰ. Introduction The conjectured duality between supersymmetric Yang-Mills theory in four dimensions and type IIB superstring theory in AdS space [1] has been a major research area in recent years. Though solving for the exact free string spectrum on a generic background is a highly non-trivial problem, the robustness of integrability on both sides of the conjecture played a key role in reducing the problem of solving for the spectrum in the large charge limit to the problem of solving a set of algebraic Bethe equations. The fact that the Lagrangian field equations of the AdS5 × S5 theory can be recast in zero curvature form [2] introduces integrability on the anti-de Sitter side of the correspondence, which ensures the existence of an infinite number of conserved quantities. The integrability arises as a quantum symmetry of operator mixing on the CFT side [3,4] and as a classical symmetry on the string world-sheet in AdS space [2]. Under the assumption that integrability continues to hold at the quantum level, the spectrum of the AdS5 × S5 superstring is determined by means of the thermodynamic Bethe ansatz applied to a doubly Wick rotated version of its world-sheet theory [5,6]. Precisely, integrability has improved the understanding of the equivalence between the Bethe equations for the spin chain and the corresponding classical realization of the Bethe equations for the classical AdS5 × S5 string sigma model [7,8]. The corresponding Bethe equations are based on the knowledge of the S-matrix which describes the scattering of world-sheet excitations of the gauge-fixed string sigma model or the excitations of a certain spin chain in the dual gauge theory [7,9-13]. To improve our understanding of the relationship between integrability and the amount of global symmetry preserved by the target space-time, one should explore various deformations of the string target space-time that preserve the integrability of the two-dimensional quantum field theory on the world sheet. Integrable deformations of AdS5 × S5 can be achieved by a combination of T-duality and shift transformations [14,15]. This geometric approach results in a new class of deformations which can be described in terms of the original string theory; the deformations lead to quasi-periodic boundary conditions while keeping the integrability intact. The other way is an algebraic approach based on q-deformations of the world-sheet S-matrix [16-23]. Recently, an integrable q-deformed AdS5 × S5 supercoset model with one real deformation parameter and fermionic degrees of freedom was found in [24]. The deformed background breaks the symmetry of AdS5 × S5 down to [U(1)]6, which calls for new insight into its dual field theory, which is yet to be explored. In order to understand various aspects of the background one can look into [25-34]. The perturbative world-sheet scattering matrix of bosonic particles of the model was computed in [25].
The maximal deformation limit of this model is T-dual to a flipped double Wick rotation of the target space, and in the imaginary limit it becomes that of a pp-wave background with a curved transverse part [26]. The thermodynamic Bethe ansatz description of the exact finite size spectra shows that this model maps onto itself under a double Wick rotation [27]. The classical integrable structure of anisotropic Landau-Lifshitz sigma models has been derived by taking fast moving string limits in the bosonic subsector of this model [28]. This background is formally related to dS5 × H5 by a double T-duality with hidden supersymmetry [31]. The bosonic spinning strings on this background can be viewed as solutions of a deformed Neumann model [32]. For this deformed supercoset model, the corresponding type IIB supergravity solutions in the AdS2 × S2 and AdS3 × S3 subsectors have been computed with a non-trivial dilaton and RR scalar, with a free parameter entering the solution [35]. Further, by following the Yang-Baxter sigma model approach with classical r-matrices which satisfy the classical Yang-Baxter equation and carry two parameters, and its three-parameter generalization, type IIB supergravity solutions have been found in [36]. However, the existence and properties of a gauge theory dual to string theory in the deformed background is still an open question. In this connection, giant magnons and their finite size corrections [29,30,33] have been computed for rotating strings on the string theory side. Here we wish to study pulsating string solutions in subsectors of the deformed background, as they are more stable than rotating ones [37]. After the inception of the pulsating string in [38], such solutions have been studied in both AdS and non-AdS backgrounds [39-56]. The rest of the paper is organized as follows. In section 2, we preview the truncated models of κ-deformed AdS5 × S5. In section 3, we study the semiclassical oscillating string solution in the deformed R × S2. In section 4, we analyze the solution in terms of the energy as a function of the oscillation number for a class of pulsating strings in the deformed AdS3. In section 5, we generalize the previous section with an extra angular momentum in the S1. In section 6, we conclude with some remarks. 2 Consistent truncations of (AdS5 × S5)κ As (AdS5 × S5)κ is a classically integrable background, its consistent truncations must be classically integrable. Truncated lower dimensional integrable string models have been computed in [26]. We write the relevant backgrounds here. With φ = ψ = 0, we get the lower dimensional consistent background. Here the coordinates have their usual ranges, as in the undeformed case, and κ ∈ [0, ∞). 3 Pulsating string in S2κ Here we wish to study a class of pulsating string solutions oscillating in the deformed S2. In order to get the metric, we substitute the following in equation (2.2). The Polyakov action of this metric is then written down, in which the 'dot' and 'prime' denote derivatives with respect to τ and σ respectively, and λ̃ = λ(1 + κ2), where λ is the 't Hooft coupling constant.
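For orientation, since the following subsections refer to the Polyakov action, the Virasoro constraint and the oscillation number, their generic forms can be recalled here; the expressions below are the standard conformal-gauge formulas for a bosonic string on a background with metric G_{MN}, written with generic symbols and sign conventions that are not necessarily those of this paper:

S = \frac{\sqrt{\tilde{\lambda}}}{4\pi} \int d\tau\, d\sigma\; G_{MN}\left(\partial_\tau X^{M}\partial_\tau X^{N} - \partial_\sigma X^{M}\partial_\sigma X^{N}\right),

supplemented by the Virasoro constraints G_{MN}(\partial_\tau X^{M}\partial_\tau X^{N} + \partial_\sigma X^{M}\partial_\sigma X^{N}) = 0 and G_{MN}\,\partial_\tau X^{M}\partial_\sigma X^{N} = 0. In a typical pulsating-string ansatz of this literature, one angle of the (deformed) sphere is wound along the string, φ = mσ with integer m, while the remaining coordinates depend only on τ; the oscillation number is then the adiabatic invariant N = \frac{1}{2\pi}\oint p_\psi\, d\psi built from the momentum conjugate to the oscillating angle, and it is the small-N expansion of this quantity that yields the short-string energy formulas quoted below.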
We write the following anstaz for the pulsating string Equation of motion for ψ is given by From the Virassoro constraint we get The energy for this string configuration is given by JHEP03(2015)010 The canonical momentum associated with ψ is We can compute the oscillation number which should take integer values in quantum theory as (3.10) Substituting sin 2 ψ = z in the above equation (3.10) we get, To find out this integration, we have taken derivative of N with respect to m, i.e where a > b > c are roots of the polynomial And a = 1, b = ε 2 (1+κ 2 ) m 2 +ε 2 κ 2 , c = 0, d = 1+κ 2 κ 2 . The above integral in (3.12) can be written as sum of two integrals i.e. ∂N ∂m = I 1 + I 2 , Where and where K, Π are complete elliptical integral of first and third kind respectively and Expanding the equation (3.16) for small value ε in the short string limit Taking integration with respect to m we get Reversing the series we get In the above dispersion relation ε < m gives an upper bound for N , so one cannot take the large N limit. This gives the short string or small oscillation number expansion of the classical energy. If we put κ → 0 in the above equation (3.19), we get the exact expression for undeformed S 2 as found in [53]. Pulsating string in deformed AdS 3 In this section we study the semiclassical quantization of a class of strings which is oscillating in the radial ρ direction of AdS 3 . We get the relevant metric for this from equation (2.1) (taking only AdS part) with the substitution of ρ = sinh ρ We chose the ansatz for this configuration as The polyakov action of the given metric is given by JHEP03(2015)010 The Virasoro constraint gives us The energy of the oscillating string is given by The canonical momentum associated with ρ is Using the equations (4.6) and (4.7), we can geṫ With the help of equation (4.8), we can write This may be interpreted as an equation for a particle moving in a potential which is growing to infinity at ρ → ∞. The coordinate ρ(τ ) thus oscillates between 0 and a maximal ρ value (ρ max ). Since the string is oscillating along ρ direction, we can define the oscillation number as Taking sinh 2 ρ = z N = 1 2π To make the integration simple we make the derivative with respect to m where R 1 > R 2 > R 3 are roots of the polynomial (4.14) JHEP03(2015)010 And The above integral can be written in the standard elliptical integrals as (4.15) Now expanding the above equation for a small oscillation number with small ε we will get Integrating with respect to m we get Reversing the series This is the classical energy expression in the small energy limit for short string configuration in deformed AdS 3 . After substituting κ = 0, we can get the the flat-space dependence which is expected in the small-energy limit where the string oscillates near the center of AdS 3 which can be found in [47]. The result found in [53] differs by a factor 2 as they have defined the oscillation number accordingly. Pulsating string in (AdS In this section we generalize the previous section where we study a class of oscillating string solution which is oscillating in the radial ρ direction of AdS 3 with an extra angular momentum along S 1 . 
In order to get the consistent truncated metric, we substitute the following in the equation (2.1) Now the relevant background is given by Choosing the ansatz as 3) JHEP03(2015)010 we write down the Polyakov action of the above metric Equation of motion for t is The Virassoro constraint is given by Conserved quantities are The canonical momentum associated with ρ is From the equation (5.7), with the help of the equation (5.8) we geṫ With the help of the equation (5.9), the above equation can be written as This is similar to the previous section (eq. (4.10)) with an extra additive term. This is similar to an equation for a particle moving in such a potential so that the coordinate ρ(τ ) oscillates between 0 and a maximal ρ value (ρ max ). Now we can write the oscillation number as JHEP03(2015)010 Taking sinh 2 ρ = z, then differentiating with respect to m where R 1 > R 2 > R 3 are roots of the polynomial (5.14) And The above integral in (5.13) can be written as Integrating with respect to m and reversing the series we get where (5.18) JHEP03(2015)010 This is the classical energy expression for the small energy and angular momentum in the κ-deformed AdS 3 × S 1 . After putting κ = 0, we can get the energy for the short string which oscillates near the center of AdS 3 with an angular momentum in S 1 in undeformed AdS 3 × S 1 as computed in [55]. With both κ and angular momentum as zero we can get back the energy expression for the strings oscillating in one plane for small energy limit as in the [47]. Conclusion We have studied various pulsating string in the so called κ deformed AdS 3 ×S 3 background. We find the energy of the short string in the small energy limit for the pulsating strings in the κ-deformed S 3 κ subspace of the full (AdS 5 × S 5 ) κ background. κ = 0 limit agrees with the computation of the undeformed case and the κ infact enters in a vary natural way in the expression. It is perhaps along the expected lines as a theory with non-zero κ also provides an exact integrable sigma model background (with a redefined string tension) and hence the string configurations in the undeformed background must also have correspondence with the ones in the deformed case as well. We have further found out the short string energy as a function of N , m, κ. We have also analyzed case for the string with an extra angular momentum along the deformed S 1 . We wish to look for the field theory duals in these theories in future. Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
3,075.8
2015-03-01T00:00:00.000
[ "Physics", "Mathematics" ]
‘And suddenly, two men. . .’: Moses and Elijah in Lukan Perspective This article argues that in the three instances in Luke-Acts where the phrase ‘And suddenly, two men. . .’ occurs, Luke 9, Luke 24 and Acts 1, the author expects us to understand that these men are Moses and Elijah, who are named in the first occurrence at the Transfiguration. This interpretation makes literary, audience expectation, and theological sense, creating a deeper understanding of the significance of the two prophets for the proclamation of the resurrection and the mission of the Church. It is argued that the interpretation that the ‘two men’ are ‘angels,’ like Gabriel, does not pay sufficient attention to the details of the text and reads across an understanding that the ‘men’ are ‘angels’ from Luke 24 to Acts 1 without warrant. Introduction In this article I hope to show two things. First, that on each of the three occasions Luke uses the words, 'And suddenly, two Men. . .', καὶ ἰδοὺ ἄνδρες δύο, 2 in Luke 9:29, Luke 24:4 and Acts 1:10, he intends his readers to understand that these two men are Moses and Elijah. I will argue that the passages have a climactic place in the structure of Luke-Acts, have linguistic and thematic links, and that the repeated dramatic wording, καὶ ἰδοὺ ἄνδρες δύο, would alert hearers to their identity. The second aim is to show how this identification of Moses and Elijah in Luke 24 and Acts 1 might add to our understanding of the text. I will note their role in traditions and their connections to relevant themes, showing how this can not only enhance our understanding of these texts, but also speak to the church today. 3 For more detailed accounts of the three redactions, see Heil,Transfiguration,[21][22][23][24][25][26][27][28][29][30][31]and Bovon,Luke 1,370. John does not include the transfiguration in his Gospel. 4 Heil,Transfiguration,32;Ramsey,Glory,112;Fitzmyer,Luke,792. 5 Bovon, Luke 1, 357ff. 6 Cf. 1 Kings 17:2-16. 7 Cf. Exodus 16:1 16. 8 Gause, Transfiguraton, 107ff. 9 Gause, Transfiguration, 85ff. 10 For a wide-ranging look at intratextual connections see Trites, 'Transfiguration'. 11 Fitzmyer,Luke,134. 12 See, e.g. Fitzmyer,Luke,135. narrates the event in a slightly different form. 3 This is done to emphasize aspects that are of particular interest to the three evangelists. For example, Mark's order of the names of Jesus's companions, Elijah and Moses, may be intended to strengthen the link between Elijah and John the Baptist, which is a feature of Mark's earlier narrative (1:6). Matthew reverses that order to Moses and Elijah, possibly for chronological reasons, or to align with his phrase, 'the law and the prophets,' or because his Gospel makes so much of the role of Moses and how Jesus re-enacts and surpasses it. All use the story to highlight the superiority of Jesus, but it is Luke who further develops the transfiguration narrative by his additions to Mark's account. 4 He expands the story with greater focus on Moses and Elijah than either of the other synoptic accounts. Thus he describes in greater detail the appearance of Moses and Elijah and their actions during the scene. Those details will be addressed shortly but, first, the story must be set in its Lukan context and within the structure of Luke-Acts to understand its wider theological significance for the evangelist. The transfiguration is firmly linked in several ways to what has been recently narrated by Luke. 
First, this last section of the Gospel before the travel narrative is largely concerned with the revelation of the identity of Jesus, a question raised first by Herod (9:9) and then by Jesus (9:20). Various names are proposed including that of Elijah. 5 From the beginning of chapter nine, Elijah is mentioned by name twice, and possibly alluded to once in the miracle of feeding, 6 while Moses is alluded to in the lack of food in the wilderness and the miracle of feeding. 7 Luke's narrative explores this question of identity, ending on the mount of transfiguration with a definitive answer from above and thus imparting to Luke's audience a clear message about who exactly is making his way to Jerusalem. Second, within the chronology of the narrative the transfiguration occurs a week or so after Peter's declaration that Jesus is 'the Messiah of God,' ό χριστὸς τοῦ θεοῦ. But Jesus recasts Peter's confession of this identity in terms of 'the Son of Man,' ὁ υἱὸς τοῦ ἀνθρώπου, who must suffer and be crucified, and then be glorified. In the transfiguration narrative, Luke picks up on the theme of glory, δόξα, 8 which is used not only to describe Jesus, but also his two interlocutors. Third, before Luke relates the transfiguration, in 9:27 he records that Jesus said that some among his hearers will not see death before they see the Kingdom of God. Given Luke's connecting link to the transfiguration story, it seems clear that he is portraying the transfiguration as that visual encounter with, and experience of, the Kingdom. 9 It is the larger context of Luke's writings that is the most relevant for understanding the transfiguration's connection to Luke 24 and Acts 1. 10 While the gospel can be divided into several sections across the whole, 11 such as the Infancy Narratives and the Jerusalem ministry, it is clearly divided into two major parts, with a division at the end of 9:50. 12 To this point, everything has been moving inexorably to the mount of transfiguration, but now the ministry of Jesus changes direction, literally, as he turns from Galilee towards Jerusalem and the path to his death and resurrection. In other words, the 13 Marshall,Luke,381;Thrall,'Transfiguration',311. 14 Fitzmyer,Acts,201. 15 NRSV, 'dazzling white'. 16 E.g., Aeneid 6. 375. 17 See, e.g., Heil, Transfiguration, 97ff; Gawse, Transfiguration, 56ff; Hooker, 'Elijah,' passim. Heil notes there is no consensus as to their significance. 18 Josephus, Antiquities, IV, 48, writes that Moses did not die, and although he recorded his own death, he was in fact, like Elijah, taken up alive, but in a cloud rather than a whirlwind. If Luke was 'published' after Josephus, then this may be relevant, implying that neither prophet in the story had died. Thrall,'Transfiguration',314,holds this view. 19 Exodus 33:7-11; 1 Kings 18:41-6. 20 Exodus 33:12-23; 1 Kings 19:9-18. 21 Exodus 12:31-42; 2 Kings 2:8. transfiguration story is part of the climax to this first section of Luke, setting out the definitive revelation of Jesus' identity, through the declaration of the divine voice from above. I would argue that the transfiguration account is intended as a parallel to the resurrection narratives which form a climax at the end of the gospel. Some scholars have rightly noted thematic connections between the two, 13 and this will be discussed later. 
The strategic positioning of the two stories, and the themes that connect them, indicates the likelihood that when discussing either or both, each might be used to shed light on the other, and each plays a climactic role to the two major sections of the Gospel. In Luke's second volume, with the departure of Jesus from the world, and the commissioning of the Apostles, the opening chapter of Acts is another strategically significant, if not programmatic passage at the very beginning of Luke's sequal. 14 So, there are three occurrences of precisely the same words introducing two men in three of the most strategic contexts in Luke-Acts: Luke 9:30 and 24:4 and Acts 1:10. Each time Luke uses the dramatic exclamation, 'And suddenly, two men. . .', καὶ ἰδοὺ ἄνδρες δύο. This appears to be a deliberate literary pattern and we must ask what we make of it. Linguistic and Thematic Links among the Texts: Lightning, Glory, Cloud, and Exodus In Luke 9, as well as changes in Mark's order of names, Luke's redaction includes some vivid colour with the image of Jesus's clothes 'white as a flash of lightning,' 15 λευκὸς ἐξαστράπτων. While conveying a similar idea to the other synoptic accounts about the radiance of Jesus's clothing, by using ἐξαστράπτων Luke adds the dynamic of heavenly disturbances, setting the scene for heavenly visitors and actions. Drawing on the LXX descriptions of the Sinai traditions in Exodus 19:16 with its thunder, lightning, άστραπταί, and cloud, νεφέλη, it is a powerful signal to Luke's readers acquainted with the LXX that God was present. The heavenly has broken through into the earthly realm. Yet it would also speak to those without a knowledge of the LXX and whose culture was rooted in the non-Jewish Greco-Roman world. Within that literature too, cosmic disturbances were an indicator of heavenly activity. 16 Much has been said about the significance of the two companions seen by the disciples on the mount of transfiguration, answering questions such as who are they, why are they here and what do they tell us? 17 They are identified first as Moses and Elijah by Luke as narrator, and then in his transmission of the words of Peter. Scholars have explored the links they have with Jesus: the long-dead Moses 18 and the long-absent Elijah are both closely associated with prayer, a major Lukan theme here; 19 their prophetic actions are associated with other mountains, Sinai and Carmel; both experience the dramatic and powerful presence of God on mountain tops, sheltered only by rock; 20 both are intimately linked to the theme of God's saving action as ἔξοδος. 21 27 Evans, Luke, 417. 28 Thrall, 'Transfiguration', 313, argues similarly in Mark's account. 29 See various essays by A J Mattill Jr, including, 'The Purpose of Acts: Schneckenburger Reconsidered,' in Gasque and Martin, Apostolic History, 108-22. 30 Siebenthal,Grammar,271,184h.2. 22 Found only in Luke's version, Fitzmyer, Luke, 794. See BDAG and BrillDAG for definitions. For further explorations of the word see Ramsey, Glory, 23-8; Jackson, Glory, 55-102. While the word has a wide range of meanings in Greek culture, its use by NT writers has been shaped by the way the LXX translators used it for ‫;בכוד‬ Bovon, Luke 1, 377; Gause,Transfiguration,55. 23 Μωυσῆς οὐκ ᾔδει ὅτι δεδόξασται ἡ ὄψις τοῦ χρώματος τοῦ προσώπου αὐτοῦ, Exodus 34:29 (LXX). 24 2 Kings 2:11. 25 Green, Luke, 381. 26 Bovon, Luke 1, 373; Carroll, Luke, 219. Thrall, 'Elijah and Moses,' 312, argues that it is effectively 'a scene set in heaven'. 
Hooker links the use of Jesus's δόξα here to its use in the Emmaus Road encounter. It is in Luke's additions that the greatest significance of these men is to be found. His way of alerting hearers to the identity of the men is his statement in 9:30f Suddenly they saw two men, Moses and Elijah, talking to him. They appeared in glory and were speaking of his [exodus], which he was about to accomplish at Jerusalem. καὶ ἰδοὺ ἄνδρες δύο συνελάλουν αὐτῷ, οἵτινες ἦσαν Μωσῆς καὶ Ἠλίας, οἳ ὀφθέντες ἐν δόξῃ ἔλεγον τὴν ἔξοδον αὐτοῦ, ἣν ἤμελλεν πληροῦν ἐν Ἰερουσαλήμ. (Lk 9.30-31). The use of δόξα 22 is not only an important link to what has been related immediately prior to this incident about the coming of the Son of Man in his glory (9:26) but also to the life-story of Moses. As he meets God on the mountain, Moses experiences the glorification of his face without realising it 23 , requiring the veiling of his face. Although δόξα is not used of Elijah in the LXX, he does encounter the fiery chariots and horses of Israel 24 as he is removed from earth. In Luke 9, it is the two men together who are seen ἐν δόξῃ and are thus pictured as among those 'sharing in the status of those who belong to the heavenly court.' 25 While the purpose of the story is to reveal the glorious heavenly identity of Jesus, 26 the δόξα of the prophets identifies them to be among those who dwell in the heavenly presence of God, sharing in that heavenly glory. 27 It was noted above that, given the position of each as a narrative climax, the transfiguration story could be illuminated by the resurrection stories and vice versa. 28 Luke-Acts is renowned for its intra-textual connections. 29 There are several post-resurrection episodes in Luke 24, but it is the first (24:1-12) that is of most significance in supporting this understanding of Moses and Elijah because of the verbal links found between chapters nine and twenty-four. In addition to the exact repetition of the words introducing the two men, there is a link in the description of the brightness of the clothes of Jesus and the two men. In Luke 9 ἐξαστράπτων, is predicated of Jesus at his transfiguration and a related though not identical term ἀστραπτούσῃ is used of the two at the scene of the resurrection. There appears to be an intentional echo in the language with the ἐξαστράπτων describing Jesus reflecting a difference in the intensity of the δόξα with Moses and Elijah, Jesus shining more brightly. Siebenthal notes that έκ as a prefix can indicate intensity. 30 The transfiguration story does, after all, emphasise the superiority of Jesus to Moses and Elijah. Whatever difference Luke implies, the sense is that the two men at the tomb share in a similar glorious heavenly existence as do Moses and Elijah at the transfiguration. The same can be said of the two men in Acts 1 where the less dramatic 'in white robes,' ἐν ἐσθήσεσιν λευκαῖς, is used. A further linguistic connection is to be found in Luke's use of cloud, νεφέλη. In Luke 9:34, a cloud envelopes the group who are standing, 33 Keener, Acts, 729, notes that it is used seventy-eight times in Luke-Acts. 34 Gadenz,Luke,390. 35 Cf. Pervo, Acts, 45-6. 36 Keener, Acts, 713ff. 37 Keener, Acts, 728. 31 Caird, 'Transfiguration,' 292, notes that Luke puts 'cloud' in the singular to make wider literary connections. 32 Bovon, Luke 1, 376. and in Acts 1:9, a cloud envelopes Jesus. 31 Once more these narrative statements evoke the exodus traditions of the cloud of the presence and the cloud at Sinai. 
Among Luke's thematic additions to the transfiguration narrative, it is 'exodus' that stands out. On the mount, Moses and Elijah speak with Jesus about the exodus he will fulfil at Jerusalem. Does this exodus theme then recur in Luke 24 and/or Acts 1? I think it recurs in both, but in slightly different forms. While Luke 9 looks forward to the exodus of Jesus, Luke 24 is, in essence, a declaration of its fulfilment. By his death and resurrection, heard in the light of the Last Supper Jesus has effected the previously discussed exodus, and what could be more appropriate than that Moses and Elijah declare this fact to the women at the tomb? They say, 'Why do you seek the living among the dead? He is not here, he has been raised. Do you not remember what he said to you in Galilee?' Jesus has broken the bonds of death. In Acts 1, the situation is different. Here, through the cloud, Jesus is making his personal exodus from the world to the heavenly realm and is passing on the mantle of declaring the Gospel of the Kingdom to the Apostles. It is, in effect, the culmination of his exodus, leaving the world for the heavenly presence of God. 32 These linguistic echoes and thematic links suggest that there is such a strong connection among all three passages that the two men in each of them could be the same. Hearers' Expectation It must be asked how early hearers of the Gospel and Acts would understand καὶ ἰδοὺ ἄνδρες δύο in these three passages in a culture where literacy was low and reliance on memory was high. I believe it would be difficult to expect Luke's hearers to think that Luke 24 and Acts 1 were anything other than additional references to Moses and Elijah when they heard the clarion textual alert 'And suddenly, two men. . ..' Although Luke uses the verbal signal ἰδοὺ in other places, 33 nowhere else in his writings does Luke use καὶ ἰδοὺ ἄνδρες δύο. Gadenz 34 observes that these three instances of the phrase are the only instances in Luke-Acts, and the only instances in the whole of the canon of Scripture. As such, they are a powerful textual indicator that the men referred to in the first instance are to be inferred in instances two and three. Having spoken with Jesus about his exodus, they are now here to bear witness to its reality. Adopting the perspective of hearer expectations, I will refer to two traditions in which Moses and/or Elijah appear that would link them thematically to the succession narrative of Acts 1. In his monumental commentary, Craig Keener, among others, 35 notes connections between the story of Jesus's ascension, passing on responsibility for the continuing work of the Gospel, with other such 'succession' narratives. He argues that the Elijah-Elisha narrative, with the ascension of Elijah in a whirlwind and the passing on of prophetic responsibility to Elisha, is the closest parallel to the ascension of Jesus and his commissioning of the Apostles. 36 He then compares Jesus and Elijah in Luke-Acts, showing how Jesus, although more often John the Baptist, is like Elijah, but greater. While Keener then dismisses the possibility that one of the 'two men' is Elijah, 37 for him, the shadow of Elijah hovers over this succession narrative. Contrary to Keener, I believe Luke's portrayal of the story is creating the expectation in the minds of hearers that one of the two men is Elijah. Given the thematic echoes of the Exodus 43 'In strictly literary terms, Luke would seem to want the reader to make this connection. . ..' Luke, 387. See also Johnson, Acts, 27. 
Dunn, Acts, 14, believes this to be 'plausible'. Caird, 'Transfiguration' 292, perceives a strong connection but stops short of making the identification. Holladay, Acts, makes the link to Luke 24, but not to Luke 9. Edwards, Luke, 709f, notes the strong links among the three passages, but '[w]e are not told they are the same two heavenly visitants'. Since beginning this article, two scholars have indicated their agreement in personal communication. 41 See, e.g., Blount, Revelation, 208, who writes that although 'he is certainly working from a Moses and Elijah connection, John is also thinking broadly.' Among the exceptions is Koester, Revelation, 496ff, who lays out several options and opts for the witness of the whole church. 42 For a detailed literary analysis see Aune,Revelation,[585][586][587][588][589][590][591][592][593][594][595][596][597][598][599][600][601][602][603] evoked by the cloud, and since he has already linked the two men at the transfiguration, the other may be identified as Moses. The second tradition is from Revelation 11. Dating NT writings is fraught with complexity, and little is certain. The range of dates for Luke-Acts stretches from before 70 ce and the fall of Jerusalem to the early decades of the Second Century ce. If this date range is the case, then the Apocalypse of John could be a rough contemporary of Luke. There are many who think that, although they are unnamed in the text, from the description of the two witnesses in Revelation 11 they are to be identified as Moses and Elijah. Several references point in this direction: using fire to consume their foes could refer to both prophets; 38 having authority to shut up the sky so that no rain falls is without doubt intended to refer to Elijah; 39 the mention of all kinds of plagues brings to mind the stories of Moses and the Exodus. 40 John the Seer may intend the identities of the witnesses to be more composite than this, including, for example, Enoch, but however composite the identity he intended, he clearly included Moses and Elijah within it. 41 It seems from Revelation 11 and elsewhere that there are traditions of eschatological appearances of ancient prophets, including Moses and Elijah, circulating. 42 At least four themes link the Moses and Elijah of Revelation 11 to themes in Luke 24 and Acts 1: testimony, the suffering of death, resurrection, and ascension. Did Luke know of this kind of tradition? We cannot say for certain, but with traditions swirling in the imagination of the Church of that era, it would not at all be surprising if he did. Regardless, the thematic links between Luke-Acts and these two traditions at least open the possibility that Luke has Moses and Elijah in mind in Luke 24 and Acts 1 since such traditions were 'in the air' at the time. On several occasions recently, I have undertaken an unscientific experiment in which I asked people to identify the character I described to them, a man who was tall, thin, wore a deerstalker, smoked a large pipe, played the violin well and had a liking for opium. On every occasion they answered, 'Sherlock Holmes.' While I did not name this character to them, they had heard or read about him in the past and recognised his description. This is exactly what I think happens in Luke 9, 24 and Acts 1: the names and descriptions were heard in Luke 9 and the descriptions recognised in Luke 24 and Acts 1, introduced with the same textual cue. Luke expects his hearers to make the connection. 'But the two in Luke 24 and Acts 1 are angels.' 
Among many recent commentators in English consulted, only one, Luke Timothy Johnson, affirms that the two men of Luke 24 and Acts 1 are the same as the two men of Luke 9, but he does not have the space to set out his arguments fully. 43 Others either do not consider it or dismiss the idea. There are two main objections. The first, in two parts, is that in neither Luke 24 nor Acts 1 are the men named, and that in 44 Witherington, Acts, 112, n. 31 rejects this identification on the basis that Theophilus could not have made the connection from 'such a vague and unspecified allusion.' Keener, Acts, 728, also rejects it on the grounds that they are not explicitly identified. Pervo, Acts, 46, connects the two of Acts 1 with the two of Luke 24, but does not reference the two of the Transfiguration. 45 Whether or not Luke and Acts are by the same author, the author of Acts clearly wanted them to be seen as a unit. 46 Barrett, Acts, 83, cites Luke 24:4 for 'the description of angels as men'. Cf. Conzelmann,Acts,7. 47 Together with others cited above, see Polhill,Acts,87. 48 See, among many others, Bovon, Luke 3, 349f. Other Gospels note one or two angels. Acts 1 there is not enough information to identify them. 44 Part one of the first objection fails to take seriously the abundant intratextuality of Luke-Acts, and part two assesses Acts as a book standing alone. 45 When Luke-Acts is read as a whole, after the first mention in Luke 9, καὶ ἰδοὺ ἄνδρες δύο would make the hearers alert to the identification in Luke 24 and Acts 1 on the basis that they were named and described in Luke 9. Theophilus would have no trouble identifying them. The second objection is that, since they are identified in Luke 24:23 as angels, they cannot be Moses and Elijah. 46 Many accept that the descriptive language shared between Luke 24:4 and Acts 1:10 indicates that it is the same two personages at the tomb and the ascension, but that they are angels. 47 This objection calls for some reflection on Luke's use of ἄγγελος. No Gospel writer tells exactly the same resurrection narrative as another, and the stories differ on the number of men or angels encountered. In the case of Luke, in Luke 24:4 he writes of 'two men.' They are then identified as angels in the reported speech in Luke 24:23. 48 But we cannot get round the fact that Luke, as 'omniscient narrator,' calls them 'men'. So, are they men, according to the narrator, or angels according to the reported speech, or are they both? Luke is the NT author most interested in angels in both volumes of his work, mentioning them almost fifty times. When he wishes to denote the presence of an angel of the Lord/ of God, such as in the Annunciation narratives, he uses ἄγγελος. But not every instance of its use refers to heavenly beings like the Angel Gabriel. In the resurrection story we have noted that Luke uses 'two men,' ἄνδρες δύο, and in reported speech the two are seen as 'a vision of angels,' ὀπτασίαν ἀγγέλων. There is a similar double identification of the messenger in the story of Cornelius. In Acts 10:3 Luke narrates that Cornelius saw 'an angel of God,' ἄγγελον τοῦ θεοῦ, while in 10:30 Cornelius says to Peter 'suddenly a man in dazzling clothes stood before me,' καὶ ἰδοὺ ἀνὴρ ἔστη ἐνώπιόν μου ἐν ἐσθῆτι λαμπρᾷ. Again, there is a double identity showing that Luke's understanding of ἄγγελος is not necessarily either angel or man. As Stephen prepares to give his witness to the Sanhedrin in Acts 6:15, his face appeared as that of an angel. 
Similarly, when Peter is rescued from prison by an angel of the Lord (Acts 12:7) and knocks on the door of John Mark's house, those inside did not believe it could be Peter, but rather it was his angel (Acts 12:15). Luke's use of ἄγγελος is more blurred than might at first be assumed and it cannot be ruled out that he sometimes means 'messenger' rather than 'heavenly being.' Identifying the two 'men' solely as angels, such as Gabriel, would rule out the possibility that they are Moses and Elijah. This is what many commentators do. In Luke 24:4 and Acts 1:10 they note similarities of language and themes with Luke 9, but in Luke 24 they privilege the identity as 'angels' and read that identity across to the men in Acts 1:10, side-lining the specific wording of the text. Given the varied ways that Luke uses ἄγγελος, there is no reason to assume that when Luke speaks of 'two men,' he means 'two angels (like Gabriel) that look like men'. At the tomb and the ascension, it is two men, ἄνδρες δύο, or two men-angels (unlike Gabriel), rather than two angels (like Gabriel) who are present. This attention to the detail of the text weighs against the objection that the two are heavenly beings rather than Moses and Elijah. Their identity as men should be privileged. Drawing the Threads Together I have argued that on the three occasions Luke uses, 'And suddenly, two men. . .' he intends hearers to understand that since they are referred to by name in Luke 9, these names can be inferred in Luke 24 and Acts 1. This interpretation makes sense at different levels. First, on a literary level, it makes good structural and narrative sense. Because the words occur at three extremely strategic points in the framework of Luke-Acts, the author has created a literary pattern that invites the hearers to assume that Moses and Elijah appear in all three places. That identification does not make the narrative jar in the mind. It is consonant with what has gone before. Second, with the very specific repeated wording, strong linguistic and thematic links it makes sense for audience expectation in a culture where reliance on memory and verbal cues is high, pointing in the direction of a continuity of dramatis personae. Finally, the identification makes good theological sense for Luke is the one who records that on the Emmaus Road, 'beginning with Moses and all the prophets, [Jesus] interpreted to them the things about himself in all the scriptures (24:27).' The prophets of the Hebrew Bible bear witness to Jesus as the one who effects a new exodus. There are none greater than Moses and Elijah. Why it Matters The identification of Moses and Elijah in all three passages that I have argued for heightens the important symbolic and theological significance of these two men in Luke 24 and Acts 1. Within much contemporary scholarship, it is diminished, and that diminishment has its roots in the most frequent explanation for the appearance of Moses and Elijah on the mount of transfiguration, that they are simply representatives of the Law and the Prophets. 49 While this is an obvious truth, it is an identity that lacks the profundity I believe Luke is seeking to convey. Through the episodes from their lives to which Luke alludes, noted above, and the traditions in Scripture and beyond with which these two men were associated, their importance lies in how Luke uses the themes of these events and traditions to speak to the church of his time and beyond. First, there is the theme of exodus fulfilment. 
At the Transfiguration, it is clear from Luke's language that there was a discussion about Jesus's exodus, rather than a monologue. While the content of that discussion is not recorded, the context suggests that it is likely to have included the suffering and cost that Jesus must endure in the days ahead. Moses and Elijah were not strangers to the costs of prophetic action. In the resurrection scene, there is a sense of triumph over suffering as well as rebuke in the words, 'Why do you look for the living among the dead? He is not here but has risen.' 50 Having adapted the Passover meal to speak of his own death on the cross, and having emerged in triumph from the tomb, Jesus has experienced suffering by which he has effected the exodus of his people from the slavery of sin and death. It is surely no accident that in Acts, following Jesus is called 'the way,' ἡ ὁδός, a way that also proved to be one of suffering. Second, Moses and Elijah each had a successor appointed to continue their activity. In addition to the appointment of Joshua to lead the entry to the land, Moses speaks of another prophet like himself who will arise. Elijah's successor, Elisha, had a double portion of his spirit. These succession narratives point to high expectations for the future ministry of the successor. At the Ascension, the Apostles received from Jesus the prophetic succession and are to bear witness to Jesus through the Spirit in ways that may result in opposition and suffering. Moses and Elijah challenge them to take up that role immediately in a way that will prove to be faithful and true. Having them at the scene heightens the expectation of the dynamic prophetic activity to come that Luke records in the pages of Acts. Third, there were times when both Moses and Elijah were sustained in their activity by the divine provision of nourishment for the pilgrim journey that took a route through wilderness experiences. As Luke relates, the believers in Acts, and beyond, faced their own trials, and hearing of the presence of Moses and Elijah at the Ascension would have been reassurance that God does not leave his people without spiritual sustenance for 'the way' they must take. It can be seen, then, that the identification of Moses and Elijah at these three crucial points in Luke's narrative can be developed in ways that speak not only of the exodus of Jesus himself, but also of what is expected of a prophetic, suffering church as the successor of his mission, now in his physical absence. Luke, speaking to a church in volatile and challenging times, uses Moses and Elijah to help it understand the significance of what has happened, to challenge it to prophetic ministry in the present and to reassure it of God's capacity to sustain it in the days to come.
49 Almost all the commentaries cited make this point. Edwards, Luke, 282, sees them as 'the chief representatives of the prophetic tradition.' 50 This is still the case if the textual variant is adopted.
7,418.6
2023-02-22T00:00:00.000
[ "Philosophy" ]
“Tell me about”: a logbook of teachers’ changes from face-to-face to distance mathematics education In 2020, the emergency due to the COVID-19 pandemic brought a drastic and sudden change in teaching practices, from the physical space of the classrooms to the virtual space of an e-environment. In this paper, through a qualitative analysis of 44 collected essays composed by Italian mathematics teachers from primary school to undergraduate level during the spring of 2020, we investigate how the Italian teachers perceived the changes due to the unexpected transition from a face-to-face setting to distance education. The analysis is carried out through a double theoretical lens, one concerning the whole didactic system where the knowledge at stake is mathematics and the other regarding affective aspects. The integration of the two theoretical perspectives allows us to identify key elements and their relations in the teachers’ narratives and to analyze how teachers have experienced and perceived the dramatic, drastic, and sudden change. The analysis shows the process going from the disruption of the educational setting to the teachers’ discovery of key aspects of the didactic system including the teacher’s roles, a reflection on mathematics and its teaching, and the attempt to reconstruct the didactic system in a new way. Introduction This study aims to explore how Italian mathematics teachers managed their teaching activities in the context of a total lockdown imposed as part of the government response to the COVID-19 pandemic. The lockdown was decreed in different parts of Italy at different times between the end of February and the beginning of March 2020, a timeframe in which activities in almost all sectors were interrupted and citizens were forced to stay at home. Italy's education systems did not constitute an exception to this norm: schools and universities were suddenly closed, and teachers and learners shifted from the usual face-to-face to distance education. As a consequence, the learning process moved from the physical space of the classrooms to the virtual space of an e-environment. The reorganization of didactics in the schools has not been structured. The responsibility of such reorganization has been in the charge of the didactic managers and the teachers. Each teacher or each institution created their own e-environment, choosing an online teaching platform with communication and collaboration facilities and eventually with software for specific domains. New teaching settings required the teachers to engage with new designs of their teaching processes, impacting also on the affective aspects of such processes. Our study took place about a month after the forced closure of schools, at a time that education was at full distance with no prospects for a return to the classroom. The situation in which our study moves is not like a typical distance setting before the pandemic in which participants do not know each other or they are located across large geographic areas. In our context, the pandemic forced teachers and students to move to the new online distance settings. As this move happened as a consequence of government decisions, teachers and learners had very limited prior technological and/or methodological preparation that foresaw the new settings. In this frame, we are interested in exploring how teachers of mathematics perceived and responded to the abrupt transition from face-to-face to distance education. 
In particular, we look at the teachers' attitude towards mathematics teaching within the new educational system. We base our study on the integration of two lenses: the e-learning tetrahedron model (Albano, 2017) and the teachers' attitude model towards mathematics and its teaching (Coppola et al., 2012). Offering a systemic view of a distance or blended didactical environment for mathematics teaching, the e-learning tetrahedron model identifies the four main actors of the system-the Student, the Author, the Tutor, and the Mathematics-as the vertices of the tetrahedron. These actors move within a global technological environment that is intentionally put to didactic use for mathematics teaching. The teachers' attitude model towards mathematics and its teaching refers in turn to three attitudinal aspects, namely, the emotional disposition, the view, and the perceived competence. It is by integrating these two lenses that this article intends to read the teachers' attitudes during the movements within the didactical system. To this end, we conducted a survey among Italian mathematics teachers from primary school to the undergraduate level. We selected a narrative approach, asking teachers to tell about their experience, focusing on affective aspects, and reflecting on their role as teachers during this time of sudden change. Conceptual frameworks To date, the literature concerning e-learning mathematics education is focused mainly on research in blended settings (online and face-to-face) (Borba & Llinares, 2012; Silverman & Hoyos, 2018; Engelbrecht et al., 2020). As for work on distance settings, this literature stream is mainly concerned with massive open online courses (MOOCs) (see, for example, Borba et al., 2016; Taranto & Arzarello, 2019), in which there is only a virtual class, or with distance education practices for communities of learners located across large geographic areas with minimal opportunities for interaction (for example, see Lowrie & Jorgensen, 2012). Similarly, the literature regarding the role of affective factors (beliefs, emotions, attitude) in mathematics teaching and learning is widely consolidated (Batchelor et al., 2019; Schukajlow et al., 2017; Zan et al., 2006). There are, however, no studies of affect in distance learning settings for mathematics. Some studies focused on the management of emotions and affectivity in intelligent tutoring systems, where the detection of a learner's emotional state is exploited to build suitable and personalized support to stimulate attention and learning. 1 The unprecedented exceptionality of a crisis like the pandemic-the first event of its kind in almost a century-explains why there is no literature about mathematics education in distance learning or about mathematics teachers' affect concerning an anomalous educational context such as the one we are studying. Because of the particular context of our study, we think that two different theoretical frameworks can offer powerful tools for the analysis we want to perform. The first one is the e-learning tetrahedron model (Albano, 2017), which offers a systemic view of changes in an educational system and may allow us to frame teachers' reflections on their own roles within the "new" educational system. The second one is the teachers' attitude model (Coppola et al., 2012), founded on the attitude model of Di Martino and Zan (2010), which allows us to analyze teachers' attitudes throughout the changes that occurred within this "new" educational system.
In the following, we present the two theoretical frameworks. We believe that the tetrahedron model can allow us to describe important elements of the teachers' perception of the changes. At the same time, we add a focus on the affective factors, which are not considered by the tetrahedron model yet are the primary objects of the teachers' attitude model. The tetrahedron model To model the educational system, we use the e-learning tetrahedron (Albano, 2017), which arises within blended education research and takes into account the dynamics of a (nonvirtual) classroom. Albano's model can be considered as an extension of the didactics triangle (Chevallard, 1985), taking into account the widespread introduction and use of technology and articulating the teacher's role more finely. The actors of the didactics system have been modeled as tetrahedron vertices (Fig. 1) and are the following: -The Student, who is the one addressed by the teaching process -The Author, who is a collection of experts with various professional skills (technology, pedagogy, mathematics education), collaborating and looking at the educational project from different perspectives for a nontrivial and effective exploitation of technology -The Tutor, who takes care of scaffolding and fostering the student's learning process -The Mathematics, that is, the knowledge to be taught/learnt. Technology is considered inside and outside the didactic system: on one hand, we are immersed in a technology-connected world (outside the tetrahedron); on the other hand, the intentional use of technology for teaching and learning places the technology within the didactic system (inside the tetrahedron). Looking at the faces of the tetrahedron can give a view on various facets of the educational process. The face Author-Mathematics-Student concerns a context where the Student can autonomously interact with the Mathematics, starting from a didactical transposition by the Author or constructing some new resources herself, so acting as an author. The face Mathematics-Student-Tutor highlights the mediation between students and the knowledge to be constructed, realized by interactions and communication with an expert. The face Author-Mathematics-Tutor focuses on the design and validation of the activities in the e-environment, realized by the collaboration between the people who plan and the people who interact with the students. The face Author-Student-Tutor refers to the bridging action of the Tutor between the Author and the Student, as a mediator in one direction and as a feedback collector in the opposite direction. A further feature of the tetrahedron model is the dynamicity of its vertices, intended as positions/roles that any actor of the didactic system can play. This makes it possible to design and analyze didactical situations where the student can be in charge of some didactical functions (Albano, 2017), such as creating didactic materials (i.e., Author) or being a tutor among peers or younger students (i.e., Tutor), which adheres to the e-learning promise of putting the student at the center of the learning process, not only as someone who learns (Chevallard & Ladage, 2008). We assume that this dynamicity can also concern the teacher, especially in the case of moving from a face-to-face setting to a technology-based distance setting. The teachers' attitude model Recent developments in mathematics education research have raised growing awareness of the importance of affective aspects. The field of affect developed over the past three decades,
in recognition of the relevance that such aspects held in studying the complexity of the process of teaching and learning mathematics (Di Martino, 2019; Di Martino & Zan, 2015; Hannula et al., 2018; McLeod, 1992). The relevance of this issue is particularly strong for teachers, as noted by Zembylas (2005, p. 467): Teacher knowledge is located in 'the lived lives of teachers, in the values, beliefs, and deep convictions enacted in practice, in the social context that encloses such practices, and in the social relationship that enliven the teaching and learning encounter' (Britzman, 1991, p. 50). These values, beliefs and emotions come into play as teachers make decisions, act and reflect on the different purposes, methods and meanings of teaching. Moreover, many studies showed how what teachers believe and feel has a clear influence on what students believe and feel (e.g., Tsamir & Tirosh, 2009). While this is true in general, it is even more important, in our analysis, to use the lens of affect research to investigate what is happening for teachers during the pandemic, when everyone had a very strong critical experience, approaching what Bruner (1990) calls turning points. To analyze teachers' affect, it is necessary to consider not only their attitude towards mathematics but also towards its teaching, as highlighted in many studies in mathematics education (for instance, Relich et al., 1994). More particularly, we are interested in the teachers' perception of their experience during the first lockdown period. As we will see below, we accessed these experiences through teachers' narratives, which focused on their awareness of different aspects of the lockdown situation as well as their awareness of how they were living the changes brought about by the onset of state-sanctioned isolation and remote working. It is precisely to highlight these latter points that we opted to combine the affective aspects in the teaching and learning process within the didactic system described by the tetrahedron model, exploiting the three-dimensional model of attitude (TMA) of Di Martino and Zan (2010) and the teachers' attitude towards mathematics and its teaching (TAMT) model of Coppola et al. (2012). The TMA comes from a grounded definition of attitude, that is, a multidimensional characterization of the construct strictly linked to students' experience, collected in a large sample of autobiographical essays. The TAMT is an extension of the TMA model, coming from a study about attitude towards mathematics and its teaching in prospective teachers, aimed at making them aware of the influence of affective aspects on the teaching and learning process. In addition to the three TMA dimensions-emotional disposition towards mathematics, view of mathematics, and perceived competence towards mathematics-the TAMT model also considers the teachers' emotional disposition towards mathematics teaching, their views of mathematics teaching, and their perceived competence towards mathematics teaching. The resulting six dimensions are strictly intertwined with each other, in particular regarding how a positive or negative relationship with mathematics can influence the way of teaching, in terms of view, emotions, and perceived competence. As anticipated in the Introduction, we aim to investigate the teachers' perceptions of the changes due to the transition from a face-to-face setting to distance education within the new educational system.
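For readers who find a concrete rendering helpful, the following is a minimal, purely illustrative Python sketch of the two lenses as described above; it is not part of the original study, and every name in it (VERTICES, FACES, TAMT_DIMENSIONS, sample_tag) is an assumption introduced here only to show how the vertices, faces, and six attitude dimensions fit together when coding an essay excerpt.

```python
from itertools import combinations

# The four vertices of the e-learning tetrahedron (Albano, 2017).
VERTICES = ("Student", "Author", "Tutor", "Mathematics")

# Each face is a triple of vertices; the paper reads every face as one
# facet of the educational process (4 choose 3 = 4 faces).
FACES = list(combinations(VERTICES, 3))

# The three TMA dimensions (Di Martino & Zan, 2010), each considered both
# towards mathematics and towards its teaching, give the six TAMT
# dimensions (Coppola et al., 2012).
TMA = ("emotional disposition", "view", "perceived competence")
TAMT_DIMENSIONS = [
    f"{dim} towards mathematics{suffix}"
    for dim in TMA
    for suffix in ("", " teaching")
]

# Hypothetical coding record: an essay excerpt tagged with the vertex the
# teacher "moves towards" and the attitude dimension the wording reveals.
sample_tag = {
    "essay": "ST2",
    "movement": "Student",
    "dimension": "perceived competence towards mathematics teaching",
}

if __name__ == "__main__":
    print(len(FACES), "faces:", FACES)
    print(len(TAMT_DIMENSIONS), "dimensions:", TAMT_DIMENSIONS)
    print(sample_tag)
```

Such a structure would only be a bookkeeping aid for tagging excerpts; the analysis reported in the paper itself remains qualitative and interpretive.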
The tetrahedron model and the teachers' attitude model fit very well with our aim to investigate the changes in teaching/learning processes starting from teachers' perceptions. Through this theoretical frame, we will consider the changes in the teachers' relationships with the vertices of the tetrahedron, paying particular attention to the linked affective factors. Methodology At the end of March 2020, a month from the onset of total lockdown in Italy, when every face-to-face activity was suspended across the country, we distributed a call for essays among Italian teachers from various grade levels, since teachers of every level were involved in the shift to distance education. We conducted a qualitative study using a narrative approach. We set up an online form with one open question, given in Italian, with the following instructions (in the English translation): We ask you to write an essay entitled 'Teaching/learning in the days of the coronavirus. From face-to-face to distance education. Logbook of a change'. You can deepen the aspects you consider most important, whether they are cognitive and methodological, affective (emotions: fears, enthusiasm, etc.; beliefs and expectations: about effectiveness, teacher-student 'contact', students' involvement, etc.), or metacognitive (reflections). Please also give more details on how your choices and beliefs about the various aspects considered have changed as your distance learning experience has progressed. The essay can be of varying length, organized as you prefer (e.g., a single text or a daily journal). We disseminated the form link by means of emails sent to all teachers who are members of the Italian Association for Research in Mathematics Education and to teachers who are part of local communities known to the authors of this paper. The teachers joining the call had about 1 month to submit the essay. In line with established methodologies of narrative data collection (Kaasila, 2007), we preferred essays that described stories in which the narrator selected the most significant aspect independently, without answering others' questions or focusing on aspects they deemed to be irrelevant (Di Martino & Zan, 2010). Moreover, as Kaasila underlines, we can focus not only on narrators' experiences but also on how they describe such experiences. Indeed, Connelly and Clandinin (1990, p. 2) state that: "The main claim for the use of narrative in educational research is that humans are storytelling organisms who, individually and socially, lead storied lives. The study of narrative, therefore, is the study of the ways humans experience the world." A priori, we chose to frame the collected data from a systemic point of view, using the tetrahedron model, while taking into account the affective aspects involved, using the teachers' attitude model. The qualitative analysis of the essays showed that the changes described by the teachers can be interpreted by us as "movements" in the tetrahedron, where the affective aspects assume an important role. We refer to "movement towards a vertex" when the teacher assumes the role, respectively, of Student, Author, Tutor, or of a mathematics scholar who reflects on Mathematics and on its teaching. In this perspective, we highlight how the various teachers' movements emerge from their narratives, as follows: -The teacher moves towards the Student The teacher describes herself as a student, that is, one who must/wants to learn something new.
In our context, we look for excerpts showing the teacher's need/wish to learn (e.g., to learn how to use the new tools, how to manage the new situation, how to use the technological tools). -The teacher moves towards the Author The teacher describes herself as someone who feels the need to take part in the process of designing resources, tasks, and activities and setting up teaching/learning situations suitable for given didactical objectives. Here, we look for excerpts wherein the teacher refers to herself as being in charge of the design for learning. -The teacher moves towards the Tutor The teacher tells about the need of building a new relationship, less asymmetrical, with her students both at affective and cognitive levels. We look for sentences revealing more and different interactions and attention to the students. -The teacher moves towards the Mathematics The teacher tells about the need for a reflection on Mathematics with respect to the teaching/learning process. Here, we look for the excerpts from which this reflection emerges. For every movement towards a vertex, we observed related changes of teachers' attitudes that show us why and how these movements happen. More precisely, we observed the attitude towards the "new" processes of mathematics teaching and learning, by looking for emotional disposition, view, and perceived competence coming out from the teachers' words. Table 1 gives examples of each kind of move using bold type to highlight the wording that led us to identify the excerpt 2 in the given category. In the first excerpt, we can see that the teacher describes herself as someone who sets out to study and face new challenges. The second excerpt shows a teacher who recognizes the need of being engaged in the didactic design. The teacher in the third excerpt makes evident the change of her relationship with the students, no longer focused only on teaching. The fourth excerpt reveals a teacher who reflects on the nature of mathematics and on new ways of teaching it. Data analysis and results We collected 44 essays throughout Italy, eight from primary (1st to 5th grade) teachers (indicated by PT#), ten from teachers of the 6th to 8th grade (indicated by MT#), 17 from high school (9th to 13th grade) teachers (indicated by ST#), and nine from university teachers (indicated by UT#). The excerpts presented in the analysis that follows have been chosen as representative of the various aspects that are the focus of our observation and analysis. For every excerpt, we include the original Italian quotations in square brackets. From an emotional standpoint, the incipit of essays PT2, PT6, MT3, MT6, ST5, ST13, ST14, and ST16 is very strong, recalling the moment when everything changed. Many teachers begin their narratives by telling about "that" particular day, the weather or the sensations, and perceptions they felt. A very detailed emotional contextualization that shows how that was experienced as a "watershed," a turning point (Bruner, 1990) after which nothing has been as before: Only after such touching prose, do the teachers continue narrating what happened to their teaching process in the new distance education setting. The teacher moves towards the Student The new situation led the teachers to move towards the Student, in the sense that they narrate the experience of "becoming Students." This movement takes place at two levels. 
On the one hand, the teacher becomes a Student because she feels the need of an educational training that will allow her to acquire skills in managing the new situation from a teaching point of view. On the other hand, the teacher becomes a Student to acquire digital skills that she is forced to use in distance education. The movement seems to be characterized by different attitudes (positive, negative, or changing) towards mathematics teaching in the new situation, with the model's dimensions strongly intertwined. As a first reaction to the new situation, the writers of essays PT2, PT6, MT2, and ST16 claim to be uncomfortable. This emotional disposition is often linked to the fear of not being able to preserve the vision of mathematics and of teaching mathematics in the transition to distance education: The teachers PT1, PT2, PT3, PT5, PT6, ST3, ST13, ST16, and UT1 declare a feeling of sadness for their prediction of losing positive emotions without being face-to-face in the schools: I will never succeed ... in distance education it is impossible! I will miss the best! … (PT1) [Non riuscirò mai …con la didattica a distanza è impossibile! Mi perdo il meglio!] For some of those (PT5, PT6, ST3) who claim to feel this way, the view of mathematics teaching emerging from the essays is as something that passes strongly through the relationships that are established in the classroom and therefore as something that they find difficult to rethink in the distance setting." Obviously the Distance Education caught me 'unprepared'. Due to my character and the characteristics of my students, my teaching is very much based on relationship, empathy, emotion and not least on physicality, very often theatrical, in the classroom. (ST3) [Ovviamente la DAD mi ha colto 'impreparato'. Per mio carattere e per le caratteristiche dei miei alunni, il mio insegnamento si basa moltissimo sul rapporto, sull'empatia, sull'emozione e non da ultimo sulla fisicità, molto spesso anche teatrale, in classe.] In PT5, MT7, and MT11 essays, the negative emotions of the initial bewilderment are linked to a sense of inadequacy, therefore to a very low perceived competence towards teaching in the new situation: I was asked about distance learning while I could only ever describe my discomfort and my inadequacy, because that's what I feel. (MT7) [Mi si chiedeva della didattica a distanza mentre io invece riuscivo a descrivere sempre e solo il mio disagio e la mia inadeguatezza, perché in fondo è proprio questo quello che provo.] 3 However, essays PT1, PT2, MT2, MT9, MT11, ST3, ST2, and ST16 reveal that, after the initial emotion of discouragement, there is a positive change in attitude. We believe this change is linked to the rethinking of one's own vision and beliefs, to the desire to get back into the game as students, accompanied by positive emotions of challenge. I started 'studying' how to carry out distance learning. After all, the peculiarity of the teachers is to be 'lifelong students', to maintain (hopefully always) the desire to learn and not to retreat in the face of challenges. (ST2) [Ho cominciato a 'studiare' come poter fare didattica a distanza. In fin dei conti la peculiarità dei docenti è quella di essere 'studenti a vita', di mantenere (si spera sempre) la voglia di imparare e di non arretrare di fronte alle sfide.] ST16 explicitly states: Even my vision of distance education has changed during this period, due to the continuous interactions with the pupils, almost 24 h a day! 
(ST16) [Anche la mia visione di didattica a distanza ha avuto, in questo periodo, un mutamento, grazie anche alle continue interazioni con gli alunni, quasi 24 ore su 24!] Indeed, as suggested in the last excerpt, some teachers (PT1, PT2, PT5, MT7, ST3, ST13) note that the incentive for this change comes from different factors, such as the need to not lose contact with students and also collaboration with colleagues and passion for their work. These factors positively influence the perceived competence in moving towards the Student, with the possibility to "re-imagine," "re-build" oneself: But a real sailor can be seen in storms, right? So I had to 're-imagine, re-build myself'. (PT1) [Ma un vero marinaio si vede nelle tempeste: giusto? E allora mi sono dovuta "riimmaginare, ri-costruirmi".] This change of attitude can already be seen in the titles of essays PT2, MT2, MT10, ST3, ST6, and ST13. For instance, ST3 states, "You never stop learning": Thanks to the advice of some colleagues who had attended that famous course and were already using the Gsuite platform, after realizing that there was no time to lose, I started to 'study' how to carry out distance education. (ST3) [Grazie ai consigli di alcuni colleghi che avevano frequentato quel famoso corso e già usavano la piattaforma Gsuite, dopo aver realizzato che non c'era tempo da perdere, ho cominciato a 'studiare' come poter fare didattica a distanza.] In this movement, positive changes of attitude are linked to a positive attitude towards teaching with technologies, in particular due to a good perceived competence. This emerges from the essays written by teachers (PT3, PT4, MT1, MT5, MT6, MT10, ST1, ST6, ST8, ST13, ST14, ST15, UT1, UT3) who claim to have a good relationship with technology already in their teaching practices preceding the lockdown. The resulting perceived competence seems to help overcome more quickly the initial discouragement and to provoke positive emotions towards having to get back in the game. The teachers MT1, MT5, MT11, ST1, ST3, ST12, ST14, ST15, and UT3 find themselves in a new situation, putting themselves in the position of those who must/want to learn how to use new tools to manage it. However, different behaviors emerge from different teachers moving towards the Student. Some of them (MT2, MT3, MT5, MT9, MT10, ST6, ST14, ST15, UT3) refer acquiring technological skills, such as UT3: I am convinced that what we are experiencing can also be considered an opportunity to take contact and refine our knowledge of digital tools (even different software than those needed for the connection, such as viewers, calculators, calculation software, etc.) that will be very useful also in everyday teaching. (UT3) [Sono convinto che quella che stiamo vivendo può essere considerata anche un'opportunità di prendere contatto ed affinare le nostre conoscenze degli strumenti digitali (anche software diversi da quelli necessari al collegamento, come visualizzatori, calcolatrici, software di calcolo etc.) che potranno essere utilissimi anche nello svolgimento della didattica ordinaria.] 
Other teachers (MT1, ST5, ST8, ST10) move towards the Student by trying to understand how to use technologies with educational objectives from the beginning, such as ST8: Personally, in order to continue the class work started in September and ended in early March, I thought it was appropriate to support the lessons in videoconference (with google-meet) i.e., synchronous, with lessons I'm going to record (with Screencast or matic) i.e., asynchronous. [...]. Students can learn more easily if the calculation, the reasoning, takes place in a face-to-face setting: the Mathematica software allows me to write the procedure in a clear way, without the help of the graphic tablet. In brief, I hope I get by. (ST8) [Personalmente, per continuare a svolgere il lavoro in classe iniziato nel mese di settembre e concluso agli inizi del mese di marzo, ho creduto opportuno di supportare le lezioni in videoconferenza (con google-meet) cioè sincrone, con lezioni che vado a registrare (con Screencast o matic) cioè asincrone. [...] Gli alunni riescono ad apprendere più facilmente se il calcolo, il ragionamento, si svolge alla loro presenza: il software Mathematica mi consente di scrivere la procedura in modo chiaro, senza l'ausilio della tavoletta grafica. Insomma speriamo che me la cavo.] The expression concluding ST8's essay (I hope I get by [speriamo che me la cavo]) is a typical "student" expression in the geographical region in which the teacher belongs (taken from a well-known book in which a primary school teacher collected essays from students in a quite difficult socio-cultural context). It is worthwhile to note that ST12 goes towards the Student in a very deep sense, almost as a researcher, who spends some days "looking for what was in the literature" about distance education: Personally, I spent the first three days of the school closing to research what was in the literature about Distance Education experiences. I immediately realised only one 'thing': it is not possible to move the teaching-learning practices in the classroom in distance education, you have to change more or less radically the content and teaching practices. (ST12) [Personalmente ho speso i primi tre giorni della chiusura delle scuole a cercare cosa ci fosse in letteratura a proposito di esperienze di Didattica A Distanza (DAD). Ho avuto subito un'unica 'certezza': non è possibile 'migrare' le pratiche di insegnamento-apprendimento in classe in modalità DAD, si devono cambiare più o meno radicalmente contenuti e pratiche didattiche.] Positive attitudes towards technology seemingly favored the prevalence of a sense of challenge over that of bewilderment and confusion. Moreover, some essays suggest that, in addition to the movement towards the Student, there is also the movement towards the Author (such as ST13: I don't have the control of this work among peers…. How to promote it, even at a distance, so that it can encourage learning? [Ma io non ho il controllo di questo lavoro tra pari…. Come promuoverlo, anche a distanza, in modo che possa favorire l'apprendimento?]). 
The teacher moves towards the Author The radical change, the emotional impact, and the reflection on one's own teaching led the teacher, in some cases, to describe what we interpreted as a movement of the teacher towards the Author and involving a change in the didactical transposition, due to the transition to distance education (i.e., ST15: Distance education triggered me to reflect on my way of teaching [La didattica a distanza ha innescato in me riflessioni sul mio modo di insegnare.]) The emotions and attitudes described in the essays are different and opposite: from the sense of bewilderment to the sense of freedom for no longer being subjected to institutional constraints (such as programs and assessments) that are seen as limits to one's work. The teacher goes towards the Author because she needs to design new teaching situations. We identified two different ways to change. Some teachers try to make a didactical transposition trying to remain as similar as possible to what they did in a face-to-face setting, whereas some others feel the need for a change (others "suffer" this change). These views and beliefs are associated with different emotional dispositions. As an example, PT1 is one of the teachers expressing a positive change in her attitude, after the initial discouragement: I thought about a new 'project/activity' to be carried out online with the main aim of doing mathematics in a playful, joyful and engaging way, even if at distance. I then structured a class activity called 'a team of detectives'. (PT1) [Ho pensato così ad un nuovo 'progetto/iniziativa' da svolgere on line con l'obiettivo prioritario di fare matematica in modo ludico, gioioso e coinvolgente anche se a distanza. Ho creato allora un'iniziativa di classe chiamata 'una squadra di detective'.] For PT3, PT7, MT2, ST15, UT1, and UT3, the new didactical transposition is an adaptation of the previous one. As an example: the attempt was to restart teaching as similar as possible to the previous one (MT2) [il tentativo era quello di riprendere una didattica più simile possibile a quella precedente.] Nevertheless, the essays of PT5, MT8, MT10, ST4, ST5, ST6, ST12, and ST13 show the need to rethink and redefine a new didactic transposition. As an example: What I had to redefine in this first period is the idea of the lesson itself (PT5) [Ciò che più ho dovuto ridefinire in questo primo periodo è l'idea stessa di lezione.] While at beginning the term "video lessons" is widely used to indicate an adaptation of face-to-face lessons used before the lockdown; in this case, as ST12 writes, video lessons have a different meaning: it is not possible to 'migrate' teaching and learning practices in the classroom in Distance Education mode, you have to change more or less radically the contents and teaching practices [...] Making them immediately part of my choice, I exclude that Distance Education coincides with video lessons, precisely because I do not believe that you can transpose the 'physical' class into a 'virtual' class. The video lesson will be an important but not exclusive moment. (ST12) [non è possibile 'migrare' le pratiche di insegnamento-apprendimento in classe in modalità DAD, si devono cambiare più o meno radicalmente contenuti e pratiche didattiche [...] Escludo, facendoli subito partecipi della mia scelta, che DAD coincida con videolezioni, proprio perché non credo che si possa trasporre la classe 'fisica' in una classe 'virtuale'. La videolezione risulterà un momento importante ma non esclusivo.] 
In some essays, the positive emotions in the movement towards the Author seem linked by a real "sense of liberation" from traditional teaching and anxiety for the syllabus: My first impact with distance education was a great joy for me, I was happy to be able to apply methodologies and tools that I could not always use in class. [...] I am focused exclusively on learning and not at all on completing programs, an anxiety that, unfortunately, I was experiencing even if I was aware of the mistake [...] there is room for knowledge and skills but above all for development and evaluation of competencies. (ST6) [Il primo impatto con la didattica a distanza è stato per me una grande gioia, ero contenta di poter applicare metodologie e strumenti che in classe non sempre riuscivo ad utilizzare. [...] sono concentrata esclusivamente sugli apprendimenti e per nulla sul completamento dei programmi, ansia che purtroppo vivevo anche se consapevole di sbagliare […] c'è spazio per conoscenze e abilità ma soprattutto sviluppo e valutazione di competenze.] The reflection linked to the movement towards Author leads teachers to consider aspects of their own professional activity that may be new to them, as in the case of PT3: Distance education needs a preliminary phase of didactic design through the selection of contents, the identification of objectives and the contextualization of the didactic unit within the disciplinary program. A work of selection and organization of materials to be used, of activities to be proposed for education and curricular disciplines carried out in synchronous and asynchronous way. (PT3) [La DAD ha bisogno di una fase preliminare di progettazione didattica attraverso la selezione dei contenuti, l'individuazione degli obiettivi e la contestualizzazione dell'unità didattica all'interno del programma disciplinare. Un lavoro di selezione e organizzazione di materiali da utilizzare, di attività da proporre per educazioni e discipline curriculari somministrate in forma sincrona e asincrona.] Here, PT3 refers to teaching processes that ought to always be taken into consideration yet are evidently new to him. The process, in this sense, only comes to the surface when PT3 loses some of her certainties. The teacher moves towards the Tutor By looking into this movement, we commit to study the dynamics whereby unexpected and sudden change in teaching delivery influenced the teacher-student relationship. Many of the essays analyzed here focus on such mechanisms. PT2, MT7, ST1, ST2, ST14, and ST16 narrate how the first and very strong perceived need was not related to teaching/learning mathematics; rather, it was about making children feel close (the need to "not leave them alone," as PT2 writes), behaving more than before (and in a different way) as an "adult reference point" for the students, in such a moment of general disorientation. The emotions emerging from the essays are very strong, both positive and negative, described with emphasis and with many lines of text, as the following excerpt shows: What is changed is that now it is no longer a question of homework or teaching, it is about being an adult reference and less a teacher. Having an appointment every day helps to sustain this situation made of fear, sometimes of sick parents, of flimsy internet connections, of crowded rooms, of embarrassment in showing one's home intimacy and nostalgia of the classmates. [...] 
the only thing that I have managed to do till today I think is not having lost them but this has nothing to do with teaching. (MT7) [Cosa è cambiato allora? E' cambiato che ora non si tratta più né di compiti né di didattica, si tratta anche di essere ancora più un adulto riferimento e meno insegnante. Il fatto di avere un appuntamento ogni giorno aiuta a sostenere questa situazione fatta di paura, talvolta di genitori ammalati, di internet che non va, di stanze affollate, di imbarazzo nel mostrare la propria intimità casalinga e di nostalgia dei compagni. [...] l'unica cosa che ad oggi sono riuscita a fare credo che sia il fatto di non averli persi ma ciò con la didattica non ha nulla a che fare.] The narratives of PT5, MT9, MT11, ST2, ST7, ST13, ST17, and UT1 show new awareness of emotions, that perhaps were previously not so explicitly relevant: This shift towards the Tutor is very evident in the essays of teachers of all levels, even in the ones from high school: We see a relationship in which the collaboration and the willingness to meet each other become the real drive to move forward and to build a functional future for us and for them. (ST17) [Vediamo formarsi una relazione in cui la collaborazione e la volontà di venirsi incontro diventano la vera forza per andare avanti e per costruire un futuro funzionale per noi e per loro.] The shift furthermore entails a change in the relationship, an increase in confidence and trust: This activity, however, although tiring and expensive, brings me closer to my students, in some way it seems to me to 'cuddle' them, caress them one by one, in order to be able to follow and advise them [...] I must recognize that this anomalous situation has greatly increased the confidence and trust among us; I perceive the need they have to be reassured and guided and I realize that the trust they place in me has increased. (ST2) [Questa attività però, anche se faticosa e onerosa mi avvicina ai miei studenti, in un qualche modo mi sembra di 'coccolarli', accarezzarli uno per uno, per poterli seguire e consigliare [...] devo riconoscere che questa situazione anomala ha molto aumentato la confidenza e la fiducia tra noi; percepisco il bisogno che hanno di essere rassicurati e guidati e mi accorgo che è aumentata la fiducia che essi ripongono in me.] In the essays of PT2, PT5, MT1, MT6, UT1, and MT2, this great need emerges also in the opposite direction, that is, the need of the teacher themselves to be tutors for their students, in such a particular moment: The emotional pain will be long, but being able to start again communicating with the students is a cure-all for me. It gives a purpose to my days. (MT1) [La sofferenza emotiva sarà lunga, ma potere riprendere la comunicazione con i ragazzi è per me un toccasana. Dà uno scopo alle mie giornate.] This movement also includes a change in the teachers' understanding of assessment. Distance as well the greater amount of data provided by technological equipment allows the teacher to have a holistic view of the students' knowledge. In this sense, the teacher tends to make more extensive use of formative assessment and therefore dropping summative assessments. Indeed, it seems that MT1, MT3, MT5, MT8, MT10, ST3, ST4, ST6, ST8, and ST10 feel "free" to behave as tutors in relation to the assessment, not worrying about "grades" (MT3: free from the bureaucratic trammels and cancer that afflicts our school: number grades. 
[libero dalle pastoie burocratiche e dal cancro che affligge la nostra scuola: i voti in numero.]), focusing instead on feedback and formative assessment (MT1: […] evaluation is still important, in a formative sense. Giving feedback to children is fundamental. [la valutazione è comunque importante, in senso formativo. Dare un feedback ai ragazzi è fondamentale]). A very recurring theme in teachers moving towards the Tutor, characterized by very strong and almost always negative emotions such as fear, worry, and anger, is that of the feeling of not being able to include or "keeping" children in difficulty. In the essays of PT3, MT1, MT7, MT8, MT12, ST1, ST8, and ST16, there is a constant fear that distance education may create or exacerbate inequalities due to different factors (socio-cultural context, learning difficulties, greater or lesser technological availability, greater or lesser family support). MT8, whose essay narrates a very difficult social context, writes: In many families, the lack of digital skills and tools necessary for 'distance education' has widened the gap between social classes, undermining the essential value of universality that the right to study should have. (MT8) [In molte famiglie la mancanza di competenze e strumenti digitali necessari per la 'didattica a distanza' ha aumentato il gap tra le classi sociali minando l'imprescindibile valore di universalità che dovrebbe avere il diritto allo studio.] Still very strong negative emotions emerge from MT12 and ST16: Just two teachers (PT5, MT12) think with fear at the moment when they will "return back to class": on the day of a hypothetical return to class, the differences between students will be enormous [...] The teacher moves towards the Mathematics The movement towards the Mathematics is determined by the movement towards the other vertices of the tetrahedron. This movement is activated by the need of the teacher to revise her previous didactic transposition to be effective in this new context, which requires a reflection on Mathematics. As an example, ST4 illustrates her view of mathematics (see the terms "unnecessary" and "baroque" in the excerpt below), suggesting that the new context allows her to manage the teaching/learning process in a way coherent with her views. Moreover, ST4 notes that this coherence between the view and the didactic action allows her to achieve the intended educational goals: the essay reveals that she is perceiving such experience as entirely new: I am experiencing that these didactic goals 4 are achievable without complicating life with unnecessary algebraic 'baroque' manipulations (for example solving equations by means of 15 simplification steps before arriving at the 'final' equation itself ...). The students are reacting well. Better than they did pre-coronavirus. (ST4) [Sto facendo esperienza che questi obiettivi sono raggiungibili senza complicarsi la vita con inutili 'baroccheggiamenti' algebrici (esempio con equazioni da semplificare in 15 passaggi prima di arrivare all'equazione 'finale' vera e propria…). I ragazzi stanno rispondendo bene. Meglio di quanto fatto pre-coronavirus.] For this new transposition, the teacher needs to pass through the Student (she reflects on the technologies to use in order to carry out the new transposition) and the Author (she reflects on how to realize a new didactic transposition). These movements reveal how the view of mathematics and its teaching ultimately influences the emotional dispositions of teachers. 
On the one hand, for UT1 and UT2, the fear of failing to keep their own vision or a low perceived competence influences the emergence of negative emotions, especially at the beginning. On the other hand, PT1 and ST12 reconsider mathematics as a meeting place, as an occasion even more important than before to strengthen relations with students, rather than a knowledge to be taught. For PT1, for instance, the creation of a comfortable environment can support students from an emotional point of view and support their learning process too: And all of this has to do with mathematics? I studied and lived in my own skin and in that of the 'fragile' children I meet, that emotions are fundamental in every learning process, and also in numbers, and even in this difficult moment! I will continue, with my wizard hat, to meet the pupils and they, perhaps, will feel as a friend, which encourages them because 'learning is a fantastic adventure'! (PT1) [E tutto questo che contatto ha con la matematica? Io ho studiato e vissuto sulla mia pelle e su quella dei bambini 'fragili' che incontro, che le emozioni sono fondamentali in ogni processo di apprendimento, e anche nei numeri, e anche in questo momento così difficile! Io continuerò, con il mio cappello da maga ad incontrare gli alunni e loro, forse, avvertiranno una presenza, amica, che li incoraggia perché 'apprendere è un'avventura fantastica'!] The excerpts of PT3, MT3, MT4, MT10, MT11, ST6, and ST7 highlight how the teacher moves towards the Mathematics feeling the need to pass via the Author: When it was necessary to introduce an absolutely new [mathematics] Discussion and conclusions This study is based on the assumption that the researchers interpret phenomena by means of the narrators' words, producing sense and understanding (Bell, 2002). Summing up, we have collected essays where teachers chose what to narrate about their feelings in the pandemic situation and then their perception of the changes, their reflections, and their emotions related to themselves, to students, to mathematics education, and to distance education. The analysis of the essays was carried out through a double theoretical lens, the e-learning tetrahedron model (Albano, 2017), and the teachers' attitude model (Coppola et al., 2012). These two perspectives allowed us to identify elements of interest and their relations in order to analyze the process of change and the way in which teachers have experienced and perceived it. First of all, in the collected data, we can identify two temporal periods: a first period of bewilderment (e.g., ST14: Carrol's Alice, into a space with an unknown and deforming topology [Alice di Carroll, in un spazio dalla topologia sconosciuta e deformante]) and a second one of reflection and elaboration (e.g., ST15: Distance education has triggered in me reflections on my way of teaching, on my (poor) flexibility, on the best way to transmit and involve [La didattica a distanza ha innescato in me riflessioni sul mio modo di insegnare, sulla mia (scarsa) flessibilità, sul modo migliore di trasmettere e coinvolgere]). This second period is generally perceived differently, as if the situation was less negative than expected before, and a change in the attitude seems to emerge, in particular with high school teachers, together with the movements towards the vertices of the tetrahedron. The first period began at a well-fixed time when something dramatic happens, the media announcement of the lockdown. 
It is a "watershed," a turning point (Bruner, 1990) after which nothing has been as before. The impact of the lockdown on teachers has been disruptive. Essentially, very strong emotions arising from the closure of the schools are described in all the essays, where this moment is narrated with great emphasis together with the space-time description of the context (the weather, the place where the teacher was while learning the news, etc.). These strong emotions affect both the professional and the personal sphere. The former concerns the lack of possibility to carry out the normal didactic activity, with the discomfort of having to manage one's own professional activity without feeling to have the tools to do so (that is, with a low perceived competence). The latter sphere concerns the personal distress coming from the absence of "contact" with the students. The description of the second period (of reflection) starts very often with the description of distance education in negative terms: "it cannot," "it is not," "it is impossible". Generally, it is just what is missing that triggers a reflection, from which it emerges that the teachers take into consideration new (for them) perspectives on various functions of the teacher, on educational objectives, on mathematics, and on mathematics education. Interestingly, the teachers refer to this situation in terms of opportunities. For example, a primary teacher (PT6) writes that "the tempting opportunity that this situation has offered us is the possibility to rethink and re-evaluate our school microworld" [l'occasione ghiotta che ci ha offerto questa situazione è la possibilità di ripensare e rivalutare il nostro micromondo della scuola.]. Similarly, a secondary school teacher (ST10) writes: "I am firmly convinced […] that this change in which we have been catapulted should be taken as an opportunity" [sono fermamente convinta […] che questo cambiamento in cui siamo stati catapultati vada colto come un'opportunità.]). Actually, what teachers report concerns educational issues widely discussed in the literature in mathematics education such as educational goals, design of learning activities, assessment, and so on. Nevertheless, these issues coming out from reflection appear as completely new for these teachers. In other words, they seem to discover some key aspects of the didactic system in which they are embedded, thanks to the disruption of the didactic system itself. Generally speaking, the traumatic change in the educational setting plunged teachers into an unexpected and unthinkable world where the teachers become aware that the didactic system has to be reconstructed. Therefore, totally different educational worlds are imagined: a school where the summative assessment disappears; a school where mathematics is not a set of formula and procedure; a school where the teachers design activities to promote reasoning and, more generally, competency-oriented activities; and a school where technology is really integrated in teachers' and students' usual practice. These imagined educational worlds are new (for the teachers) due to a different role of the teacher, a different teacher's attitude towards mathematics, and its teaching and a different epistemology of mathematics. 
We think that it is relevant that teachers take into consideration different possible worlds (in the sense of Bruner, 1986) that can assume an important role in the development and in the diffusion of a culture of mathematics education, even if these possible worlds will not (fully) become actual worlds. The construction of possible worlds can foster different teachers' attitude towards their professional development, enabling an alignment and a dialogue between the new (for them) issues described above and the related development and results in mathematics education research. This alignment can have a deep impact on teachers' education that has to be further explored. However, future research is needed to investigate the teachers' reflection and the teachers' movement in the tetrahedron along the persistent pandemic. New periods could be identified besides the two temporal periods above described, and it would be interesting to inquire about the role that these new periods can have in the actualization of the possible worlds. Finally, from the point of view of the elaboration of the theoretical framework, two specific issues remain open. The first concerns the role of technology in the tetrahedron. It appears from the essays as a means for approaching teachers and students within the didactic system: this approaching is bi-directional, since it is not only the teacher who is in charge of institutionalizing students' knowledge, but she is recognized by the student as the one who takes care of the students' learning and works to this aim. The physical distance prompts teachers' reflection on the role of the contact between teacher and students, and this suggests somehow a relation between external and internal spheres of the tetrahedron model that need further exploration. The second unresolved theoretical issue concerns the relations between the theoretical constructs used in the analysis. The teachers' attitude model (Coppola et al., 2012) and the e-learning tetrahedron model (Albano, 2017) have allowed an analysis from two perspectives that reciprocally enriched each other. On the one hand, the e-learning tetrahedron does not take into account the affective aspects; on the other, the attitude model does not pay attention to technology or didactical elements, including design and tutoring. The analysis process and the findings show that a deeper theoretical integration in terms of networking is possible (Prediger et al., 2008) and further studies will be needed in this direction. Author contribution The authors equally contributed to the development of the manuscript, and all read and approved the final manuscript. Funding Open access funding provided by Università degli Studi di Salerno within the CRUI-CARE Agreement. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. 
To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
12,107.2
2021-10-01T00:00:00.000
[ "Education", "Mathematics" ]
Single-Bus and Dual-Bus Architectures of Electrical Power Systems for Small Spacecraft : Nowadays, it has become possible for universities and new businesses to launch satellites of reduced size and cost fulfilling viable missions. Nevertheless, there is still a considerable failure rate that reduces the expected lifetime of these spacecraft. One of the main causes of failure is the power system. Redundancy is one of the main options to enhance its lifetime and lower the failure rate. However, cost, mass, and complexity increase due to redundancy, making it more difficult to complete the projects. Thus, it is necessary to enhance the lifetime of power systems while keeping the development process simple and fast. This paper proposes two configurations of an electrical power system with duplicate components: a single-bus configuration designed for a nanosatellite not yet launched, and a dual-bus configuration for a micro deep-space probe launched into a heliocentric orbit. The design and implementation of two dual electrical power systems are described; measurements and on-orbit data of the electrical power system of the micro deep-space probe are also presented, demonstrating that the dual-bus electrical power system can be successfully used in spacecraft. Lastly, conclusions regarding the redundancy considerations for small satellite electrical power systems are drawn based on these two examples. INTRODUCTION The development of spacecraft by universities has been focused on small satellites, which include nanosatellites, picosatellites and miniaturised deep-space probes. Specifically, most university-class missions have adhered to the CubeSat specification to easily obtain a launch opportunity and used Commercial Off-The-Shelf (COTS) components (Carrara et al. 2017) to shorten the development time. The popularity of university-built CubeSats can be demonstrated by reviewing the number of university missions already launched; 266 university-class missions had been launched by the end of 2015 (Swartwout and Jayne 2016). Moreover, a new business based on a constellation of CubeSats conducting Earth observation is in operation (Crisp et al. 2015). Similarly, interplanetary and deep-space exploration missions have also been developed by universities (Yoon et al. 2014; Babuscia et al.). One implicit goal of CubeSat development is having fast and low-cost projects; however, the high probability of failure is a common drawback associated with these projects. The failure rate of university-class missions is about 40% (Swartwout and Jayne 2016). The electrical power system (EPS) is one of the main causes of failures of CubeSat missions, both in the early mission phase and during the first three months (Langer and Bouwmeester 2016). Thus, improving the reliability of the EPS will significantly reduce the failure rate of these missions. In small satellites, simple configurations have predominantly been used for implementing the electrical power systems (Okada et al. 2013; Edries et al. 2016). The power source (PS) of small satellites is typically based on solar cells and lithium batteries as a secondary source (Frost et al. 2015). The electrical power is transferred from the solar cells to the batteries and the spacecraft subsystems using either Maximum Power Point Tracking (MPPT) or Direct Energy Transfer (DET) architectures (Patel 2005; Mourra et al. 
2010). In any case, a battery charge regulator (BCR) is required to protect the battery against overvoltage or overcurrent, and power conditioning modules (PCM) are needed to regulate and distribute the voltage for satellite subsystems. Figure 1a shows the architecture of a simple EPS, showing its interfaces with the main subsystems of the spacecraft: On-board Computer (OBC), Communication System (COM), Attitude Determination and Control System (ADCS), and Payload (PL). Figure 1b shows the block diagram of the main components of a PCM. Placing two identical components in parallel significantly increases the reliability of a system, reduces the operating stress on the components and prolongs their expected life (Patel 2005). Splitting the power conditioning unit has been studied in high-power spacecraft to ease thermal control and to double the output power capacity (Loche et al. 2011). In this paper, two configurations with dual electrical power systems are presented. These configurations have been developed for a nanosatellite and a micro deep-space probe, Shinen-2 (Kuroiwa et al. 2016), launched in December 2014 on-board H-IIA-202. The next section of this paper presents the general approach to the implementation of the dual-bus electrical power systems for the two cases. Then, a detailed comparison of the two case studies for a nanosatellite and a deep-space probe is made. Because the two missions have different needs, this comparison is focused on showing the performance of the different units used in the implementation of the EPS, not the overall systems. The objective is to provide reference designs of the EPS functional units for future spacecraft. The results section shows measurements of the performance of the two case studies, including on-orbit data from the deep-space probe, and the theoretical failure rate is discussed. Finally, conclusions regarding the merits of a dual-bus EPS architecture in the context of small satellites are drawn. APPROACH OF DUAL-BUS ELECTRICAL POWER SYSTEM ARCHITECTURE Most spacecraft are designed to achieve specific missions performing different functions in science, technology demonstration or education. For university-class spacecraft, receiving the housekeeping data of the satellite is usually considered minimum success of the mission because one of the primary objectives is education. It is thus considered sufficient for the team, formed mainly of students, to be able to develop a functional spacecraft. In cases with multiple mission objectives, it is usual to consider multiple reliability requirements; achieving any of these requirements is a level of the mission success (Hecht 2011). For example, in the hypothetical case of one spacecraft with two missions and two payloads (PL-1 and PL-2), the mission requirements could include the following:
• At least PL-1 shall be operational for minimum success of mission 1;
• At least PL-2 shall be operational for minimum success of mission 2;
• Both PL-1 and PL-2 shall be operational for full mission success.
Usually, including more components than needed to satisfy the minimum success is a common way of increasing the reliability of the mission. However, more components usually increase the cost and development time. As figures of merit (FOM), the probability of failure (given by 1 - reliability) and the cost of units in parallel can be used as trade-off criteria when designing an architecture with redundancy. The theoretical relationship between these two FOMs (which does not account for, e.g., cost reduction with mass production) is shown in Fig. 2. 
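The trade-off in Fig. 2 can be reproduced with a few lines of code. The sketch below is illustrative only: it assumes a single per-unit failure probability and a fixed per-unit cost (both hypothetical values, not taken from the paper), and shows that the probability of failure of n identical units in parallel falls geometrically while the cost grows linearly with n.

# Illustrative sketch of the redundancy trade-off shown in Fig. 2.
# The per-unit failure probability and cost below are assumed values, not data from the paper.
def failure_probability(p_unit: float, n_units: int) -> float:
    """Probability that all n independent, identical units fail."""
    return p_unit ** n_units

def total_cost(cost_per_unit: float, n_units: int) -> float:
    """Cost grows linearly with the number of units placed in parallel."""
    return cost_per_unit * n_units

if __name__ == "__main__":
    p_unit, cost_per_unit = 0.10, 1.0  # hypothetical figures of merit
    for n in range(1, 6):
        print(f"n={n}: P_failure={failure_probability(p_unit, n):.5f}, "
              f"cost={total_cost(cost_per_unit, n):.1f}")

With these assumed numbers, going from one unit to two cuts the probability of failure by an order of magnitude, while every further unit adds the same cost for a much smaller gain, which is consistent with the diminishing-returns argument made in the text.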
It can be seen that the highest increase in reliability (or the highest decrease in probability of failure) is achieved with two components in parallel, and the cost increases linearly with the number of units (Patel 2005). Thus, two EPS design approaches are presented below: a single-bus electrical power system with two units, and two units in a dual-bus configuration; adding subsequent buses would follow the law of diminishing returns but would be associated with substantial increases in cost and complexity. SINGLE-BUS ELECTRICAL POWER SYSTEM A generalized architecture of the considered single-bus EPS is shown in Fig. 3. The architecture is split into two systems, EPS 1 and EPS 2. Each EPS consists of a power source (PS), battery, BCR and PCM. In a simple case, both systems have the same power capability and can provide the power required for all operation modes of the spacecraft. However, this case might be unattractive due to the mass and dimensional penalties implied by duplicating every component. One variation of the redundant architecture is to size the power sources of EPS 2 for operation of only the essential elements needed to achieve the minimum success of the mission (Fig. 3b). For example, only the OBC, COM and payload 2 (PL-2) may need to be powered from this bus; this means that the ADCS is an essential element only for the mission of payload 1 (PL-1). In the case where EPS 2 only needs to provide power for selected subsystems, the number of solar cells and the battery capacity are calculated according to the power profile of these subsystems. The capacity of battery-2 in Wh (E_Battery2) is calculated by (Eq. 1):

E_Battery2 = (P_OBC T_OBC + P_COM T_COM + P_PL1 T_PL1) / (DOD_Battery2 η_2)    (1)

where P_OBC, P_COM, P_PL1 and T_OBC, T_COM, T_PL1 are the power and time required by the OBC, COM and PL-1 during eclipse; DOD_Battery2 is the depth of discharge of battery-2; and η_2 is the efficiency of the charge/discharge modules. DUAL-BUS ELECTRICAL POWER SYSTEM The spacecraft can include redundancy of subsystems other than the EPS, such as the OBC and COM, to increase the probability of mission success. These additional subsystems are not necessarily identical, in order to avoid the same errors when executing the same operation; e.g. two communication subsystems might use different frequency bands (Del Corso et al. 2011). The dual-bus architecture for this case can be implemented as shown in Fig. 4. Here, the two power buses are separated. Thus, EPS 2 provides power just to the communication subsystem (COM-2), the controller unit (OBC-2) and the secondary payload (PL-2). 
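As an illustration of the sizing relation in Eq. 1, the sketch below computes the required capacity of battery-2 from the eclipse power profile of the essential subsystems. All input values (power levels, eclipse durations, depth of discharge, efficiency) are hypothetical placeholders and are not figures from either spacecraft.

# Minimal sketch of the battery-2 sizing relation in Eq. 1.
# All inputs are hypothetical placeholders, not flight values.
def battery2_capacity_wh(loads, dod: float, efficiency: float) -> float:
    """loads: iterable of (power_W, eclipse_time_h) pairs for the essential
    subsystems (e.g. OBC, COM, PL-1). Returns the required capacity in Wh."""
    eclipse_energy_wh = sum(power * time for power, time in loads)
    return eclipse_energy_wh / (dod * efficiency)

if __name__ == "__main__":
    loads = [(0.5, 0.6), (2.0, 0.2), (1.0, 0.3)]  # assumed OBC, COM, PL-1 eclipse profiles
    capacity = battery2_capacity_wh(loads, dod=0.3, efficiency=0.85)
    print(f"E_Battery2 = {capacity:.2f} Wh")

Dividing by the depth of discharge and the charge/discharge efficiency oversizes the pack so that the eclipse load can be met without cycling the cells too deeply, which is the intent of Eq. 1.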
The main reason to include dual electrical power systems is to reduce the failure rate of the whole spacecraft. Thus, the spacecraft should be able to operate when one of the power systems fails. This is the case for the micro deep-space probe Shinen-2, where the dual bus is implemented: one bus provides power to the sensing payload (radiation particle detector) and one communication subsystem, while the other bus provides power to another communication subsystem that is sufficient for up- and down-link on its own. A different approach is implemented in the nanosatellite. It is not made fully redundant because the secondary power system can only provide power for minimum operating conditions. Namely, the OBC and COM have backup power lines from a separate power source. These two power systems will be analyzed in the next section.
[Figure caption: Architecture of the dual-bus electrical power system with duplicate components. (b) A failure in EPS 1, indicated by the red marks, will cause the loss of OBC-1, COM-1, ADCS-1 and PL-1; however, COM-2, OBC-2 and PL-2 can still operate, receiving power from EPS 2.]
SPACECRAFT DESCRIPTION The nanosatellite taken as an example here is a three-unit (3U) CubeSat with dimensions 30 × 10 × 10 cm and a mass of 4 kg (Fig. 5a). The main mission is to take a photograph of the Earth using a camera developed with COTS components. Moreover, this is a university-class mission that involves students in the development team as part of education and research projects. The exemplar micro deep-space probe Shinen-2 has a quasi-spherical shape, a diameter of about 50 cm and a mass of 18 kg (Fig. 5b). This probe was developed with three purposes: firstly, to demonstrate a structure based on Carbon Fiber Reinforced Thermoplastic (CFRTP); secondly, to measure radiation from Earth to deep space with a charged-particle detector; and thirdly, to demonstrate a deep-space communication method (Bendoukha et al. 2016). COMPARISON OF ARCHITECTURES The electrical power system of Shinen-2 uses a dual-bus system with duplicate components, as described in the previous section. A block diagram of the dual electrical power system of Shinen-2 is shown in Fig. 6. This redundant system aims to have an independent power line for each communication line. Thus, EPS 1 provides power to the main communication line (COM-1) that includes the beacon transmitter (TX-Beacon), the OBC-1 and the main payload (PL-1). EPS 2 provides power to the deep-space communication (COM-2) that is itself a technology demonstration payload (PL-2). Both power systems include Solar Array Panels (SAPs) as power source (PS), Maximum Power Point Tracking (MPPT) as BCR, a Power Conditioning Module (PCM) and protections.
[Figure caption: Power distribution of the nanosatellite. EPS 2 has enough installed capacity to power only the OBC and COM-2 subsystems, while EPS 1 can power all subsystems. Hot redundancy is used by selecting DC-DC converters that support parallel connection.] 
Instead of using the dual-bus power system, the electrical power system of the nanosatellite uses a single-bus electrical power system. However, the components of EPS 2 are sized for minimum operating conditions (OBC and COM-1), as described in the previous section, and EPS 1 is sized for full operation (OBC, COM-1, COM-2, ADCS and PL). EPS 2 is called the secondary power system and provides less power than the solar arrays used in EPS 1 (the main power system). The operating modes relying on the secondary power system are designed to use only the essential subsystems and have a positive power budget. A block diagram of the EPS of the nanosatellite is shown in Fig. 7. The description of each component is presented in the following section. Hot redundancy can be implemented by careful selection of DC-DC converters that achieve stable voltage regulation and load sharing when operated in a parallel connection (Mishra 2019). SOLAR ARRAY CONFIGURATION The nanosatellite is a 3U CubeSat with six sides on which the solar panels are body-mounted, as shown in Fig. 5a. In this case, the panels on the 3U faces of the satellite are the power source for the primary power system (EPS 1), while the solar panels on the 1U faces are connected to the secondary power system (EPS 2). As described before, the secondary power system has less installed power, which is sufficient only for minimum operation of the satellite. The micro deep-space probe has 13 sides with solar arrays, 7 sides for EPS 1 and 6 sides for EPS 2. Thus, both power systems have almost the same amount of installed power. In the nanosatellite, the solar panels are composed of multijunction solar cells with an efficiency of 30%, an open circuit voltage of 2.7 V, and a short circuit current of 0.520 A (Azur Space 2016). Each solar panel of the primary system consists of 6 solar cells connected in series to obtain a voltage of 16.2 V with a short circuit current of 0.520 A and a maximum power of 7.2 W. The two solar panels of the secondary system are composed of 2 solar cells in series. Therefore, their open circuit voltage is 5.4 V, the short circuit current is 0.520 A and the maximum power is 2.4 W. In contrast to the nanosatellite, the solar array in the micro deep-space probe Shinen-2 uses silicon solar cells with an efficiency of 17%, an open circuit voltage of 0.632 V and a short circuit current of 1.12 A. The solar array consists of five such solar cells connected in series and generates at most 2.78 W. In Shinen-2, 11 solar arrays are installed on seven sides as the power source of EPS 1 (30.58 W), while EPS 2 consists of 10 arrays (27.8 W) distributed on six sides. In brief, the nanosatellite has one kind of array of solar cells for each subsystem, i.e., the solar arrays are in a dual-bus configuration because they are not connected. The installed power in the primary EPS is 21.6 W, provided by three solar arrays, and the secondary EPS has an installed power of 4.8 W with two arrays. The micro deep-space probe has only one kind of array, which is used for the two subsystems. A summary of the solar array configuration of both spacecraft is shown in Table 1. BATTERY CONFIGURATION The same battery cell, lithium-ion, is used in both spacecraft because this kind of cell has become the preferred choice for most small satellites. Thus there is enough space heritage to rely on this energy storage technology (Chin et al. 2018; Navarathinam et al. 
2011). These cells have the following characteristics: a nominal voltage of 3.7 V with a capacity of 3200 mAh (Sanyo Energy 2012). Even though both spacecraft use the same battery cells, the battery array is different for each spacecraft and for each subsystem. The battery array configuration is summarized in Table 2. POWER REGULATOR The nanosatellite uses MPPT in the main power system to obtain the maximum power from the solar panels by using the integrated circuit LT3652, which implements a constant voltage (CV) MPPT technique (Brito et al. 2013). In addition, the LT3652 is a battery management chip that regulates and protects the batteries during charging. The panels on opposite faces of the satellite are connected to the same MPPT because they do not receive solar radiation at the same time. The secondary system does not use the MPPT technique because the solar panels in the secondary system have a maximum voltage of 5 V and the DET connection is the most appropriate for low-voltage solar panels (Erb et al. 2011). The micro deep-space probe uses MPPT for each solar array. This arrangement is more effective in this case because the solar arrays will receive solar radiation at different angles. Thus they will have different maximum power points. The SPV1040 IC was used to track maximum power. This IC also detects the voltage of the battery to protect it during charging. Table 3 summarizes the characteristics of the two ICs. POWER CONDITIONING AND DISTRIBUTION UNIT In the nanosatellite, the primary PCMs consist of two buck DC-DC converters to generate 5 and 3.3 V (the IC used was the TPS62143). These converters have a maximum efficiency of 90% and an input voltage range from 3 to 17 V. The buck topology was selected because the input voltage of the converters is always above the output voltage. The input voltage is 13.48 V if the solar panels are illuminated and 7.2 V (the battery voltage) in eclipse. The secondary PCMs are composed of two buck-boost DC-DC converters to generate the same voltages of 5 and 3.3 V as the primary PCMs. The buck-boost topology was selected because the input voltage range for the converters is from 3.2 to 4.2 V, corresponding to the secondary battery voltage. In the case of Shinen-2, the two PCUs use the same kind of PCM. The main PCM consists of boost converters to generate 5.0 V from the battery bus, which can vary from 2.8 to 4.2 V. This conversion is implemented using the LT1370 IC, which is a high-frequency switching regulator with a minimum voltage of 2.7 V and a maximum current of 6 A. In addition, the LT1316 is used to provide 5.0 V to digital circuits. This is a boost-type switching regulator that provides a maximum current of 0.5 A. RESULTS OF IMPLEMENTATION AND MEASUREMENTS The array of six triple-junction solar cells, used in the nanosatellite case, is shown in Fig. 8a. The percentage of the area covered by the solar cells is 60%. The hexagonal solar panel of Shinen-2 is shown in Fig. 8b. This solar panel consists of three primary arrays of five silicon solar cells each. The percentage of the array area covered by solar cells is 70%. The board of the electrical power system of the nanosatellite is shown in Fig. 
9. Both primary and secondary systems are included on the same PCB, and a PC/104-compatible connector is used as the only interface. In this case, with redundancy on the same PCB, a proper routing and isolation methodology is followed to avoid failure propagation between the circuits. A functional test was completed to verify the operation of the power conditioning unit and measure the converters' efficiency in obtaining 5.0 V and 3.3 V. The results of these efficiency tests are shown in Fig. 10. The maximum efficiency was 93.87% for the 5.0 V regulator (TPS62143) and 87.4% for the 3.3 V regulator (TPS62142). These efficiencies were consistent with the efficiencies of around 90% specified by the manufacturer. The printed circuit board of the power conditioning unit of Shinen-2 is shown in Fig. 11. In addition to the power conditioning modules, this board includes a microcontroller, and voltage and current sensors used to acquire housekeeping data. The results of efficiency tests of the DC-DC converter (LT1370), when it operates at a battery voltage of 3.7 V to generate the regulated bus at 5 V, are shown in Fig. 12. Even though Shinen-2 was able to communicate until it reached a distance of 2.3 million km from Earth, the telemetry data about the power systems were only analyzed up to 700,000 km because beyond this range the signal was weak and difficult to decode. This is about twice the distance from Earth to the Moon (Kuroiwa et al. 2016). Even within this distance, it was difficult to reliably decode all transmitted telemetry, which renders the usable data scarce. The histories of the Shinen-2 EPS 1 battery voltage are shown in Fig. 13. The data are plotted against the distance between Shinen-2 and the Earth. The operating voltage varied between 3.88 and 4.06 V, which is similar to the battery of EPS 2. Figure 14 shows the solar array current telemetry; the solar array of EPS 1 was located on a square face where only one array could be installed, and the solar arrays of EPS 2 were located on a hexagonal face where three arrays were installed (Fig. 8b). It can be observed that 0.8 A and 2.4 A were obtained for EPS 1 and EPS 2, respectively; these values are close to the current at maximum power of a single array (1.0 A). DISCUSSION OF FAILURE RATE The nanosatellite and the deep-space probe use single- and dual-bus power systems, respectively. These are variations of the simplest possible architecture of an electrical power system with no redundancy. This section presents an analysis of these architectures focused on the failure probability. Consider components A and B, e.g. overcurrent protection and regulation ICs, with failure probabilities P_A and P_B, respectively. Different arrangements of these components will result in different failure probabilities of the complete A-B assembly, P_F. Note that the analysis presented here applies directly to the EPS architectures presented before, even though the actual equations may need to be written for more than only two components. Limiting the analysis to only two components makes the results more succinct. It is therefore favored over an in-depth failure probability analysis of the presented exemplar EPS architectures. Different ways in which components A and B can be arranged are schematically shown in Fig. 15. For the sake of clarity, only dual redundancy is shown, even though more than two components could be placed in parallel to further reduce P_F. Also note that the P_F analyzed here is the probability of failure, i.e. the complement of reliability. 
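To make the comparison concrete, the three arrangements can also be evaluated numerically. The sketch below implements the failure probabilities given in Eqs. 2-4 below for the single-string, dual-redundant and fully cross-strapped arrangements of Fig. 15; the component failure probabilities used here are assumed values chosen only to illustrate the ordering, not estimates for any real hardware.

# Failure probabilities of the three A-B arrangements in Fig. 15 (Eqs. 2-4 below).
# Component failure probabilities are assumed, for illustration only.
def series(pa: float, pb: float) -> float:
    """Single string (Fig. 15c): the assembly fails if A or B fails."""
    return pa + pb - pa * pb

def dual_redundant(pa1: float, pb1: float, pa2: float, pb2: float) -> float:
    """Two independent strings (Fig. 15a): both strings must fail."""
    return series(pa1, pb1) * series(pa2, pb2)

def cross_strapped(pa1: float, pa2: float, pb1: float, pb2: float) -> float:
    """Fully redundant (Fig. 15b): fails only if both A's or both B's fail."""
    both_a, both_b = pa1 * pa2, pb1 * pb2
    return both_a + both_b - both_a * both_b

if __name__ == "__main__":
    pa1 = pa2 = pb1 = pb2 = 0.05  # assumed identical component failure probabilities
    print("single string   P_F,c =", round(series(pa1, pb1), 6))
    print("dual redundant  P_F,a =", round(dual_redundant(pa1, pb1, pa2, pb2), 6))
    print("cross-strapped  P_F,b =", round(cross_strapped(pa1, pa2, pb1, pb2), 6))

With four identical components at an assumed failure probability of 0.05, the sketch gives P_F,c ≈ 0.0975, P_F,a ≈ 0.0095 and P_F,b ≈ 0.0050, i.e. exactly the ordering P_F,b < P_F,a < P_F,c derived below.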
For the simplest, single-string arrangement from Fig. 15c, the failure probability is the highest of the three presented in Fig. 15 (DeGroot and Schervish 2014) (Eq. 2):

P_F,c = P_A + P_B - P_A P_B    (2)

However, such a system is the least complicated and thus the quickest to test and implement. Moreover, it requires the least PCB space, which might be an important design consideration for satellites with high volume constraints such as CubeSats. Thus, such a design might be favored in schedule-constrained projects with limited resources, e.g. educational nanosatellite projects. The failure probability of the fully redundant (cross-strapped) system shown in Fig. 15b is given as the failure of both A components or both B components (Eq. 3):

P_F,b = P_A,1 P_A,2 + P_B,1 P_B,2 - P_A,1 P_A,2 P_B,1 P_B,2    (3)

The dual redundant power system shown in Fig. 15a offers a middle ground between the fully redundant and single-string systems. Its failure probability is given as (Eq. 4):

P_F,a = (P_A,1 + P_B,1 - P_A,1 P_B,1)(P_A,2 + P_B,2 - P_A,2 P_B,2)    (4)

By noting that P_A,1, P_A,2, P_B,1 and P_B,2 are always less than or equal to 1.0, one can observe that P_F,b < P_F,a < P_F,c, i.e. that the fully cross-strapped system has the lowest failure probability of all three. However, this assumes that the connections between components A and B have the same failure probability as in the case of the single-string system. This might not be the case if the connections are realized with harness, and are manufactured and tested by inexperienced students, for example (Shirasaka et al. 2010). Depending on the complexity of the circuits that components A and B require, the complexity of the complete A-B system in the fully cross-strapped configuration might reach a level where design flaws will be difficult to identify in a timely fashion, thus leading to an on-orbit failure or missing the launch window. Even though P_F,a is theoretically higher than the failure probability of the fully redundant system, P_F,b, the reduced system complexity might result in a lower P_F in practice due to design errors and insufficient testing (Shirasaka et al. 2010). Still, the P_F,a of the dual-bus system is less than the P_F,c of the single-string system. Moreover, if the secondary power system is scaled to only provide the power necessary to satisfy the primary mission objectives, as in the discussed case of the nanosatellite, the increase in reliability is associated with modest mass and size penalties, as opposed to implementing full redundancy. An extreme case of this design approach is Shinen-2 which, as shown in Fig. 6, consists of two single-string systems, one of which is designed to operate a communications subsystem. This reduced the system complexity to the minimum, while lowering the probability of failure of the telecommunications subsystem as a whole, i.e. the failure of both communication lines. CONCLUSIONS The design and implementation of two electrical power systems were presented and illustrated using examples of a nanosatellite and a micro deep-space probe. Both systems had independent solar array inputs and independent battery arrays. Thus the power conditioning unit was split in two separate units in both cases. The efficiency of the COTS DC-DC converters used as power conditioning modules was determined by experiment. In addition, telemetry data showed the battery voltage and solar panel currents of the micro deep-space probe. 
These two examples were cases of single-bus and dual-bus electrical power systems. On the one hand, for the case of the nanosatellite, the two PCMs were rated at different power outputs feeding a single bus, making the secondary system a backup unit that enabled minimum functionality. On the other hand, the electrical power system of the micro deep-space probe was split in two almost identical units (EPS 1 and EPS 2). Each EPS had an independent power bus and, therefore, Shinen-2 operated using a dual-bus electrical power system that had two communication subsystems powered by different power buses. The advantages of using various configurations of power buses on small satellites were discussed in the context of the mass-efficiency-development-random-failure trade-off. It was shown, based on the two satellite examples above, that using a dual power bus can offer increased reliability at a modest increase in mass, volume and complexity, which is also proportional to development risk. Therefore, it is recommended to evaluate the dual-bus power architecture when choosing the EPS architecture for small satellites. A new satellite mission is being operated by the Kyushu Institute of Technology to continue the evaluation of the redundant electrical power systems in a sun-synchronous orbit. Sensors of solar panel temperatures, sun sensors for attitude determination, and current and voltage measurements at more locations in the EPS will be included to better understand the dual-bus electrical power system behavior.
Figure 1. (a) Architecture of a simple electrical power system on a small spacecraft and its interfaces with other subsystems. (b) Block diagram of a PCM indicating functional components.
Figure 2. Probability of failure and cost versus the number of units in parallel. The decrease in probability of failure for units placed in parallel is minimal for more than three units; however, the cost increases significantly.
Figure 3. (a) Architecture of a single-bus electrical power system with duplicated EPS. (b) A failure in EPS 1 will prevent the operation of PL-1 and ADCS, indicated by the red marks. EPS 2 can be designed to support essential elements for minimum satellite success, i.e. OBC, COM and PL-2.
Figure 5. Example of two small spacecraft with body-mounted solar arrays: (a) nanosatellite following the three-unit CubeSat dimensions; (b) micro deep-space probe Shinen-2.
Figure 8. (a) Solar panel of the nanosatellite; (b) solar panel of the micro deep-space probe.
Figure 9. PCB including the two electrical power systems of the nanosatellite.
Figure 10. Converter efficiency with input voltage of 7.2 V for output voltage of 5 V (TPS62143) and output voltage of 3.3 V (TPS62142).
Figure 11.
Figure 12. Converter efficiency with input voltage of 3.7 V for output voltage of 5 V (LT1370).
Figure 14. Solar array current of one array on the square face (A) and three arrays on the hexagonal face (B) of the Shinen-2 micro deep-space probe, obtained from spacecraft telemetry.
Figure 15. Schematic representations of the two components A and B arranged in architectures with varying levels of redundancy: (a) dual redundant power system, (b) fully redundant (cross-strapped) power system and (c) single-string system.
Table 1. Specification of the solar array configuration for each EPS of the nanosatellite and micro deep-space probe.
Table 2. Specification of the battery configuration for each EPS of the nanosatellite and micro deep-space probe. 
Table 3. Summary characteristics of the MPPT integrated circuits used in the nanosatellite and the micro deep-space probe.
8,369.2
2019-10-10T00:00:00.000
[ "Engineering", "Physics" ]
Biomaterials for Pelvic Floor Reconstructive Surgery: How Can We Do Better? Stress urinary incontinence (SUI) and pelvic organ prolapse (POP) are major health issues that detrimentally impact the quality of life of millions of women worldwide. Surgical repair is an effective and durable treatment for both conditions. Over the past two decades there has been a trend to reinforce repairs with synthetic and biological materials. The determinants of surgical outcome are many, encompassing the physical and mechanical properties of the material used and individual immune responses, as well as surgical and constitutional factors. Of the current biomaterials in use, none represents an ideal. Biomaterials that induce a limited inflammatory response followed by constructive remodelling appear to have more long-term success than biomaterials that induce chronic inflammation, fibrosis and encapsulation. In this review we draw upon published animal and human studies to characterize the changes biomaterials undergo after implantation and the typical host responses, placing these in the context of clinical outcomes. Introduction Stress urinary incontinence (SUI) and pelvic organ prolapse (POP) are important health problems that cause a sizable personal, societal, and economic burden [1]. SUI is defined as the "involuntary leakage of urine on exertion, sneezing or coughing" [2,3]. POP is "the descent of one or more of the anterior vaginal wall, posterior vaginal wall, the uterus (cervix), or the apex of the vagina (vaginal vault or cuff scar after hysterectomy)" [4]. SUI and POP are thought to share a common pathogenesis, weakening of the muscular and connective tissues of the pelvic floor. Multiple etiological factors have been implicated including ageing, obesity, pregnancy, and childbirth, as well as genetic factors and menopause [1, 5-7]. Following failure of conservative management including physiotherapy, corrective surgery is considered to be the most effective and durable treatment for both SUI and POP. Most of the older surgical techniques relied upon suturing the local tissues to the back of the pubic bone (colposuspension) or using an autologous fascial sling. More recently there has been a growing trend to reinforce repairs using both synthetic and biological materials. This practice has been adapted from hernia surgery, where there is established evidence that repairs reinforced with synthetic mesh provide superior outcomes. Synthetic meshes were popularized in pelvic floor surgery for SUI following the work of Ulmsten and Petros [8]. The mid-urethral tape (MUT) involved a minimally invasive approach to implant a thin synthetic mesh underneath the mid-urethral point. Early reports of cure rates in the range of 80-90% further propelled the uptake of this technology. Following the early success of MUT and a randomized controlled trial against colposuspension, synthetic mesh for SUI was soon introduced [9]. This was not based on long-term supportive data but rather a grandfather clause which permitted introduction of a new material based on its similarity to an index product, which was used for hernia repair, namely, polypropylene mesh. A long-term follow-up, the Ward and Hilton [9] study, demonstrated a 4% mesh exposure rate. Subsequently mesh was introduced for the treatment of pelvic organ prolapse (POP), and this has resulted in a significant problem with mesh exposure, which has led to enormous medico-legal problems, particularly in the United States of America. 
The following decade has seen a rapid rise in reports of complications related to mesh used for POP, but it is clearly important to differentiate exposure (erosion) of mesh used for SUI from that of mesh used for POP. Thus reports of debilitating complications of vaginal mesh implantation have emerged, including vaginal wall erosion (0-25.6%), chronic pain (0-5.5%), and sexual problems (1.9-17%) [10]. Although it can be debated whether these rates are high, the complications are often difficult to treat, requiring further hospital visits, further tests, and further reconstructive surgery. The situation has not escaped the attention of medical regulatory bodies such as the FDA, which have issued statements warning patients and surgeons of the potential dangers of mesh use for POP [11,12]. More recently there has been a wave of class-action lawsuits raised against device manufacturers by patients who have suffered mesh complications, such that several major manufacturers have withdrawn products from the market. Biological grafts are alternatives to synthetic mesh. The most commonly used material, autologous fascia, has been used for over 100 years in the treatment of SUI with good efficacy. The main drawback, however, is the need to harvest the graft from a donor site (fascia lata from the thigh or rectus fascia from the abdominal wall) and the potential morbidity (e.g., wound infection, scar, nerve injury, and hernia) [13]. There is a limitation on how much graft can be harvested, which precludes its use in POP, which is associated with relatively large fascial defects. This can be avoided by using grafts derived from cadavers or, alternatively, animal-derived collagen matrices (e.g., porcine dermis, porcine small intestine, and bovine dermis). However, these materials require extensive processing, including decellularization, sterilization, and cross-linking, to resist degradation [14]. While this renders materials nonimmunogenic, it can impact their biomechanical properties [15]. There is also the risk of viral or prion transmission [13]. Clinical studies are limited; however, clinical experience is that all of the materials appear to be associated with graft failure in the medium term due to the body's response to the material, leading to its encapsulation and subsequent degradation with limited remodeling. It is likely that biomaterials are subject to multifactorial problems because of (1) their physical properties (e.g., porosity and degradability), (2) their mechanical properties (e.g., stiffness and strength), or (3) the nature of the patient's immune response to the implanted biomaterials. In addition, surgical and patient-specific factors (e.g., individual anatomy and comorbidities) are likely to play a role, though these are not modifiable by material design. To provide a simple context for this review we depict the current hypotheses of how implant failure might occur through several routes in cartoon form in Figure 1, where the implanted material is shown conceptually as a hammock attached to two trees (the supporting structures of the pelvic floor). In the case of successful implantation, it is currently thought that the material induces an acute inflammatory response, which leads to constructive remodeling and material integration (Figure 1(d)). The aim of this review is to characterize these changes and responses, from the available human and animal studies, and relate them to clinical outcomes, thereby guiding the design of novel materials for this challenging clinical application. 
Methods The MEDLINE database was searched for articles describing studies investigating the in vivo response to biomaterials used routinely in pelvic floor surgery or that have been studied in the context of clinical trials. The search was limited to the years 1990 to 2013. The following search terms were used: "pelvis," "pelvic floor," "vagina," "in vivo," "in vitro," "biocompatibility," "prolapse," "incontinence," "biomaterial," "sling," "mesh," "polypropylene," "autografts," "allografts," and "xenografts." Abstracts were screened for relevance by 2 reviewers before full articles were retrieved. Articles were included if they described the changes in physical or biomechanical properties of materials after implantation in animals or humans or the histological features of the host response to the implanted material. Implantation sites were restricted to subcutaneous, intravaginal, or abdominal muscle locations. Results In total 10 studies assessing autologous materials, 11 assessing allograft materials, 24 assessing xenografts, and 24 assessing polypropylene meshes compared with other synthetic meshes were included. These studies are summarized in Tables 2, 3, 4, and 5. Biological Materials. Autologous Materials. Autologous grafts harvested from the rectus fascia and fascia lata have long been used in SUI surgery. A major advantage of autografts over synthetic materials is that erosion is almost unheard of [16]. A possible disadvantage of using autografts is that the connective tissues of patients with SUI may be inherently weak, predisposing them to failure. Nevertheless, the overall long-term outcomes with autografts are largely excellent, with reported rates of cure generally over 90% [17,18]. Biomechanical Properties of Autologous Materials. Four studies describing changes in mechanical properties of autologous materials over a 12-16-week period were found. Uniaxial stress-strain testing of autologous rectus fascia before and after implantation in the rabbit vagina and anterior abdominal wall showed no significant decrease of ultimate tensile strength (UTS) (the maximum stress a material can take before failing) and Young's modulus (YM) (material stiffness) at twelve weeks after implantation [19,20]. However, there was a reduction in the surface area of the grafts by 50%, suggesting that significant degradation had occurred [19,20]. A comparison of the mechanical strength of autologous materials used for slings was carried out by Choe et al. [21]. They harvested dermis, rectus fascia, and vaginal mucosa from 20 women undergoing vaginal prolapse surgery, and they tested displacement and maximum load with the Instron tensiometer. This study showed that fascia lata had the highest mean maximum load to failure (217 N), followed by human dermis (122 N), rectus fascia, and vaginal mucosa (both 42 N) in women undergoing surgeries for various reasons [21]. Autologous rectus fascia showed no significant decrease in tear resistance using the trouser tear test after 4 months of subcutaneous implantation in rodents [22]. In summary, in all four studies there was agreement that the mechanical properties did not change significantly over a 12- to 16-week duration [19-22]. Hilger and colleagues assessed human cadaveric skin and autologous fascia after implantation in the abdominal and vaginal walls of New Zealand white rabbits. Materials were harvested at 6 and 12 weeks. 
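Before turning to the histological findings, the two tensile metrics used above can be made concrete with a short computational sketch. The specimen dimensions and data points below are invented purely for illustration and do not correspond to any of the cited studies; Young's modulus is estimated here as the slope of the initial, assumed-linear part of the stress-strain curve, which is one common convention among several.

# Illustrative computation of ultimate tensile strength (UTS) and Young's
# modulus from uniaxial test data; all numbers are invented for illustration.
def uts_mpa(forces_n, width_mm: float, thickness_mm: float) -> float:
    """UTS = maximum force divided by the original cross-sectional area (N/mm^2 == MPa)."""
    area_mm2 = width_mm * thickness_mm
    return max(forces_n) / area_mm2

def youngs_modulus_mpa(stresses_mpa, strains, linear_points: int = 3) -> float:
    """Slope of the initial (assumed linear) part of the stress-strain curve."""
    delta_stress = stresses_mpa[linear_points - 1] - stresses_mpa[0]
    delta_strain = strains[linear_points - 1] - strains[0]
    return delta_stress / delta_strain

if __name__ == "__main__":
    forces = [0.0, 2.0, 4.0, 5.5, 6.2, 5.0]          # N, invented test record
    strains = [0.00, 0.05, 0.10, 0.15, 0.20, 0.25]   # dimensionless
    width_mm, thickness_mm = 5.0, 1.0                # invented specimen dimensions
    stresses = [f / (width_mm * thickness_mm) for f in forces]
    print(f"UTS = {uts_mpa(forces, width_mm, thickness_mm):.2f} MPa")
    print(f"E   = {youngs_modulus_mpa(stresses, strains):.2f} MPa")

In practice, values such as the MPa and newton figures quoted in this section come from exactly this kind of force, cross-sectional area and strain bookkeeping.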
Histological analysis in the same study demonstrated that autologous fascia promoted a relatively minimal inflammatory response and neovascularization but moderate collagen infiltration when compared to fenestrated porcine dermis and porcine collagen-coated polypropylene mesh [20]. Jeong and coworkers described similar results, noting a minimal inflammatory response and neovascularization in rabbits when autologous fascia was implanted under the eyelid for up to 8 weeks [24]. Two studies assessed histological changes in paravaginal tissue after the implantation of autologous fascial slings for SUI in women. In the study by FitzGerald et al., biopsies of the sling were taken from 5 patients requiring revision surgery due to persistent incontinence. The time since the initial surgery ranged from 3 weeks to 4 years. The grafts explanted after up to 8 weeks showed moderate uniform fibroblast infiltration and neovascularization. Collagen remodelling was evident in parts of the graft biopsied at 4 years, with no evidence of chronic inflammation [23]. Woodruff and colleagues performed a similar study in 24 patients undergoing sling revision for poor efficacy (2 patients), urinary retention (9), and sling obstruction (13), 2-34 months after implantation [27]. All grafts showed moderate uniform fibroblast infiltration and moderate collagen fibers. All grafts showed moderate degradation. There was no evidence of encapsulation. In summary, these eight studies suggest that when autologous fascia is implanted there is a minimal to moderate inflammatory response, a moderate degree of collagen production, and a suggestion that grafts undergo a degree of remodelling over the long term. Allografts. Allografts used in pelvic floor reconstruction usually consist of fascia. The donors are screened for infectious diseases before the grafts undergo cleaning, freeze-drying, and gamma irradiation to eradicate any infective or immunogenic material. A concern with these grafts is that they are often donated by the elderly, who have an age-related weakening of connective tissues [30]; additionally, processing techniques such as freeze-drying and solvent dehydration may reduce the tensile strength [31]. Cadaveric grafts are advantageous in that they avoid donor site complications. In terms of efficacy, results are mixed. Some have shown cadaveric fascia to demonstrate similar subjective cure rates to autologous fascia at around 90% at 2 years [32]. However, others have shown that on urodynamic testing 42% of cadaveric graft patients had SUI whereas no patients with autologous grafts had SUI [33]. Biomechanical Properties of Allografts. Five studies investigated the change in mechanical properties after implantation of allografts in animals. All these studies utilized uniaxial stress-strain testing. The time after which samples were explanted ranged from 60 days to 12 weeks [20, 22, 34-36]. After implanting human cadaveric dermis in the rabbit vagina, Hilger et al. reported a decrease in ultimate strength of 86.6% at 12 weeks; in comparison, autologous fascia lost only 28.6% [20]. Conversely, Rice and colleagues found an increase in tensile strength of cadaveric dermis (AlloDerm) from 0.142 to 0.226 MPa, increasing by about 80% of its initial strength, 60 days following subcutaneous implantation [36]. Walter et al. reported that, after 12 weeks following implantation of cadaveric fascia lata in the rabbit vagina, the tensile strength decreased by approximately 90% [34]. Spiess et al. 
implanted human cadaveric fascia lata subcutaneously on the abdominal wall of 20 rats randomized into 2 survival groups at 6 and 12 weeks. They found no significant change in tensile strength (0.167 kg at week 6 and 0.185 kg at week 12) [35]. Kim et al., similarly, implanted human cadaveric fascia in 20 rats, randomized into 2 survival groups of 2 and 4 months. They found no significant difference in fracture toughness of human cadaveric fascia before and after implantation (from 2120 to 1145 J/m², P = 0.09) [22]. In summary, the available studies show disparate results with respect to the changes in mechanical properties of allografts following implantation. This may be attributable to the heterogeneity in the type of allografts used, the animals studied, the sites of implantation, and the assessment at different time points. Human cadaveric dermis and cadaveric fascia have been found to be well integrated onto the abdominal wall [37,40,41] and rectus muscle [36,38] in different animals, including rats, rabbits, and pigs, as noted by moderate fibroblast infiltration, new collagen production, and neovascularization where materials were implanted from 2 days up to 62 weeks. Human cadaveric dermis, after 12 weeks of implantation, was similarly well integrated into vaginal tissues of rabbits. However, it appeared highly fragmented, suggesting significant degradation [20]. Krambeck et al. also describe a faster degradation of cadaveric fascia implanted subcutaneously on the abdominal wall of rabbits with a fascial defect for 6 and 12 weeks compared to polypropylene or autologous fascia [26]. VandeVord and colleagues also found moderate cell infiltration and angiogenesis at 12 weeks following the insertion of human cadaveric dermis and cadaveric fascia slings under the bladder neck of rats; however, there was moderate encapsulation after implantation [39]. Finally, in the study by Woodruff et al. in 5 women who received human cadaveric dermis grafts, biopsies 2-65 months after implantation showed significant graft degradation with residual areas of graft appearing acellular and encapsulated [27]. In summary, some studies suggest that allografts demonstrate infiltration by host cells, new collagen production, and neovascularization whilst other studies suggest that a variable degree of graft degradation occurs along with encapsulation in the long term. There is a degree of agreement that allograft induces an acute inflammatory response, as inflammatory infiltrates have been found populating the grafts. Xenografts. A number of grafts from animals, mainly porcine and bovine, have been used in pelvic floor surgery. These materials undergo extensive processing after harvesting to decellularize them and render them non-immunogenic. Additionally, there are FDA regulations on animal source and vaccination status which must be complied with [42]. Porcine dermis may be artificially cross-linked using hexamethylene diisocyanate to make it more resistant to enzymatic digestion [43]. Clinical studies showed lower continence rates for porcine dermis (approx. 80%) and higher reoperation rates than for synthetic tape or autologous fascia [44]. Porcine small intestine submucosa (SIS) has shown cure rates from 79 to 93% at 2- and 4-year follow-up, respectively [45,46]. However, one study has raised concerns that SIS may not be strictly acellular and may contain porcine DNA [47]. Biomechanical Properties of Xenografts. 
Nine studies investigated the mechanical properties of xenografts before and after implantation. All these studies assessed either porcine dermal collagen matrix, both cross-linked and non-cross-linked, or porcine small intestine submucosa. Hilger et al. assessed non-cross-linked porcine dermis xenografts implanted on the abdominal wall and vaginal wall of rabbits. After 12 weeks, half of the grafts implanted in the vaginal wall were absent. The other half, as well as grafts implanted into the abdominal wall, showed an average reduction of 84.1% in ultimate strength [20]. Another study assessed the long-term mechanical integrity of cross-linked porcine dermis. Nine months after implantation in the abdominal and vaginal walls, grafts had degraded by 36% and 46%, respectively. When subjected to mechanical testing, non-degraded graft fragments showed similar strength compared to baseline values whilst degraded fragments decreased by more than 50% [48]. Liu and colleagues implanted SIS and porcine dermal collagen matrix in rats with surgically created abdominal wall defects. The maximum load (at failure) at baseline for SIS and dermal collagen matrix was 22.81 N and 43.16 N, respectively. Following 12 weeks of implantation, there was no significant change in the maximum load of cross-linked porcine dermal collagen matrix and SIS [49]. Similarly, other workers observed an increase in the ultimate tensile strength of SIS after 90 days of implantation, from baseline values of 7.5 and 9.8 N/cm², respectively, to 19.56 and 13.3 N/cm². These results were averages of 48 implants in rats [50]. Rice et al. also found an increase in tensile strength of SIS after 60 days of implantation in a rat abdominal wall defect, from 0.142 MPa at day 0 up to 0.226 MPa after 60 days of implantation [36]. Similarly, Zhang et al. implanted SIS in the abdominal wall of rats and found increased strength for SIS from 0.35 MPa to 0.41 MPa after 4 weeks [51]. Badylak et al. repaired surgically created abdominal wall defects in dogs with SIS (8 × 12 cm); they performed serial ball burst strength tests after 1, 4, 7, and 10 days and then at 1, 3, 6, and 24 months [52]. There was an initial decrease in ball burst strength from 73.37 pounds to 39.97 pounds by day 10. After day 10, the strength began to increase, and after 2 years there was an increase to 157.20 pounds in burst strength. Jenkins et al. showed an increase in strength of cross-linked porcine matrices after 6 months of implantation in the preperitoneal area, from 0.07 ± 0.01 N up to 22.36 ± 3.3 N [53]. In contrast, Ko and colleagues found no significant difference in ultimate tensile strength of SIS after 4 months of implantation in a porcine wall defect, with values ranging from 41.3 to 74.8 N/cm² [54]. In summary, it appears that non-cross-linked porcine dermal collagen matrices are degraded rapidly (within 3 months) and lose most of their mechanical integrity within this period. By contrast, cross-linked porcine dermal collagen matrix is more resistant to degradation and maintains its mechanical properties for at least 3 months, whereas SIS appears to increase in strength for as long as 2 years after implantation. Hilger et al. and Pierce et al. found minimal neovascularization and collagen ingrowth in porcine dermal xenografts [20,65]. Both studies agreed that the degradation of porcine dermis is higher when the inflammatory response is high, and that it may accelerate this degradation process. 
Hilger et al. and Pierce et al. also reported encapsulated graft fragments, which have likewise been found in many studies with different species, including rats [39,62], rabbits [65], pigs [40], primates [64], and humans [27]. In contrast, non-cross-linked SIS leads to high collagen ingrowth with a moderate degree of remodeling and orientation and high neovascularization [29, 36, 39, 49-51, 54, 55, 57, 63]. On the other hand, many studies agree that SIS undergoes very rapid degradation and is replaced by the host tissue [49,51,52,55,58,66,67]. Only two studies reported an absence of host fibroblast infiltration and fibrotic tissue penetration, without neovascularization, for SIS implanted in rats [62] and rabbits [26]. In humans, Cole et al. performed revision surgery on a patient who had developed a bladder outlet obstruction after SIS implantation and found that the implant had been encapsulated [60]. Nevertheless, other investigators, at 12 and 48 months, respectively, found that the SIS was replaced by native tissue in humans [56,61]. In summary, the available studies agree that the degree of cross-linkage affects the rate of degradation and the degree of the inflammatory response of the host. Studies on cross-linked xenografts agree that cross-linked collagenous matrices induce little cell infiltration; hence there is limited collagen remodeling and graft degradation. In non-cross-linked xenografts, cell infiltration was greater, with a faster degradation rate and more collagen production. Polypropylene Mesh. There is a range of synthetic polypropylene meshes that have been used. These are summarized in Table 1, where they are classified as type 1, 2, 3, or 4 according to their pore size, where type 1 is macroporous (>75 μm), type 2 is microporous (<10 μm), type 3 is macroporous with microporous or multifilament components, and type 4 is nanoporous (<1 μm). Thus a wide range of synthetic materials have been investigated for use in the treatment of SUI. These materials offer several advantages, including lack of transmission of infectious diseases and ease of availability, as well as sustained tensile strength due to their nondegradable nature [68]. Mesh materials have been classified into 4 groups on the basis of porosity (microporous or macroporous) and filamentous structure (monofilament or multifilament) [69]. The initial clinical experience with type II (microporous/multifilament fibers, e.g., expanded PTFE) and type III (macroporous and microporous/multifilament fibers, e.g., Mersilene) mid-urethral meshes was largely negative, with excision rates of up to 30% for expanded PTFE [70] and erosion rates of 17% for Mersilene (polyester) [71]. A greater pore size is thought to be advantageous as it allows the admittance of immune cells and greater collagen ingrowth into the construct [13]. This is thought to reduce the risk of mesh infection and to accelerate and enhance host tissue integration. Monofilament meshes are thought to reduce the risk of infection in comparison to multifilament meshes. The theoretical concern with the latter is that bacteria may colonize the 10 μm subspaces between fibers, which are inaccessible to the larger host immune cells (9-20 μm) [72]. Today a type I polypropylene mid-urethral mesh that is macroporous and monofilament is most commonly used [73], with cure rates for SUI of >90% at 5 years. Biomechanical Properties of Polypropylene. Seven studies investigated the mechanical properties of polypropylene meshes, with implantation times ranging from two weeks in animal models up to two years.
Animal models used included the rat abdominal wall [35,74], pig preperitoneal implantation [75], rat rectus fascia [76], minipig hernia repair [77], and ewe abdominal and vaginal walls [78]. Melman et al. tested Bard Mesh, a knitted monofilament mesh made of high molecular weight polypropylene (HMWPP), and Ultrapro, a knitted macroporous composite mesh made of low molecular weight polypropylene (LMWPP) and poliglecaprone (Table 1). These were implanted in a minipig hernia repair model for up to 5 months. The maximal load at failure of the HMWPP mesh decreased from 59.3 N at 1 month to 36.0 N at 5 months, while that of the LMWPP mesh decreased from 61.5 N to 37.8 N at 5 months [77]. Long term studies were carried out by Zorn et al., where TVT and SPARC were compared to SIS in a rat abdominal wall defect for up to 12 months. Both TVT and SPARC are macroporous meshes made of polypropylene monofilaments. SPARC did not change its mechanical properties after 12 months of implantation (maximum load 0.453 kg at baseline and 0.497 kg at 12 months). By contrast, the maximum load for TVT decreased from 0.779 kg to 0.523 kg, and that for SIS decreased from 0.402 kg to 0.174 kg [74]. Bazi et al. showed that the mechanical properties of Gynecare TVT and Advantage, both macroporous polypropylene monofilament meshes, were similar to those of other meshes such as the IVS Tunneller, a multifilament polypropylene mesh, and SPARC. The lowest maximum load, 25.2 N, was recorded for TVT and the highest, 34.9 N, for Advantage, with no significant difference between them after 24 weeks of implantation in rat rectus fascia [76]. Other studies agree on these parameters: TVT was found to withstand the highest break load (0.740 kg), compared to 0.39 kg for fascia lata, after implantation in the rat abdominal wall for up to 12 weeks [35], and was reported to be less stiff than other synthetic materials used for meshes (0.23 N/mm compared to 6.83 N/mm for nylon) [79]. A recent study compared two sizes of mesh implanted in two different places in a sheep model. Gynemesh was cut in two sizes (50 × 50 mm and 35 × 35 mm) and implanted in 20 adult ewes, on the abdominal and vaginal walls, for periods of 60 and 90 days. Results showed that grafts of both dimensions implanted on the vaginal wall were stiffer than the ones implanted on the abdominal wall after a period of 90 days [78]. However, all of these studies agree that physical characteristics of the mesh, such as filament structure (monofilament or multifilament), porosity, and polymer molecular weight, strongly affect the mechanical performance of the implants in vivo. A very recent study by Manodoro et al. showed that 30% of the larger Gynemesh grafts (50 × 50 mm) implanted in ewes caused vaginal erosion and exposure after 90 days. The study also showed that 60% of the smaller Gynemesh meshes (35 × 35 mm) had a reduced surface area (i.e., had contracted) after 90 days of implantation [78]. Falconer et al. reported a study on Prolene and Mersilene meshes in which biopsies were stained with Masson's trichrome; Mersilene was found to induce a higher inflammatory response compared to Prolene, which triggered a minimal inflammatory reaction [89]. In another study, autologous fascia lata implanted into the right vocal muscle of 14 rabbits randomized into 2 survival groups (30 and 60 days) showed no significant inflammatory reaction and no significant fibrosis or scarring. Pierce et al. reported a long term study comparing biological and synthetic grafts implanted in rabbits.
Polypropylene caused a milder inflammatory reaction, with better long term host tissue incorporation, compared to natural grafts [65]. Bazi et al. also evaluated biopsies on the basis of inflammatory infiltrate, fibrosis, mast cell presence, muscular infiltration, and collagen filling of the mesh, on an arbitrary scale described as low, moderate, or extensive, based on H&E, periodic acid-Schiff, and toluidine blue staining of tissue. They agreed that all of the materials (Advantage, IVS, SPARC, and TVT) induced inflammation and collagen production, with SPARC showing the mildest response and TVT the strongest inflammatory response [76]. Elmer et al. reported an increase in macrophage and mast cell counts and a mild but persistent foreign body response to polypropylene meshes [91]. This study is consistent with other reported investigations in which polypropylene meshes were invaded by both macrophages and leukocytes, signs of inflammation, resulting in collagen production [27,38,65,76,83,85]. Biomechanics. In general, when biological materials fail, this is due to enzymatic degradation after implantation, leading to a loss of mechanical support and weakening of the repair. This appears to apply particularly to the non-cross-linked xenogenic matrices. Chemical cross-linking appears to prevent this degradation and improve the mechanical outcomes. Unfortunately, there is a lack of clinical evidence on how these mechanical outcomes translate into patient outcomes. Autologous grafts are the most successful biological material used in contemporary practice, and the studies reviewed appear to support the long term mechanical integrity of these grafts. Nevertheless, they present several important limitations that are related to the need to harvest from a donor site. Use of cadaveric tissues avoids these limitations; however, their quality depends on the age and comorbidities of the donor, which may explain the mixed results in mechanical properties. This is consistent with the available clinical studies, which suggest that allografts have poorer cure rates than autologous grafts. We have found that polypropylene maintains its morphology and strength after implantation for up to 24 weeks [35,74,76]. However, there was evidence that stiffness increases [77,93]. This is consistent with durable cure rates, particularly in SUI surgery (there is still some question regarding the efficacy of transvaginal POP repair compared with native tissue repair). The major issue with polypropylene meshes is the associated serious complications, in particular vaginal or urinary tract exposure (up to 10-14%). There is some evidence that meshes with greater stiffness cause the surrounding tissue to weaken, an effect termed stress shielding [94]. This can be compared to the effect of metal implants on the surrounding bone after orthopedic surgery. This effect could lead to thinning of the surrounding vaginal tissues, predisposing them to erosion. Host Response. Biomaterials implanted into the body will always attract the attention of the immune system. With some materials there is a macrophage response of constructive remodeling, generally attributed to the M2 phenotype; this appears to be the case with some biological matrices, SIS in particular. With materials which the body cannot remodel or integrate, such as polypropylene meshes, the macrophage response is much more aggressive, with a dominant pro-inflammatory M1 phenotype [95,96].
It appears that a state of constant inflammation can be generated in some patients in response to some of these nondegradable materials. Constant inflammation leads to an upregulation of degradative enzymes; although these enzymes cannot degrade the material, they may damage the surrounding extracellular matrix and contribute to tissue thinning and mesh exposure. Moreover, perpetuation of the inflammatory response can also result in activated fibroblasts, which produce excessive collagen laid down in a disorganized fashion around the implant (i.e., fibrosis), encapsulating the material. A small amount of fibrosis is arguably advantageous to the repair in SUI, providing a stable backstop allowing urethral compression. However, excessive fibrosis may lead to mesh contraction, resulting in increased pull on the adjacent tissues and leading to complications such as voiding dysfunction, pain, and painful intercourse. In POP this excessive fibrotic response can lead to mesh exposure, which presents a major reconstructive surgical challenge, often necessitating repeat procedures with no guarantee of symptom resolution.
Tabulated study summaries (implant, model, and findings):
Levels of interleukin 2 and interleukin 6 were high straight after the operation but became normal after 2 months.
Wiedemann and Otto, 2004 [56]: biopsies taken from the implantation site of the SIS band under the vaginal mucosa from 3 patients during reoperation, at a mean of 12.7 months, after pubourethral sling procedures due to recurrent urinary stress incontinence. (i) Focal residues of SIS implant. (ii) No evidence of a specific tissue reaction that might point to a foreign body reaction. (iii) No evidence of any significant immunological reaction and, in particular, no evidence of any chronic inflammatory reaction.
Konstantinovic et al., 2005 [50]: abdominal wall defect repaired with SIS in 24 Wistar rats randomized into 4 survival groups (7, 14, 30, and 90 days).
SIS implanted subcutaneously on the abdominal wall of 30 rats randomized into 3 survival groups (7, 30, and 90 days). (i) Moderate inflammatory reaction, increasing to severe after 90 days. (ii) 86% of the graft was replaced by new collagen fibers.
SIS and porcine dermis implanted subcutaneously on the anterior rectus fascia of 10 rabbits randomized into 2 survival groups (6 and 12 weeks). (i) Porcine dermis presented moderate fibrosis, which was minimal for SIS. (ii) Minimal degree of scar for both grafts and a high degree of inflammatory infiltrate.
Ko et al., 2006 [54]: abdominal wall defect repaired with 8-layer SIS in 20 domestic pigs randomized into 2 survival groups (1 and 4 months). No significant changes in biomechanical properties after 4 months of implantation.
Abdominal wall defect repaired with SIS and cross-linked porcine dermis (Permacol) in 33 primates randomized into 3 survival groups (1, 3, and 6 months). (i) Considerable contraction after 1 month for both materials, but no significant change over the next 5 months. (ii) Better integration of both materials at a late stage by scar formation. (iii) Inflammatory cell infiltration 3 months after implantation for SIS, associated with formation of few blood vessels. (iv) Acellular porcine dermis over the entire course of implantation, with substantial inflammation surrounding its perimeter. (v) Partial resorption of both materials after 6 months.
Pierce et al., 2009 [65]: cross-linked porcine dermis implanted on the abdominal wall and posterior vagina of 18 rabbits sacrificed 9 months after implantation. 11 grafts remained intact without significant changes in biomechanical properties compared to the baseline values; they were simply thicker and tolerated less elongation at failure. Seven grafts were partially degraded, again thicker, and with a significant decrease in all biomechanical properties. (i) Host connective tissue incorporation between fibers. (ii) Intense foreign body reaction in degraded grafts.
Nevertheless, given the observation that the vast majority of patients do well with mesh, it can be concluded that some degree of fibrosis is helpful to the surgical management, whereas clearly excessive fibrosis is detrimental. Implantation of autologous fascia in general showed good integration within host tissues, associated with a lower inflammatory response compared to polypropylene meshes and a degree of graft remodelling in the available human studies [50,84]. It must be borne in mind that the human studies were all reoperative cases for clinical failure. It is difficult to speculate on whether all successful outcomes result in a fully integrated and remodelled graft. Non-cross-linked xenografts are associated with clinical failure due to rapid degradation, which presumably occurs too soon for the regeneration of strong tissue in their place [20,24,29]. The cross-linked grafts avoid this but, rather like synthetic mesh, are associated with a perpetuated inflammatory response as the body is unable to integrate and remodel them. This ultimately leads to encapsulation of the graft. It would therefore seem appropriate that there should be a proper balance of degradation and replacement by new host tissue with xenografts. SIS appears to fulfill this. This relationship between grafts and host tissues will vary for different materials and with different individuals. Here it is worth noting that as many as 15% of the population are allergic to nickel and more than 80% can become sensitized to nickel on sustained exposure [97], and that there are very successful studies involving muscle regeneration using decellularized ECM [98]. Therefore, it is clear that the immune response to any foreign material is complex, dynamic, and patient specific. The fact that polypropylene meshes provoke little adverse reaction when implanted in the abdominal wall for hernia repair but are associated with complications in the pelvic floor may also suggest a site-specific host response, notwithstanding the differences in biomechanical aspects [99]. This contrasting response has been confirmed in ewes [78], underlining the need for relevant animal models and longer studies [100]. Perspective on the Ideal Material. Whilst authors have previously described paradigms of the ideal material, we suggest that these have been unrealistic [101]. Ultimately a permanent material will always cause complications in some patients due to variation in individual immune responses. Conversely, degradable materials will fail in some patients. The question is which is less desirable. Whilst recurrent symptoms can always be treated by corrective surgery, the complications of polypropylene mesh such as chronic pain have proven resistant to treatment in many cases. Thus we suggest that materials for this application should be degradable, based on the principle of least harm. With this in mind, it is essential that the degradability is tuned so that it allows enough time for the development of a neotissue that is able to mechanically support the pelvic organs.
A material that does not cause any inflammation is unrealistic and probably undesirable, as an initial inflammatory response is required to promote angiogenesis and collagen ingrowth, integrating the material. This is essentially an M1 macrophage response. For this to happen, the material should be readily permeable to host cells. On a practical level, any material for this application needs to be robust enough to withstand surgical handling and provide support at the point of insertion. We suggest that a more realistic material for this application would be one that (i) is degradable, (ii) provokes an acute inflammatory response, (iii) undergoes tissue remodeling, (iv) is permeable to cells, and (v) is mechanically robust at the point of implantation. Conclusion and Future Perspective. The clinical experience suggests that both synthetic and biological materials can provide successful outcomes when used in the surgical management of pelvic floor disorders. However, it has become clear that there is an incidence of significant complications with polypropylene meshes and that many surgeons do not consider the complication rate acceptable. Both the host response and the mechanical properties of the materials need to be taken into consideration to predict the success of the implants, in addition to their response to dynamic loading. There has clearly been a lack of adequate preclinical evaluation of polypropylene mesh, and we suggest several steps which may make the development of new materials an altogether safer endeavor: (i) a better understanding of the forces within the pelvic floor with which implanted materials need to cope; (ii) computational modeling of how materials might perform under load for many years (this can be achieved using in virtuo models once established); (iii) the investigation of immune responses in patients in whom materials perform well over many years versus patients in whom they cause severe complications (using biochemical markers, genomic markers, and non-invasive imaging); (iv) the development of better animal models that develop the complications associated with vaginal mesh use, such as exposure; (v) establishment of standardized criteria to evaluate the performance of materials in in vivo and in vitro studies so that they can be accurately compared. There are several other factors which require urgent attention but are beyond the scope of this review. Surgical expertise based on training and experience in reconstructive surgery is a key factor in outcomes of pelvic floor procedures, and there is a need to ensure that surgeons are adequately trained. Patient-specific issues, such as individual anatomy and tissue strength, could also impact outcomes, and further investigation remains necessary to assess these aspects and their role in determining outcome [102]. Although databases to track complication rates exist, such as MAUDE and Postmarket Surveillance Studies, the medical community needs to participate more fully in these databases in order to more critically audit patient outcomes and move forward. Ultimately, to develop new effective and safe materials there is a need for a multidisciplinary approach that combines the efforts of those working in regenerative medicine, biomaterials, and surgery. Disclosure. Professor Chris Chapple is a consultant for AMS, Allergan, Astellas, Lilly, ONO, Pfizer, and Recordati. He is also a researcher, speaker, and trial participant for Allergan, Astellas, Pfizer, and Recordati. All the other authors have nothing to disclose.
Limbal Epithelial Stem Cells of the Cornea 1. Function and Structure of the Cornea The cornea on the front surface of the eye is our window to the world; hence, maintenance of corneal tissue transparency is essential for vision. The integrity and functionality of the outermost corneal layer, the epithelium, plays a key role in refraction of light on to the retina at the back of the eye. Like other epithelia, the epithelium of the cornea is maintained by stem cells. This review will discuss what is currently known about the properties of these stem cells, the clinical consequences of stem cell failure and the potential for stem cell therapy in regeneration of the ocular surface. The cornea is responsible for protecting the eye against insults such as injury and infection. It also provides the majority (two thirds) of the total refractive power of the eye and is therefore the major refracting lens (Meek et al., 2003). The cornea is comprised of five layers (see Figure 1): the outermost non-keratinised stratified epithelium, Bowman's layer, a highly ordered keratocyte-populated collagenous stroma, Descemet's membrane and the inner endothelium (a cellular monolayer). Figure 1. The human cornea in cross-section. At the outer surface of the cornea, there is an epithelial layer, which sits on a basement membrane above Bowman's layer. The middle stromal layer, which is sparsely populated with keratocytes, is surrounded by dense connective tissue. The final layer consists of a single sheet of endothelial cells, which sits on Descemet's membrane. Corneal development. Development of the anterior chamber of the eye (comprised of the cornea, lens, ciliary body, iris, trabecular meshwork and aqueous humour) requires the interaction of cells from the surface epithelium and neuroepithelium with mesenchymal cells predominantly of neural crest origin. Anterior eye development first begins with the formation of the lens placode. This forms after the optic vesicles come into contact with the surface ectoderm. A thickening forms that enlarges and forms a lens pit. Between days E8.5 and 9.5 in mouse this lens pit becomes the lens vesicle and remains connected to the surface ectoderm via a lens stalk (Kaufman, 1992; Pei and Rhodin, 1970). Eventually this lens vesicle detaches from the surface ectoderm and invaginates into the optic cup. Shortly after this detachment, periocular mesenchymal cells derived from somitomeric mesoderm and forebrain neural crest migrate into the space between the anterior lens vesicle epithelium and the surface ectoderm, eventually forming keratocytes and corneal endothelium (Trainor and Tam, 1995). In mice, four to seven layers of mesenchymal cells are seen at E12. These cells have long cytoplasmic extensions with a star shaped phenotype (Haustein, 1983). Cell numbers continue to increase and condense to form several layers of separated flattened cells. At E14.5 to E15.5 the cells adjacent to the lens structure form the endothelium (Reneker et al., 2000). The surface ectoderm cells overlaying the mesenchymal cells become the corneal epithelium. The remaining mesenchymal cells between these two layers differentiate into corneal stromal fibroblasts (Cintron et al., 1983).
This differs in humans, where there is a second wave of mesenchymal cell migration into the space between the newly formed endothelial layer and the surface ectoderm. These cells differentiate into corneal fibroblasts. In mouse, the proliferative potential of corneal fibroblasts diminishes during development from birth to eyelid opening; however, they arrest in the G0 phase of the cell cycle as opposed to becoming terminally differentiated (Zieske et al., 2001). As the corneal endothelium differentiates, the lens detaches from the immature corneal structure. This allows the formation of a fluid filled area into which the iris and ciliary body grow. Studies resulting in abnormal corneal development due to the overexpression of growth factors such as TGFα, FGF3 and EGF in the lens highlight the importance of the lens in corneal development (Coulombre and Coulombre, 1964; Reneker et al., 1995; Reneker et al., 2000; Robinson et al., 1998). The ectoderm overlaying the lens becomes the corneal epithelium. In its primitive state the epithelium is 1-2 cell layers thick and later stratifies to three to four cell layers following lens detachment. Figure 2. Development of the rodent cornea. Between days P1 and P7 the epithelial layer is 1-2 cell layers thick until just prior to eyelid opening at day P10, when this increases to 2-3 cell layers. Following eyelid opening at day P14 the number of cell layers increases to 4-5, with 5-6 cell layers being present at 3 weeks of age. At P28 the corneal epithelium is representative of the adult epithelium, with a single layer of columnar basal cells, which become flattened as they move to the surface. The eyelids then form and fuse, with the primitive epithelium being reduced to 1-2 cell layers thick until eyelid opening, which occurs at 24 weeks' gestation in humans and P12 to P14 in mice. For up to seven days of age the corneal and limbal epithelia in rats are 1-2 cell layers thick (Chung et al., 1992). Prior to eyelid opening at 10 days the epithelial thickness increases to two to three layers. Further increases to four to five cell layers occur in the central cornea following eyelid opening at two weeks of age (Chung et al., 1992; Watanabe et al., 1993). The layers continue to increase until four weeks of age when the epithelium reaches adult levels of six to seven cell layers (Song et al., 2003). The basal epithelial cell shape also changes with development. Initially the cells are flat and ovoid in shape until eyelid opening, after which they become more cuboidal. By three weeks the basal cells are more columnar in the central cornea but not the limbal region (Chung et al., 1992). The stroma and endothelium. The stroma is a mesenchymal tissue derived from the neural crest. The dense tissue of the stroma accounts for 90% of the total corneal thickness. The parallel arrangement of lamellae formed from heterodimeric complexes of type I and type V collagen fibres maintains transparency (Fini and Stramer, 2005). These collagen fibres are held in a uniform spacing pattern by proteoglycans. Keratocytes (fibroblasts) are located between the lamellae (Hay et al., 1979). These sparsely located keratocytes link to one another via dendritic processes (Muller et al., 1995) and produce crystalline proteins to maintain corneal transparency (Jester et al., 1999). Recent reports have described a keratocyte stem cell population in the anterior stroma (Funderburgh et al., 2005).
Descemet's membrane rests on the innermost surface of the corneal stroma. It acts as a basement membrane for the inner endothelial cell monolayer. These cells transport nutrients from the aqueous humour to the stroma and concurrently pump out excess water, preventing corneal oedema (swelling) by maintaining optimal hydration. The corneal epithelium. The corneal epithelium is a dynamic physical barrier preventing the entry of deleterious agents into the intraocular space. It consists of superficial squamous cells, central suprabasal cells and a single layer of inner columnar basal cells. The differentiated squamous cells have surface microvilli and occupy the outer 1-3 cell layers of the epithelium. The function of the microvilli is to increase cell surface area, allowing close association with the tear film. Highly resistant tight junctions formed between neighbouring cells provide a protective barrier (Klyce, 1972). The underlying suprabasal cells have wing-like extensions, rarely undergo division and migrate superficially to differentiate into squamous cells. The inner basal cells consist of a single layer of columnar cells with several important functions, including the generation of new suprabasal cells. Additionally, they secrete matrix factors important for basement membrane and stromal function. The basal cells also regulate organisation of hemidesmosomes and focal complexes to maintain attachment to the underlying basement membrane. These functions are suggested to be important in mediating cell migration in response to epithelial injury (Pajoohesh-Ganji and Stepp, 2005). Homeostasis in the corneal epithelium. Corneal integrity and therefore function is dependent upon the self-renewing properties of the corneal epithelium. The prevailing hypothesis is that this renewal relies on a small population of putative stem cells located in the basal region of the limbus. These putative stem cells are primitive and can divide symmetrically to self renew and asymmetrically to produce daughter transit amplifying cells (TAC) that migrate centripetally to populate the basal layer of the corneal epithelium (see Figure 3; Kinoshita et al., 1981; Tseng, 1989). The TAC divide and migrate superficially, progressively becoming more differentiated, eventually becoming post-mitotic terminally differentiated (TD) cells. Using suppressive subtractive hybridisation, Sun et al. identified a novel gene (EEDA) with localisation to corneal basal and suprabasal cells, suggesting it is involved in early stage stratification of epithelial differentiation (Sun et al., 2006). Once fully differentiated, TD squamous cells are shed from the ocular surface during normal wear and tear, and this in turn stimulates the cycle of cell division, migration and differentiation (Beebe and Masters, 1996). Thoft and Friend developed 'The X, Y, Z hypothesis of corneal epithelial maintenance'. This hypothesis proposed that the sum of the proliferation of basal cells (X) and the centripetal migration of cells (Y) is equal to epithelial cell loss from the corneal surface (Z). However, they were unable to rule out the involvement of the neighbouring bulbar conjunctiva (Thoft and Friend, 1983). Later, mathematical analysis indicated that the corneal epithelial cell mass could be renewed by cells from the limbal epithelium alone (Sharma and Coles, 1989). Furthermore, a fine balance between cell proliferation, differentiation, migration and apoptosis is necessary.
A variety of cytokines have been shown to play important roles in the maintenance and wound healing of the cornea. These factors are supplied in part by the adjacent tear film and the aqueous humour (Welge-Lussen et al., 2001). Other growth factors are produced by keratocytes in the supporting stroma (West-Mays and Dwivedi, 2006) and by the corneal epithelial cells themselves (Rolando and Zierhut, 2001). Limbal epithelial stem cells. Throughout life, our self-renewing tissues rely upon populations of stem cells or progenitors to replenish themselves following normal wear and tear and injury. The corneal epithelium on the front surface of the eye is no exception, as dead squamous cells are constantly sloughed from the corneal epithelium during blinking. At the corneo-scleral junction, in an area known as the limbus, there is a population of limbal epithelial stem cells (LESCs). LESCs share common features with other adult somatic stem cells, including small size (Romano et al., 2003) and a high nuclear to cytoplasmic ratio (Barrandon and Green, 1987). They also lack expression of differentiation markers such as cytokeratins 3 and 12 (Kurpakus et al., 1990; Schermer et al., 1986). LESCs are slow cycling during homeostasis and therefore retain DNA labels for long time periods; however, in the event of injury they can become highly proliferative (Cotsarelis et al., 1989; Lavker and Sun, 2003; Lehrer et al., 1998). To replenish the stem cell pool, stem cells have the ability to divide asymmetrically (see Figure 4). Expression of C/EBPδ in a subset of LESC both in vivo and in vitro has recently been suggested to be involved in the regulation of self-renewal and LESC cell cycle length (Barbaro et al., 2007). Evidence for stem cells in the corneal limbus. The first experimental indication of the presence of stem cells in the limbus was the observation of pigment (melanin) movement from the limbus towards an epithelial defect following wounding of rabbit corneas (Mann, 1944). Davanger and Evenson later observed a similar centripetal migration of pigment from limbus to central cornea in humans. Hence they proposed that the limbal Palisades of Vogt (PV) were the source of LESC (Davanger and Evenson, 1971; Huang and Tseng, 1991). Following lamellar keratoplasty, this centripetal migration was also observed in the rabbit as host epithelium was gradually replaced with donor epithelium (Kinoshita et al., 1981). Furthermore, the complete removal of the limbus results in impaired corneal function, neovascularisation and conjunctival ingrowth (Huang and Tseng, 1991). Stem cells may be identified by the retention of DNA labels, as they are slow cycling and only divide occasionally (Bickenbach, 1981). Assuming stem cell division during the labelling period, stem cell exposure to DNA precursors such as tritiated thymidine or bromodeoxyuridine, followed by chase periods of up to 8 weeks, labels the slow cycling cells (presumed to be stem cells). The more differentiated and more rapidly dividing daughter transit amplifying cells (TAC) undergo dilution of the label through multiple divisions. Through the use of tritiated thymidine, Cotsarelis et al. found slow cycling label retaining cells (LRCs) in the limbal basal epithelial region of the mouse cornea and postulated that up to 10% of limbal basal cells were stem cells (Cotsarelis et al., 1989).
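The label-retention argument rests on simple dilution arithmetic: with each division, the labelled DNA is shared between the two daughter cells, so the expected label per cell roughly halves. The expression below is our own formalisation of this assumption, not a formula from the cited studies.

```latex
% Expected label intensity per cell after n rounds of division,
% assuming the label is split roughly evenly between daughters:
L_n \approx L_0 \left(\tfrac{1}{2}\right)^{n}
% e.g. a transit amplifying cell that divides 8 times during the chase
% retains about L_0/256, whereas a slow-cycling stem cell dividing once
% or twice retains L_0/2 to L_0/4 and so remains detectable as an LRC.
```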
Phenotypically this population of cells appears to be more primitive in nature, as the cells remain small and round (Romano et al., 2003). Limbal basal cells exhibit higher proliferative potential when compared to the peripheral and central cornea both in vitro and in vivo. Large epithelial wounds in rabbits heal faster than smaller central defects. This implies that the proliferative capacity of the peripheral cornea is greater than that of the central cornea (Lavker et al., 1991). In the human, limbal explant cultures have greater proliferative potential when compared to central explants (Ebato et al., 1987; Ebato et al., 1988). Furthermore, LESC proliferation is resistant to inhibition by tumour-promoting phorbol esters (Kruse and Tseng, 1993; Lavker et al., 1998). Based upon the methods of characterisation used to identify features of stem cells isolated and cultured from human epidermis (Barrandon and Green, 1987), similar clonogenicity studies on cells isolated from the limbus produced large holoclone colonies (stem cell derived) with extended cell generation number. The less clonogenic meroclones and paraclones were found elsewhere in the cornea (Pellegrini et al., 1999). Clinical evidence also points toward the limbus as a repository for a stem cell population. During homeostasis, the limbal epithelial cells are thought to act as a barrier preventing conjunctival epithelial cells from encroaching upon the cornea (Tseng, 1989). During LESC failure (to be discussed later), the conjunctiva can invade the cornea, causing chronic inflammation, painful corneal opacity and neovascularisation. Ambati et al. have recently shown experimentally that soluble vascular endothelial growth factor receptor 1 (sFlt1) is important for corneal avascularity (Ambati et al., 2006). They have since found expression of sFlt1 in normal human corneal epithelium and a reduction of sFlt1 in vascularised corneas of patients (Ambati et al., 2007). Further clinical evidence pointing to the location of LESC at the limbus was demonstrated by Kenyon and Tseng, who transplanted two limbal explants taken from the contralateral healthy eye of patients onto the damaged eye. This resulted in re-epithelisation of the cornea and regression of persistent epithelial defects and neovascularisation (Kenyon and Tseng, 1989). The dogma that stem cells which give rise to corneal epithelial cells exclusively reside in the limbus was recently challenged. In the mouse it was demonstrated that central corneal epithelium could be serially transplanted and that it contains oligopotent stem cells that can maintain the corneal epithelium without cellular input from the limbal region. Furthermore, holoclone colonies were cultured from the central corneas of a number of mammalian species, including two human donors (Majo et al., 2008). However, both human donors were 4 years old or younger, so it will be interesting to see if the results are reproducible in the adult human cornea when development of the eye is complete. In the skin, the existence of transit amplifying cells has also been questioned. Rather than stem cells producing transit amplifying cells to maintain homeostasis in the epidermis, it has been proposed that a population of 'committed progenitor' cells fulfil this function during normal tissue turnover. It is proposed that the stem cells are only called into action in response to injury (Clayton et al., 2007; Jones et al., 2007).
Similarly, it has been proposed that the function of LESCs is to respond to injury rather than to maintain the corneal epithelium during normal wear and tear (Majo et al., 2008). It remains to be determined if the long-accepted transit amplifying cell hypothesis continues to hold true for the corneal epithelium. The LESC niche. The stem cell niche, or microenvironment, consisting of cellular and extracellular components, is hypothesised to prevent stem cell differentiation and thus regulates their fate (Schofield, 1983; Watt and Hogan, 2000). When a stem cell divides asymmetrically, one daughter may leave the niche to enter a differentiation pathway under the influence of different environmental stimuli. The limbus differs from the cornea both anatomically and functionally and hence could differentially determine stem cell fate. Within the limbal region of the cornea, the LESC niche is thought to be located within the palisades of Vogt (PV), an undulating region of increased surface area. The palisades are highly pigmented with melanocytes (Davanger and Evenson, 1971; Higa et al., 2005) and are infiltrated with Langerhans cells (Baum, 1970) and T-lymphocytes (Vantrappen et al., 1985). The melanin pigmentation is thought to shield LESCs from damaging ultraviolet light and the resultant generation of reactive oxygen species (Shimmura and Tsubota, 1997). The deep undulations of the Palisades of Vogt at the limbus provide LESC with an environment that protects them from shearing forces (Gipson, 1989). Furthermore, the crypts described by Shortt et al. predominantly occur on the superior and inferior cornea, where they are normally covered by the eyelids (Shortt et al., 2007a). This may reflect the evolution of a protective environment for LESCs in humans. The basement membrane lining the LESC niche contains papillae of stroma that project upwards (Shortt et al., 2007a). The limbal and corneal basement membrane components also differ, with the limbal region containing laminin-1 and -5 and α2β2 chains not found in the cornea. Furthermore, type IV collagen α1, α2 and α5 chains are found in the limbal region whereas α3 and α5 are located in the cornea (Ljubimov et al., 1995; Tuori et al., 1996). A more recent study by Schlötzer-Schrehardt et al. found patchy immunolocalisation of laminin γ3 chain, BM40/SPARC and tenascin C, which was also found to co-localise with ABCG2/p63/K19-positive cell clusters. These factors may be involved in retaining cell stemness (Schlotzer-Schrehardt et al., 2007). The basement membrane beneath the LESC may also act to sequester and therefore modulate growth factors and cytokines involved in LESC regulation and function (Klenkler and Sheardown, 2004). Although the surface of the cornea is exposed to atmospheric oxygen, the LESC niche lies beneath a number of cell layers where the oxygen tension is likely to be lower. Interestingly, hypoxic in vitro conditions have been found to produce larger, less differentiated limbal epithelial cell colonies, suggesting that low oxygen levels may induce selective proliferation of undifferentiated cells (Miyashita et al., 2007). The limbal niche is vascularised and highly innervated (Lawrenson and Ruskell, 1991), unlike the avascular cornea, and therefore is a potential source of nutrients and growth factors for LESC. Limbal fibroblasts in the underlying stroma are heterogeneous and express secreted protein acidic and rich in cysteine (SPARC), which may contribute to LESC adhesion (Shimmura et al., 2006).
Furthermore, Nakamura et al. identified a population of bone marrow-derived cells located in the limbal stroma following transplantation of GFP labelled bone marrow cells into nude mice (Nakamura et al., 2005). It is possible therefore that these cells are able to migrate into the limbal stroma, although any potential functionality remains unclear. Sonic hedgehog, Wnt/β-catenin, TGF-β and Notch signalling pathways have all been implicated in niche control of stem cells; however, little is known of their potential roles in the LESC niche. Mice lacking expression of Dkk2, a Wnt pathway inhibitor, display epidermal differentiation on the ocular surface. The lack of Dkk2 leads to increased Wnt/β-catenin signalling in the limbal stroma. This demonstrates the importance of limbal niche control over LESC differentiation during development. PAX6 expression is also lost in the corneal epithelial cells of these mice, suggesting it is downstream of Dkk2 (Mukhopadhyay et al., 2006). Deficiency in PAX6 leads to aniridia, resulting in impaired corneal epithelial function and eventual LESC failure, which may be due to altered niche development. Putative positive and negative LESC markers. The literature reflects many attempts to prospectively identify LESC using a specific marker. As yet no single, reliable marker has been found. However, the expression of a combination of several features seems to allow for greater specificity. Putative 'markers' can either be positive (present) or negative (absent). Limbal basal cells lack differentiation markers such as the 64 kDa cytokeratin 3 (CK3) that is present in all other layers of the corneal epithelium and the suprabasal layers of the limbal epithelium (Schermer et al., 1986). The corneal-specific 55 kDa protein, cytokeratin 12 (CK12), is also expressed in a similar pattern (Chaloin-Dufau et al., 1990). Furthermore, connexin 43 (Shortt et al., 2007a; Matic et al., 1997) and involucrin (Chen et al., 2004), both markers of cells destined for differentiation, are also absent. The transcription factor p63 is required for formation of the epidermis and has been proposed as a putative positive LESC marker (Pellegrini et al., 2001). In vitro, p63 was found to be expressed in limbal epithelial cell derived holoclones with little or no expression in meroclones and paraclones. In vivo, p63 was located in the limbal basal epithelium. However, since these initial observations a number of reports have suggested that p63 is not sufficiently specific to act as an LESC marker, as it has also been localised to basal cells of the peripheral and central cornea in humans (Chen et al., 2004; Dua et al., 2003) and in rats (Chee et al., 2006). However, limbal epithelial cells expressing high levels of p63 with a high nuclear to cytoplasmic ratio appear to be more stem-like (Arpitha et al., 2005). Further work has since indicated that the ΔNp63α isoform may more specifically label LESC (Di Iorio et al., 2005). Many types of organ-specific stem cells, including LESC, have been recently shown to exhibit a side population (SP) phenotype. The SP cells are able to efflux Hoechst 33342 dye through the ATP-binding cassette transporter Bcrp1/ABCG2. ABCG2 has therefore been proposed to be a universal marker for stem cells (Zhou et al., 2001; Watanabe et al., 2004). In putative LESCs, this protein has been immunolocalised to the cell membrane and cytoplasm of a population of limbal basal cells and a few suprabasal cells (Chen et al., 2004).
Furthermore, ABCG2-positive cells produce higher colony forming efficiency values in vitro than their negative counterparts (de Paiva et al., 2005). Our laboratory has localised ABCG2 to the outer edge of holoclones, where it is thought that the stem cells reside. Clusters of cells expressing the integrin α9 have been localised to the limbal basal epithelium (Stepp et al., 1995). However, upregulation of α9 in wounded murine corneas has since indicated this integrin to be associated with TACs (Stepp and Zhu, 1997). Integrin β1 was originally suggested to be a keratinocyte marker (Jones and Watt, 1993). Cells that rapidly adhere to the integrin β1 ligand, collagen IV, also display LESC properties (Li and Lu, 2005). Limbal basal epithelial cells are described as β1 integrin bright, as are the stem cells of the epidermis, suggesting a gradient of expression that decreases with differentiation. The integrins α2, α6 and β4 are negative in the limbal basal epithelial cells (Schlotzer-Schrehardt and Kruse, 2005). N-cadherin is an important mediator of cell-cell adhesion and may play a key role in the maintenance of haemopoietic stem cells by facilitating adhesion to osteoblasts in the bone marrow niche (Calvi et al., 2003; Zhang et al., 2003). Hayashi et al. found expression of N-cadherin in a subpopulation of limbal epithelial basal cells and in adjacent melanocytes, implying that N-cadherin plays an important role in interactions between LESC and their corresponding niche cells (Hayashi et al., 2007). Even though the limbal epithelium is derived from the surface ectoderm, a number of neural stem cell markers have been suggested as LESC markers. Recent in-depth immunological studies of neurotrophic factors and their receptors in the human have found NGF, glial cell-derived neurotrophic factor (GDNF) and their corresponding receptors TrkA and GDNF family receptor alpha (GFRα)-1 to be exclusively expressed in the limbus (Qi et al., 2008). Notch 1 is a ligand-activated transmembrane receptor that has been shown to maintain progenitor cells in a number of tissues. The role of Notch signalling in the cornea is unclear. However, cell clusters expressing Notch 1 have been found in the palisades of Vogt, with some co-localisation with ABCG2 (Thomas et al., 2007). Using Notch 1 deficient mice, Vauclair et al. demonstrated that Notch 1 signalling is required for cell fate maintenance during corneal epithelial wound healing, linking this to regulation of vitamin A metabolism (Vauclair et al., 2007). Notch 1, other Notch family members and their down-stream targets have been identified throughout the cornea, suggesting a role in differentiation (Ma et al., 2007). More recently, Nakamura et al. found Hes1, a major target of Notch 1 signalling, to be localised to the basal limbal epithelium in adult mice (Nakamura et al., 2008). It is likely that Notch signalling, perhaps under synergistic regulation with the Wnt signalling pathway, controls the balance between LESC self-renewal and daughter cell commitment to differentiation. The cell cycle arrest transcription factor C/EBPδ has also been implicated in the regulation of LESC self-renewal. Limbal epithelial basal cells that express C/EBPδ co-express Bmi1 (which is involved in stem cell self-renewal) and ΔNp63α (Barbaro et al., 2007). Cell-cell communication is facilitated by gap junctions. Connexin 43 and connexin 50 are present in the corneal epithelium (Dong et al., 1994). Cx43 is expressed by corneal basal cells except those of the limbus, implying it is utilised by early TACs.
The lack of intercellular communication has been suggested to help maintain stem cells and their niche (Matic et al., 1997) by protecting the cells from damage affecting adjacent neighbours (Chee et al., 2006). As in the stratified squamous epithelia (Watt and Green, 1981), involucrin is also expressed in the corneal epithelium (Chen et al., 2004) and in larger cells in vitro, suggesting it is a marker of differentiation. The RNA-binding protein Musashi-1 is produced in the developing and adult eye (Raji et al., 2007) and has recently been found in putative LESCs co-cultured with amniotic epithelial cells as feeders (Chen et al., 2007). Clinical consequences of LESC failure and cultured stem cell therapy. LESC deficiency can occur as a result of primary or acquired insults. Partial or full LESC deficiency leads to deleterious effects on corneal wound healing and surface integrity (Chen and Tseng, 1991; Dua et al., 2003). Deficiency can arise following injuries, including chemical or thermal burns, and through diseases such as aniridia and Stevens-Johnson syndrome (see Figure 5). As a result of LESC deficiency, conjunctivalisation, neovascularisation, chronic inflammation, recurrent erosions, ulceration and stromal scarring can occur, causing painful vision loss (Holland and Schwartz, 1996; Kenyon and Tseng, 1989; Puangsricharern and Tseng, 1995). Long term restoration of visual function requires renewal of the corneal epithelium through replacement of the stem cell population, which has traditionally been achieved by grafting limbal auto- or allografts (Kenyon and Tseng, 1989; Ramaesh, 2003). Each procedure carries a risk of complication, such as damage to the healthy eye by removal of autologous tissue for transplantation or side effects from long-term immunosuppression with allogeneic tissue. As an alternative, cultured LESC therapy has been developed, where LESCs are expanded in vitro for therapeutic application in patients in a variety of protocols utilising amniotic membrane or fibrin, in the presence or absence of growth-arrested 3T3 fibroblast feeder layers (Lindberg et al., 1993; Pellegrini et al., 1997; Grueterich et al., 2002; Koizumi et al., 2001; Shortt et al., 2007b; Tsai et al., 2000). Cultured autologous mucosal epithelial cell grafts have also been used to reconstruct the ocular surface of LESC deficient patients with some success (Nakamura et al., 2003). Recently it has been demonstrated that other stem cell populations, including human embryonic stem cells (Ahmad et al., 2007) and hair follicle stem cells (Blazejewska et al., 2008), can be driven towards a corneal epithelial-like phenotype. These exciting data may lead to alternative therapeutic strategies in the future for patients blinded by ocular surface disease caused by failure of LESC function. The biological mechanisms of efficacy experienced by recipients of the cultured LESCs are unclear, yet the clinical results are promising. It has been suggested that bone marrow derived stem cells may be recruited to the cornea to repair the damage caused by LESC failure (Daya et al., 2005), since no long-term survival of allogeneic cultured LESCs has been demonstrated. Our hypothesis is that the transplanted cultured limbal epithelium may act, at least in some patients, by 'kick-starting' the recipient's own ailing LESC. One of the causes of blindness in children with aniridia is progressive ocular surface failure.
The majority of cases are caused by PAX6 haploinsufficiency resulting from heterozygous null mutations (Van Heyningen and Williamson, 2002). The disease is a pan-ocular, bilateral condition most prominently characterised by iris hypoplasia, and varies from a relatively normal iris to the complete lack of an iris. Aniridia is often associated with cataracts, corneal vascularisation and glaucoma, with a significant number of cases of visual morbidity being due to corneal abnormalities. The underlying process of these abnormalities is poorly understood and is thought to be due to stem cell failure (Mackman et al., 1979; Nishida et al., 1995; Tseng and Li, 1996). However, it has also been proposed that it may be due to a deficiency in the stem cell niche and adjacent corneal stroma (Ramaesh et al., 2005). More recently, downregulation of Pax6 has been linked to abnormal epidermal differentiation of corneal epithelial cells. Treatment usually involves replacement of LESC using limbal allografts and/or corneal grafts or, more recently, ex vivo cultured LESC grafts (Holland et al., 2003). Aniridia represents a spectrum of disease, with iris anatomy defects ranging from the total absence of the iris to mild stromal hypoplasia with a pupil of normal appearance. Other associated defects include foveal hypoplasia, optic nerve hypoplasia, nystagmus, glaucoma and cataracts. These conditions may develop with age, causing progressive visual loss. Another important factor leading to progressive loss of vision is aniridia-related keratopathy (ARK; Mackman et al., 1979; Margo, 1983), which occurs in 90% of patients. Initially the cornea of patients appears normal during childhood (Nelson et al., 1984; Nishida et al., 1995). Changes occur in patients in their early teenage years, with the disease manifesting as a thickened irregular peripheral epithelium. This is followed by superficial neovascularisation and, if left untreated, it may result in subepithelial fibrosis and stromal scarring. Furthermore, patients develop recurrent erosions, ulcerations, chronic pain and eventual blindness (Holland et al., 2003). Histologically, stromal neovascularisation and infiltration of inflammatory cells are seen, with destruction of Bowman's layer. Additionally, the presence of goblet and conjunctival cells is seen on the corneal surface (Margo, 1983). Traditionally, these clinical and histological manifestations have led to the consensus that LESC deficiency is largely responsible for corneal abnormalities in aniridia (Margo, 1983). Based on the clinical and histological manifestation of aniridia, LESC deficiency has been presumed to be the pathogenesis behind ARK (Margo, 1983; Nishida et al., 1995). As a LESC marker has yet to be definitively identified, a true demonstration of LESC deficiency cannot be made. Furthermore, treatment for these patients involving replacement of LESC, either by keratolimbal allografts or, more recently, ex vivo expanded LESC grafts, provides a better outcome than corneal transplants (Holland et al., 2003; Shortt et al., 2007b; Tiller et al., 2003). This is consistent with LESC deficiency. However, patients who receive both limbal and corneal tissue seem to have better outcomes, suggesting an abnormality of the corneal tissue and not just the limbus. This may be a downstream effect of LESC deficiency. Alternatively, low levels of PAX6 may have a generalised effect on the entire cornea.
ARK could also be the consequence of abnormal corneal epithelial/stromal healing responses, as there is insufficient evidence to indicate that the proliferative potential of LESC is impaired (Ramaesh et al., 2005; Sivak et al., 2000). Recently, studies looking at the regulation of genes downstream of Pax6 in the Pax6 heterozygous mouse suggest that the pathogenesis of ARK is due to a number of mechanisms and not solely due to LESC deficiency (Ramaesh et al., 2005). Further studies are needed to elucidate the exact mechanism of ARK progression to allow the use of appropriate treatments. Mutations in Pax6 result in a distinct small eye syndrome in the small eye (SEY) mouse and rat (Hill et al., 1991; Matsuo et al., 1993). These animals are excellent models for aniridia and the progressive nature of associated corneal abnormalities (Ramaesh et al., 2003; Davis et al., 2003). As the name suggests, mice with semidominant mutations develop small eyes and other ocular deformities. The murine strains Pax6Sey, Pax6SeyNeu and Pax6Coop represent three SEY mice with differing point mutations in the Pax6 gene; Pax6SeyDey and Pax6SeyH mice have Pax6 gene deletions (Hill et al., 1991; Hogan et al., 1986; Lyon et al., 2000; Schmahl et al., 1993; Theiler et al., 1980). The SEY mice with semidominant heterozygous phenotypes demonstrate comparable developmental ocular abnormalities. These include microphthalmia and defects in the iris, lens and retina, with phenotypic severity being variable (Callaerts et al., 1997; Hill et al., 1991). Cataracts, glaucoma and, more importantly, corneal abnormalities can develop in mutant SEY mice during post-natal development and adult life (Lyon et al., 2000). Interestingly, the phenotypic variability seen between mice is also observed within a single SEY strain (Hogan et al., 1986). This can even be detected between the two eyes of the same mouse, suggesting a stringent requirement for Pax6 activity to be at specific levels at precise times during development (Hill et al., 1991; Schedl et al., 1996; van Ramsdonk and Tilghman, 2000). Homozygotes generate an ultimately lethal phenotype with no eyes and no nasal primordia (Hill et al., 1991). A number of Sey mice arose independently, all of which are semidominant; by examining comparative mapping studies and phenotypic similarities to aniridia, Sey was suggested to be the mouse homologue of the human disease (Glaser et al., 1990). This research led to the discovery that the Pax6 gene was responsible for the Sey phenotype and suggested that it was also responsible for the human disease, aniridia (Hill et al., 1991). These models are helping us to address fundamental questions about LESCs and their niche environment. Summary. LESCs are clearly important for vision. Efforts to specifically and prospectively identify these elusive cells are proving difficult. However, despite this, mixed populations of epithelial cells isolated from the limbal region have the potential to restore the ocular surface and improve vision in patients with LESC function failure. The mode of clinical
8,124.2
2009-06-30T00:00:00.000
[ "Medicine", "Biology" ]
Determinants of private-sector antibiotic consumption in India: findings from a quasi-experimental fixed-effects regression analysis using cross-sectional time-series data, 2011–2019

The consumption of antibiotics varies between and within countries. However, our understanding of the key drivers of antibiotic consumption is largely limited to observational studies. Using Indian data that showed substantial differences between states and changes over years, we conducted a quasi-experimental fixed-effects regression study to examine the determinants of private-sector antibiotic consumption. Antibiotic consumption decreased by 10.2 antibiotic doses per 1000 persons per year for every ₹1000 (US$12.9) increase in per-capita gross domestic product. Antibiotic consumption decreased by 46.4 doses per 1000 population per year for every 1% increase in girls' enrollment rate in tertiary education. The biggest determinant of private-sector antibiotic use was government spending on health: antibiotic use decreased by 461.4 doses per 1000 population per year for every US$12.9 increase in per-capita government health spending. Economic progress, social progress, and increased public investment in health can reduce private-sector antibiotic use.

Vaccination can reduce antibiotic use by reducing viral infections for which antibiotics are inappropriately taken (e.g., seasonal influenza), and by reducing secondary bacterial infections following vaccine-preventable viral infections 22,23. Finally, respiratory infection incidence is a good predictor of antibiotic use, as indicated by previous studies from India and the US 24,25. However, our understanding of these key drivers of antibiotic consumption is largely limited to observational studies. The only exception of which we are aware was a global analysis by Klein et al. 26, which examined the role of economic growth, measles vaccination rate, imports, and physician density on antibiotic use during 2000-2015 among low- and middle-income countries (LMICs) and high-income countries (HICs). Using a quasi-experimental method, the fixed-effects design, that study reported a significant positive association between GDP per capita and changes in the antibiotic consumption rate in LMICs but not in HICs, while other factors were not found to be significant in either group 26.
Indian data, with substantial differences across states and changes over years, provide an opportunity to explore the key factors that drive antibiotic consumption. India is among the countries with the highest burden of mortality due to infectious disease. Diarrheal disorders, lower respiratory tract infections, tuberculosis, and childhood pneumonia were among the top ten causes of death in India in 2019 27. Infectious disease burden varies across Indian states 28, while health systems and services are not uniform 29. India's national health mission classifies states into high focus (HF) states and non-high focus (nHF) states based on health infrastructure, life expectancy, fertility rate, and child and maternal mortality indicators 30. HF states include Bihar, Chhattisgarh, Jharkhand, Madhya Pradesh, Odisha, Rajasthan, Uttar Pradesh, and seven northeastern states. Our previous work has shown that antibiotic consumption differences across Indian states are as big as differences across countries, and that antibiotic consumption decreased in many Indian states from 2016 to 2019 31. It is therefore important to understand the factors determining antibiotic consumption in India, as this will help policy makers identify and invest in the most efficient and effective measures to reduce antibiotic consumption.

Among the eight groups of factors that we discussed above, the first three groups, related to perceptions and behaviors, health systems, and climate and environmental factors, are largely time invariant, meaning we do not expect substantial changes in these factors over a few years. Therefore, in this paper, we attempt to examine the effects of the remaining five time-varying factors described in the literature, namely economic productivity, government spending on health, girls' higher education rate, measles vaccination rate, and lower respiratory tract infection incidence, on private-sector antibiotic consumption. We do so using annual state-level private-sector antibiotic consumption, measured as defined daily doses (DDD) consumed per 1000 population per day (DID), across Indian states from 2011 to 2019, applying a quasi-experimental fixed effects regression method. In addition, we controlled for the potential confounding effect of some of the time-invariant factors discussed above.

Results
The median private-sector antibiotic consumption across all the states for the entire study period was 11.2 DIDs (IQR = 4.9). The time series graphs of antibiotic consumption in 19 states from 2011 to 2019 are shown in Fig. 1. Across all the years, the highest annual value (30.5) was recorded in Delhi in 2013 and the lowest (6.6) was recorded in Madhya Pradesh in 2017. The median DID value was 10.9 (IQR = 6.3) in 2011, reached a maximum of 12.0 in 2016 (IQR = 4.9), and ended at the lowest value of 10.5 (IQR = 5.0) in 2019. The summary of the outcome and predictors and the differences between HF and nHF states is given in Table 1. The median DID during the nine years was only 8.6 in HF states compared to 13.1 in nHF states, and the Wilcoxon rank sum test showed that the difference was statistically significant (p < 0.0001).
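As a rough illustration of the comparison reported above, the following sketch (not the authors' code) applies the Wilcoxon rank-sum (Mann-Whitney) test to hypothetical HF and nHF state-year DID values; the simulated numbers are placeholders, not the PharmaTrac data.

```python
# Illustrative sketch (not the authors' code): Wilcoxon rank-sum comparison of
# median DID between hypothetical HF and nHF state-year observations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
hf_did = rng.normal(8.6, 2.0, size=90)    # hypothetical HF state-year DID values
nhf_did = rng.normal(13.1, 3.0, size=81)  # hypothetical nHF state-year DID values

def median_iqr(x):
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    return round(med, 1), round(q3 - q1, 1)

print("HF  median, IQR:", median_iqr(hf_did))
print("nHF median, IQR:", median_iqr(nhf_did))

# Rank-sum (Mann-Whitney U) test for a difference between the two groups
stat, p = stats.mannwhitneyu(hf_did, nhf_did, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.3g}")
```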
The year-wise, state-wise data are given in the Supplement. At the national level, the median per-capita GDP was ₹108,500 (IQR = 85,600), and it ranged from ₹21,800 in Bihar in 2011 to ₹376,000 in Delhi in 2019 (Supplement Table S2). The median per-capita government spending on health was ₹988.3 (IQR = 587.1), and it ranged from ₹302.7 for Bihar in 2011 to ₹3777.8 for Delhi in 2019 (Supplement Table S3). The median girls' enrollment rate in tertiary education was 25.5% (IQR = 12.9), and the rate ranged from 9.5% in 2011 in Jharkhand to 51.8% in 2019 in Delhi (Supplement Table S4). The median measles vaccination rate was 907.0 per 1000 eligible children (IQR = 157.7); this ranged from 737.3 for Tamil Nadu in 2016 to 1439.0 for Telangana in 2016 (Supplement Table S5). The median LRTI incidence was 10.0 per 100 (IQR = 1.7), ranging from 6.2 in 2011 in Telangana to 12.5 in 2019 in Odisha (Supplement Table S6). All the predictor variables were significantly different between HF and nHF states (Wilcoxon rank sum test; p < 0.001).

Table 2 shows the results of the fixed effects adjusted linear regression models with standard errors adjusted for 19 state-clusters. Model 1, without adjusting for population, showed that private-sector antibiotic consumption significantly increased with increase in LRTI incidence (β = 1.705, p = 0.001). Antibiotic consumption significantly decreased with increase in per-capita GDP (β = −0.028, p = 0.006), per-capita government spending on health (β = −1.264, p = 0.008), and girls' tertiary education enrollment (β = −0.127, p = 0.008) after adjusting for increase in LRTI incidence and all state-level measured and unmeasured time-invariant confounders. In absolute terms, these results translate to a decrease in private-sector antibiotic consumption by 10.22 antibiotic doses per 1000 population per year for every ₹1000 (US$12.9) increase in per-capita GDP, by 461.4 antibiotic doses per 1000 population per year with every ₹1000 increase in per-capita government health spending, and by 46.4 doses per 1000 population per year with every one percent increase in girls' enrollment rate in tertiary education. Antibiotic consumption increased by 622.3 doses per 1000 population per year with every one percent increase in LRTI incidence. Including population changes in the model (Model 2) did not substantially change the findings.

Sensitivity analysis
The results of the sensitivity analysis, differences across HF and nHF states, with and without adjusting for changes in populations, are shown in Table 3.
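The annualised figures quoted above follow from the daily-unit coefficients: a coefficient expressed in DID (doses per 1000 population per day) is multiplied by 365 to give doses per 1000 population per year. A minimal check of that conversion, using the Model 1 coefficients reported above:

```python
# Unit conversion used in the text: a coefficient in DID units (doses per 1000
# population per DAY) times 365 gives doses per 1000 population per YEAR.
# Coefficients are those reported for Model 1.
coefficients_did = {
    "per-capita GDP (per ₹1000)": -0.028,
    "government health spending (per ₹1000)": -1.264,
    "girls' tertiary enrollment (per 1%)": -0.127,
    "LRTI incidence (per 1%)": 1.705,
}
for name, beta in coefficients_did.items():
    print(f"{name}: {beta * 365:+.1f} doses per 1000 population per year")
# Expected output (approximately): -10.2, -461.4, -46.4, +622.3
```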
Model 1, which did not adjust for population changes, showed that increase in LRTI incidence was significantly associated with increased antibiotic consumption in nHF states (β = 2.137, p = 0.005), but the association was not significant in HF states (p = 0.061). Increased government health spending had a negative effect on antibiotic consumption in both HF (β = −0.872, p = 0.001) and nHF states (β = −1.694, p = 0.044). Girls' education enrollment (β = −0.151, p = 0.009) had a significant negative effect on antibiotic consumption in nHF states only. However, we prefer to interpret these results with caution, considering the small number of states in each group. Including population changes in the model (Model 2) did not substantially change the findings.

Table 1. Private-sector antibiotic consumption (DID) and its predictors: overall median and inter-quartile ranges and differences between High Focus and non-High Focus States in India, 2011-2019. p25: 25th percentile; p75: 75th percentile; ₹: Indian rupees; DID: defined daily doses per 1000 population per day; GDP: gross domestic product; LRTI: lower respiratory tract infection. (a) p values correspond to Wilcoxon rank sum tests for differences between high focus and non-high focus states.

Discussion
Using a quasi-experimental study design, we showed that economic productivity, government health spending, and girls' higher education strongly influence antibiotic consumption, independent of infectious disease burden and public health interventions like vaccinations, after adjusting for all measured and unmeasured time-invariant confounders. Antibiotic consumption increased by 622.3 doses per 1000 population per year with every 1% increase in LRTI incidence. The most important determinant of private antibiotic consumption was government health spending. We discuss four key findings from our analysis below.

First, overall economic productivity and growth lead to a significant reduction in antibiotic consumption. Every ₹1000 (US$12.9) increase in per-capita GDP led to a decrease in private-sector antibiotic consumption by 0.028 doses per 1000 population per day, or 10.22 antibiotic doses per 1000 population per year. Our results agree with the study by García-Rey et al., which reported a significant negative association between GDP per capita and antibiotic use in Spain 1. However, our results contrast with the findings of Klein et al. 26, who reported a significant positive association between changes in per-capita GDP and antibiotic consumption in LMICs, but not in HICs. Typically, with increased economic growth, the availability of pharmaceutical products in the market may improve in LMICs and may lead to improved access and consumption. However, this may not be the case with India, as the country already had a substantial pharmaceutical industry that had produced and made antibiotics available in the private sector (similar to HICs) even before the major economic reforms of the 1990s and the subsequent economic growth 32. In their 2018 paper, Tamhankar et al.
reported no influence of economic growth indicators on antibiotic consumption in India 33, using data from 2000 to 2010. However, that analysis used only national-level antibiotic consumption data and did not use the DDD, the standard unit of antibiotic use. This contrasts with our study, which analyzed state-level panel data for several years employing a rigorous fixed effects design, and which suggests a strong inverse causal relation between economic growth and private-sector antibiotic consumption after adjusting for various factors. Our finding may be explained in part by changes in the more distal factors that influence antibiotic consumption following economic progress, for example, improved availability of drinking water, sanitation facilities, and nutrition, and not merely the improved availability of medicines in the public sector, as our results were adjusted for changes in government expenditure on health.

Second, increased government spending on health had the most significant effect in reducing private-sector antibiotic consumption. With every ₹1000 (US$12.9) increase in per-capita government health spending, private-sector consumption of antibiotics was reduced by 461.4 antibiotic doses per 1000 population per year. The 2015 WHO Global Action Plan called for political will to make effective antibiotics available and accessible for patients, including through government spending on health, as a means of reducing inappropriate use and antibiotic resistance 34. Data from the US show that 30-75% of antibiotics prescribed in hospitals, nursing homes, doctors' offices and emergency departments are unnecessary 35. Considering the catastrophic financial consequences of antibiotic resistance due to an increase in hospital admissions and drug usage following inappropriate use, the return on investment in making appropriate antibiotics accessible through public systems is high 36. Moreover, government spending on health improves not only antibiotic availability in the public sector but may also improve vaccination services and infection prevention and control activities, including awareness programs, all of which may reduce antibiotic use. These findings support an earlier analysis of the WHO antibacterial resistance surveillance report, which concluded that copayment requirements for drugs in the public sector can incentivize the private sector to overprescribe antibiotics 16. Further, our study adds to the literature on successful government policy interventions to reduce antibiotic use, especially in countries where outpatient healthcare provision is through largely unregulated private providers 37.

Third, increase in measles vaccination rate was not found to be associated with antibiotic consumption, consistent with what Klein et al. reported 26. A meta-analysis in 2019 found that, although the overall evidence base on the effect of vaccination on antibiotic use is poor, randomized controlled trials (RCTs) mostly reported reductions in antibiotic use following vaccinations 38. This was particularly the case with studies on pneumococcal and influenza vaccines, similar to what was reported by Klugman and Black 39. However, these papers and a 2012 review 40 included studies mostly from HICs, and had only one very low quality evidence RCT with measles vaccine, which reported no significant difference in antibiotic use 41.
These suggest that measles vaccination rate may not be an appropriate measure to test the effect of vaccination on antibiotic use. We did not have sufficient data to include pneumococcal and influenza vaccines in our analysis, which, if considered, might give a different result.

Lastly, improvement in girls' higher education enrollment significantly reduces antibiotic consumption. With every one percent increase in girls' enrollment rate in tertiary education, antibiotic use decreased by 46.4 doses per 1000 population per year. Our findings are in line with a recent Southeast Asia scoping review, which showed that women have better knowledge of antibiotics and were less likely to buy antibiotics over the counter 42. A national survey from Thailand in 2017 also showed that women with higher levels of education had a significantly higher chance of obtaining information on the appropriate use of antibiotics 43. A more recent meta-analysis showed that higher education was associated with 14% lower odds of any aspect of antibiotic misuse in LMICs, whereas higher education was associated with 25% higher odds of antibiotic misuse in Europe 44.

Our analysis showed that increased public investment in health can reduce private-sector antibiotic consumption, which may help in reducing antibiotic resistance, as observed by Collignon et al. in their analysis 45. The better availability of antibiotics in the public sector, especially in primary healthcare facilities, may reduce prescription from private facilities. This is critically important as most antibiotic use happens in primary care 46, particularly for self-limiting illnesses 47.

Strengths and limitations
The quasi-experimental design provided an opportunity to perform a 'natural experiment' and study the effect of some of the key factors that influence antibiotic consumption. In addition, the available panel data helped to control for the effects of many measured and unmeasured confounders that may not vary substantially across time, including population and provider awareness and behavior, health infrastructure including human resources, climate factors, and population age structure and density. However, one may challenge the assumption that provider behavior and health infrastructure have not changed substantially in the 10-year period. By including government funding on health as a predictor we might have covered the infrastructure changes. Fixed effects regression works under the assumption of strict exogeneity, meaning we assume there is no reverse causality: no feedback from past outcomes to current covariates or from the current outcome to future covariates. However, in our model, this assumption may be violated when we consider the relation between antibiotic use and infection. Another important limitation is that our model included only a few selected time-varying covariates and therefore might have missed the effect of other time-varying factors, one key factor being improvement in water and sanitation facilities. Additionally, we had to limit the analysis to using the measles vaccination rate as we lacked long-term data on pneumococcal and influenza vaccines.
Our analysis is restricted to private-sector data, and as we could not examine changes in consumption through public-sector purchases, some of our findings could be explained by a shift from the private sector (which we observed) to the public sector (which we did not observe), although this is likely to be minimal given the dominant role of the private healthcare sector in India.

Conclusion
We examined the effect of some of the key factors determining antibiotic consumption using state-level data from India from 2011 to 2019. This is the first study in the Indian context examining the determinants of antibiotic consumption. We found that economic progress (GDP), social progress (girls' higher education), and welfare measures (government spending on health) can reduce private-sector antibiotic consumption. India's ongoing revision of its Antimicrobial Resistance National Action Plan should consider creating a robust and sustainable antibiotic consumption and use surveillance system that captures data from the public and private sectors. Data on antibiotic use from both sectors will help in designing targeted stewardship programs and in assessing the impact of various interventions and investments, including newer vaccines and water and sanitation measures, in reducing antibiotic use.

Setting
Annual state-level private-sector antibiotic consumption data across 19 Indian states from 2011 to 2019.

Measures
Outcome: Per-capita private-sector antibiotic consumption in Defined Daily Doses (DDD) per 1000 Inhabitants per Day ("DID"). The DDD is the assumed average maintenance dose per day for a drug used for its main indication in adults 31. We calculated total DDDs across all antibiotics used in the private sector using PharmaTrac antibiotic sales volumes (strength of the product, number of "packs consumed" and "pack size") and unit DDD values for molecules or formulations from the WHO database. We then used population figures to calculate the DIDs; the details are available in our previous paper 48 and are indicated in the formulae below.

Predictors: Our independent variables are per-capita net state domestic product (NSDP), per-capita government spending on health, girls' tertiary education enrollment rate, measles vaccination coverage, and incidence of lower respiratory tract infections (LRTI).

Data sources
The outcome measure (DID) was based on the analysis of PharmaTrac data, the details of which have been published previously 31,48. Briefly, PharmaTrac gathers primary data from a panel of 9000 pharmaceutical distributors and 500,000 retailers and extrapolates the data to represent the entire private retail-sector pharmaceutical sales. The PharmaTrac dataset combines data for some states and therefore the number of states (n = 19) is smaller than the actual number (n = 28). Wherever data are presented in combined form for states, we have used the combined population for our analysis. Besides, data were not available for the state of Jammu and Kashmir and for the union territories. Drugs dispensed through public facilities, which account for less than 15-20% of all drug sales in the country, are not covered by PharmaTrac.

We used the mid-year, state-level population (in millions) from the National Population Commission (https://censusindia.gov.in/) (Supplement Table S1). We gathered the data on net state domestic product (NSDP) from the national statistical office, Reserve Bank of India (https://www.rbi.org.in/). We calculated annual per-capita NSDP in Indian Rupees (INR, ₹, in '000s, at current prices) for the years 2011-2019.
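A minimal sketch of the DDD/DID computation described above, assuming a simplified sales table; the column names and values are hypothetical and do not reflect the actual PharmaTrac schema.

```python
# Illustrative sketch of the DDD/DID calculation; column names and values are
# hypothetical and do not reflect the actual PharmaTrac schema.
import pandas as pd

sales = pd.DataFrame({
    "state": ["A", "A", "B"],
    "strength_g": [0.5, 0.25, 0.5],     # active ingredient per unit, in grams
    "pack_size": [10, 6, 10],           # units per pack
    "packs_sold": [200_000, 150_000, 90_000],
    "who_ddd_g": [1.0, 1.5, 1.0],       # WHO defined daily dose for the molecule
})
population_thousands = pd.Series({"A": 50_000, "B": 30_000})  # mid-year population

# Total DDDs = (strength x pack size x packs sold) / DDD of the molecule
sales["ddds"] = (sales["strength_g"] * sales["pack_size"] * sales["packs_sold"]
                 / sales["who_ddd_g"])

# DID = total annual DDDs / (population in thousands x 365)
did = sales.groupby("state")["ddds"].sum() / (population_thousands * 365)
print(did.round(2))
```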
We used the annual budget documents of state governments to gather the per-capita amount (INR, ₹, in '000s, at current prices) spent by the state governments on health every year.

Girls' tertiary enrollment rate is the number of girls in tertiary-level education as a percentage of all girls who finished secondary school in the previous 5 years 49. We sourced the data from the Ministry of Education, Government of India, using the search option on the CEIC website (https://www.ceicdata.com/).

Measles immunization rates were taken from India's health management information system (HMIS) standard report (Standard Reports/12 ~ H. RCH Reports), which provides state-wise tabulated data on immunizations and is used for Global Burden of Disease estimations. We retrieved these data in Excel format from the HMIS portal, where the measles and measles-rubella (MR) vaccination rates are available under the item codes 9.2.1 and 9.2.2 (https://hmis.nhp.gov.in/). The vaccination coverage is expressed as the number of children up to one year of age who received at least one dose of measles/MR vaccine during the current year for every 1000 eligible children.

We retrieved the state-level estimates of the incidence of LRTIs for both sexes and all ages for the years 2011-2019 from the Global Burden of Diseases (GBD) data in the IHME database (https://vizhub.healthdata.org/) 50, and used the incidence per 100 population for the analysis (referred to as LRTI incidence).

Ethical consideration
This research did not involve any human subjects and the data used for the analysis did not include patient identifiers; therefore, this study did not require review by an institutional review board.

Study design
We used fixed effects regression, a quasi-experimental causal inference design, to answer our question, using cross-sectional time-series (panel) data. The empirical data sources are described above. There are numerous measured and unmeasured confounding factors that may affect the relation between antibiotic consumption and these predictors. For example, the state population-age composition may affect the respiratory infection rate and antibiotic use. Similarly, health service availability, doctor-population ratio, and health attitude, perception, and awareness levels may affect the vaccination rate and antibiotic consumption. Further, various cultural and social characteristics may affect girls' education and antibiotic consumption. However, these confounding variables are largely time-invariant, i.e., they remain largely unchanged during the study period. We used the longitudinal nature of the data to control for the confounding effect of these time-invariant measured and unmeasured factors by using a fixed effects regression method, as discussed below.
Fixed effects regression model
Let y_it denote the DID for the i-th state in the t-th year, α represent the intercept, β denote the vector of regression coefficients for the time-varying predictors of interest represented by the vector x_it, and γ denote the vector of regression coefficients of time-invariant observed confounders represented by the vector z_i. Consider that we measured the variables at two years (T = 2). The equations for the DID of the i-th state in years 1 and 2 are

y_i1 = α + β x_i1 + γ z_i + u_i + ε_i1,   (1)
y_i2 = α + β x_i2 + γ z_i + u_i + ε_i2,   (2)

where u_i is the error term that represents the combined effect on y of all unobserved confounders that are constant over time, and ε_it is the "idiosyncratic error", the random error that changes over time and across states. Therefore, there is a different u_i for every state, which remains the same across the years, and a different set of ε_it for each state, which varies from year to year. Further, we cannot assume statistical independence of u_i and x_it due to confounding. However, when we subtract Eq. (2) from Eq. (1), we get the following first-difference equation without u_i:

Δy_i = β Δx_i + Δε_i,

where Δ is the difference score (for example, Δy_i = y_i1 − y_i2). When it comes to multiple years, we first calculate the mean value of the outcome (antibiotic use) and of each predictor variable over time for each state:

ȳ_i = (1/n_i) Σ_t y_it,   x̄_i = (1/n_i) Σ_t x_it,

where n_i is the number of measurements for state i. Second, we subtract the state-specific means from the observed values of each variable, giving (y_it − ȳ_i) and (x_it − x̄_i). With this, we can estimate the change in DID as a function of changes in the values of the predictors after adjusting for all time-invariant observed (z_i) and unobserved (u_i) confounders (fixed effects) that could influence the coefficient β. Our final fixed effects model was

y_it − ȳ_i = β (x_it − x̄_i) + (ε_it − ε̄_i).

Statistical analysis
First, we prepared a spreadsheet with the state-wise annual DID data as reported previously 31,48. Then, we collated and organized the predictor variables data in a second spreadsheet. We then merged these two sheets and created the final panel data for analysis using STATA software version 17.0 (Stata Corp LP, 2021). We conducted descriptive analysis of antibiotic use (DID) and the independent variables using the median and interquartile range, and plotted line graphs to visually inspect the changes in the outcome and predictor variables over the years. We presented the summary of these variables in tables and compared the values across HF and nHF states. We used the Wilcoxon rank sum test to assess statistically significant differences, as the data were not normally distributed and the sample sizes were small. Then we conducted pair-wise correlation of each predictor with the dependent variable and calculated pairwise Pearson correlation coefficients. A p value less than 0.05 was considered statistically significant in all analyses. We also used scatter plots to visualize the relationship between the pairs, using normalized values for the independent variables. We used R software version 4.1.1 (R Core Team, 2020) and the packages tidyverse and ggplot2 for the analysis.
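For readers who prefer code to algebra, the following self-contained sketch applies the same within (demeaning) transformation to simulated panel data and recovers the slope with ordinary least squares; it is an illustration of the estimator, not the authors' STATA workflow, and all variable names and values are invented.

```python
# Self-contained sketch of the within (demeaning) fixed-effects estimator on
# simulated panel data; variable names are illustrative, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_states, n_years = 19, 9
state_effect = rng.normal(0, 5, size=n_states)        # u_i, time-invariant
gdp = rng.normal(100, 20, size=(n_states, n_years))    # a time-varying predictor
did = 10 - 0.03 * gdp + state_effect[:, None] + rng.normal(0, 1, size=(n_states, n_years))

panel = pd.DataFrame({
    "state": np.repeat(np.arange(n_states), n_years),
    "did": did.ravel(),
    "gdp": gdp.ravel(),
})

# Within transformation: subtract each state's own mean from outcome and predictor
demeaned = panel[["did", "gdp"]] - panel.groupby("state")[["did", "gdp"]].transform("mean")

# OLS on the demeaned data removes u_i and recovers beta (close to -0.03 here)
fit = sm.OLS(demeaned["did"], demeaned[["gdp"]]).fit()
print(fit.params)
```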
Then we created linear fixed effects and random effects regression models using the xtreg command in STATA, specifying the state ID as the variable identifying the record for each state. We tested for autocorrelation in our data using the Wooldridge test, which confirmed first-order autocorrelation (F(1, 18) = 14.4, p = 0.001). The overall F-test showed that the fixed effects are non-zero (F(13, 18) = 18.36, p < 0.001), and that a pooled ordinary least squares regression would be biased. The unobserved heterogeneity (u_i) was strongly correlated with the explanatory variables (x_it) (Cov(x_it, u_i) = −0.94), evidence of confounding, supporting the choice of the fixed effects model. Further, the Hausman test verified that the random effects model would be biased (χ² = 115.4, p < 0.001) and confirmed our choice of the fixed effects model. We used the vce(robust) option to obtain cluster-robust standard errors to control for heteroskedasticity and within-panel serial correlation; additionally, errors were clustered by state to account for high serial correlation. Finally, we conducted sensitivity analyses by running the fixed effects regression analysis separately for high-focus (HF) and non-high-focus (nHF) states, with and without population as a predictor in the model. A p value less than 0.05 was considered statistically significant for all statistical tests.

Figure 1. Time series of private-sector antibiotic consumption in Indian states, 2011-2019. DDD: defined daily doses.

Figure 2. Scatterplots and superimposed best-fit lines showing associations between private-sector antibiotic consumption (DID) and (A) normalized per-capita GDP, (B) normalized government spending on health, (C) normalized girls' tertiary enrollment rate, and (D) normalized measles vaccination rate, India, 2011-2019. DID: defined daily doses (DDD) per 1000 population per day; GDP: gross domestic product.

Figure 3. Pairwise Pearson correlation coefficients between private-sector antibiotic consumption (DID) and predictor variables, India, 2011-2019. The values inside the square boxes are the correlation coefficients and the values in parentheses are the p values for the corresponding coefficients. DID: defined daily doses per 1000 population per day; GDP: gross domestic product, in Indian Rupees (₹, in '000s, at current prices); Vaccination: measles vaccination rate per 1000 eligible children; LRTI: lower respiratory tract infections per 100 population; Education: girls' enrollment in tertiary education, %; Government spending: per-capita government spending on health, ₹. Sources of data: DID, PharmaTrac; GDP, Reserve Bank of India report; government spending on health, state budget documents; measles vaccination rate, India health management information system; LRTI incidence, GBD data, IHME; girls' enrollment rate, CEIC database sourced from the Ministry of Education.

Total DDDs consumed = (Strength × Pack size × Packs consumed) / (DDD of molecule/formulation)
DID = Total DDDs consumed / (Population in thousands × 365)
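An approximate Python analogue of the fixed-effects estimation with state-clustered standard errors described in the statistical analysis above (STATA's xtreg ..., fe vce(robust)) can be sketched with state dummy variables and a clustered covariance estimator; the data below are simulated and the variable names are illustrative only.

```python
# Simulated sketch of a fixed-effects regression with standard errors clustered
# by state, loosely analogous to STATA's "xtreg did gdp, fe vce(robust)".
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
frame = pd.DataFrame({
    "state": np.repeat(np.arange(19), 9),
    "year": np.tile(np.arange(2011, 2020), 19),
})
state_u = rng.normal(0, 4, 19)[frame["state"]]            # unobserved state effect
frame["gdp"] = rng.normal(100, 20, len(frame))
frame["did"] = 12 - 0.03 * frame["gdp"] + state_u + rng.normal(0, 1, len(frame))

# State dummies absorb the fixed effects; clustering by state gives robust SEs
fit = smf.ols("did ~ gdp + C(state)", data=frame).fit(
    cov_type="cluster", cov_kwds={"groups": frame["state"]}
)
print(fit.params["gdp"], fit.bse["gdp"])
```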
6,140.4
2024-02-29T00:00:00.000
[ "Medicine", "Economics" ]
THE MACROECONOMIC DETERMINANTS AND THE IMPACT OF SANCTIONS ON FDI IN IRAN

We examine the impact of the macroeconomic determinants of foreign direct investment inflows. We also investigate the moderating role of sanctions in FDI inflows into Iran. The results reveal that macro determinants such as infrastructure, exchange rate, inflation rate, investment return, and governance have a long-run effect on FDI inflows in Iran. Our findings also show that GDP growth rate and trade openness have no significant effect on FDI. Our results indicate that sanctions do not have a significant moderating role in the relationship between macroeconomic factors and FDI. Surprisingly, international sanctions have a positive relationship with FDI inflows in Iran. Furthermore, sanctions have a positive impact on the inflation rate and exchange rate in Iran. Finally, our findings show that sanctions have had a significant impact on Iran's economic growth in recent years due to the increasing severity of sanctions.

INTRODUCTION
Foreign direct investment is an indispensable source of finance for developing countries, but policymakers must minimise its risks. FDI can help host countries generate employment, technology diffusion, economic growth and sustainable development (UNCTAD, 2015). The World Bank's Global Development Finance report emphasises the importance of 'absorptive capacities' in the success of FDI. According to Alfaro et al. (2004), absorptive capacities include (1) macroeconomic management (e.g., inflation and trade openness), (2) infrastructure (e.g., telephone lines and paved roads), and (3) human capital (e.g., share of the labour force with secondary education and percentage of the population with access to sanitation). Furthermore, potential risk in developing countries should be minimised through good governance and strong institutions, high absorption capacity and an effective legal framework (UNCTAD, 2015). Prospects for global FDI inflows are good, with projected growth of 11 percent to $1.37 trillion in 2015. It is expected that global FDI flows may increase further to $1.5 trillion in 2016 and $1.7 trillion in 2017. Thus, UNCTAD's FDI forecast model and its survey of multinational enterprises (MNEs) indicate a continued rise in FDI flows in the future. According to UNCTAD (2017), weak oil prices and political uncertainty continue to affect FDI inflows in West Asia, including Iran. FDI flows to the region in 2016 dropped by 2 percent to $28 billion due to persistently low oil prices, political and geopolitical uncertainties, as well as regional conflicts. FDI figures for oil and gas do not give a detailed picture of FDI in the industry; however, foreign entry into oil and gas industries often includes unconventional arrangements such as management contracts and production-sharing agreements. There is much literature on the determinants of FDI inflows, depending on the levels of economic, social, and political development (for example, Stack et al., 2017; Villaverde & Maza, 2015; Naude & Asiedu, 2002). There are studies on market size and growth (Bevan & Estrin, 2004); availability of natural resources (e.g., Elheddad, 2017); skilled and qualified human capital (e.g., Kar, 2013; Ndeffo, 2010); quality of infrastructure (Cheng & Kwan, 1999) and government policies (Cleeve, 2008); and governance quality (Abdioglu et al., 2013) and political stability (Cleeve, 2012; Musibah, 2015). Therefore, these factors might help countries with slow or high economic growth.
In other words, countries that have the FDI determinant factors are more likely to attract foreign direct investment. However, in the absence of FDI determinants, some countries might lose out on the attraction and retention of FDI (Cleeve et al., 2015). While it is generally assumed that the boycott of bilateral direct trade between the United States and Iran has been the channel for economic losses on both sides, nothing could be further from the truth. For Iran, the real cost of direct trade losses is partly due to the impact of the decline in FDI, capital inflows and joint ventures. The impact of these non-trade effects on Iran is significant and, as a result, it will be difficult for Iran to go back to business as usual with the US and its allies when sanctions are lifted (Askari et al., 2002). Our research makes two contributions. First, we identify and empirically examine the issue of sanctions on FDI, which is less considered in the literature; second, we attempt to shed light on the FDI determinants by examining macroeconomic factors and the business environment in light of Iran's policymaking on FDI. By the same token, this study attempts to investigate the impact of macroeconomic factors and the imposed sanctions on Iran's capability to attract FDI inflows.

FDI INFLOW IN IRAN
Iran is considered an energy superpower. According to Goldman Sachs (2011), Iran has the potential to become one of the world's largest economies in the 21st century. Iran, as OPEC's second-largest oil producer, possesses approximately 94 billion barrels of oil (10 percent of world oil reserves) and 812 trillion cubic feet of natural gas reserves (17 percent of the world total). Iran also has enormous mineral resources, including iron, coal, copper, sulphur, zinc, as well as gold. Thus, these natural resources sustain several processing industries. However, in the case of doing business with Iran, political and currency stability are considered the most problematic factors. Furthermore, due to sanctions, difficulty in accessing international financing is also a major concern. According to recent policy in Iran, the development of non-oil exports is a priority. Iran has a broad domestic industrial base, an educated and motivated workforce, energy resources and geographical location advantages, which provide access to an estimated population of 300 million people in Caspian markets, Persian Gulf states, and countries further east. The years of government control over the economy and the lack of private investment, coupled with market liberalization and recent reforms, have led to interesting business and investment opportunities in many sectors. Indeed, it is not difficult to find areas of the Iranian economy that require investment. Despite the uncertainty about the nuclear energy policy, the level of technology and infrastructure available to many industries in Iran makes it possible to develop partnerships with foreign companies. In fact, the presence of MNCs in Iran has increased dramatically over the past 20 years, due to an open regulatory policy that means multinational corporations face less difficulty in investing in Iran (Soltani & Wilkinson, 2011). According to the Vision 2025 plan of the Iranian government, within two decades Iran needs $3.7 trillion of investment, of which $1.3 trillion should be in the form of foreign investment. Table 1 shows FDI inflows and outflows over the last six years.
The opening of the Iranian market to foreign investment could also create new investment opportunities of about $600 billion to $800 billion over the next decade for multinational corporations investing in different production and service sectors in Iran. Foreign investors focus on several sectors of Iran's economy, including oil and gas industries, vehicle manufacturing, copper extraction, petrochemicals, food, and pharmaceuticals. Iran absorbed US$34.6 billion in financing for 485 projects from 1992 to 2009, and $24.3 billion of foreign investment from 1993 to 2007. Fig. 1 shows the trend of FDI inflows in Iran from 1990 to 2018 according to UNCTAD (2019) reports. Jafarnejad et al. (2009) found that trade openness and GDP per capita have a significant positive impact on FDI, while inflation, oil extraction, and production had a negative correlation with FDI. Furthermore, infrastructural factors, market size, research and development (R&D), education and scientific output encourage FDI inflows. Soltani and Wilkinson's study (2010) examines international assignees' perceptions and experiences in a sample of Iranian-based MNC affiliates in high-growth sectors. Their study indicates that the international assignees' perceptions of managing an MNE affiliate in Iran were often formed before their departure and that their performance was strongly linked to the level of congruence between the MNC's and the subsidiary's managerial orientation. Their findings reveal that performance tends to deteriorate when subsidiaries are requested to conform to MNC policies and practices.

Gross Domestic Product
The GDP growth rate is a measure of a country's economic performance. National economic development can be determined by criteria such as the amount of production, consumption, quality, diversity of goods, and other economic indicators (Musibah et al., 2015). The growth of GDP can be a determinant of FDI inflows to countries (UNCTAD, 1998). Sahoo (2006) asserts that countries with a higher and sustained growth rate will receive more FDI flows. Kahai (2011) argues that foreign investors consider the size of the current market as well as the potential for future growth in the market. Moreover, many studies have mentioned the importance of GDP growth (Stack, 2017; Arbatli, 2011; Nonnemberg & Mendonca, 2004). Mina (2014) studied 52 middle-income countries and found an effect of GDP on FDI. Further, the studies of Pradhan and Kelkar (2014) and Badr and Ayed (2015) indicated a positive relationship between GDP and FDI inflow. Furthermore, favourable investment conditions and rapid economic development in a host country would attract FDI.
H1. GDP growth has a positive impact on FDI inflows.

Infrastructure
Good infrastructure is essential in recipient countries to realise FDI benefits. The existence of developed infrastructure significantly reduces the transaction cost of investment and, as a result, increases investment returns (Morisset, 2001). Therefore, good infrastructure is one of the characteristics of economic development. The literature on development economics also emphasises the need for access to basic infrastructure for poverty alleviation (Yamin and Sinkovics, 2009). Iran has a strong and extensive economic infrastructure. For instance, Iran's transportation network includes 12 000 kilometres of railways and 220 000 kilometres of roads.
The country has nine commercial facilities in the south, including the Shahid Rajaee port in the north of the Strait of Hormuz, which deal with more than 80 foreign ports through 35 container lines. Moreover, there are three commercial ports on the Caspian Sea in the north. Iran has 167 Internet servers (2.12 per million people), and 31 percent of the population uses the Internet. Furthermore, Iran has 29 million landline numbers and 65 million mobile phone numbers.
H2. Infrastructure has a positive impact on FDI inflows.

Exchange Rate
The literature acknowledges that there is a relationship between the exchange rate and the inflow of foreign direct investment. For instance, Clare & Gang (2010), Kiyota & Urata (2008) and Mowatt & Zulu (1999) have shown that the exchange rate can lead to fluctuations in foreign direct investment by affecting the cost of acquiring foreign currency. This is because devaluation of the domestic currency against the foreign currency makes investment less expensive for a foreign investor in the host country. Hence, depreciation of the domestic exchange rate will stimulate foreign direct investment inflows to that country (Musibah et al., 2015). Likewise, if the value of a country's currency is decreasing, foreign investors are encouraged to buy assets at lower prices in that country (Blonigen & Ma, 2011).
H3. The exchange rate has an impact on FDI inflows.

Inflation Rate
The rate of inflation represents the overall financial performance of host countries. Further, high inflation indicates the government's failure to manage the country's budget (Hailu, 2010; Schneider & Frey, 1985). Inflation is considered an important element in the flow of foreign direct investment. In general, higher inflation rates will reduce FDI inflows (Bissoon, 2012; Kok & Ersoy, 2009). However, some studies find that inflation has a positive impact on FDI (Ali, Khrawish, & Siam, 2010; Azam & Lukman, 2010). In contrast, studies such as Shahzad and Al-Swidi (2013), Anyanwu (2012) and Parajuli & Kennedy (2010) have found no significant relationship between inflation and FDI inflow.
H4. The inflation rate has an impact on FDI inflows.

Political Stability
Alcantara and Mitsuhashi (2013) reveal that political risk is one of the risks that affects the choice of location and indicates the unpredictability and instability of legal and political conditions in a host country. Host countries where the political structure or even the preferences of policymakers are unstable create more uncertainty and risk for MNCs, because changes in laws, taxes, and government permissions after entry can lead to undesirable shifts in their FDI (Henisz and Macher, 2004; Globerman and Shapiro, 2003; Delios and Henisz, 2003). Shahzad et al. (2012) and Younis et al. (2008) reveal that political instability has a significant impact on FDI inflow. Madani and Nobakht (2014) and Kim (2010) assert that property rights and civil rights, as proxies of political stability, have a key role in attracting FDI into the country.
H5. Political stability has a positive impact on FDI inflows.

Trade Openness
Traditional neoclassical theory states that the liberalization of trade and investment accelerates technological progress, improves labour efficiency, increases trade, and ultimately boosts economic growth (Cleeve et al., 2015). The positive association between trade openness and FDI has led to many studies in developing countries; for example, Little et al.
(1970) studied the association of trade orientation and economic performance in developing countries. The more a country opens up its domestic market to external trade, the more the country can attract FDI. Trade openness is captured by the ratio of the country's exports plus imports to GDP (Sahni, 2012; Nunes et al., 2006). In the host country, two main channels determine the relationship between trade and FDI. First, countries with a high degree of openness tend to attract more FDI inflows. Second, the inflow of foreign direct investment can affect trade flows through technology transfer and export expansion in the manufacturing sector (Chowdhury & Mavrotas, 2006). In the case of Iran, the total volume of imports increased by 189 percent from $13.7 billion in 2000 to $39.7 billion in 2005 and $55.189 billion in 2009. Over the past five years, Iran's imports have fallen by 8.9 percent year-on-year, from $70.4 billion in 2010 to $43.9 billion in 2015, and Iran is currently the world's 51st largest importer. The main trading partners of Iran are China, India, Germany, Japan, France, South Korea, Italy, and Russia. About 80 percent of machines and equipment in Iran are of German origin (Gheissari, 2009). Trade openness is considered a key determinant of FDI, and it is generally expected to have a positive influence on FDI inflows (Sahni, 2012; Sahoo, 2006; Asiedu, 2002).
H6. Trade openness has a positive impact on FDI inflows.

Investment Return
Foreign direct investment goes to countries with higher returns. However, finding a suitable measure for the return on investment in developing countries is difficult due to the lack of a well-functioning capital market (Asiedu, 2002). Profitability is one of the key determinants of investment. Therefore, the rate of return on investment in a host economy affects the investment decision. Further, the marginal product of capital is equal to the return on capital; thus, capital-scarce countries have higher returns (Alavinasab, 2013; Asiedu, 2002). Edwards (1990), Jaspersen et al. (2000) and Asiedu (2002) employed the inverse of per capita income as a measure of return on investment, and their results showed that GDP per capita was inversely related to FDI. In contrast, Schneider and Frey (1985) revealed a positive relationship between GDP and FDI. It can be argued that GDP provides better prospects for foreign direct investment in the host country.
H7. Investment return has a positive impact on FDI inflows.

Governance
In recent years, international development and political discourse have increasingly been framed around good governance, and the attraction of foreign investment is an important factor for the good functioning of a country's market. Thus, governments seeking to attract foreign direct investment should create favourable conditions for multinational corporations. For investors and foreign organisations, the status of governance indicators, in terms of the transparency of administrative processes, reduced corruption and a peaceful environment, is valuable in the FDI decision-making process (World Bank, 2006). Morisset (2000) draws the conclusion that an increase in administrative costs due to corruption and bad governance will reduce FDI inflows. Moreover, other studies argue that political and institutional factors are necessary to encourage FDI to developing countries (e.g., Stein and Daude, 2001; Stevens, 2000).
Samimi and Ariani (2010) employed three governance indicators, namely political stability, corruption control and the rule of law, to investigate the impact of better quality of governance on FDI inflows in the MENA region. They found that these indicators, and hence improved governance, have a positive impact on FDI inflows. In another study, Mengistu and Adhikary (2011) employed six indicators of good governance, including political stability and absence of violence, government effectiveness, rule of law, and control of corruption. Their results revealed that these six indicators had an impact on FDI inflows in 15 Asian countries and, therefore, could increase the attraction of FDI.
H8. Governance has an impact on FDI inflows.

SANCTIONS
Sanctions are an economic weapon used by countries to fulfil their foreign policy goals. Over the last century, various countries imposed many international economic sanctions against other nations (Hufbauer et al., 2007). Eaton and Engers (1992), Elliot and Hufbauer (1999), Davis and Engerman (2003), and van Bergeijk (2009) suggested various theoretical frameworks to explain how sanctions work. After the Iranian revolution and the taking of US embassy staff hostage in 1979, the United States severed its economic and diplomatic ties with Iran, banned the import of Iranian oil and froze approximately $11 billion of Iranian assets (Krauss, 2015). In 1996, the US government approved the Iran-Libya Sanctions Act (ILSA), which prohibited US (and non-US) companies from investing and trading more than $20 million annually with Iran. Since 2000, items such as pharmaceuticals and medical equipment have been excluded from these sanctions (DeRosa and Hufbauer, 2012). Iran's nuclear programme has been debated over suspicions of its intentions since 2006. The UN Security Council imposed sanctions on selected companies associated with the nuclear programme, contributing to the country's economic isolation (Gheissari, 2009); in particular, it targeted sanctions on nuclear, missile and many military exports to Iran, on investment in oil, gas, and petrochemicals, on the export of refined petroleum products, and on financial transactions, banks, shipping, and insurance. In 2012, the European Union tightened its sanctions by joining the US oil embargo against Iran (Solomon, 2014). Furthermore, the last round of sanctions may cost Iran about $50 billion in lost oil revenue annually. Over the years, sanctions have had serious consequences for the Iranian economy and people. The United States has made many international efforts to convince Western governments of the threat of Iran's uranium enrichment programme and the development of nuclear weapons capability. However, Iran has denied this and maintains that its nuclear programme is for civilian purposes, including power generation and medical purposes (Guzman, 2013). Monetary factors also cause problems, as sanctions cause sharp fluctuations in the value of the Iranian Rial. Moreover, a weak currency makes imports more expensive and affects everything that is based on the Rial, including wages, stocks, homes, pensions, and gold. Thus, businesses can hardly determine the prices of goods and the value of their services. However, there is a difference between (i) the sanctions imposed on imports of nuclear-related products in 2006 and 2007, (ii) the sanctions imposed on non-oil exports in 2008, and (iii) the financial sanctions (such as the SWIFT and banking sanctions) against Iran in 2012 (Haidar, 2015).
Therefore, sanctions have been categorised based on their effect on the Iranian economy:
1) Political sanctions: blocking the assets of individuals determined to support international terrorism. The list includes dozens of Iranian individuals and institutions, including banks and defence contractors. The Iran-Iraq Arms Non-Proliferation Act (1992) penalises any person or entity that contributes to Iran's nuclear, chemical or biological weapons programmes.
2) Trade sanctions: a US embargo under which most US companies have been banned from trading with or investing in Iran since 1995. Although eased slightly in 2000, the embargo has remained largely in place in the decades since. The Obama administration granted an exception to these sanctions for the sale of consumer telecommunications equipment and software.
3) Energy sanctions: the US's main focus is on reducing Iran's oil revenues, thereby increasing the pressure for non-proliferation of nuclear weapons. Before 2012, oil exports accounted for half the revenue of the Iranian government and made up one-fifth of GDP. Extraterritorial sanctions target foreign companies that provide services or participate in investment in energy activities, including oil, gas and petrochemicals, the supply of equipment used in oil refining, and oil export activities such as shipbuilding, port operations, and shipping insurance.
4) Financial and banking sanctions: US sanctions administered by the Treasury Department have sought to isolate Iran from the international financial system. Thus, foreign financial institutions or subsidiaries that deal with the banned banks are prevented from conducting transactions in US dollars. In late 2011, the United States also prevented oil importers from making payments through the Central Bank of Iran. Other aspects of the financial sanctions include limiting Iranians' access to foreign currencies, so that the funds from oil exports can only be used for bilateral trade with the buyer country or for access to humanitarian goods.

Askari et al. (2002) believe that financial sanctions policies, which are less discussed, have had more important and longer-term effects. Financial sanctions and policies that can be adequately measured include restrictions on export financing, limits on IMF and World Bank financing, reduced commercial financing, restrictions on Iran's debt-rescheduling efforts, and reduced FDI inflows (especially in the energy sector). Effects that are not measurable include air travel restrictions, tourism, and the risk assessment of Iran, which in turn affect foreign direct investment in non-energy sectors and other joint ventures. Many international companies are also reluctant to do business with Iran because of the fear of losing access to larger Western markets. United Nations Security Council Resolution 2231 was adopted on 20 July 2015, setting out a plan to suspend and eventually abolish United Nations sanctions, with provisions to re-impose UN sanctions in case of non-performance by Iran. Under the Joint Comprehensive Plan of Action (JCPOA), which was to suspend and eventually lift UN sanctions, the EU and the United States announced almost immediately that sanctions already imposed on Iran were lifted. In practice, all sanctions imposed by the EU were removed; some US sanctions, but not all of them, were lifted. Between $100 billion and $150 billion of Iranian financial assets were released. Besides, trade sanctions that limited Iran's oil exports, as well as restrictions on imports of many goods, were also lifted.
Hence, it is expected that the lifting of the EU sanctions would have the greatest impact on macroeconomic policy in Iran and elsewhere, because oil accounts for 64 percent of Iran's export earnings and Iran has a relatively high share (8 percent) of total world exports. Furthermore, the inspections on imports and exports of Iran that were imposed as part of the sanctions regime were removed or reduced. Moreover, transport costs are expected to decrease in trade with Iran. Because the US and other partners have abolished restrictions on financial transaction services, Iran's imports of financial services are expected to increase (Ianchovichina et al., 2016). Nevertheless, on 8 May 2018, the United States announced its withdrawal from the JCPOA, also known as the "Iran nuclear deal". However, we can state that under better political circumstances, such as the absence of US sanctions, Iran would be likely to attract much higher FDI.
H9. Sanctions moderate the relationship between macroeconomic factors and the FDI inflow in Iran.

The research framework in Fig. 2 shows the impact of the selected macroeconomic factors on FDI inflows. We expect that sanctions have a moderating role in the expected relationship of the macroeconomic variables with FDI inflows in Iran.

DATA AND METHODOLOGY
The study tries to identify the determinant factors of foreign direct investment in the Iranian economy based on secondary data sources for the period 1991-2014. We seek to explain inward investment in the country based on a number of macroeconomic variables such as infrastructure, trade openness, governance, growth rate, political stability, inflation, and exchange rates. These variables have already been used in the literature as factors that may influence FDI inflows. Since international sanctions have placed restrictions on Iran's economy that may affect FDI as well as the macroeconomic factors, we study the moderating effect of sanctions on FDI during the above-mentioned period. The data are taken on an annual basis from various sources, including UNCTAD statistics, the World Bank Indicator reports, Political Risk Services (PRS), the Worldwide Governance Indicators (WGI), the International Telecommunication Union (ITU) database, as well as Iran's Central Bank reports. FDI INFLOW is the FDI actually utilised. GDP GROWTH is the real gross domestic product (GDP) growth rate; the growth rate can be a representation of the wealth of a country. Good INFRASTRUCTURE will increase investment productivity and encourage FDI inflows (Asiedu, 2002); the number of mainline telephones per 100 population is used as a proxy for the level of infrastructural development. GOVERNANCE measures the level of governance and institutional quality in a country. The data for this variable are taken from the World Bank website and the Worldwide Governance Indicators. We measure the impact of governance on FDI inflows using the KKM Index, a broad governance measure developed by Kaufmann, Kraay, and Mastruzzi (2009), which consists of the average of six indicators: voice and accountability, political stability and absence of violence, regulatory quality, government effectiveness, rule of law, and control of corruption. Trade OPENNESS represents the degree of openness of a country to international trade and foreign investors. It is measured by the ratio of total imports and exports over gross domestic product.
Further, it is recognised as a key factor in attracting FDI to the country (e.g., Cleeve et al., 2015; Sahni, 2012; Sahoo, 2006). INFLATION corresponds to annual changes in the Consumer Price Index (CPI). It is a proxy for macroeconomic stability and therefore reflects the government's overall ability to manage the economy. A high inflation rate creates uncertainty about the assets and liabilities of investors. Therefore, companies have less incentive to invest in high-inflation countries (Abdellah et al., 2012). Thus, inflation hurts FDI. POLITICAL Stability is measured by the democracy index published in the International Country Risk Guide (ICRG). Political Risk Services (PRS) publish the data; the index reflects the extent to which elections are free and fair and the degree to which the government is accountable to its electorate. The index ranges from one to six, with a higher score implying more democracy and accountability (Asiedu and Lien, 2011). EXCHANGE Rate is the time-variant real exchange rate. It represents competitiveness in international trade and the extent of market liberalization in the foreign exchange market (Yao and Zhang, 2001). The depreciation of a host country's currency makes the host country's assets attractive investment targets for foreign investors. In order to measure the INVESTMENT Return, we follow Jaspersen et al. (2000) and Asiedu (2002) in using the inverse of per capita income as a proxy for the return on investment. Thus, investing in countries with higher per capita income should have a lower return, and, therefore, real GDP per capita is inversely related to FDI inflow. SANCTIONS have been categorised based on their effect on the Iranian economy: 1) Political sanctions: frozen assets of entities determined to be supporting international terrorism, including individuals and institutions, defence contractors, and any person or entity that assists Iran in weapon development. 2) Trade sanctions: an embargo that prohibits most firms from trading with or investing in Iran. 3) Energy sanctions: sanctioning services and investment related to the energy sector, including investment in oil and gas fields, sales of equipment, and participation in activities related to oil and gas export. 4) Financial sanctions: isolating Iran from the international financial system, including the central bank, credit lines, and the SWIFT system. To measure the SANCTIONS variable, we first compiled all sanctions against Iran. From 1990 to 2014, about 25 sanctions by the United States (US), 6 sanctions by the European Union (EU) and 9 sanctions by the United Nations Security Council (UN) were imposed against Iran. We then listed the sanctions in a form and asked twenty economists to score them, and determined the significance of each sanction based on its intensity and impact on Iran's economy. As a result, financial sanctions were found to be the most effective and toughest sanctions on Iran, followed by energy sanctions, trade sanctions and, finally, political sanctions, which have the least impact on the Iranian economy. Therefore, we calculated the value of sanction effects for each year by weighting them based on the type and the number of sanctions imposed in that year. For example, in 2006, there were three political sanctions and one trade sanction; therefore, the value of the sanction variable for this year was 0.4 (2×0.1+1×0.2).
Moreover, since previous sanctions were still in place, the value of the previous year was added to the value of sanctions in the year 2006. Having established co-integration among the variables, we investigated their impact on FDI inflows. For this purpose, we use an Ordinary Least Squares (OLS) method to estimate the long-term relationships. RESULTS We used the Augmented Dickey-Fuller (ADF) unit root test in this study. As shown in Table 2, the dependent variable, FDI inflows, and the various macroeconomic variables are stationary. The numbers in parentheses are t-values. The Durbin-Watson statistic has been used to test for the presence of serial correlation among the residuals. The value of the Durbin-Watson statistic for Model 3 is 1.879, approximately equal to two, indicating no serial correlation. Table 4 demonstrates the effect of sanctions on macroeconomic factors. The results of simple linear regressions reveal that sanctions have a positive significant impact on the inflation rate (t = 16.943), trade openness (t = 6.215), infrastructure (t = 19.223), and the exchange rate (t = 6.563). Nevertheless, the international sanctions against Iran have a negative significant impact on GDP growth (t = −2.422) and governance (t = −6.903). Furthermore, sanctions have a positive impact on FDI inflows in Iran. Table 5 shows the effect of the macroeconomic variables on foreign direct investment in Iran. As expected, infrastructure is highly significant for FDI. Previous studies (e.g., Jafarnejad et al., 2011; Ramirez, 2009; Asiedu, 2005) likewise indicate a positive significant relationship between infrastructure and FDI. The governance of the host country has a negative significant impact on attracting FDI into Iran. It means that a 1 percent depreciation in the level of governance causes FDI to increase by approximately 0.44. Thus, our result is inconsistent with other studies (e.g., Mengistu and Adhikary, 2011; Samimi and Ariani, 2010; Globerman and Shapiro, 2002). Moreover, the results illustrate that the exchange rate is significant in explaining changes in FDI. The finding is in line with other studies (e.g., Nurudeen, Auta, & Wafure, 2011; Adam & Tweneboah, 2009; Kaya & Yilmaz, 2003), which found a positive impact of the exchange rate on FDI inflows. However, others, such as Masayuki and Ivohasina (2005), found that exchange rate depreciation might encourage the inflow of foreign direct investment to the host country. Furthermore, the regression results reveal that trade openness of the economy and political stability are statistically insignificant but positively related to foreign direct investment. A similar result has been reported in the context of developed countries (Jimenez et al., 2011; Bitzenis et al., 2009). Similarly, the results show that GDP growth has an insignificant effect on foreign direct investment in Iran. This is consistent with Abdel-Rahman's (2002) results, which indicate that the GDP growth rate has a positive but mainly insignificant impact on FDI in Saudi Arabia. Another result of the estimation is that the investment return of the economy has a positive relation with FDI inflows. The positive impact of investment return on FDI reflects the situation in Iran's oil and gas sectors, which have continued to attract foreign investment regardless of the imposed sanctions. Furthermore, the results show a negative effect of governance on FDI inflows; therefore, our OLS regression results are consistent with the empirical literature (e.g., Kuzmina et al., 2014).
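To make the sanctions-index construction and the moderated OLS specification described above concrete, a minimal Python sketch is given below. Only the political (0.1) and trade (0.2) weights appear in the text; the energy (0.3) and financial (0.4) weights are placeholders chosen to respect the stated severity ranking, and the column names, variable names, and example counts are hypothetical rather than taken from the study's data.

```python
# Minimal sketch of the sanctions index and the moderated OLS specification.
# Assumptions: weights for energy/financial sanctions, column names, and the
# example counts below are illustrative, not the study's actual inputs.
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

WEIGHTS = {"political": 0.1, "trade": 0.2, "energy": 0.3, "financial": 0.4}

def sanctions_index(counts: pd.DataFrame) -> pd.Series:
    """counts: rows indexed by year, one column per sanction type giving the
    number of new sanctions imposed that year. Yearly weighted sums are
    accumulated because earlier sanctions are assumed to remain in force."""
    yearly = sum(counts[kind] * weight for kind, weight in WEIGHTS.items())
    return yearly.cumsum()

def adf_pvalue(series: pd.Series) -> float:
    """Augmented Dickey-Fuller p-value, used to screen each variable for
    stationarity before estimating the long-run relationship."""
    return adfuller(series.dropna())[1]

def moderated_ols(df: pd.DataFrame, factors: list):
    """Model 3-style regression: FDI on the macro factors, the sanctions
    index, and sanctions-by-factor interaction terms (the moderation effect)."""
    X = df[factors].copy()
    X["SANCTIONS"] = df["SANCTIONS"]
    for f in factors:
        X["SANCTIONS_x_" + f] = df["SANCTIONS"] * df[f]
    return sm.OLS(df["FDI"], sm.add_constant(X)).fit()

# Illustrative usage with made-up counts:
counts = pd.DataFrame(
    {"political": [1, 0], "trade": [0, 2], "energy": [0, 1], "financial": [0, 0]},
    index=[1995, 1996],
)
print(sanctions_index(counts))  # 1995: 0.1; 1996: 0.1 + (2*0.2 + 0.3) = 0.8
```

Calling .summary() on the fitted model reports coefficient t-values and the Durbin-Watson statistic, the quantities discussed in the results above; the interaction terms are the standard way of estimating a moderating effect in an OLS framework.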
The worse the governance quality, the less foreign investment we observe in Iran. A common explanation of this evidence would be that corruption and potential pressure create uncertainty for investors in terms of their future cash flows, acting as an additional tax and increasing the risks of business capture, thereby decreasing the attractiveness of a particular region (Kuzmina et al., 2014). Moreover, the results for Model 3 show that when sanctions moderate the relationship between macroeconomic determinants and FDI, political stability and the exchange rate have a significant negative impact on FDI inflow in Iran. DISCUSSION AND CONCLUSIONS Following the literature, our research examines the established macroeconomic factors in the context of Iran, which is quite different from that of other countries, given its unique geographical and historical situation and especially the unique international sanctions faced by the country. Comprehensive research into the established FDI macroeconomic factors in Iran in the light of the unique international sanctions would, therefore, throw new light on the subject, particularly the impact of international sanctions on incoming FDI in a country. This is obviously an important issue for post-revolutionary as well as post-sanctions Iran. Our findings demonstrate that most of the macroeconomic factors have an impact on FDI inflow into Iran. Gross & Trevino (1996) state that countries with high levels of GDP growth are highly inclined to increase foreign direct investment flows by attracting trust from multinational corporations and encouraging them to invest. Similarly, according to Biglaiser & DeRouen (2011), greater economic development attracts investors, who view such a market as offering a high return on investment. Further, a positive relationship between infrastructure and FDI implies that the development of infrastructure will increase inflows of FDI to Iran (Alavinasab, 2013). FDI investors usually look for a location that has suitable infrastructure such as roads, transportation, and telecommunications. Investing in developed host markets can reduce investors' production costs and thus increase their profits. Foreign investors in Iran are mainly focused on the energy sectors, including oil and gas and petrochemicals, as well as telecommunications, car manufacturing, and the mining industries. Javidan & Dastmalchian (2003) have, however, indicated that there are two internal movements with very different views on the future direction of the country: those who are not opposed to development and consider it a means to achieve religious goals; and those who feel that the survival of Islam and the progress of Iran require a more modern perspective. A continued confrontation between the two streams has caused political instability and turmoil and has slowed the progress of the country. Furthermore, our finding indicates that sanctions have a significant impact on governance. This implies that sanctions may lead to corruption and bad governance, which increase administrative costs and, therefore, reduce FDI inflows. Governance also affects the security of property rights, transparency, and legal process. Furthermore, the results indicate a negative effect of sanctions on GDP. This may be due to embargos on Iran's oil and gas, which reduce oil exports, given that real GDP growth in Iran depends heavily on oil.
Therefore, sanctions have had a significant impact on Iran's economic growth in recent years due to an increase in the severity of sanctions. Additionally, sanctions have a positive impact on the inflation rate and the exchange rate in Iran. When international financial sanctions hampered access to oil revenues, Iran experienced a currency crisis that led to a sharp decline in the Rial. On the other hand, the government struggled to obtain foreign currency for its import needs, since the demand for foreign currencies exceeded supply, which in turn led to a depreciation of the Rial. Moreover, sanctions are not the only cause of the exchange rate crisis in recent years. Our results indicate that sanctions do not have a significant moderating role in the relationship between macroeconomic factors and foreign direct investment. Surprisingly, international sanctions have a positive relationship with FDI inflows in Iran. It means that, despite the sanctions, some multinational companies have recognised the opportunities in the Iranian market as a developing economy and have invested in less-threatened industries. Moreover, the special conditions of Iran in the years after the Iran-Iraq war, including the abundance of natural resources, the geographic location, the young and educated population, and the growing economy, have made the country one of the targets of foreign direct investment. Over the years, however, sanctions have had serious consequences for the people and the economy of Iran. Nevertheless, the impact of sanctions is often denied in the Iranian press. Iran has taken measures to circumvent sanctions, in particular through barter trade and with the help of front countries or companies. Moreover, in response to the sanctions, the Iranian government has backed a "resistance economy", including more domestic use of oil due to limited export markets and the use of alternative industries. After the agreement between Iran and the P5+1 in 2015, the so-called post-sanction era began in Iran. Sanctions relief will affect Iran's economy in four main ways: (1) the release of Iran's frozen funds abroad, estimated at over $100 billion as of 2015; (2) the lifting of the sanctions against Iran's oil exports; (3) allowing foreign companies to invest in oil and gas, automobiles, hotels and other sectors in Iran; (4) permitting trade with the rest of the world and access to the global banking system, such as SWIFT. With the lifting of sanctions, prospects are brighter for Iran, with new opportunities arising in oil and gas and in investment in manufacturing industries. Iran's government has established several incentive programs in order to encourage foreign companies to invest in Iran. Providing incentives will attract more foreign investment, create jobs, provide access to new technologies and result in other social and economic benefits. However, Cleeve (2008) argues that incentive costs outweigh their benefits, and he believes that improvements in local infrastructure, political stability, and macroeconomic stability are better tools for stimulating foreign direct investment inflows. Nevertheless, in order to maximise the benefits of sustainable development through FDI (and other external sources of finance), policymakers must be mindful of minimising risks.
Therefore, risks can be minimised through good governance, stakeholder participation, creating relevant local capacities, increasing absorption capacity (entrepreneurship, technology, skills, and communication), and creating effective standards and a regulatory framework (UNCTAD, 2015). Haidar (2015) asserts that while export sanctions against Iran have not reduced total exports, they have increased export costs. If the goal is to reduce total exports, export sanctions may not be effective in a global economy. He argues that sanctions may be less effective in a globalised world because exporters can shift their exports from one export destination to another. Thus, the idea that a country can impose trade sanctions on another does not necessarily prove the effectiveness of such sanctions. The Iranian government must strive for further deregulation of its economy to attract more foreign direct investment. Indeed, the inflow of foreign direct investment has increased since the introduction of the investment incentive programme in 2005. The sanctions on Iran and the current crisis in the Middle East region have been a major source of instability in Iran's economy. Thus, the restoration of peace in the region and the removal of sanctions will encourage more foreign investment in Iran. Furthermore, Iran needs to increase the competitiveness of its investment environment by investing more in infrastructure, ultimately increasing the inflow of FDI. Finally, all of the above-mentioned considerations should be accompanied by ongoing reforms in the Iranian economy.
9,247
2020-01-01T00:00:00.000
[ "Economics", "Business" ]
Reprocessable Networks from Vegetable Oils, Salts, and Food Acids: A Green Polymer Outreach Demonstration for Middle School Students Massive amounts of mismanaged plastic waste have led to growing concerns about their adverse impacts on the environment, ecosystems, and human health. Enabling efficient plastic recycling is a key component of developing a sustainable future, which requires cohesive efforts in technology innovation, public awareness, and workforce development. In particular, outreach activities that inform the broader community about current efforts to fabricate sustainable polymeric materials can play a central role in inspiring future generations while also improving their knowledge, viewpoints, and behaviors to address plastic waste challenges. Herein, this account demonstrates an effort to educate middle school students about a key emerging concept in polymer science for sustainable material development: reprocessable polymer networks. Background information is provided to the students about the need to transition from petroleum-based chemical feedstocks to their bioderived counterparts. We note that the materials used in this demonstration lesson are all produced from common household foods, with which students routinely interact in various applications, making them not only safe but also compelling for the middle school classroom. ■ INTRODUCTION Polymer materials have a ubiquitous role in our everyday lives due to their low cost, ease of production, and ability to access a broad range of different properties. 1 The annual production volume of plastics has reached nearly 500 million metric tons. To address these challenges, the plastic industry is experiencing an exciting transition from a linear to a circular system, of which recycling is a crucial component. Collaborative efforts from government, industry, and academia have made significant contributions in research and outreach, which provides an exciting opportunity to educate future generations about the design of and need for sustainable polymer materials. 5 In general, there are two major types of polymers: thermoplastics and thermosets. 6 The key difference between them is associated with processability (i.e., the ability to transform from materials into products). Specifically, thermoplastics can flow at elevated temperatures, enabling them to be engineered into different products, while thermosets are crosslinked and maintain a permanent shape. The crosslinks are covalent bonds that interconnect different polymer chains into a network, making thermosets very challenging to (re)process into new materials and products. Currently, thermoplastics are significantly easier to recycle at a large scale. Commercially, the majority of recycling taking place is mechanical recycling (melting/reprocessing into a new product). However, it is best suited for single-plastic waste streams that are extremely difficult to obtain. 7 Chemical recycling (e.g., depolymerization) is another option, but it is used far less frequently on a commercial level due to its high energy requirements and costs. As a result, the recycling of thermosets is an intractable challenge in the plastic industry. 8
In recent years, extensive research efforts have been focused on the development of reprocessable networks as an alternative to conventional thermosets. These materials contain crosslinks that can be dynamic (or rearranged) under specific conditions (different external stimuli), such as heat or light, enabling simultaneous crosslinking features and processability, which hold great potential for various applications including self-healing coatings, stimuli-responsive materials, and recyclable composites. 9 Moreover, an important aspect in designing new materials is how the process aligns with the principles of Green Chemistry, 10,11 which is defined as the "design of chemical products and processes to reduce or eliminate the use and generation of hazardous substances". Specifically, there is a strong need to transition from petroleum-based systems to bioderived and renewable resources, providing a potential opportunity to reduce carbon emissions and other environmental impacts from current plastic manufacturing activities. This paper describes a simple demonstration to middle school students of the concept of reprocessable polymer networks using household, food-relevant resources, whereas the current educational literature on reprocessable networks primarily focuses on undergraduate laboratory experiments/research and not K−12 outreach. 12,13 This demonstration can be accomplished over two 50 min class periods. Specifically, the first class focuses on introducing fundamental concepts about polymers, their recycling challenges, and their environmental impacts. In the second class, hands-on demonstrations are provided to show the differences between conventional polymer networks and their reprocessable counterparts that are derived from food-grade acids, vegetable oils, and salts. Through these engaging activities, students can increase their knowledge of chemistry and polymer science while becoming more aware of, and inspired by, the opportunity to address the plastic waste challenge through science and technology. Materials and Consumables The necessary materials and equipment for the demonstration are listed below: • Epoxidized soybean oil (ESO) • Citric acid • Sodium bicarbonate (i.e., baking soda) • Ethanol • Water (tap water is suitable for demonstration purposes) • Glass Petri dishes • ∼500 g weight • Aluminum weigh boats, glass vials, pipettes, a razor blade, gloves, and a hot plate. The soybean oil used was obtained from Thermo Scientific and was epoxidized with meta-chloroperoxybenzoic acid (mCPBA, described in the Supporting Information) before synthesizing the polymer networks. This step is accomplished in the lab prior to visiting middle school classrooms. We note that epoxidized soybean oil is also commercially available. Ethanol (200 proof) was obtained from Fischer Chemical, and citric acid and sodium bicarbonate were obtained from Sigma-Aldrich; all were used as received. Procedures for Material Synthesis Both the thermoset and the reprocessable network are based on the esterification reaction between the epoxide groups in epoxidized soybean oil (ESO) and the carboxylic acid groups of citric acid. The chemical structures of the compounds used in this demonstration are shown in Figure 1.
To prepare the thermoset, ESO (5.0 g, 5.07 mmol, 1 equiv) and citric acid (1.3 g, 6.74 mmol, 1.33 equiv) were weighed into separate glass vials to give a 1:1 epoxide/carboxylic acid ratio. Subsequently, a small amount of ethanol (∼5 mL) was added to the vials to dissolve the ESO and citric acid. The contents of the vials were then poured into an aluminum weigh boat for evaporation. After removal of the ethanol, the reaction mixture was heated on a hot plate at 100 °C for 30 min and then at 125 °C for 45 min for curing. A similar procedure was employed for preparing the reprocessable networks, with the addition of sodium bicarbonate (63 mg, 1 wt %) to allow reprocessing to occur. Fourier transform infrared spectroscopy (FTIR) was used to confirm the formation of the materials during the design of the demonstration but is not necessary for future instructors; the reported procedure is reproducible, and the results can be found in the Supporting Information. Demonstrating Reprocessability To demonstrate reprocessability, two pieces of each network (∼1.3 cm squares) were cut from the samples using a razor blade and placed on top of a glass Petri dish. The pieces from each network were then stacked, partially overlapping. The bottom of the Petri dish was then placed on top of the dish to give a flat surface for adding the weight. The Petri dish was then moved to a hot plate set to 125 °C with the 500 g weight on top for approximately 10 min. Subsequently, the weight was removed, and the samples were allowed to cool naturally to room temperature. Students were then allowed to come forward and look at the two networks up close, as well as put on gloves and touch samples of both networks to further understand their differences. Detailed procedures for material preparation are included in the Supporting Information. ■ SAFETY AND HAZARDS Personal protective equipment must be worn when performing the demonstration (including a lab coat, safety glasses, gloves, and closed-toe shoes). During heating with the hot plate, care should be taken due to the elevated temperatures, and heat gloves should be worn. Epoxidized soybean oil is slightly irritating to the eyes and skin. Citric acid should be used in a well-ventilated area, and ethanol is flammable and should not be used around an open flame. It is also recommended that extra caution be used when handling the razor blade to avoid cuts. Rationale of Chemical Selections Vegetable oils are triglycerides extracted from various plants and have been used in food and cosmetic applications for centuries. In recent years, vegetable oils have gained significant attention in academic research and industrial applications due to their low cost, renewability, biodegradability, abundance, and versatility. Specifically, vegetable oil-based biofuels have become a promising renewable energy source. 14 In this demonstration, soybean oil was selected due to its prevalence, as it is one of the most widely produced vegetable oils, second only to palm oil, with over 40 million metric tons produced worldwide each year for the past decade. 15 Moreover, this raw material is abundant in the southern regions of the U.S., which can make the material feel more relevant to Mississippi students and keep them engaged. Citric acid, the other main component of the networks (the crosslinker), is an organic tricarboxylic acid found in a variety of citrus fruit juices and in pineapples, with broad applications ranging from a vitamin preservative to use in the detergent industry as a substitute for phosphates. 16
We note that citric acid is environmentally friendly and can be commercially produced by different microorganisms (e.g., bacteria, fungi, and yeast). 17 Baking soda, the catalyst added to the reprocessable networks, is a water-soluble acid salt of sodium and bicarbonate with a wide range of applications, including cleaning, fire extinguishing, deodorizing, and baking. Baking soda is made from soda ash (sodium carbonate), which can be prepared using the Solvay process or from mined trona ore. 18 We note that these three reagents (all shown in Figure 2a) were selected because it is very likely that most students have already interacted with them, such as in their home kitchens. It is important to recognize that using bioderived materials does not necessarily mean they are more sustainable and/or "greener" than their petroleum-derived counterparts. 17 However, for the purpose of outreach, starting with materials that students are familiar with, instead of other "unfamiliar chemicals", makes the demonstration and the concept easier to follow and grasp. Additionally, using reagents that can be found in nature or are biorenewable helps to address and make connections with socioscientific issues such as sustainability, which can inspire the students to develop improved critical thinking skills and greater awareness of the world around them. 19 Figure 2b shows a cured, reprocessable network sample. After heating, both types of samples were relatively translucent (sample roughness caused opacity in some samples), and the thermosets tended to be more yellow in color, while the color of the reprocessable networks was a pale yellow. Mechanisms of Cross-Linking and Reprocessability The reaction scheme of network formation and transesterification can be found in Figure 3. From a fundamental chemistry perspective (information for teachers), Figure 3b shows the transesterification between the esters initially formed from the citric acid and the epoxide (Figure 3a) and the free alcohols formed after epoxide ring-opening. This reaction enables the reprocessable network to rearrange and flow with the application of heat and pressure. However, transesterification will not occur under neutral, uncatalyzed conditions, leading to the permanent shape formed in the thermosetting samples. Specifically, baking soda serves as a basic catalyst in this system and causes a two-step addition−elimination process to occur (nucleophilic acyl substitution). To briefly describe this process in a grade-appropriate manner: by the time students are in the 7th−8th grade, they have been taught what atoms/elements are, so we explained the concept of functional groups as specific arrangements of atoms that allow reactions to occur. Only certain functional groups can react together under specific conditions, and they can be considered puzzle pieces that click together (Figure 3c). Once students understand the concept of catalysis, it is easier for them to grasp how the conditions can be made less intense (lower temperatures) for reactions to proceed, or how reactions can be made more efficient.
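For instructors who want to scale the formulation in the Procedures section up or down, the short Python sketch below back-calculates the component masses for a 1:1 epoxide/carboxylic acid ratio. The ESO molar mass (~986 g/mol) and an epoxide functionality of roughly four groups per triglyceride are assumptions inferred from the quoted amounts (5.0 g = 5.07 mmol of ESO paired with 1.3 g of citric acid), not values stated in the text; commercial epoxidized soybean oil varies from batch to batch.

```python
# Back-of-the-envelope formulation helper. ESO molar mass and epoxide
# functionality are assumptions inferred from the reported quantities.
ESO_MOLAR_MASS = 986.0      # g/mol, assumed
EPOXIDES_PER_ESO = 4        # assumed average functionality per triglyceride
CITRIC_MOLAR_MASS = 192.12  # g/mol
COOH_PER_CITRIC = 3

def formulation(eso_mass_g, catalyst_wt_frac=0.01):
    """Return (citric acid mass, baking soda mass) in grams for a 1:1
    epoxide/COOH ratio with ~1 wt % sodium bicarbonate catalyst; omit the
    catalyst for the permanent thermoset."""
    epoxide_mol = eso_mass_g / ESO_MOLAR_MASS * EPOXIDES_PER_ESO
    citric_mass_g = epoxide_mol / COOH_PER_CITRIC * CITRIC_MOLAR_MASS
    catalyst_mass_g = catalyst_wt_frac * (eso_mass_g + citric_mass_g)
    return citric_mass_g, catalyst_mass_g

citric, nahco3 = formulation(5.0)
print(f"citric acid: {citric:.2f} g, baking soda: {nahco3 * 1000:.0f} mg")
```

With these assumptions, formulation(5.0) returns about 1.30 g of citric acid and 63 mg of baking soda, matching the reported recipe.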
A simple demonstration was done for the class by visualizing the differing nature of a permanently crosslinked network and its reprocessable counterpart. The sample containing baking soda has a rearrangeable network structure and, thus, the ability to flow under pressure at elevated temperatures. As shown in Figure 4, a sample of each network was first provided for the students to compare their appearances. During the demonstrations at the school, the samples were not dyed, but they are dyed here for clarity, so that it is easier to follow the shape changes in the materials. Methyl red and methylene blue hydrates (both obtained from TCI Chemicals) in ethanol were used to dye the sample pieces. Subsequently, both samples were cut in half and stacked partially overlapping, followed by the application of pressure and heat (500 g weight at 125 °C). It can be observed that the reprocessable network becomes a continuous network, while the thermoset remains a separate piece that cannot be reformed. Here, students noticed that the reprocessable network has also become much flatter, because not only is it reformable but the heat also allows the material to flow (confirmed by the flow features of the incorporated dyes), while the thermoset is not able to; thus, it keeps a similar shape (and thickness) to its original. This can also be seen in the images in Figure 4, where the thermoset dye concentrations are about the same before and after heat and pressure are applied; the only major difference is the distortion and slight fractures, which can be attributed to the applied pressure during the demonstration. In the reprocessable network, however, the dye appears more dilute in the matrix after heat and pressure due to the processable nature of the sample, leading to the formation of characteristic flow patterns.
Classroom Activities This demonstration was performed in the classroom of a local middle school (7th grade) over two 50 min class periods. A total of ∼80 students participated across 5 classes. During the first class, instructors began with a Kahoot! quiz that asked students questions to gauge their understanding of polymers/plastics, followed by a lecture covering the information needed to understand the demonstration (all instructional materials and instructor notes can be found in the Supporting Information). The lecture goes over the information in the introduction section. Specifically, we covered a basic introduction to polymers (and their difference from small molecules), the applications of polymers in various areas, what recycling is and why it is important (which helps students understand the importance of sustainability), and the common categories of polymers (thermoplastics vs thermosets), as well as reprocessable networks. These topics are crafted to smoothly transition students toward understanding a crucial, albeit slightly complex, concept by starting with foundational knowledge, while integrating insights on the challenges and opportunities for enhancing environmental sustainability. Throughout the lecture, we employ examples that can resonate with middle school students, such as the use of plastics for food packaging and textiles, making the scientific principles accessible and facilitating effective learning. This itself is not a part of the demonstration but is necessary to understand the demonstration if lectures on polymer science have not already been given in the existing curriculum. To further aid student understanding, while the lecture occurred, students were given print-outs of the slides with words removed/blanks to fill in (see the Supporting Information). In the second class, the demonstration was introduced, and students were then given vials with the starting materials and asked to make observations guided by a handout. While the networks were prepared by the instructors, students were led through the process with pictures, and the formed networks were passed around so they could observe and compare them. The students were asked to make a hypothesis/educated guess about which network would be able to be reformed upon subsequent cutting, heating, and application of pressure. After the demonstration, the students were again asked to write down their observations, evaluate whether their hypothesis was correct, and explain why using their post-demonstration observations. The demonstration handout (provided in the Supporting Information) contains some questions to guide the students, which also allow the instructor to assess the learning outcomes. The questions were designed around learning targets based on the Mississippi College- and Career-Readiness Standards for Science 20 (P.7.5A, P.7.5B (1 and 3), and E.7.9B), with four learning targets for the students: (1) define what a polymer is, (2) describe how the two networks are different (based on observation), (3) be able to differentiate recyclable and nonrecyclable materials, and (4) explain how current research is targeting sustainability (Table 1).
The design of the lesson implemented several pedagogical strategies, including active learning and game-based learning. Since active learning is widely integrated into the K−12 classroom, it was strategically integrated into the presentation of this lesson. Specifically, game-based learning 24 and multimedia learning were implemented through the Kahoot! quiz, since the students used the class set of laptops to complete the quiz. Think-pair-share 25 was also incorporated; when a question was asked, students first developed their own thinking prior to discussing their responses with classmates and subsequently did their worksheets in groups. The design of hands-on experiments and demonstrations to help students visualize the difference in processability between distinct samples leverages the experiential learning strategy. 26 Bloom's taxonomy 27 was also employed during the planning; first, the focus was on simply communicating the information, and by the end of Day 2, higher-level thinking was required to complete the worksheet, with peer discussion encouraged. While this demonstration/lesson plan was originally designed for seventh-grade science students in Mississippi, the learning standards from state to state in the U.S. are similar; therefore, adapting this to fit individual classrooms should be relatively simple with the instructor notes provided. Potential room for improvement is also identified in the Supporting Information, along with suggested solutions. For example, if your classroom does not have the capability for every student to have a phone/laptop to play the Kahoot!, we recommend adapting the questions into another type of game, so it is still a fun introduction for the students. Opportunities also exist for the application of this demonstration in a high school or even college setting as a hands-on experiment. Details are in the Supporting Information, but this is proposed to take place over four class periods, with day 1 as lecture, day 2 for preparing the two samples, day 3 for curing the samples, and day 4 for testing reprocessability. Students' Response The students were quite excited for the Kahoot! quiz, and starting with the use of multimedia (e.g., laptops) helped to grab their attention for the rest of the lecture. They were engaged throughout the lessons, particularly during observation of the demonstrations. Students were also excited when they were given gloves and interacted with the different polymer networks, all derived from common household foods. Overall, the students had a fun time with the hands-on, nontraditional lesson, and the chosen pedagogical approaches were found to be successfully implemented.
Test Lesson Handouts were given at the conclusion of each lecture to determine the middle school students' level of understanding and to evaluate the learning objectives. This demonstration was performed for ∼80 students over a two-day period. After day one, the students were given an exit ticket and asked to write down a new definition that they had learned and what they wanted to learn more about on the second day. This was done to see if they would be able to give the definition of a polymer or a polymer-related term. When reviewing their answers, it was found that 56% did not leave with a good understanding and/or could not communicate clearly; 31% understood and were able to communicate clearly; and 13% excelled and exceeded expectations. These categories were based on their responses to the question of what they wanted to learn about in the next meeting (building on what was covered during this class) and were made by comparing their answers to each other's. More specifically, among the 56% who did not leave with a good understanding and/or could not communicate clearly, more than 50% of them seemed to have an understanding of the overarching concepts (such as describing the reprocessing and cross-linking nature), but their use of scientific terminology was incorrect (i.e., they switched up the words "monomer" and "polymer"). Answers varied from "polymers" to complex concepts regarding some of the current research being done in universities that was briefly mentioned during the lecture. After the second day, the handout asked questions probing students' understanding of the demonstration. We note that the question sheet (for learning outcome assessment) as well as the answer key are included in the Supporting Information, which teachers can either use directly or tailor to their own class. Through our demonstration, it was found that 84% of students could effectively describe what happened to the materials during curing (from Concluding Question 1), 58% of students recognized the need for adding baking soda as a catalyst (from Concluding Question 2), and 62% of students were able to understand the concept of reprocessable networks and their difference compared to permanently cross-linked networks (from Concluding Questions 3−5). Concluding Question 6 is an open question, for which 85% of students provided an answer that explained how the results confirmed or refuted their original hypotheses. We believe these results confirm that our demonstration was grade-appropriate and successfully indicated the students' understanding of the learning targets. Therefore, this demonstration could effectively help middle school students grow their knowledge of polymer materials and become more aware of sustainability needs.
■ SUMMARY This work presents a middle school demonstration aiming to increase general knowledge of sustainable polymers while showing how polymer scientists are currently developing new materials to address the plastic waste problem. The materials used in this demonstration include epoxidized soybean oil, citric acid, and baking soda, all common household, bioderived food items that are readily available in grocery stores. The procedures described do not involve high-cost equipment and are suitable for middle school students. In the Supporting Information, we provide all PowerPoint presentations and handouts used, as well as thorough instructor notes. The students are asked thought-provoking questions to assess their understanding of polymers/plastics, are brought through the process of making the networks, and the reprocessability of the dynamic networks is demonstrated in real time. ■ Figure 1. Chemical structures of the reagents used in the formation of the networks. Figure 2. (a) Starting reagents and solvent for the networks and (b) a cured reprocessable network. Table 1. Summary of Pedagogical Goals and How They Were Implemented, from the 2018 Mississippi College- and Career-Readiness Standards for Science for 7th graders.
5,061
2024-06-05T00:00:00.000
[ "Environmental Science", "Education", "Materials Science" ]
Live cell-lineage tracing and machine learning reveal patterns of organ regeneration Despite the intrinsically stochastic nature of damage, sensory organs recapitulate normal architecture during repair to maintain function. Here we present a quantitative approach that combines live cell-lineage tracing and multifactorial classification by machine learning to reveal how cell identity and localization are coordinated during organ regeneration. We use the superficial neuromasts in larval zebrafish, which contain three cell classes organized in radial symmetry and a single planar-polarity axis. Visualization of cell-fate transitions at high temporal resolution shows that neuromasts regenerate isotropically to recover geometric order, proportions and polarity with exceptional accuracy. We identify mediolateral position within the growing tissue as the best predictor of cell-fate acquisition. We propose a self-regulatory mechanism that guides the regenerative process to identical outcome with minimal extrinsic information. The integrated approach that we have developed is simple and broadly applicable, and should help define predictive signatures of cellular behavior during the construction of complex tissues. Introduction Understanding organogenesis, organ morphostasis and regeneration is crucial to many areas of biology and medicine, including controlled organ engineering for clinical applications (Lancaster et al., 2013; Boj et al., 2015; Sato and Clevers, 2015; Willyard, 2015). External tissues sustain continuous injury and must recurrently repair to maintain physiological function during the life of the organism (Levin, 2009). Structural reproducibility depends on the re-establishment of cell identity, number, localization and polarization. Two aspects of organ regeneration are the current focus of intense attention. First, how multiple cells interact to recapitulate organ architecture. Second, what is the mechanism that controls the correct reproduction of cell number and localization. Here we use the neuromasts of the superficial lateral line in larval zebrafish to gain a global perspective on sensory-organ regeneration. The neuromasts are ideally suited for this purpose because they are small and external, facilitating physical access and three-dimensional high-resolution videomicroscopy of every cell during extended periods. We have combined live single-cell tracking, cell-lineage tracing, pharmacological and microsurgical manipulations, and multidimensional data analysis by machine learning to identify features that predict cell-fate decisions during neuromast repair. Our comprehensive approach is simple and model-independent, which should facilitate its application to other organs or experimental systems that are accessible to videomicroscopy. It should help reveal the basic rules that underlie how complex structures emerge from the collective behavior of cells. Complete neuromast ablation is irreversible in larval zebrafish The neuromasts of the superficial lateral line in zebrafish are formed by a circular cuboidal epithelium of 60-70 cells (López-Schier and Hudspeth, 2006; Ghysen and Dambly-Chaudière, 2007; Norden, 2017). Mechanoreceptive hair cells occupy the center of the organ, whereas non-sensory sustentacular supporting cells are found around and between the hair cells (Figure 1A). A second class of supporting cell called mantle cells forms the outer rim of the organ.
The invariant spatial distribution of these three cell classes generates a radial symmetry (Figure 1B) (Pinto-Teixeira et al., 2015). Neuromasts also have an axis of planar polarity defined by the orientation of the hair cells' apical hair bundle (Figure 1C) (Ghysen and Dambly-Chaudière, 2007; Wibowo et al., 2011). In addition to this geometric organization, cell-class number and proportions are largely constant, with around 40 sustentacular, 8-10 mantle, and 14-16 hair cells. Non-sensory cells can proliferate, whereas the sensory hair cells are postmitotic (López-Schier and Hudspeth, 2006; Ma et al., 2008; Cruz et al., 2015; Pinto-Teixeira et al., 2015). Finally, a string of interneuromast cells connects each neuromast along the entire lateral-line system (Figure 1A) (Ghysen and Dambly-Chaudière, 2007). Previous studies have extensively characterized the regeneration of the mechanosensory hair cells (Williams and Holder, 2000; Harris et al., 2003; López-Schier and Hudspeth, 2006; Hernández et al., 2006; Ma et al., 2008; Behra et al., 2009; Faucherre et al., 2009; Wibowo et al., 2011; Namdaran et al., 2012; Steiner et al., 2014; Jiang et al., 2014). However, the regeneration of non-sensory cells remains largely unexplored. To obtain quantitative data on whole sensory-organ regeneration, we developed an experimental assay that combines controllable neuromast damage, long-term videomicroscopy at cellular resolution, and live cell-lineage tracing. We used combinations of transgenic lines expressing genetically encoded fluorescent proteins that allow the precise quantification and localization of each cell class in neuromasts, and which also serve as a direct and dynamic readout of tissue organization. This is important because it enables the visualization of cell-fate transitions in living specimens within the growing tissue at high temporal resolution. Specifically, the Tg[alpl:mCherry] line expresses cytosolic mCherry in the mantle and interneuromast cells (Figure 1D). The Et(krt4:EGFP)sqgw57A (hereafter SqGw57A) expresses cytosolic GFP in sustentacular cells (Figure 1E). The Tg[-8.0cldnb:LY-EGFP] (Cldnb:lynGFP) expresses a plasma-membrane-targeted EGFP in the entire neuromast epithelium and in the interneuromast cells (Figure 1F), and the Tg[Sox2-2a-sfGFP] (Sox2:GFP) expresses cytosolic GFP in all the supporting cells and the interneuromast cells (Figure 1G). For hair cells, we use Et(krt4:EGFP)sqet4 (SqEt4), which expresses cytosolic GFP (Figure 1H), or the Tg(myo6b:actb1-EGFP) (Myo6b:actin-GFP), which labels filamentous actin (Figure 1I). These transgenic lines have been previously published, but are reproduced here for clarity and self-containment of this work (López-Schier and Hudspeth, 2006; Kondrychyn et al., 2011; Kindt et al., 2012; Shin et al., 2014; Steiner et al., 2014; Pinto-Teixeira et al., 2015). To induce tissue damage in a controllable and reproducible manner, we used a nanosecond ultraviolet laser beam that was delivered to individual cells through a high numerical-aperture objective, which was also used for imaging. The stereotypic localization of the neuromasts along the zebrafish larva varies only marginally between individuals and during larval growth (Figure 1J) (Ledent, 2002; López-Schier et al., 2004). This permits the unambiguous identification of the manipulated neuromast throughout the experiment, and the comparison between corresponding organs in different animals.
Using Sox2:GFP 5-day-old zebrafish larvae that ubiquitously express a nucleus-targeted red-fluorescent protein (H2B-RFP) (Figure 1K-L), we certified that laser-targeted cells are rapidly eliminated from the neuromast epithelium with no detectable collateral damage (Figure 1M-P and Video 1). Having established a well-controlled injury protocol, we decided to probe the limits of neuromast regeneration. We first used specimens co-expressing Alpl:mCherry and Cldnb:lynGFP, which reveal all neuromast cells in green and the mantle cells in red (Figure 2A). We began by ablating entire neuromasts and assessed regeneration for 7 days (Figure 2B-E). Specifically, we looked at the response of flanking interneuromast cells because it has been demonstrated that they can proliferate and generate additional neuromasts, particularly upon loss of ErbB2 signaling (López-Schier and Hudspeth, 2005; Grant et al., 2005; Sánchez et al., 2016). Four hours post-injury (4 hpi), a wound remained evident at the target area (Figure 2B). One day post-injury (1 dpi), the damaged area was occupied by a thread of Alpl:mCherry(+) cells, which, based on marker expression, are interneuromast cells (Figure 2C). None of the removed neuromasts regenerated after 7 days (n = 22) (Figure 2D-E). We obtained an identical outcome using the independent pan-supporting-cell marker Sox2:GFP (n = 9) (Figure 2F-J). Finally, incubation of Alpl:mCherry specimens with Bromodeoxyuridine (BrdU), to reveal the DNA synthesis that occurs prior to mitosis, showed that interneuromast cells do not proliferate after neuromast ablation (Figure 2K-N) (Gratzner, 1982). These data indicate that, in contrast to what occurs in embryos (Sánchez et al., 2016), the complete elimination of a neuromast is irreversible in larval zebrafish. Neuromasts have isotropic regenerative capacity To further explore neuromast repair, we decided to use milder injury regimes. We systematically produced controlled damage of well-defined scale and location in double-transgenic specimens that combine the supporting-cell marker Cldnb:lynGFP and the mantle-cell marker Alpl:mCherry (Figure 3A-O). We found that ablation of the posterior half of the neuromast was followed by closure of the wound within 24 hr (Figure 3A-C). At 3 dpi, target neuromasts regained the normal cell-class spatial distribution (n = 6) (Figure 3D). At 7 dpi, neuromasts recovered approximately 70% of the normal cell number (Figure 3E,Z). We found no difference in the speed and extent of regeneration after concurrently ablating the posterior half of neuromasts and the flanking interneuromast cells (n = 5) (Figure 3F-J,Z). The ablation of the posterior or the dorsal half of the epithelium resulted in an identical outcome, suggesting that neuromasts are symmetric in their regenerative capacity (n = 6) (Figure 3K-O,Z). Next, we assessed mantle-cell regeneration using a double-transgenic line expressing Sox2:GFP and Alpl:mCherry, which reveal mantle cells in red and sustentacular cells in green (Figure 3P-Y). The complete elimination of mantle cells was followed by their re-emergence 3 dpi (Figure 3Q-S), and the reconstitution of the outer rim of the neuromast 7 dpi (n = 15) (Figure 3T,Z). The simultaneous ablation of the mantle cells and the adjacent interneuromast cells led to an identical outcome (n = 6) (Figure 3U-Z).
The ablation of the interneuromast cells in fish co-expressing Sox2:GFP and Alpl:mCherry on one side of a neuromast (n = 12), or between two adjacent organs (n = 8), did not trigger the proliferation of the remaining interneuromast cells over a period of 7 days (Figure 3-figure supplement 1A-J). Because the complete ablation of mantle cells leaves the sustentacular-cell population intact, and the hair cells are postmitotic, these results yield three important and novel findings: (1) interneuromast cells are not essential for neuromast regeneration in larval zebrafish, although they may contribute to mantle-cell regeneration; (2) neuromasts have isotropic regenerative capacity; (3) sustentacular cells are tri-potent progenitors able to self-renew and to generate mantle and hair cells. Neuromast architecture recovers after severe loss of tissue integrity To test the limits of neuromast regeneration, we systematically ablated increasing numbers of cells. Extreme injuries that eliminated all except 1 to 3 cells almost always led to neuromast loss (not shown), whereas ablations that left between 4 and 10 cells, reducing the organ to a combination of 2-3 mantle and 2-7 sustentacular cells, allowed regeneration (Figure 4A-E,K). We found that after losing over 95% of their cellular content, neuromasts recover an average of 45 cells at 7 dpi (or approximately 70% of the normal cell count), with exceptional cases reaching 60 cells (equivalent to over 90% of a normal organ) (n = 15) (Figure 4K). Regenerating neuromasts became radially symmetric as early as 3 dpi (Figure 4D), and had normal cell-class composition and proportions 7 dpi (Figure 4L-M). Next, we concurrently ablated 95% of the neuromast and the flanking interneuromast cells (Figure 4F-G). This intervention was followed by a similar regeneration process, but led to smaller organs (n = 6) (Figure 4H-J,N-P). These observations reinforce our previous suggestion that interneuromast cells have a non-essential, yet appreciable, contribution to regeneration. Timed quantification of cell-class number and localization showed a reproducible pattern of tissue growth and morphogenesis. During the first 24 hpi, the intact cells rebuilt a circular epithelium (Figure 4B). From 1 dpi to 3 dpi, cell number increased rapidly and proportions were restored (Figure 4C,K-M). Next, we examined whether the orthogonal polarity axes of the epithelium are re-established after the severest of injuries. To assess tissue apicobasal polarity, we used a combination of transgenic lines that allows the observation of the invariant basal position of the nucleus and the apical adherens junctions (Figure 4Q-R) (Ernst et al., 2012; Harding and Nechiporuk, 2012; Hava et al., 2009). We found correct positioning of these markers in the regenerated epithelium (n = 4), including the typical apicobasal constriction of the hair cells (Figure 4S-T). To assess epithelial planar polarity, we looked at hair-bundle orientation using fluorescent phalloidin, which revealed that at 7 dpi the regenerated neuromasts were plane-polarized in a manner indistinguishable from unperturbed organs, with half of the hair cells coherently oriented in opposition to the other half (n = 10) (Figure 4U-W).
To test whether plane-polarizing cues derive from anisotropic forces exerted by the interneuromast cells, which are always aligned to the axis of planar polarity of the neuromast epithelium, we ablated these cells flanking an identified neuromast and concurrently killed the hair cells with the antibiotic neomycin (Figure 4X-Y). In the absence of interneuromast cells, regenerating hair cells recovered normal coherent planar polarity (n = 16), suggesting the existence of alternative sources of polarizing cues (Figure 4Z). Collectively, these findings reveal that as few as four supporting cells can initiate and sustain integral organ regeneration. Sustentacular and mantle cells have different regenerative potential Injury in the wild is intrinsically stochastic. Thus, we hypothesized that the regenerative response must vary according to damage severity and location, but progress in a predictable manner. To test this assumption and unveil the underlying cellular mechanism, we systematically quantified the behavior of individual cells by high-resolution videomicroscopy. We conducted 15 independent three-dimensional time-lapse recordings of the regenerative process using a triple-transgenic line co-expressing Cldnb:lynGFP, SqGw57A and Alpl:mCherry (Figure 5A-B), ranging from 65 to 100 hr of continuous imaging (each time point 15 min apart). Starting immediately after the ablation of all except 4-10 cells, we tracked every intact original cell (called a founder cell) and its progeny (cellular clones) (Figure 5A and Video 2). We followed a total of 106 founder cells (76 sustentacular cells and 30 mantle cells). We tracked individual cells manually in space and time, recording divisions and identity until the end of the recording, resulting in 763 tracks and 104,863 spatiotemporal cell coordinates (Figure 5A-B). Each clone was represented as a tree to visualize the contribution of each founder cell to the resulting clones (Figure 5C). We found that the majority of the founder sustentacular cells underwent three divisions and that some divided up to five times (Figure 5D). 14 out of 30 founder mantle cells did not divide at all, and the rest divided once or, rarely, twice. Founder sustentacular cells required on average 19 ± 6 hr (mean ± s.d., n = 76) to divide, whereas the founder mantle cells that divided required on average 27 ± 5 hr (mean ± s.d., n = 30) (Figure 5E). Clones from founder sustentacular and founder mantle cells were markedly different: founder sustentacular cells produced all three cell classes (sustentacular, mantle and hair cells), whereas founder mantle cells produced clones containing only mantle cells (Figure 5F). We categorized all cell divisions according to the fate of the two daughter cells at the time of the following division, or at the end of the time-lapse recording (Figure 5G). This analysis revealed that 97% of the sustentacular-cell divisions were symmetric: 78% produced two sustentacular cells (SS), 16% produced a pair of hair cells (HH), and the remaining 3% produced a pair of mantle cells (MM).
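As an illustration of how such division types can be tallied from this kind of manual tracking data, the short sketch below labels each division by its daughters' classes and summarizes the proportions per founder class; the table layout and column names are hypothetical and do not reflect the authors' actual data format.

```python
# Hypothetical sketch of division-type bookkeeping from a tracking table.
import pandas as pd

def classify_division(fate_a, fate_b):
    """Two-letter label for a division based on its daughters' classes:
    'SS', 'HH' or 'MM' for symmetric divisions, mixed codes otherwise."""
    return "".join(sorted([fate_a, fate_b]))

def division_summary(divisions):
    """divisions: one row per division with (hypothetical) columns
    ['founder_class', 'daughter1_fate', 'daughter2_fate'].
    Returns, per founder class, the proportion of each division type."""
    labels = divisions.apply(
        lambda r: classify_division(r["daughter1_fate"], r["daughter2_fate"]),
        axis=1,
    )
    return labels.groupby(divisions["founder_class"]).value_counts(normalize=True)
```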
The number of neuromast cells was not statistically significantly different between groups of different damage regimes, as determined by one-way ANOVA (F(4,27) = 1.013, p=0.4183). Scatter plot shows mean ± s.e.m.; ns: non-significant. Scale bars: 10 μm. DOI: https://doi.org/10.7554/eLife.30823.005 The following figure supplement is available for figure 3:

Previous studies have firmly established that hair-cell regeneration is strongly anisotropic because hair-cell progenitors develop almost exclusively in the polar areas of horizontal neuromasts, elongating the macula in the dorsoventral direction (Wibowo et al., 2011; Romero-Carvajal et al., 2015). Although our static images suggest that neuromasts have isotropic regenerative capacity, we nevertheless wondered whether regeneration of non-sensory cells is directional. To this end, we divided the epithelium of horizontal neuromasts into four quarters of equal dimensions (dorsal, ventral, anterior and posterior) (Figure 6A-B), which reflects the known functional territorialization of the neuromast epithelium based on the expression of transgenic markers and Notch signaling (Ma et al., 2008; Wibowo et al., 2011). We first assessed the spatial distribution of cell divisions during the first 60 hr of regeneration and found no pattern that would suggest regeneration anisotropy (Figure 6A). After 60 hpi, however, most divisions (74%) took place in the dorsal and ventral (polar) quarters (Figure 6B). This is expected because later divisions mainly produce hair cells from polar progenitors (Figure 4L, M). Thus, the regenerating epithelium is initially homogeneous and becomes territorialized after 60 hpi. We reasoned that epithelial territorialization could occur either by the migration of similar cells that are scattered throughout the tissue, or by position-adaptive differentiation of an initially equivalent population of cells. To test these possibilities, we generated a virtual Cartesian coordinate system at the center of the neuromast to fit all founder cells at the beginning of regeneration (4 hpi). Next, we analyzed the localization of their progeny at 60 hpi (Figure 6C-H). We found that 60% of the progeny of anterior-localized founder cells were located on the anterior side of the resulting epithelium, whereas 64% of the progeny of posterior-located founder cells were found on the posterior side (Figure 6C-E). We also found that 72% of cells derived from dorsal founder cells and 74% of cells from ventral founder cells were located on the same side of the virtual dorsal/ventral midline (Figure 6F-H). Therefore, most of the clones remain ipsilateral to the founder cell (a minimal computational sketch of this quantification is given below). These results indicate that neuromasts have isotropic regenerative capacity and their territorialization occurs by location-adaptive cellular differentiation. The sustentacular-cell population is tri-potent and plastic To answer the long-standing question of whether the sustentacular-cell population is homogeneous and approach the problem of what determines symmetric versus asymmetric modes of division, we characterized the composition of all 72 clones from founder sustentacular cells. We found four types of clones: containing only sustentacular cells (S), sustentacular and mantle cells (SM), sustentacular and hair cells (SH), and all three cell classes (SHM) (Figure 6I). Of note, founder mantle cells produced clones containing only mantle cells (M) (Figures 5G and 6I). We observed that 37/72 of the clones from founder sustentacular cells were SH, 21/72 were S, 12/72 were SM, and 2/72 were SHM (Figure 6I-K).
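The ipsilateral-progeny measure used in the territorialization analysis above can be illustrated with a short, self-contained sketch. This is not code from the study (the original analyses were performed in MATLAB); the function name, the toy coordinates and the pairing of a founder with its progeny are hypothetical, and the sketch only assumes that cell positions are expressed in a Cartesian system centred on the neuromast.

```python
import numpy as np

def ipsilateral_fraction(founder_xy, progeny_xy, axis=0):
    """Fraction of a founder cell's progeny lying on the same side of the
    virtual midline as the founder itself.

    Coordinates are assumed centred on the neuromast centre; axis=0 tests the
    anterior/posterior midline (x = 0), axis=1 the dorsal/ventral one (y = 0).
    """
    founder_side = np.sign(founder_xy[axis])
    progeny_sides = np.sign(np.asarray(progeny_xy)[:, axis])
    return float(np.mean(progeny_sides == founder_side))

# Toy example: an anterior founder cell (x < 0) with three daughters at 60 hpi.
founder = np.array([-8.0, 2.0])                      # micrometres from centre
daughters = [[-6.0, 1.0], [-3.0, 4.0], [2.0, 0.5]]
print(ipsilateral_fraction(founder, daughters))      # -> 0.666...
```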
The proportion of each clone type suggests that either the sustentacular-cell population is heterogeneous, or that it is homogeneous but plastic. In searching for potential sources of clone heterogeneity, we noted that in some developmental contexts cell-cycle length or proliferative potential can determine the fate of the daughter cells (Calegari et al., 2005;Rossi et al., 2017). Therefore, we quantified the kinetics of proliferation of founder sustentacular cells and of their daughters and compared them to clone composition. We found three clear waves of cell divisions, each spaced by 8-10 hr ( Figure 7A), respectively peaking at 20 hr, 28 hr and 38 hr ( Figure 7B-C), suggesting that cell-cycle length is strictly regulated. Cell-cycle length in the 1 st generation peaks around 10 hr (9.8 ± 3.3 hr, median ±interquartile range (iqr)) ( Figure 7C), but it begins to increase and to vary in the 2nd generation (11.5 ± 7.3 hr, median ±iqr), and more so in the 3rd generation (18.8 ± 20.3 hr, median ±iqr). To identify transition points in cycle lengths, we tested the goodness of fit of a two-segment regression model with variable change points. We found that the length of cell cycles is initially around 11 ± 3 hr (mean ±s.d.) and slowly increases up to 47 hpi. Afterwards, cell-cycle length increases more rapidly and is more variable ( Figure 7D). To test if cell number influences cell-cycle length we used a similar two-segment regression model to define when cellcycle length loosens, and discovered that the vast majority of the cell cycles (76%) span 7-13 hr below a threshold of 24 cells ( Figure 7E). Above this threshold, cell-cycles lengths show large variation. With these data, we plotted proliferation kinetics against clone type, and found no significant difference between clones ( Figure 7F-G). Thus, the length of the cell cycle or the proliferative potential of founder sustentacular cells cannot explain clone composition. Machine learning identifies predictive features for cell-fate acquisition Multiple extrinsic factors that vary in space and time could determine cell-fate choices. Because manual analysis of such multidimensional data might be biased or neglect certain factors we implemented a quantitative and unbiased computational approach based on machine learning to identify variables (features) that correlate with clone composition. The first step of the workflow is the extraction of spatiotemporal coordinates and cell-lineage information from the manual tracks of the videomicroscopic data sets (n = 15) ( Figure 8A). For each cell-track coordinate, we extracted 32 quantifiable features (Table 1), which were used to train the machine-learning algorithm. In a preanalysis, we compared the performance of 20 algorithms (support vector machines, decision trees and nearest neighbor classifiers) in terms of accuracy and area under the curve (AUC) and chose the ensemble bagged tree random forest algorithm (Breiman, 2001) as the best performing method (Figure 8-figure supplement 1). To avoid overfitting, we trained the random forest using 14 samples to predict clone composition in the remaining sample in a round robin fashion. We evaluated the quality of predictions using Matthews correlation coefficient (MCC) to compensate for imbalances of clone frequencies ( Figure 6K) Using machine learning, we were able to predict the occurrence of SH vs. 
SM clones from a founder sustentacular cell with high accuracy (42 out of 49 correctly predicted clones, MCC = 0.63 ± 0.09, mean ± s.d., n = 15 bootstrapped samples), while neither SH nor SM clones could be discriminated when compared to S clones (Figure 8B). Of the 32 features that we used, those that best discriminated SH vs. SM clones were the sustentacular cells' distance to the center of the epithelium, and the distance to the mantle cells (Figure 8C and Figure 8-figure supplement 2). Next, we focused on the decision-making process of individual sustentacular cells at the time of their division. We trained a random forest to discriminate between SS, HH and SM/MM divisions in a pairwise fashion. The HH and SM/MM divisions were highly predictable (63 out of 66 divisions correctly predicted, MCC = 0.91 ± 0.07, mean ± s.d., n = 15 bootstrapped samples), while the discrimination between SS and HH or SM/MM divisions was much less accurate (Figure 8D). Again, the most informative features were the distance to the neuromast center and the distance to the mantle cells (Figure 8E, Figure 8-figure supplement 3). SM/MM divisions occur consistently at the outer perimeter of the neuromast (Figure 8F), whereas HH divisions take place near the center. Self-renewing SS divisions occupy the area between HH and SM/MM divisions. Interestingly, SM/MM divisions were never seen in the anterior-most region of the organ, suggesting that progenitor sustentacular cells are routed into generating mantle cells specifically in the perimetral areas that lack mantle cells but not elsewhere. Therefore, regenerating neuromasts appear to sense cell-class composition and route cellular differentiation in a spatially regulated manner to regain cell-class proportion and distribution.

Video 2. 100 hr time-lapse recording of a regenerating neuromast after severe ablation. A neuromast regenerates its original architecture from as few as six founder cells. Founder cells are identified by 1-6 (n) and their daughter cells receive 2n and 2n + 1 identities. Recording starts 4 hr post injury (hpi) and shows single focal planes. Time is in hours post injury. DOI: https://doi.org/10.7554/eLife.30823.009

Discussion One long-standing goal of biological research is to understand the regeneration of tissues that are exposed to persistent environmental abrasion. Here we address this problem by developing a quantitative approach based on videomicroscopic cell tracking, cell-lineage tracing, and machine learning to identify features that predict cell-fate choices during organ regeneration. Using the superficial neuromasts in zebrafish, we demonstrate that a remarkably small group of resident cells suffices to rebuild a functional organ following severe disruption of tissue integrity. Our findings reveal that the sustentacular-cell population is tri-potent, and suggest that integral organ recovery emerges from multicellular organization employing minimal extrinsic information. Below, we discuss the evidence that supports these conclusions. By systematically analyzing cellular behavior, we reveal a hierarchical regenerative process that begins immediately after injury. First, surviving founder cells reconstitute an epithelium. Second, sustentacular cells become proliferative and restore organ size. Cellular intercalation is rare.
Third, daughter cells differentiate in a position-appropriate manner to recreate cell-class proportions and organ geometric order. Fourth, the epithelium returns to a homeostatic state that is characterized by low mitotic rate. The milder damage regimes that eliminated one half of the epithelium show that neuromasts are symmetric in their regenerative capacity, and that they preferentially regenerate the cells that have been eliminated. Importantly, these findings, which rely on the quantitative spatiotemporal analysis of regeneration data, could not have been predicted from previous studies using static and largely qualitative information (Williams and Holder, 2000;Ló pez-Schier and Hudspeth, 2005;Dufourcq et al., 2006;Ló pez-Schier and Hudspeth, 2006;Ma et al., 2008;Wibowo et al., 2011;Wada et al., 2013;Steiner et al., 2014;Romero-Carvajal et al., 2015;Cruz et al., 2015;Pinto-Teixeira et al., 2015). An important corollary of these results is that neuromasts do not contain specialized cells that contribute dominantly to repair. We propose that progenitor behavior is a facultative status that every sustentacular cell can acquire or abandon during regeneration. We did not observe regenerative overshoot of any cell class (Agarwala et al., 2015), suggesting the existence of a mechanism that senses the total number of cells and the cell-class balance during tissue repair (Simon et al., 2009). Together with previous work, our results support the possibility that such mechanism is based on the interplay between Fgf, Notch and Wnt signaling (Ma et al., 2008;Wibowo et al., 2011;Wada et al., 2013;Romero-Carvajal et al., 2015;Dalle Nogare and Chitnis, 2017). Our combination of machine learning and quantitative videomicroscopy shows clear differences between sustentacular and mantle cells, but does not indicate heterogeneity within the sustentacular-cell population. However, further application of this integrated approach and new transgenic markers may reveal uncharacterized cells in the neuromast. This may be expected given recent work that showed the existence of a new cell class in neuromasts of medaka fish (Seleit et al., 2017). It is technically challenging to consistently maintain fewer than 4 cells in toto without eliminating the entire neuromast. Thus, we cannot rule out the possibility that a single founder cell may be able to regenerate a neuromast. We show that the complete elimination of a neuromast is irreversible in larval zebrafish. However, Sá nchez and colleagues have previously reported that interneuromast cells can generate new neuromasts (Sánchez et al., 2016). By assaying DNA synthesis prior to mitosis, we show that interneuromast cells do not proliferate after neuromast ablation. These differences may be explained by differences in ablation protocols (electroablation versus laser-mediated cell killing), the age of the specimens (embryos versus early larva) or the markers used to assess cellular elimination. We find that interneuromast cells are not essential for neuromast regeneration because severely damaged organs recover all cell classes in the appropriate localization in the absence of interneuromast cells. However, we systematically observed smaller organs when interneuromast cells where ablated. These observations suggest that these peripheral cells may yet help regeneration, either directly by contributing progeny, or by producing mitogenic signals to neuromast-resident cells. The behavior of the mantle cells is especially intriguing. 
Complete elimination of parts of the lateral line by tail-fin amputation has revealed that mantle cells are able to proliferate and generate a new primordium that migrates into the regenerated fin to produce new neuromasts (Dufourcq et al., 2006). This observation can be interpreted as suggesting that under some injury conditions, mantle cells are capable of producing all the cell classes of a neuromast. Transcriptomic profiling of mantle cells following neuromast injury revealed that these cells up-regulate the expression of multiple genes (Steiner et al., 2014). Furthermore, a recent study has revealed that mantle cells constitute a quiescent pool of cells that re-enters the cell cycle only in response to severe depletion of sustentacular cells (Romero-Carvajal et al., 2015), suggesting that these cells may constitute a stem-cell niche for the proliferation of sustentacular cells. Thus, the collective evidence indicates that mantle cells respond to damage and contribute to the regenerative processes, and may drive the regeneration of an entire organ if every other cell class is lost.

Figure 7 (legend, continued): In the box plots, the boundary of the box indicates the 25th and 75th percentiles, respectively; the black line within the box marks the median. Whiskers above and below the box include points that are not outliers. (G) Sustentacular founder cells of S, SM, and SH clones divide similarly early (p=0.42, Kruskal-Wallis test), approximately 18 hr after neuromast injury. (H) Sustentacular founder cells that produce SH (cyan) and S clones (green) are distributed similarly around the center of the organ (at x = y = 0). Those that generate SM clones (pink) are localized further away from the center and are biased towards the posterior side. DOI: https://doi.org/10.7554/eLife.30823.011

One outstanding question is how regeneration is controlled spatially. The epithelium may respond to damage via dynamic formation of an injured-intact axis at the onset of repair. Our results support this scenario by unveiling the adaptability of the neuromast epithelium to the localization and scale of damage. We suggest a model in which the invariant radial symmetry of the neuromast serves as a rheostat to identify the site of damage and guide regeneration spatially (Figure 9). A polarized axis along structurally intact and injured areas underlies this process. However, the complete reconstruction of a neuromast by as few as 4 cells suggests that a partial maintenance of radial symmetry is not essential for organ regeneration. Therefore, radial-symmetry maintenance cannot have a deterministic impact on the recovery of geometric order. Yet, partial structural maintenance and polarized tissue responses may optimize repair, respectively, by preventing superfluous cellular production in undamaged areas and by biasing the production of lost cells in the damaged areas.

Figure 9. Schematic model of neuromast regeneration. The top diagram exemplifies the architecture of an intact neuromast. A, B and C indicate three types of injury: A when hair cells are lost, B when mantle cells are ablated, and C when a localized combination of all three cell classes is lost. Under the model that we present, radial symmetry serves to localize damage and canalize regeneration spatially. If central hair cells are lost (A), radial symmetry is maintained for sustentacular progenitors to regenerate hair cells centripetally (grey arrows in A). If outer cells are lost (B), radial symmetry is also maintained for the generation of progeny that will acquire mantle fate and propagate centrifugally to reform the outer rim of the neuromast (grey arrows in B). Upon asymmetric damage, however, the radial symmetry is partially broken (C).
The neuroepithelium repolarizes along an injured-intact axis, which canalizes regeneration towards the damaged areas (grey arrows in C). Individual cells are color-coded (mantle cells in red, sustentacular cells in light blue, and hair cells in green), and in each case we indicate the type of division that the intact cells undergo: symmetric (S) when they produce two equivalent cells or self-renew, and asymmetric (A) when their division generates sibling cells that differentiate into different classes. DOI: https://doi.org/10.7554/eLife.30823.017

For organs that have evolved under the pressure of persistent damage, compliance with the extent of the injury may be an advantage because the regenerative responses can be scalable and localized, allowing faster and more economical regeneration. After the severest of injuries, regenerated neuromasts were plane-polarized in a manner indistinguishable from unperturbed organs. This startling result indicates that as few as four founder supporting cells can re-organize the local coherent planar polarity of the epithelium during neuromast repair. An alternative explanation is that founder cells have access to external polarizing cues. One source of this information is anisotropic mechanical forces exerted by the interneuromast cells that flank a neuromast. This is possible because interneuromast cells are always aligned to the neuromast's axis of planar polarity. Yet, the concurrent ablation of resident hair cells and the interneuromast cells around an identified neuromast led to regenerated hair cells whose local orientation was coherent. Interestingly, recent studies have identified a transcription factor called Emx2 that regulates the orientation of hair cells in neuromasts of the zebrafish (Jiang et al., 2017). Emx2 is expressed in one half of the hair cells of the neuromast (those oriented towards the tail) and absent in the other half (which are coherently oriented towards the head). Loss- and gain-of-function of Emx2 alter planar cell polarity in a predictable manner: loss of Emx2 leads to neuromasts with every hair cell pointing towards the head of the animal, whereas broad expression of Emx2 orients hair cells towards the tail. Because the coherent local axis of polarity is not affected by these genetic perturbations, Emx2 may act in hair cells as a decoder of global polarity cues. This evidence, together with our results, suggests that during neuromast regeneration founder cells autonomously organize the variegated expression of Emx2 in the regrowing epithelium, with consequent recovery of a coherent axis of planar polarity and with one half of the hair cells pointing opposite to the other half. The future development of live markers of Emx2 expression will be able to test this prediction. We would like to highlight that we do not currently understand the global polarization of the neuromast epithelium relative to the main body axes of the animal. External sources of polarity may impinge on the recovery of these global axes during neuromast regeneration. Previous work has demonstrated that local and global polarization occur independently of innervation (López-Schier and Hudspeth, 2006), but other potential polarizing cues remain untested. Therefore, at present we can only support the notion that local coherent polarity is self-organizing, whereas global orientation may be controlled externally.
Our results raise the question of whether neuromast cells self-organize. Our operational definition of self-organization is an 'autonomous increase in order by the sole interaction of the elements of the system' (Haken, 1983), implying that a cellular collective organizes a complex structure without the influence of external morphogenetic landmarks, patterning cues, or pre-existent differential gene-expression profiles. If these conditions are not met, cellular groups may nevertheless form a complex structure through a process of 'self-assembly' (Sasai, 2013; Turner et al., 2016). The reduction of neuromasts to around 5% of their original size shows that intact resident cells can rapidly recreate their original microenvironment to rebuild a neuromast with normal organization, proportions and polarity. Although these observations suggest autonomy, extrinsic sources of information, including the extracellular matrix that remains intact after cell loss, may serve as a blueprint for epithelial organization. Yet, unless such patterns are rebuilt together with the organ, neuromast architecture and proportions would depend on the area occupied by the regrowing epithelium. In other words, cell-fate acquisition and cell-class distribution must be tissue-size dependent. However, we show that neuromasts regain geometric order as early as 2 days after injury, when their cellular content is less than 60% of the original. Although our results do not irrefutably demonstrate self-organization during neuromast regeneration, they strongly support this idea. We argue that self-organization is an optimal morphogenetic process to govern organ repair because (i) it requires the least amount of previous information and (ii) it is robust to run-off signals that could lead to catastrophic failure. Conclusions Understanding how tissues respond to the inherently random nature of injury to recapitulate their architecture requires the identification of cues and signals that determine cell-fate acquisition, localization and three-dimensional organization. Here we reveal an archetypal sensory organ endowed with isotropic regenerative ability and responses that comply with damage severity, nature and localization. An important corollary of our findings is that progenitor behavior is a facultative status that every sustentacular cell can acquire (Blanpain and Fuchs, 2014; Wymeersch et al., 2016). Importantly, we illustrate a machine learning implementation to identify features that predict cell-fate choices during tissue growth and morphogenesis. This quantitative approach is simple and model-independent, which facilitates its application to other organs or experimental systems to understand how multiple cells interact dynamically during organogenesis and organ regeneration in the natural context of the whole animal, and to identify how divergences from the normal regenerative processes lead to failed tissue repair.

Table 1 (excerpt, feature definitions): radial distance between the current cell location and the location of the last cell division (or the start of the movie in the case of a founder cell division); the value is positive (+) if the current location is nearer to the center and negative (−) if it is further away. Movement distance since last division: Euclidean distance between the current cell location and the last cell division (or the start of the movie in the case of a founder cell division). Normalized distance to center: radial distance of the current cell location divided by the radial distance of the currently furthest cell (to approximate the neuromast size).

Materials and methods Zebrafish strains and husbandry (Shin et al., 2014).
To label cell nuclei, in vitro transcribed capped RNA coding for histone 2B-mCherry was injected into 1-4 cell embryos at a concentration of 100 ng/ml (Rosen et al., 2009). Throughout the study, zebrafish larvae were anesthetized with a 610 μM solution of the anesthetic 3-aminobenzoic acid ethyl ester (MS-222). Laser-mediated cell ablations For in toto cell ablation, we used the iLasPulse laser system (Roper Scientific SAS, Evry, France) mounted on a Zeiss Axio Observer inverted microscope equipped with a 63X water-immersion objective (N.A. = 1.2) (Xiao et al., 2015). The same ablation protocol was used for all experiments using five dpf larvae. Briefly, zebrafish larvae were anesthetized, mounted on a glass-bottom dish and embedded in 1% low-melting point agarose. Three laser pulses (355 nm, 400 ps/2.5 mJ per pulse) were applied to each target cell. After beam delivery, larvae were removed from the agarose and placed in anesthesia-free embryo medium. All ablations were systematically performed on the L2 or L3 posterior lateral-line neuromasts, except for those in Figure 6F, for which we targeted the LII.2 neuromast. Videomicroscopy, cell tracking and lineage tracing Larvae were anesthetized, mounted onto a glass-bottom 3 cm Petri dish (MatTek) and covered with 1% low-melting point agarose with diluted anesthetic. Z-stack series were acquired every 15 min at 28.5°C using a 63X water-immersion objective. Cells were tracked over time using volumetric Z-stack images with the FIJI plugin MTrackJ (Meijering et al., 2012). Movies were registered twice for image stabilization and centered upon the centroid of the surviving group of cells and the subsequent regenerating organs. Founder cells are identified from 1 to 6 (n) and their daughter cells receive 2n and 2n + 1 identities. All images were processed with the FIJI software package. Random forest prediction Random forest algorithms use the majority vote of numerous decision trees based on selected features to predict choices between given outcomes (Murphy, 2012). We used a list of spatial, movement and neighborhood features (see Suppl. Table 1) to perform the random forest prediction of fate choice. We trained the random forest on 14 experiments and tested our prediction on one left-out experiment in a round robin fashion, leading to 15 test sets overall. To evaluate our prediction, we calculated Matthews correlation coefficient (MCC) (Matthews, 1975), which accounts for imbalance in our data (e.g. 78% of all divisions are SS divisions). The MCC is calculated by MCC = (TP × TN − FP × FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN)), where TP denotes true positive, TN true negative, FP false positive and FN false negative predictions. The MCC can have values between −1 and +1, where −1 is a completely incorrect, 0 a random and +1 a perfect prediction. To evaluate the variance of the MCC on the 15 test sets we used a bootstrapping approach, where we draw 15 samples from all test sets with replacement 15 times. From this resampled data we calculated the mean MCC and the standard deviation as shown in Figure 8B and D. All machine-learning analyses were performed using MATLAB (Version 2015b on a Windows 7 machine).
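The MCC and the bootstrap procedure described above are straightforward to reproduce. The following is a minimal sketch, not the study's MATLAB code; the confusion-matrix counts and the per-test-set MCC values in the example are invented for illustration only.

```python
import numpy as np

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient for a binary confusion matrix."""
    num = tp * tn - fp * fn
    den = np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den > 0 else 0.0

def bootstrap_mcc(per_test_set_mcc, n_draws=15, seed=0):
    """Resample the per-test-set MCC values with replacement and return the
    mean and standard deviation over the resampled draws."""
    rng = np.random.default_rng(seed)
    scores = [np.mean(rng.choice(per_test_set_mcc, size=len(per_test_set_mcc),
                                 replace=True)) for _ in range(n_draws)]
    return float(np.mean(scores)), float(np.std(scores))

# Illustrative values only (not the study's data).
print(mcc(tp=40, tn=20, fp=5, fn=4))            # ~0.72
print(bootstrap_mcc([0.55, 0.70, 0.62, 0.58, 0.66]))
```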
9,280.6
2018-03-29T00:00:00.000
[ "Biology" ]
Mechanics of ultrasound elastography Ultrasound elastography enables in vivo measurement of the mechanical properties of living soft tissues in a non-destructive and non-invasive manner and has attracted considerable interest for clinical use in recent years. Continuum mechanics plays an essential role in understanding and improving ultrasound-based elastography methods and is the main focus of this review. In particular, the mechanics theories involved in both static and dynamic elastography methods are surveyed. They may help understand the challenges in and opportunities for the practical applications of various ultrasound elastography methods to characterize the linear elastic, viscoelastic, anisotropic elastic and hyperelastic properties of both bulk and thin-walled soft materials, especially the in vivo characterization of biological soft tissues. Introduction The elastography method, which was proposed in the 1990s, enables probing the elastic properties of living soft tissues and has found wide medical applications in the past two decades [1][2][3][4][5][6][7]. The key steps involved in an elastography method can be summarized as in figure 1 [8]. (1) An external or internal stimulus is imposed onto a target soft tissue. (2) The responses of the soft tissue, including its static and/or dynamic deformation behaviours, are monitored using a medical imaging technique, such as ultrasound or nuclear magnetic resonance imaging (MRI) methods. (3) The mechanical properties of the soft tissue are inferred from its measured responses. (4) The inferred mechanical properties are used to inform clinical diagnosis. Figure 1. An illustration of the key steps involved in elastography [8]. (Online version in colour.) It is well recognized that many diseases, such as cancer [9,10], liver fibrosis [11,12], cardiovascular diseases [13] and thyroid nodules [14], are accompanied by variations in the tissue mechanical properties; therefore, in vivo and quantitative measurements of the elastic properties of soft tissues via elastography methods provide valuable information for the diagnosis and therapy of these diseases. The vast number of studies published in the literature regarding the development and practical applications of elastography methods may be classified by considering the four key steps in figure 1. In step (1), different stimuli can be adopted to deform a soft tissue. In the literature, static loads [15], external vibrators [16,17] and acoustic radiation forces (ARFs) [18][19][20][21] have been applied to generate diverse responses in a soft tissue, leading to different static and dynamic elastography methods. Accurately tracking the mechanical responses of target soft tissues generated by various stimuli (step (2)) is a key step in an elastography method. To this end, different medical imaging methods have been used, giving rise to ultrasound elastography, magnetic resonance elastography, optical elastography and so on [18,[22][23][24]. Also driven by this need, some dedicated imaging techniques have been introduced. For instance, besides the measurement of axial displacements, techniques based on ultrasound imaging have been presented to obtain the lateral displacements and strains [25]. Another example is that, taking advantage of ultrafast ultrasound imaging techniques (the frame rate can be up to 6000 Hz or even higher), methods to image two-dimensional motion vectors have been developed [24].
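Step (2), tracking tissue motion from successive ultrasound acquisitions, is commonly implemented by maximizing the similarity between windowed echo signals. The sketch below is not taken from any of the cited systems; it is a minimal, one-dimensional illustration of window-based cross-correlation tracking, and the signal length, window size and search range are assumed values chosen only for the example.

```python
import numpy as np

def axial_displacement(pre, post, win=64, search=16):
    """Estimate axial displacement (in samples) between two A-lines by
    maximizing the normalized cross-correlation of short windows.
    Real elastography pipelines add sub-sample interpolation and 2D kernels."""
    shifts = []
    for start in range(0, len(pre) - win - search, win):
        ref = pre[start:start + win]
        best, best_lag = -np.inf, 0
        for lag in range(-search, search + 1):
            seg = post[start + lag:start + lag + win]
            if start + lag < 0 or len(seg) < win:
                continue                      # skip out-of-range windows
            c = np.dot(ref - ref.mean(), seg - seg.mean())
            c /= (np.std(ref) * np.std(seg) * win + 1e-12)
            if c > best:
                best, best_lag = c, lag
        shifts.append(best_lag)
    return np.array(shifts)

# Synthetic check: a speckle-like signal shifted by 3 samples is recovered.
rng = np.random.default_rng(0)
pre = rng.standard_normal(1024)
post = np.roll(pre, 3)
print(axial_displacement(pre, post))          # -> array of 3s
```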
With the known responses of soft tissues under given stimuli tracked with various medical imaging methods, it is possible to infer the mechanical properties of soft tissues (step (3) in figure 1), which has received considerable attention from different disciplines. Besides linear elastic parameters, it has been demonstrated that hyperelastic [8,[26][27][28], viscoelastic [29][30][31][32] and anisotropic elastic [33][34][35][36] parameters of soft tissues may be inferred using different inverse methods reported in recent years. The mechanical properties of living soft tissues inferred from their responses to imposed stimuli may provide valuable information for the diagnosis and therapy of some diseases (step (4)). This step is the main interest of clinicians who use elastography, and most publications from clinical research focus on this aspect. Over the past years, numerous valuable clinical data have been reported in the literature, which indeed help identify the extent to which elastography methods are useful in clinics [10,12,37]. This review focuses on ultrasound elastography for which quite a few review papers [1][2][3][4][5][6][7] and guidelines for its clinical use [38,39] have been published. It can be seen from figure 1 that continuum mechanics plays an essential role in both steps (1) and (3). In particular, the questions of how to understand the responses of living soft tissues to various external/internal stimuli in ultrasound elastography and how to establish robust inverse approaches to infer different material parameters of soft tissues have come under the spotlight of the mechanics and applied mathematics research communities. Bearing this issue in mind, distinct from previous review 3 rspa.royalsocietypublishing.org Proc. R. Soc . . papers, this review focuses on the mechanics principles underpinning elastography methods to highlight the limitations, challenges and opportunities of these methods from the viewpoint of continuum mechanics. To this end, we divide the commonly used ultrasound elastography methods into three categories based on the loads or stimuli imposed on the soft tissue and its responses: static elastography, dynamic elastography with harmonic stimuli (DEHS) and dynamic elastography with transient stimuli (DETS). From the viewpoint of continuum mechanics, the governing equations and boundary conditions (BCs) characterizing the responses of soft tissues involved in these three types of methods differ. This review paper is organized as follows. In §2, a brief introduction to the commonly used ultrasound elastography methods and their applications is presented. Section 3 describes the mechanics theories involved in elastography methods. In §4, particular attention is given to some limitations of the current data analysis methods and future prospects for developing novel inverse analysis methods within the framework of continuum mechanics. Section 5 provides the concluding remarks. Ultrasound-based elastography methods Ultrasound imaging is a low-cost, safe and mobile imaging modality that can generate real-time images and has found broad applications in clinical radiology. Safety is one of its major strengths; indeed, this technique does not involve ionizing radiations. Ultrasound-based elastography methods use ultrasound imaging to track the deformation behaviours of soft tissues and further infer the elastic properties of both healthy and diseased soft tissues. 
Depending on the features of the stimuli used to deform the soft tissue, the ultrasound-based elastography methods can be divided into three categories: static elastography, DEHS and DETS. This section gives an overview of different ultrasound-based elastography techniques and their applications for the mechanical characterization of soft tissues and diagnosis of some diseases. (a) Static elastography methods Static elastography, which was proposed in the early 1990s, has been widely used in clinics in the past two decades [2,15,25,[40][41][42][43][44]. When using this method, static compression is typically imposed onto a targeted soft tissue (figure 2a). The resulting displacement field (mainly the axial displacement in the early use of static elastography) generated by the compressive load can be directly measured via the ultrasound imaging method. The strain field can then be calculated according to the measured displacement. Furthermore, dedicated inverse approaches can be used to extract the elastic properties of the target soft tissues according to the strain field [2,6,41,48]. Briefly, harder tissues have lower strains, whereas softer tissues have higher strains under compression, as shown in figure 2a. In principle, it is possible to quantitatively infer the elastic properties of soft tissues using a static elastography method; however, this is challenging because of the complexity of the associated inverse problem, as discussed in detail in §3. Therefore, the static elastography method is usually regarded as a qualitative method that reveals the contrast between hard and soft tissues. Although the limitations of the static elastography method in quantitatively measuring the mechanical properties of soft tissues have been recognized, this method is simple and easy to realize and allows us at least to qualitatively distinguish between regions with different stiffnesses; therefore, it has been widely used in clinics. For example, static elastography finds important applications in the classification of breast lesions [45,49]. A low-echogenic region emerges in the B-mode image when a lesion exists, as shown in figure 2b. Traditionally, such a low-echogenic region (the region indicated by arrows in figure 2b) in the B-mode image may be suspected to correspond to a breast lesion. Now, with the help of the static elastography method, the position and size of a lesion may be detected more accurately than using the result estimated solely based on the B-mode image [45]. Moreover, the mechanical properties of thermal lesions induced by high-intensity focused ultrasound (HIFU) have been demonstrated to differ from those of the surrounding soft tissues; therefore, the static elastography method can be a useful tool to guide HIFU treatment when the lesions are not too deep [46,50].

Figure 2. (b) Detection of a breast lesion, which appears as the low-strain region in the image. Compared with the B-mode image, the elastography method may provide more accurate information with respect to both the size and the position of a lesion [45]. (c) Application of static elastography to image a thermal lesion induced by HIFU, revealing that this technique may be useful to guide HIFU treatment [46]. (d) Measurement of the elastic properties of skeletal muscles using the static elastography method. To quantitatively infer the elastic properties of the muscle, two soft layers with known mechanical properties are used [47]. Reprinted from references [45], [46] and [47] with permission. (Online version in colour.)
The dark region in figure 2c denotes the low-strain region (i.e. the location of the thermal lesion). In addition to lesion detection, static elastography can also be used to characterize other soft tissues. For instance, in a recent work, Chino et al. [47] used two referenced layers with known elastic moduli to cover the skin and measured the compressive strains in both the referenced layers and the skeletal muscles. Although the anisotropy of the skeletal muscles was ignored in their study, by comparing the strains in the referenced layers and skeletal muscles, the elastic properties of the skeletal muscles were evaluated, as shown in figure 2d [47]. Other applications of static elastography in predicting malignancies in thyroid nodules can be found in [14,51]. In the clinical use of static elastography methods, the quality of the examination depends significantly on the experience and technique of the sonographer. For instance, an appropriate amount of compression should be applied to the soft tissue to improve the image quality. The appropriate pre-compression of the soft tissues before the imaging process may help increase the contrast and reduce the decorrelation noise [3,46]. However, determining the amount of pre-compression required is by no means trivial because a small pre-compression may not increase the imaging contrast, whereas a large pre-compression may lead to hardening of the soft tissue. This issue will be discussed in detail in §3. Figure 3. Reprinted from reference [52] with permission. (Online version in colour.) (b) Dynamic elastography with harmonic stimuli In DEHS, either external vibrators or ARFs generated by focused ultrasound beams are used as stimuli. The ARF (figure 3) produced by the momentum transfer from acoustic waves to the medium is determined by f = 2αI/c_L, (2.1) where α (dB/m) and c_L (m s−1) denote the acoustic absorption and sound speed in the target biological soft tissues, respectively, and I (W m−2) is the temporal average intensity of the acoustic beam [18,52,53]. f (N m−3) is a type of body force, and its direction is along the acoustic wave propagation direction. Sarvazyan et al. [18] argued that the ARF provides physicians with a 'virtual finger' that helps them to touch the internal regions of human bodies. (i) Sonoelastography The sonoelastography imaging method uses an external harmonic vibrator to generate harmonic vibrations within target soft tissues [5,16,56,58,80]. A schematic of the sonoelastography method is shown in figure 4a. The steady-state responses (e.g. the map of the vibration amplitude [55,56] and phase [16]) of the tissues are then measured using the Doppler spectrum of the reflected signals [82,83]. Accordingly, different inverse approaches are established to obtain the elastogram of the tissues. Based on the distribution of the vibration amplitude, local lesions, which may be harder or softer than surrounding tissues, can be distinguished [55,80]. According to the map of the phase, the phase velocity c of the shear wave can be measured using a phase gradient algorithm, i.e. c = ωΔr/Δϕ, where the phase gradient is calculated from the phase shift Δϕ measured over a distance Δr and ω denotes the angular frequency. Therefore, the elastic properties of the tissues may be determined [16,58]. The two approaches mentioned above are named vibration amplitude sonoelastography and vibration-phase gradient sonoelastography, respectively [5].
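The phase-gradient estimate c = ωΔr/Δϕ lends itself to a compact implementation. The sketch below is illustrative only (the function name and the synthetic 200 Hz example are invented); it simply fits the slope of the unwrapped phase against distance and converts it to a phase velocity.

```python
import numpy as np

def phase_gradient_speed(r, phi, freq_hz):
    """Shear-wave phase velocity c = omega / (d phi / d r), estimated from the
    unwrapped phase phi (rad) measured at lateral positions r (m) for a
    vibration frequency freq_hz."""
    slope = np.polyfit(r, np.unwrap(phi), 1)[0]   # d phi / d r in rad/m
    omega = 2.0 * np.pi * freq_hz
    return omega / slope

# Synthetic check: a 200 Hz shear wave travelling at 3 m/s gives
# phi = omega * r / c, so the estimator should recover c = 3 m/s.
r = np.linspace(0.0, 0.02, 20)                    # 0-20 mm
c_true = 3.0
phi = 2 * np.pi * 200 * r / c_true
print(phase_gradient_speed(r, phi, 200.0))        # ~3.0
```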
In general, vibration amplitude sonoelastography is a qualitative method that mainly provides information about the positions of local lesions, whereas vibration-phase gradient sonoelastography may be used to quantitatively measure the elastic and viscous properties of soft tissues. Figure 4b shows the B-mode image and vibration amplitude map of a porcine liver with a thermal lesion. This figure shows that the lesion can be clearly distinguished from the vibration amplitude map obtained via sonoelastography. [81]. Reprinted from references [5] and [81] with permission. (Online version in colour.) Indeed, using vibration-phase gradient sonoelastography, the in vivo elastic properties of skeletal muscles [58] and livers [54] of healthy volunteers have been determined. Later on, an improved method based on the sonoelastography, named crawling wave imaging, was developed by Wu et al. [57,81]. The key concept is presented in figure 4c. Two vibrators with frequencies of ω 1 and ω 2 , respectively, are used to generate shear waves: ω 1 = ω + δω/2 and ω 2 = ω − δω/2, where δω/ω ≈ 0.01 and ω is typically hundreds of hertz. The shear waves generated by the two vibrators form an interference pattern that propagates with velocity (δω/2ω)c, where c is the phase velocity of the shear wave at frequency ω in the soft tissue. Because δω/ω ≈ 0.01, the velocity of the travelling interference pattern, which is named the 'crawling wave', is much smaller than c [57,81]. The slow crawling wave can be visualized and tracked by a conventional ultrasonic scanner modified for sonoelastography [84]. The crawling waves in a phantom consisting of a hard layer (left part in (i) and (ii)) and soft layer (right part in (i) and (ii)) are shown in figure 4d. Clearly, the wavelength of the crawling wave in the harder region is larger. Initial ex vivo experiments on livers and prostates and in vivo experiments on skeletal muscles indicate that crawling wave imaging is a promising method for quantitatively characterizing the mechanical properties of biological soft tissues [85][86][87][88]. (ii) Shear wave-induced resonance elastography A recently developed method named SWIRE adopts an external vibrator to generate shear waves with frequencies in the range of 45-205 Hz within soft materials, as shown in figure 5a. The propagation of the shear waves is monitored by an ultrafast ultrasound scanner. Then, Fourier transformation is conducted for the time-domain displacement at each point in the regionof-interest (ROI) to obtain the frequency-domain displacement. The resonance frequencies of the soft inclusion, which correspond to the low-order eigenmodes of a soft inclusion-hard matrix system, can be identified from the peak values of the frequency-domain displacement curve figure 5b, and from these curves, the resonance frequency may be determined. The stationary shear wave displacement field in the ROI at the resonance frequency is also presented. Clearly, the resonance response of the soft inclusion distinguishes itself from the surrounding harder tissues [59,61]. It should be pointed out that the limitation of SWIRE lies in that it can be only used for the characterization of soft inclusions. (iii) Vibro-acoustography In the VA technique [62][63][64]89], confocal transducers with centre frequencies ω 1 and ω 2 , as shown in figure 6a, are applied to a target soft material. ω 1 and ω 2 are on the order of megahertz, whereas δω = ω 2 − ω 1 is on the order of kilohertz. 
Thus, the low-frequency (kHz) harmonic ARF can be imposed onto a focused point within the target soft tissue [62]. The resulting vibration, which is determined by the local elastic properties of the tissue at the focused point, will induce an acoustic emission field. Both the magnitude and phase of the acoustic emission can be detected by a hydrophone. When the two focused ultrasound beams sweep through the whole object, images based on either the magnitude or phase of the acoustic emission can be obtained as shown in figure 6a. These images are the so-called magnitude and phase acoustic spectrograms; their resolutions are determined by the ultrasound resolution and the point-spread function of the system, and are roughly hundreds of micrometres [62,63]. Figure 6b shows the amplitude and phase images of normal and calcified excised human iliac arteries obtained with δω = 6 kHz. Subsequent studies using this technique have demonstrated its promising use in imaging of breast [89,90] and prostate tissues [65,67]. In principle, VA is an interesting imaging technique and has a resolution similar to that of X-ray images when used to measure calcified arteries (figure 6b). However, interpreting the acoustic spectrogram to quantitatively determine the mechanical properties of soft tissues is challenging, as recently addressed by Brigham et al. [66]. (iv) Harmonic motion imaging To identify the local mechanical properties of soft tissues, Konofagou et al. [68][69][70] modified the experimental set-up of the VA method and proposed the HMI method. In the HMI method, an additional ultrasound beam, as shown in figure 6c, is used to monitor the harmonic motion induced by confocal transducers. The amplitude of the harmonic motion of soft tissues is determined by both the local elastic properties at the focus and the magnitude of the ARF. [71]. Reprinted from references [62] and [71] with permission. (Online version in colour.) Equation (2.1) shows that the ARF relies on the intensity of the acoustic beam and the acoustical properties of the medium at the focal region; both of them may vary from site to site. Therefore, the magnitude of the ARF is difficult to control, and quantitatively measuring the local elastic modulus is challenging. However, the highly localized harmonic ARF provides a useful way to probe the relative variation of the mechanical properties within biological tissues [69]. Accordingly, the HMI method has been successfully used to monitor HIFU treatment [71,73,91]. Figure 6d shows the HMI displacement variation and corresponding pathology images of liver tissue for 30 s of sonication. In a recent study, Vappou et al. [72] developed a two-step inverse approach to quantitatively probe the viscoelastic properties of a soft solid based on HMI. Briefly, the phase velocity of the propagating shear wave induced by the harmonic motion at the focus is measured, and then, the phase shift between the stress (i.e. the harmonic ARF) and the strain (i.e. the monitored harmonic motion) is measured to determine the loss tangent [92]. Hence, the quantitative viscoelastic properties of a soft tissue may be determined. Experiments on phantoms have validated the effectiveness of the inverse approach. (v) Shear wave dispersion ultrasound vibrometry The SDUV technique, which was developed by Chen et al. [74,75], uses the harmonic ARF generated by an amplitude-modulated ultrasound beam to induce low-frequency shear waves (typically ranging from 300 to 900 Hz) within target soft tissues (figure 7a). 
The shear waves are detected at different locations along the propagation direction, and then the phase velocities can be determined via the phase gradient method. For a viscoelastic solid, the phase velocities depend on the frequencies of the shear waves, and the relationship between the phase velocities and the frequencies is the so-called dispersion relation. By controlling the modulation frequency, the frequency of the generated shear wave can be varied from approximately 300 to 900 Hz and the dispersion curve can be obtained. Assuming that the viscoelastic properties of a solid can be described by a Voigt model [92], the theoretical dispersion relation can be derived. Then, by fitting the experimental dispersion curve with the theoretical solution, the elasticity and viscosity parameters can be obtained (a short illustrative fitting sketch is given below). A typical experimental dispersion curve for a viscoelastic phantom is shown in figure 7b. The elasticity and viscosity parameters of the phantom identified using the SDUV method agree well with those obtained via an independent measurement [74]. Experiments on in vitro porcine muscles and human prostates have been conducted to validate the effectiveness of this method [76,93]. Figure 7. Reprinted from reference [74] with permission. A common feature of the aforementioned methods is their reliance on the use of harmonic stimuli to deform soft tissue. By determining the phase velocities of the shear waves, the elastic or viscous parameters of soft tissues may be quantitatively determined. However, when the wavelength of the shear wave is comparable to the dimension of the soft tissue, the phase velocities may depend not only on the physical properties of the soft tissues, but also on the geometrical parameters of the system. In this case, caution should be taken when inferring the tissue parameters based on the shear wave velocities. The SWIRE method uses the resonance frequency information of soft tissues to determine their mechanical properties (e.g. the elastic properties of an inclusion). However, this method relies on the use of a numerical method (e.g. the FE method (FEM)) to simulate the experiments and further evaluate the elastic parameters of an inclusion. This reliance complicates the use of this method in clinics. (c) Dynamic elastography with transient stimuli In this section, we discuss the dynamic elastography methods that use transient stimuli to deform a soft tissue. In these methods, the responses of soft tissues (e.g. the shear wave velocities) are measured to extract the local elastic properties [17,18,20,[94][95][96][97][98]. Note that in these methods, the frame rate of the scanner is usually high enough to acquire the propagation process of the shear waves. A key merit of DETS is that it is not so sensitive to the BCs. The reflected waves are separated from the incident wave in the time domain and can be filtered out [17]. Therefore, DETS enables the quantitative determination of elastic properties. Similar to DEHS, various external and internal stimuli may be used to generate transient shear waves within soft tissues. In recent years, DETS, in which the ARF is used to generate a transient shear wave, has attracted considerable attention [4,7,18,20,95,96,99,100].
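Returning to the SDUV model-fitting step described above, the least-squares fit of a Voigt dispersion curve can be sketched in a few lines. This is not the original implementation; the dispersion relation used here, c(ω) = sqrt(2(μ1² + ω²μ2²)/[ρ(μ1 + sqrt(μ1² + ω²μ2²))]), is the commonly used Voigt phase-velocity expression, and the density, frequencies, noise level and starting guesses are assumed values chosen only for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

RHO = 1000.0  # assumed tissue density in kg/m^3

def voigt_phase_velocity(freq_hz, mu1, mu2):
    """Shear-wave phase velocity of a Voigt solid: mu1 = shear elasticity (Pa),
    mu2 = shear viscosity (Pa*s)."""
    w = 2.0 * np.pi * freq_hz
    a = np.sqrt(mu1**2 + (w * mu2)**2)
    return np.sqrt(2.0 * a**2 / (RHO * (mu1 + a)))

# Hypothetical dispersion data (frequency in Hz, phase velocity in m/s).
rng = np.random.default_rng(1)
freqs = np.array([300., 400., 500., 600., 700., 800., 900.])
speeds = voigt_phase_velocity(freqs, 4000.0, 2.0) + 0.05 * rng.standard_normal(freqs.size)

(mu1_fit, mu2_fit), _ = curve_fit(voigt_phase_velocity, freqs, speeds,
                                  p0=(3000.0, 1.0))
print(mu1_fit, mu2_fit)   # should recover roughly 4000 Pa and 2 Pa*s
```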
Generating transient shear waves remotely within biological tissues using the ARF permits us to probe the mechanical properties of biological tissues and is relatively suitable for clinical use. Here, we focus on the following methods. (i) Transient elastography Transient elastography (TE), which was proposed by Sandrin and colleagues [17,24,101,102], uses an external vibrator to introduce a low-frequency transient wave in biological tissues and tracks the propagation of this transient wave along the axis of the vibrator (figure 8a), with a frame rate of approximately 4000 Hz. The velocity of the transient wave is measured, and the elastic modulus of the soft tissue can be quantitatively determined by assuming that the tested material is elastic and its dimension is larger than the wavelength. The TE technique forms the basis of Fibroscan® (Fibroscan, Echosens™, France), which is an effective approach for staging liver fibrosis [103,104]. Figure 8b shows the in vivo experimental results obtained from three livers with different degrees of fibrosis (F0 to F4 denote the degree of fibrosis). Clearly, for a liver with a higher degree of fibrosis (e.g. F4), the velocity of the transient wave in the ROI is greater. (ii) Shear wave elasticity imaging and ARF impulse Sarvazyan et al. [18] proposed the shear wave elasticity imaging (SWEI) method, which uses the ARF to generate shear waves within soft tissues (figure 9a). The magnitude of the ARF is fairly small and, thus, the displacement induced by the acoustic force within the soft biological tissue is usually on the order of micrometres. However, Sarvazyan et al. [18] demonstrated that using an additional system (e.g. MRI or optical imaging) facilitates recording the shear waves, as shown in figure 9b. Subsequently, Nightingale et al. [94] and Palmeri et al. used ultrasound imaging to track the propagation of shear waves in in vivo and ex vivo experiments. The experimental set-up used by Nightingale et al. [94] is based on their previously developed ARF impulse (ARFI) technique, which has been implemented commercially by Siemens Medical Solutions on their ACUSON S2000™ [21,99]. The ARFI technique imposes the impulse radiation force onto local regions of the tissues, and information, such as the displacements immediately after the excitation, the peak displacement, the time to reach the peak displacement and the recovery time of the deformed region after the force is removed, can be used to determine the local elastic properties of soft tissues based on dedicated inverse approaches [21,105]. Typically, the map of the peak displacement, which is assumed to be inversely proportional to the elastic modulus of the local tissue, is provided to indicate the stiffness distribution within the target soft tissue. Here, the method in which ultrasound imaging is used to track the shear waves induced by ARFI is named the ARFI-based SWEI method [35,94,106]. Figure 9c and d shows the displacement induced by the ARFI and the normalized displacement used to evaluate the velocity of the shear wave, respectively (figure 9 is reprinted from references [18] and [94] with permission). The ARFI-based SWEI method [35,94,106] is a promising strategy for quantitatively probing the local elastic properties of biological soft tissues. Both generating the ARF and monitoring the propagation of the shear waves can be realized using a single ultrasound probe, unlike in experimental systems that require an extra vibrator to generate shear waves.
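As an illustration of how a tracked shear wavefront yields a quantitative stiffness estimate in ARFI-based SWEI and related transient methods, the sketch below fits arrival time against lateral position (a time-of-flight estimate) and converts the speed to Young's modulus with E ≈ 3ρc², which holds for an isotropic, incompressible, linear elastic solid. The positions, times and density are illustrative assumptions, not values from the cited studies.

```python
import numpy as np

def time_of_flight_speed(lateral_pos_m, arrival_time_s):
    """Group shear-wave speed as the slope of a linear fit of lateral
    position versus wavefront arrival time (time-of-flight)."""
    return np.polyfit(arrival_time_s, lateral_pos_m, 1)[0]

def youngs_modulus(c_shear, rho=1000.0):
    """E ~ 3*rho*c^2 for an isotropic, incompressible, linear elastic solid."""
    return 3.0 * rho * c_shear**2

# Hypothetical arrival times of a shear wavefront at five lateral positions.
x = np.array([2e-3, 4e-3, 6e-3, 8e-3, 10e-3])     # metres
t = x / 2.5 + 1e-4                                 # 2.5 m/s plus a fixed offset
c = time_of_flight_speed(x, t)
print(c, youngs_modulus(c))                        # ~2.5 m/s, ~18.75 kPa
```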
Subsequent studies of ARFI-based SWEI have demonstrated its usefulness for in vivo characterization of the mechanical properties of various soft biological tissues [ (iii) Supersonic shear imaging The supersonic shear imaging (SSI) technique uses ultrasound beams focused at different depths within biological tissues to create a moving ARF [20,95]. The ARF moves at a high speed in the soft material; thus, the resulting displacement field is confined within a Mach cone, as shown in figure 10a. In this case, two quasi-plane shear wavefronts interfere along the Mach cone and propagate in opposite directions. This phenomenon is known as the elastic Cherenkov effect (ECE) [20,34,107]. In the SSI technique, the propagation of the interfered front is monitored using an ultrafast imaging technique. The fast acquisition reduces the risk of artefacts resulting from the movements of patients or investigators. Typical experimental results for a homogeneous phantom are shown in figure 10b, which illustrates the propagating process of the interfered wavefronts within 14 ms after imposing the moving ARF. Furthermore, the shear wave velocity is measured using the time-of-flight algorithm [95,108], and the elastic moduli of the target soft tissues can be determined. Figure 10c shows an in vivo elastogram obtained in breast tissues. The central region of the ROI (the red region), in which Young's modulus is higher than in the surrounding tissues, is suspected to be a lesion. The SSI technique has now been commercialized (figure 10d) and used in clinics. Applications of this technique to detect breast lesions [95,109] and thyroid nodules [110] and stage liver fibrosis [111] have been explored. Moreover, determination of the nonlinear elastic properties of soft tissues using the SSI technique has attracted considerable attention in recent years [8,[26][27][28]33,34]. In these studies, the tested soft materials are pre-deformed, and then the shear wave velocities in the deformed materials are measured. According to the relationship between the velocities of shear waves and the pre-deformation and material parameters, the nonlinear elastic properties can be determined. The nonlinear elastic properties for human breasts, heel fat pads and skeletal muscles have been measured in vivo [8,33] in this way. To evaluate nonlinear elastic properties using these methods, the pre-deformation must be determined with a reasonable accuracy. Li and colleagues [8,27,33] evaluated the pre-deformation based on B-mode images. In a recent study, Bernal et al. [28] developed an experimental system in which the pre-deformation is evaluated using the static elastography method, and the shear wave velocities are measured using the SSI technique. Thus, the two-dimensional images of the nonlinear elastic properties can be obtained. They argued that the map of the nonlinear elastic properties might produce better contrast between lesions and normal soft tissues [28]. (iv) Comb-push ultrasound shear elastography Song et al. developed the comb-push ultrasound shear elastography (CUSE) method [96,[112][113][114]. In this method, several unfocused (or focused) ultrasound beams as shown in figure 11a, are used as multi-stimuli to generate shear waves in the whole field-of-view (FOV). Additionally, the elastic properties in the whole FOV can be identified with one acquisition. Figure 11b shows the shear waves generated by the CUSE in a homogeneous phantom. 
Both homogeneous and inclusion phantom experiments have been performed to validate the effectiveness of this method, as shown in figure 11c and d. In vivo experiments to detect breast masses and evaluate thyroid nodules have also been conducted, and the results show that this technique is promising [115,116]. In the subsequent study, Song et al. further developed the time-aligned sequential tracking (TAST) method, which enables the CUSE being realized on traditional ultrasound scanner [113]. (v) Guided wave elastography When using the dynamic elastography methods mentioned above, body wave theories are typically used to relate the shear wave velocities to the material parameters of soft issues. However, when dynamic elastography methods are used to characterize thin-walled soft tissues, the waves are guided, and, therefore, the guided wave theory should be used to interpret the experimental data [100,[117][118][119][120][121][122]. Guided wave elastography (GWE) methods have attracted increasing interest in recent years, although they have not been commercialized yet. For thinwalled soft tissues, the wall thickness may be smaller than or comparable to the wavelength of the elastic waves generated in soft tissues; in this case, the waves are guided within the wall and are strongly dispersive [123,124]. The dispersion relation is crucial for inferring the material parameters from the measured experimental responses. The general steps involved in GWE using the ARF as stimuli may be summarized as follows. (1) The focused ARF is used to generate the broad-band guided waves in the walls of soft tissues. (2) The propagation of the guided shear waves is tracked along the propagation direction. (3) Two-dimensional Fourier transformation can be applied to analyse the spatio-temporal imaging of the guided waves and extract the dispersion relation [125]. (4) A guided wave model (e.g. the Lamb wave model [123]) can be used to fit the experimental dispersion curve and identify the elastic properties of the thin-walled soft tissues. Typical experimental results of a vessel-mimicking phantom are shown in figure 12, in which the wave propagates along the axial direction of the vessel. From the dispersion curve given in figure 12f, the elastic properties of the vessel-mimicking phantom can be determined by using the guided wave model. Ex vivo and in vivo experiments performed on arteries [117,118], bladders [126], tendons [120] and heart wall [121] demonstrate that the GWE method is a promising tool for measuring the elastic properties of thin-walled biological tissues. The key issue affecting the use of the GWE method is the development of robust inverse approaches based on the dispersion relations given by appropriate guided wave models. The dispersion relation is generally sensitive to the BCs, geometrical parameters and pre-stresses in the soft tissues [100,[117][118][119]127,128], which should be addressed during the development of a robust inverse method. Mechanics underpinning ultrasound-based elastography Continuum mechanics plays an essential role in the development, evaluation and improvement of both static elastography and dynamic elastography methods. From the viewpoint of direct analysis, continuum mechanics enables the prediction of the responses of a biological soft tissue under either a static load or a dynamic stimulus. For instance, in static elastography, continuum mechanics predicts that the softer part will undergo larger deformation than the stiffer part. 
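As a minimal one-dimensional illustration of this prediction, layers loaded in series between rigid platens carry the same axial stress, so the strain in each layer scales inversely with its modulus; the moduli and thicknesses below are purely illustrative.

```python
import numpy as np

# 1D stack of layers compressed between rigid platens: the layers act in
# series, carry the same axial stress, and the strain in each layer is
# inversely proportional to its modulus.  Values are illustrative only.
E = np.array([10e3, 30e3, 10e3])      # Young's moduli of the layers (Pa)
h = np.array([10e-3, 5e-3, 10e-3])    # layer thicknesses (m)
overall_strain = 0.02                 # 2% applied axial strain

# The same stress sigma acts in every layer; the total shortening fixes sigma.
sigma = overall_strain * np.sum(h) / np.sum(h / E)
eps = sigma / E                       # axial strain in each layer
print("layer strains:", np.round(eps, 4))        # the stiff middle layer strains least
print("strain contrast (soft/stiff):", eps[0] / eps[1])
```

In this toy case the strain contrast equals the modulus contrast, which is exactly the qualitative signature exploited by static elastography.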
Therefore, measuring the deformation of soft tissues using a medical imaging method enables differentiating the parts with different elastic moduli. In dynamic elastography, understanding the correlation between the dynamic responses of soft tissues and their mechanical properties under either harmonic or transient stimuli within the framework of continuum mechanics forms the basis of developing data analysis methods to infer the material parameters of soft tissues. Moreover, inferring the mechanical properties of a biological soft tissue based on the responses of the soft tissue to an external or internal stimulus represents an inverse problem in elasticity. Unlike direct problems, many important inverse problems in engineering and science are ill-posed [129]. An inverse problem is ill-posed if one of the following properties is not respected. (1) A solution to the problem exists (existence). (2) There is, at most, one solution to the problem (uniqueness). (3) The solution depends continuously on the data (stability). To identify the extent to which the material parameters of a soft tissue can be effectively inferred using an elastography method, the properties of the inverse problem (i.e. the existence, uniqueness and stability of the solution) should be addressed by invoking the mathematical theory of inverse problems. In this section, we first summarize the governing equations involved in the use of the aforementioned elastography methods to characterize the mechanical properties of soft tissues. Then, the specific mechanics models for different types of elastography methods are discussed. Particular attention is paid to the theoretical solutions that describe the correlations between the experimental responses and material parameters, and their limitations are emphasized. (a) Governing equations (i) Equilibrium equations The equilibrium equation describing the conservation of momentum [130,131] is given by In the above equation, u denotes the displacement of the elastic solid, t denotes time, ρ denotes the mass density and b is the body force. T is the surface traction force and is related to the Cauchy stress σ by where n denotes the outer unit normal. Using the divergence theorem, we can obtain the differential form of equation (ii) Kinematic equations Here, we use B r and B to denote the reference (undeformed) and current (deformed) configurations, respectively; the points related to B r and B are labelled using vectors X = X α E α and x = x i e i (α, i ∈ {1, 2, 3}, respectively, where the Roman and Greek indices refer to the configurations B and B r , respectively. The displacement u is u = x − X, and the deformation gradient tensor F is defined as The determinant of F, which is denoted as J, gives the local volume ratio between the deformed and undeformed configurations. The soft biological tissues considered in this study are typically assumed to be incompressible in the literature (i.e. J = 1). Furthermore, the Green strain tensor can be defined as where C is the right Cauchy-Green deformation tensor Inserting equations (3.4) and (3.6) into equation (3.5) gives where ∇ r () stands for the gradient operator in B r . When the deformation is infinitesimal, the difference between B r and B can be ignored, and the Green strain tensor can reduces to the small-strain tensor ε = 1 2 (∇u + u∇). (3.8) The amplitudes of the shear waves used in the elastography techniques are much smaller than the feature size of the tissues [4]. 
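A minimal check of this point is to compare the Green strain with the small-strain tensor for a displacement gradient of the size encountered in shear wave tracking; the gradient below is an assumed, illustrative value.

```python
import numpy as np

def strains(grad_u):
    """Green strain E = (grad_u + grad_u^T + grad_u^T grad_u)/2 versus the
    small-strain tensor eps = (grad_u + grad_u^T)/2 for a displacement gradient."""
    sym = 0.5 * (grad_u + grad_u.T)
    green = sym + 0.5 * grad_u.T @ grad_u
    return green, sym

# A displacement gradient of order 1e-4, representative of micrometre-scale
# shear wave displacements varying over millimetres (illustrative only).
grad_u = 1e-4 * np.array([[0.0, 1.0, 0.0],
                          [0.5, 0.0, 0.0],
                          [0.0, 0.0, 0.0]])
green, eps = strains(grad_u)
print("relative difference |E - eps| / |eps| =",
      np.abs(green - eps).max() / np.abs(eps).max())
```

The quadratic term is smaller than the linear one by roughly the size of the gradient itself, which is why linear kinematics are adequate for the wave-induced motion.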
Moreover, the static strain used in the static elastography technique is small, and typically, the axial strain is of the order of 2% [6,15]. However, in many cases, the target soft tissues may be pre-loaded (say by the ultrasound probe) before the imaging process, and finite deformations may occur (e.g. the strain may reach 10-20%). Such a pre-load is necessary and can help to increase the contrast, reduce the decorrelation noise [3,46], and measure the nonlinear elastic properties of soft biological tissues [8,[26][27][28]. In all of these cases, the problem can be summarized as 'small on large', that is, the small deformation caused by the shear wave or the static compression is superimposed on the finite deformation caused by the pre-load. To address the effect of the pre-deformation, the finite elasticity and incremental theory should be applied [131,133,134]. (iii) Constitutive laws For a linear elastic and isotropic solid, the linear constitutive law is given by σ = Cε, (3.9) where C is the fourth-order elasticity tensor. Inserting equation (3.8) into equation (3.9), we can obtain the linear constitutive law in component form as using the kinematic relations given by equation (3.8), equation (3.9) reduces to σ = μ(∇u + u∇) + λ(∇ · u)I, (3.11) where λ and μ are the Lamé constants and are related to the elastic modulus E and Poisson ratio For most soft biological tissues, ν is close to 0.5, and thus λ μ and μ ≈ E/3. For the elastography of anisotropic soft tissues, here, we mainly concentrate on the transversely isotropic (TI) model, which is typically used to model anisotropic biological tissues, such as skeletal muscles and tendons [135,136]. In the TI model, C has five independent components, and this number further reduces to three with the constraint of incompressibility [137]. Usually, the three elastic parameters, namely μ T , μ L and E L , can be used as three independent parameters to fully describe the mechanical properties of incompressible TI materials [34]. In general, μ T and μ L denote the transverse and longitudinal shear moduli, respectively, and E L is the longitudinal elastic modulus. Details about the relationships between μ T , μ L and E L and the components of C can be found in [34,35]. To describe the nonlinear elastic deformation of soft tissues, constitutive laws are usually defined using the strain energy function W, which is a scalar function of the strain invariants [131,138]. Then, the nominal stress S, which is defined as S = JF −1 · σ , can be determined by where p is a Lagrange multiplier used to ensure the incompressibility constraint [131,133]. For an isotropic solid, the first two principal invariants of C, which are denoted as I 1 and I 2 , are Because of the constraint of incompressibility, the third invariant I 3 = 1, and W = W(I 1 ,I 2 ). Here, we list several hyperelastic models that have been widely used in the literature. The neo-Hookean model is a simple and broadly used hyperelastic model, and its strain energy density function is given by where μ 0 is the initial shear modulus. In acoustoelasticity theory, the following fourth-order strain energy function has been used by many authors [26,139] where A and D are the third-and fourth-order Landau constants, respectively, describing the nonlinear elastic deformation behaviours of soft materials. To describe the nonlinear elastic deformation of a soft tissue, Demiray [140] and Fung et al. 
[141] proposed a strain energy density function in the following form: where the parameter b > 0 is linked to the hardening effect [8,27]. For an anisotropic material, the material characteristic directions in the reference configuration can be denoted by the unit vectors A α (α = 1, 2, . . . , N)). These characteristic directions may be caused by fibre-reinforced effects, such as spatially oriented collagen fibres [138,142]. In this case, some other invariants must be introduced to define W. Following the definitions used in the literature and commercial FE software [143], the following invariants, denoted as I 4(αα) and I 5(αα) (no sum on α), are defined: (3.18) where C 10 , k 1 and k 2 are material constants, and κ is defined as 19) where ρ(Θ) is the orientation density function which characterizes the distribution of the fibres. function is In particular, for TI hyperelastic materials (i.e. those in which the fibres run along only one direction), N = 1. In this case, Murphy [144] noted that W should include both I 4 (11) and I 5(11) to ensure compatibility between the linear elastic and nonlinear elastic models, and suggested that W may be written as Furthermore, the following function, which generalizes the Humphrey-Yin model [145], has also been suggested [144] where c 2 > 0 and c 4 > 0 are the isotropic and anisotropic strain-hardening parameters, respectively. (iv) Incremental theory Here, we give a brief overview of the incremental deformation theory, which is involved in finite deformation analysis. The finite deformation from the referenced (undeformed) configuration B r to the current (deformed) configuration B is described by the deformation gradient tensor F, and the small incremental deformation from the deformed configuration B toḂ is denoted asḞ, where (˙) denotes the small increment from B toḂ. As mentioned above, the deformation from B toḂ is assumed to be infinitesimal, and, thus, the two configurations B andḂ are very close to each other. (b) Mechanics of static elastography In static elastography, the deformation and strains are assumed to be small, and the linear elasticity theory is typically used. The specific mechanical model is briefly introduced here based on the governing equations in §3a. Inserting equation (3.11) into equation (3.3) and ignoring the body force and inertia force, we obtain As shown in figure 13a, here, we take an inclusion buried in surrounding soft tissues as an example. For simplicity, we consider a plane strain problem (i.e. the materials cannot deform in the x 3 -direction) [48]. The Lamé constants of the surrounding tissue and the inclusion are λ (1) , μ (1) , and λ (2) , μ (2) , respectively, where the superscripts '(1)' and '(2)' denote the surrounding tissue and the inclusion, respectively. When the tissue is slightly compressed relative to the undeformed configuration B r , the normal displacement at the contact region (−a < x 1 < a, 2a is the width of the ultrasound probe) between the probe and the skin is described asu 2 , whereas the tangential displacement is not constrained. Therefore, the BCs on and σ The boundary ∂B r is assumed to be fixed, i.e. u (1) 2 = 0, on ∂B r . (3.32) At the interface Γ between the inclusion and the surrounding tissue, the two materials are assumed to be tied together; therefore, the interfacial conditions (ICs) are With the BCs and the ICs, the direct problem given by equation (3.30) can be solved either theoretically or numerically, such as via FE. 
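Before turning to the strain maps discussed next, a quick numerical aside on the near-incompressibility relations quoted in the constitutive-law discussion above (λ ≫ μ and μ ≈ E/3 as ν → 0.5) may be useful; the modulus value below is an assumed, illustrative number.

```python
import numpy as np

def lame_constants(E, nu):
    """Lame constants from Young's modulus E and Poisson ratio nu."""
    mu = E / (2.0 * (1.0 + nu))
    lam = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))
    return lam, mu

E = 9e3  # Pa, an illustrative soft-tissue Young's modulus
for nu in (0.45, 0.49, 0.499):
    lam, mu = lame_constants(E, nu)
    print(f"nu = {nu}:  lambda/mu = {lam/mu:8.1f},  mu/(E/3) = {mu/(E/3):.3f}")
```

As ν approaches 0.5 the ratio λ/μ grows without bound while μ tends to E/3, which is the basis for reporting either the shear modulus or Young's modulus interchangeably in elastography.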
In figure 13a, the map of ε 22 within the ROI is calculated for illustration. The distribution of the axial strain within the biological tissues/organs provides physicians with useful information regarding the homogeneity of the tissues/organs. As shown in figure 13a, the strain contrast (i.e. the low compression strain within the elliptical area) obviously indicates the existence of a hard inclusion. The use of elastography to obtain axial strain maps has found wide applications in clinical diagnosis [14,42,45,47]. Although the axial strain can reflect the stiffness distribution in soft tissues, quantitatively measuring the elastic properties of soft tissues using static elastography by solving the inverse problem remains challenging [6,41,48]. This is because, in static elastography, the strain field in the soft tissue depends not only on the physical parameters of the system, but also on the BCs and ICs, which are difficult to accurately determine in most cases. In the literature, methods for obtaining maps of the relative elastic moduli (e.g. the shear moduli ratio μ (2) /μ (1) ) have been investigated [ (a) l (1) , m (1) u . 2 details [6] regarding the challenges and usefulness of determining μ (2) /μ (1) . Assuming that the inclusion is a small cylinder embedded in an infinite matrix and that the composite is compressed, Kallel et al. [48] obtained , (3.34) where Q = μ (2) /μ (1) , P =ε 22 , and ν is the Poisson ratio of both the inclusion and the matrix. Note thatε (1) 22 andε (2) 22 denote the axial strain far from the inclusion and the axial strain at the centre of the cylinder [48], respectively. Using this simple relation, the moduli ratio μ (2) /μ (1) can be determined from the measured axial strain ratio [147]. Another important issue affecting the practical use of static elastography is the effect of nonlinear elasticity. Indeed, in many cases, the pre-compressions were used to improve the image contrast [3,46]. To illustrate this issue, here, we use static incremental theory to analyse the deformation of a soft tissue. In this case, the incremental motion, which is used to compress the pre-deformed tissue and obtain the axial strain image, is quasi-static; i.e. ∂ 2u /∂t 2 = 0. We consider a homogeneous tissue modelled using the Demiray-Fung model [140,141] and pre-compressed along the x 2 -direction (figure 13b). The deformation gradient tensor is homogeneous and denoted as F = diag(λ −1 ,λ,1), where λ is the stretch ratio along the x 2 -axis (i.e. the axial direction). In this case, the incremental stress can be determined using equation (3.27) and is given by whereλ denotes the incremental stretch ratio (figure 13b). When λ = 1 (i.e. there is no precompression), equation (3.35) reduces to S 022 = 4μ 0λ . However, when pre-compression is applied, it stiffens the tissues. For illustration, the variation of the dimensionless stiffness S 022 /(4μ 0λ ) with the hardening parameter b in the Demiray-Fung model [140,141] for different amounts of precompression is plotted in figure 13b. Clearly, the pre-compression results in the overestimation of the tissues stiffness, especially for larger b. For different biological tissues, such as brain, liver and breast, b varies over a wide range (e.g. from 0.2 to 4.5) [8,27]. In the static elastography of breast tissues, whose hardening parameter b is approximately 3.0, the pre-compression strain used in the measurement may be as high as 10% [149]. 
According to figure 13b, in this case, Young's modulus may be overestimated by approximately 60%. To reduce the risk of misdiagnosis [39,150], choosing a proper level of pre-compression is crucial, which requires knowledge of the nonlinear elastic parameters of soft tissues (e.g. the hardening parameter b in the Demiray-Fung model). To further illustrate the effects of pre-compression on the axial strain image contrast, we consider a simple multi-layered model, as shown in figure 13c. In this model, a stiff layer is embedded between two soft layers. The initial shear modulus of the stiff layer is denoted as μ_0(2), and μ_0(1) is the initial shear modulus of the soft layers. The hardening parameters of the stiff layer and the softer layers are denoted as b(2) and b(1), respectively. For illustration, we take b(1) = b(2) = 3. When an overall pre-compression is applied to the composite, the compression strain in the soft layers is greater than that in the stiff layer because μ_0(1) < μ_0(2). When the overall pre-compression is 10%, the compression strain in the softer layers is 12.3%, whereas that in the stiff layer is 5.5%. According to figure 13b, the hardening effect in the softer layers is more significant, indicating that the degree of stiffening within the two types of layer is different. As shown in figure 13c, the axial strain ratio will drop to 1.7 instead of 3.0 when a pre-compression of 10% is applied (i.e. the contrast of the axial strain image will decrease). This issue has also been discussed by Shiina et al. [39] (figure 13d). (c) Mechanics of dynamic elastography with harmonic stimuli For elastography methods with harmonic stimuli, the steady responses of the target tissues are typically used to determine the tissue mechanical properties. The inverse problem involved in this case may be divided into two categories: the modal analysis method and the shear wave analysis method. Modal analysis methods include vibration amplitude sonoelastography and SWIRE [55,61]. Taking a homogeneous medium with a hard/soft inclusion as an example, when the target soft material is forced to vibrate, a steady-state pattern that relies on the mechanical properties of the inclusion is formed. For example, a softer inclusion may yield larger local displacement. Thus, lesions within the tissues can be identified [56]. Moreover, when the frequency of the excitation reaches the resonance frequency of the soft inclusion, the amplitude of the displacement within the inclusion will reach its peak value. Therefore, the contrast between the inclusion and the surrounding tissue also increases. Under the condition that the inclusion is softer than the surrounding matrix, the uniqueness of the resonance mode can be guaranteed; therefore, the resonance frequency can also be used to quantitatively measure the mechanical properties of the inclusion, assisted by an FE model [61]. One challenge in modal analysis methods is that the modal shape relies strongly on the BCs [151]. For example, Gao et al. [55] used fixed BCs, whereas Schmitt et al. [61] used the so-called perfectly matched layer to avoid reflection of the shear waves [152]. Moreover, quantitatively inferring the material parameters from the experimental responses via modal analysis methods requires the use of the FEM, which might complicate the use of this technique by clinicians.
Most other DEHS methods, including vibration-phase gradient sonoelastography, CWI, HMI and SDUV, use the shear wave analysis method to extract the mechanical properties of soft tissues. The wave motion equation can be obtained by inserting equation (3.11) into equation (3.3) When considering the steady state of plane wave propagation, the body force ρb may be assumed to be zero. To consider the dispersion of the shear wave resulting from the viscoelastic deformation of soft tissues, the viscosity of biological tissue must be considered by assuming that the stress depends on the derivatives of the strain components and the strain components themselves [153]. Thus, in equation (3.11), the Lamé constants should be written as where μ 2 and λ 2 are the coefficient of shear and the volume viscosity, respectively. For the plane shear wave considered here, without loss of generality, we suppose that u 2 = u 20 e i(kx 1 −ωt) and that other displacement components are zero, where ω and k are the angular frequency and wavenumber, respectively. Thus, the transverse (shear) wave propagates along the x 1 -axis. Inserting u 2 and equation (3.37) into equation (3.36), we obtain k = ρω 2 μ 1 − iωμ 2 , (3.38) where i denotes the imaginary unit. Because k is a complex number, its real part determines the phase velocity of the shear wave , (3.39) whereas its imaginary part determines the attenuation of the wave [153,154]. Inserting equation (3.38) into equation (3.39), we obtain (3.40) The above equation has been used to describe the dispersion of low-frequency shear waves in biological tissues [16,29,36,74,75]. By fitting the experimental dispersion curve, both μ 1 and μ 2 can be obtained. Based on shear wave elastography methods using harmonic stimuli, in theory, we can quantitatively measure both the elasticity and viscosity parameters of a homogeneous soft tissue. However, a drawback of this type of method is that the resolutions of the methods are limited by the low frequency of the shear waves which corresponds to a relatively large wavelength. When the typical dimension of a soft tissue (e.g. the size of a lesion) is comparable to or even smaller than the wavelength, the mechanical properties cannot be simply determined using an analytical solution such as the one given by equation (3.40). (d) Mechanics of dynamic elastography with transient stimuli The mechanics of DETS is presented in this subsection. Considering that in many soft tissues are clearly anisotropic and that the effects of pre-stresses may come into play, elastography of anisotropic soft tissues and pre-deformed soft tissues is also discussed here. It should be pointed out that the mechanics models discussed here focus on the elastic medium. In this case, the plane wave assumption can be used to derive the correlation between the velocities of elastic waves and material parameters. However, if the medium is modelled as a viscoelastic material and the attenuation of the shear wave is used to infer the viscoelastic parameters, attenuation caused by both the geometry of the wavefront and the viscosity of the medium should be considered. (i) Mechanical model for the transient elastography method The mechanical model underlying the TE method may be simplified as the transient motion of an elastic half-space induced by a local vibrator imposed on the surface. The theoretical analysis of this issue dates back to the classical works of Lamb [155] and Pekeris [156]. 
The key results have been summarized in detail in the textbook [123] and are briefly presented here. To consider the transient shear wave induced in TE, we first consider the motion of a half-space induced by a concentrated force perpendicular to the surface of the half-space, as shown in figure 14a. The BCs at x_3 = 0 are σ_13 = 0, σ_23 = 0 and σ_33 = F_0 δ(x_1)δ(x_2), (3.41) and the initial conditions (ICs) are u_i = 0 and ∂u_i/∂t = 0 when t < 0. (3.42) It is convenient to solve equation (3.36) with the above BCs and ICs in cylindrical coordinates because this problem is axisymmetric. Achenbach solved this problem by using the integral transformation method [123]. Once this problem is solved, the displacement field induced in TE by a locally distributed force can be obtained using the superposition principle [151]. To demonstrate the key mechanics underlying the TE method, FE analysis (FEA) is performed here, and the results are shown in figure 14. Figure 14b indicates the displacement field at a typical time revealed by FEA after the push by the vibrator. Clearly, the shear waves mainly propagate along the direction corresponding to an angle of approximately 45° relative to the vertical direction. The problem is axisymmetric; therefore, the displacements of the material points along the axis of symmetry (i.e. their polarization directions) are parallel to the loading direction. Because the propagation direction of the wave is also along the loading direction, the wave along the loading direction appears to be a longitudinal wave. By tracking the propagation velocity of this wave (i.e. the arrival time of the peak displacement at different points), as shown in figure 14c, the velocity of the wave is found to be c_t instead of c_1, where c_t and c_1 are the bulk transverse (shear) and bulk longitudinal (compression) wave velocities, respectively, which are defined as [123] c_t = (μ/ρ)^(1/2) and c_1 = ((λ + 2μ)/ρ)^(1/2). (3.43) Sandrin et al. explained this phenomenon in terms of the diffraction effect [101]. In their recent work, Catheline & Benech [157] discussed this longitudinal shear wave in detail. The longitudinal shear wave is shown to decay quickly as it propagates away; however, it can be monitored and tracked in TE because of its long wavelength in soft tissues. By tracking the propagation of the longitudinal shear wave along the loading direction, the shear velocity c_t can be measured. Then, according to equation (3.43), the elastic modulus of the incompressible soft tissue can be determined as E = 3ρc_t^2. (3.44) (ii) Mechanical model involved in ARFI-based shear wave elasticity imaging Here, we consider the initial value problem involved in the ARFI-based SWEI method, which uses the focused ARF to induce shear waves in soft tissues. This issue may be modelled as the action of the focused ARF at an internal point in an infinite solid. The body force is determined by equation (2.1) and is given by equation (3.45). Note that the amplitude of f in equation (3.45) can be arbitrary. Without loss of generality, f̂ is parallel to the x_m-axis (m = 1, 2, 3); therefore, f̂ = δ_im e_i, where δ_im is the Kronecker delta. Using the f given by equation (3.45) to replace ρb in equation (3.36), we obtain the equilibrium equation in component form. This equation, together with the initial condition given by equation (3.42), forms the initial value problem involved in SWEI.
The solution to this problem is referred to as solution of the Green function [123,158,159], and that can be tracked with ultrasound elastography in soft tissues reads [34] (3.47) where r = |x|, γ i = x i /r = ∂r/∂x i . The superscript of u Iso im (x, t) indicates that the solid is isotropic, and the subscript m indicates that f is applied along the x m -axis. Equation (3.47) indicates that the shear wave has a spherical wavefront in space. The propagation velocity of the wavefront is c t . Note that according to the elastodynamic representation theorem [123], for any body force f(x,t) which may vary with time and coordinates, the solution to equation (3.36) can be obtained from the Green function solution by For example, various ultrasound push beams have been applied to induce shear waves within soft tissues in CUSE [96,112]. In this case, the distribution of the ARF may be diverse, and the resulting displacement field can be determined by solving equation (3.48) when the distribution of the body force is given. (iii) Mechanics underlying supersonic shear imaging In the SSI technique, the ultrasound beams are successively focused at different depths within the soft tissue (i.e. the ARF moves with a given velocity). In this case, the body force induced by the ultrasound beam can be simplified as (3.49) where v f is the moving velocity of the focused ARF. It should be noted that the ARF is discrete in a practical use and applied at different points instead of a continuous function as given in equation (3.49); however, such a simplification indeed can retain the physical nature of the problem and brings great ease for theoretical and computational modelling. Studying the elastodynamic fields produced by moving loads is a classical issue in mechanics, and many relevant works exist in the literature [160][161][162]. Here, the moving direction of the force is alongf. Without loss of generality, letf = (0, 0, 1), which indicates that the force is moving along x 3 . We consider the case in which the moving velocity v f is substantially greater than the velocity of the shear wave in the soft tissue. In practical applications of the SSI technique, the velocity of the moving ARF is essentially the velocity of the ultrasound wave, which is hundreds of times larger than the shear wave velocity in soft tissues. Inserting equations (3.49) and (3.47) into equation (3.48), the resulting displacement field induced by the moving force has been obtained as [34,107] where M Iso = v f /c t is defined as the Mach number, and The solution given by equation (3.50) forms the basis of the SSI technique [20,107]. According to equation (3.50), M Iso > 1 confines the displacement within a circular cone (i.e. |sinΘ(t)| < 1/M Iso ), indicating the formation of the shear-wave Mach cones. This phenomenon is the so-called ECE [107]. The Mach cone of the elastic wave within the elastic solid can also be induced by other stimuli, such as the movement of a dislocation. In this case, the formation of shear-wave Mach cones is also referred to as the elastodynamic Tamm problem, which has recently been studied by Lazar & Pellegrini [162]. In the SSI technique, the Mach number is large, and the angle of the Mach cone is very small. Therefore, quasi-plane waves form and the interfered wavefronts basically propagate in opposite directions. The moving velocity of interfered wavefronts can be measured to further determine the local elastic properties of soft tissues. 
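For a rough sense of the numbers involved (the push-sweep speed and the shear speeds below are assumptions, not values from a specific scanner), the Mach number and the corresponding cone half-angle implied by the condition |sin Θ| < 1/M can be evaluated directly.

```python
import numpy as np

# Mach number and cone half-angle for a supersonically moving ARF source.
# The focal spot is swept at a speed comparable to the ultrasound speed,
# while shear waves travel at only a few m/s (illustrative values only).
v_f = 1540.0                           # m/s, assumed sweep speed of the push focus
for c_t in (1.0, 2.0, 5.0):            # m/s, assumed shear wave speeds
    mach = v_f / c_t
    half_angle = np.degrees(np.arcsin(1.0 / mach))   # from |sin(theta)| < 1/M
    print(f"c_t = {c_t} m/s  ->  Mach = {mach:6.0f},  cone half-angle = {half_angle:.3f} deg")
```

With Mach numbers in the hundreds the cone is extremely narrow, which is why the interfering wavefronts are, to a good approximation, plane waves travelling laterally at the local shear speed.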
(iv) Mechanics of guided wave elastography The aforementioned methods (i.e. the TE, ARFI-based SWEI and SSI methods) use wave theories in infinite media to interpret experimental data and infer the material parameters by assuming that the wavelength of the shear wave is much smaller than the dimension of bulk tissues such as the liver and breast. However, elastic waves in a thin-walled structure with a thickness smaller than or comparable to the wavelength are guided. The guided waves are strongly dispersive, as shown in figure 15a, and in this case guided wave theory should be used. A typical guided wave is the Lamb wave, which can propagate within an elastic plate and whose dispersion relation is given by equation (3.54). In equation (3.54), p^2 = k_1^2 − k^2 and q^2 = k_t^2 − k^2, where k_1 = ω/c_1 and k_t = ω/c_t, and h denotes the thickness of the plate. Equation (3.54) can be solved numerically. As shown in figure 15b, both the antisymmetric and symmetric modes, denoted as A_n and S_n (n = 0, 1, 2, . . .), respectively, have infinitely many branches. Guided waves such as Lamb waves have been widely used in non-destructive testing [163,164] and the mechanical characterization of engineering materials [165,166]. Unlike in traditional engineering materials, elastic waves in soft materials (e.g. biological soft tissues) have much lower frequencies (typically no more than 2000 Hz). Therefore, only the dispersion curve of the zero-order antisymmetric mode (i.e. the A_0 mode) is adopted in GWE methods. GWE has promising applications in the characterization of arterial stiffness, which may potentially be used to diagnose some cardiovascular diseases [167,168]. Following previous studies [117,118], the arterial wall has been considered to be an elastic hollow cylinder surrounded by water both inside and outside. As shown in figure 15c, the inner radius and wall thickness of the hollow cylinder are R and h, respectively. Furthermore, the curvature of the hollow cylinder is ignored. This assumption will be discussed in detail later and is justified only when the frequency is sufficiently high. Then, the model can be simplified as an elastic plate immersed in fluid, as shown in figure 15c. In this case, the dispersion equation for the antisymmetric mode is [169] (k^2 − q^2)^2 tan(ph/2) + 4k^2 pq tan(qh/2) + i ρ_F p ω^4/(ρ r c_t^4) = 0, (3.55) where r^2 = ω^2/c_p^2 − k^2, c_p = (κ/ρ_F)^(1/2) is the velocity of the pressure wave in the fluid, and κ and ρ_F are the bulk modulus and mass density of the fluid, respectively. Directly comparing equations (3.54) and (3.55) shows that the effect of the surrounding fluid appears in the third term of equation (3.55). When ρ_F is close to ρ, the surrounding fluid will significantly affect the dispersion relation, as shown in figure 15d. In practice, the fluid is considered to be water, for which κ = 2.2 GPa and ρ_F = 1000 kg m−3. The mass density of soft biological tissues is usually very close to that of water and is typically assumed to be 1000 kg m−3. For a plate immersed in water, the phase velocities of the A_0 mode are significantly smaller than those of a plate in vacuum within the frequency range of 0-2000 Hz. Equation (3.55) can be used to fit the experimental dispersion curve to infer the elastic modulus of the arterial wall. Recently, vessel-mimicking phantom experiments have revealed that only the experimental dispersion curve in the high-frequency range can be well approximated by equation (3.55) [117,118]. Therefore, Maksuti et al.
[170] adopted a frequency of 500 Hz as a critical frequency and used only the data beyond this critical frequency in the curve fitting. As mentioned above, this deviation stems from the assumption that the effect of curvature on the dispersion relation is negligible. In fact, this assumption is only valid when the frequency is sufficiently high [171]. The critical frequency, which is denoted as f c , and beyond which the Lamb wave model given by equation (3.55) works in principle, depends on both R/h and the elastic modulus of the cylinder. In a recent study, using dimensional analysis and systematic FE simulations, the explicit expression of the critical frequency was obtained [119] When the frequency exceeds f c , the relative error between the phase velocity given by equation (3.55) and that of the guided axial wave is less than 5%. Furthermore, an inverse approach based on equations (3.56) and (3.55) has been proposed to identify both the elastic modulus of an artery and the critical frequency from the experimental dispersion curve [119]. Typical results for vessel-mimicking phantoms with different elastic moduli are presented in figure 16. (v) Shear wave elastography of anisotropic soft tissues Many soft tissues such as skeletal muscles and tendons are anisotropic and may be described using the TI model given by equation (3.10). In the following discussion, the fibre direction is always along the x 3 -axis. In anisotropic soft media, the velocities of shear waves are direction-dependent. The displacement caused by the plane wave is assumed to be u = Ue i(k·x−ωt) , where k = kk is the wavevector, k = |k| denotes the wavenumber, and ω represents the angular frequency. The phase velocity c is defined as c = ω/k, and U = U i e i , where e i denotes the base vector. Inserting u into equation (3.10) to obtain the stress components, the equilibrium equation (3.3) can be written as ( 3.57) According to equation (3.57), the existence of a non-zero U requires From equation (3.58) and using the relationship between C ijkl and μ T , μ L and E L , the two bulk shear wave speeds can be obtained from and The subscripts 'SH' and 'qSV' denote the horizontally and quasi-vertically polarized shear waves, respectively [172]. Here, we define C = (E L + μ T −4μ L )/2. Note C = 0 denotes a special type of TI material; in this case, the phase velocity of the 'qSV' mode becomes isotropic, as shown in figure 17. Besides, the group velocity, which is defined as c g = ∂ω/∂k (i.e. the velocity of a wave packet) can be obtained according to equation (3.58). When measuring the arrival time of the shear waves generated by a concentrated force, the group velocity can be determined from the travelling distance of the wave divided by the arrival time [154,173]. In SWEI, the group velocities can be measured in this way, and the mechanical parameters can be determined [35,106]. Figure 18 shows the shear waves in TI media generated by a simulated ARF [35]. The force is applied along the Z-direction, which has an angle of 45°w ith the x 3 -axis direction (fibre direction); therefore, both shear wave modes are induced. The wavefront propagates away from the load point at the group velocity. Clearly, for materials with different values of C, the shapes of wavefronts for the qSV mode are different, as predicted by equation (3.59). Because the qSV mode, which is related to E L , can be induced in this way, Rouze et al. 
[35] suggested using ARFI-based SWEI with the experimental set-up described by their FEA to fully characterize the elastic parameters of a TI material. More conveniently, if the phase velocities can be measured, then equation (3.59) can be used to determine the mechanical properties. The phase velocities are easy to measure for plane waves. Because of the ECE in the isotropic elastic medium, quasi-plane waves can be generated within the soft tissue by an ARF moving at a high speed. Addressing the ECE in anisotropic elastic media is therefore very important for the determination of the anisotropic elastic parameters of soft tissues. Li et al. [34] recently studied and revealed the ECE in TI media through both theoretical analysis and numerical simulations. Figure 19a-f shows the displacement field when the moving direction of the ARF has a 45° angle with the fibre direction. According to equation (3.59), the phase velocity of the SH mode depends only on the two shear moduli μ_T and μ_L [36,174]. However, the phase velocity of the qSV mode is highly dependent on the parameter C (or E_L). For illustrative purposes, the normalized displacements of the qSV mode at two points located on the X-axis (denoted as P_1 and P_2, respectively) are plotted in figure 19g-i. The shear wave velocity can be determined from the arrival time of the peak displacement. In this case, the phase velocities of the qSV mode with wavevector k̂ = (√2/2, 0, √2/2) can be determined [34]. Then, according to equation (3.59), we have E_L = 4ρc_qSV,45°^2 − μ_T, (3.60) where c_qSV,45° denotes the velocity of the interfered wavefronts measured from figure 19g-i. To fully determine the mechanical properties of the TI medium, the experimental set-up shown in figure 20 has been proposed [33]. When the ultrasound probe is placed as shown in figure 20, the qSV mode shear wave can be generated and used to measure E_L if μ_T has been determined. Such an experimental protocol can be easily realized using the SSI technique, and the three independent elastic parameters of skeletal muscles can be measured in vivo. Experiments have been conducted on the biceps brachii and gastrocnemius muscles of volunteers [33], and the shear moduli determined are in good agreement with previously reported data [36,47,175-177]. (vi) Shear wave elastography of pre-stressed soft tissues In most ultrasound elastography measurements, the soft tissue is assumed to be stress free. However, in practical measurements, the contact between the probe and the soft tissue may lead to finite deformation. Moreover, analysing the propagation of the shear wave in a pre-stressed soft tissue enables the determination of the tissue's hyperelastic parameters. The effect of pre-stress on wave propagation has been investigated previously [26,178-181]. Here, we use the incremental dynamic theory described in §3a to address the effect of pre-stress on the propagation velocities of shear waves in soft tissues [8,34,131,133,134,182]. The deformation gradient (i.e. F) induced by the pre-deformation is assumed to be homogeneous in the ROIs. In this case, equation (3.29) can be simplified to equation (3.61), where λ_i denotes the stretch ratio along the x_i-axis (figure 21), and λ_1 λ_2 λ_3 = 1 because of the incompressibility constraint. Under these conditions, the fourth-order tensor A_0piqj can be determined according to equation (3.28). First, we consider isotropic hyperelastic materials.
For a plane wave propagating within the x_1−x_2 plane, the incremental displacement satisfies u̇_3 = 0, and we can define u̇_1 = ψ,2 and u̇_2 = −ψ,1, where ψ is a scalar function of (x_1, x_2, t) assumed to take the harmonic plane-wave form ψ ∝ exp[ik(x_1 cos θ + x_2 sin θ − ct)], in which k and c denote the wavenumber and phase velocity, respectively, and θ denotes the wave propagation direction, as shown in figure 21. Inserting the incremental displacement components into equation (3.61) and eliminating ṗ, we can obtain the explicit expression for the phase velocity [133,139]: (C_1 + C_3 − 2C_2)cos^4 θ + 2(C_2 − C_3)cos^2 θ + C_3 = ρc^2, (3.65) where the coefficients C_1, C_2 and C_3 depend on the strain energy function and the pre-stretches; their full expressions are given in [133,139]. In practice, we may compress the soft tissue along the x_1-axis, i.e. λ_1 = λ is prescribed. Then we define λ_2 = λ^−ξ, where the parameter ξ is determined by the deformation state within the soft solid; for example, ξ = 0.5 indicates uniaxial compression, whereas ξ = 1 represents the plane-strain state. In practical measurements, ξ can be measured [8,27]. Using these notations, equation (3.65) can be rewritten as equation (3.66). Figure 22a shows the dependence of the wave speed on the parameter ξ for different λ when b = 5. Clearly, when 0.1 ≤ ξ ≤ 0.8, the effect of ξ on the wave velocity is not significant. Equation (3.66) has been adopted in [8,27]. Figure 22b,c shows the shear wave velocities measured before and after compression. By fitting the shear wave velocities at different compression strains with equation (3.66), the hardening parameter b can be obtained. Effects of the pre-stress on wave propagation have been studied [26-28,182] for isotropic hyperelastic materials described with the constitutive law given by equation (3.15). The analytical solution of equation (3.67) was derived by assuming that the soft material is loaded with a uniaxial pre-compression stress σ_11 along the x_1-axis and that the shear wave propagates along the x_2-axis. Using equation (3.67), the parameter A has been measured for agar-gelatine and polyvinyl alcohol cryogel phantoms. In their recent work, Bernal et al. [28] further adopted equation (3.67) to establish an inverse approach to measure A. In their approach, the strain field is measured using static elastography, and, thus, the stress field can be determined. By gradually increasing the compression and measuring the incremental strain and shear wave velocities, the relationship between the stress and the shear wave velocity can be obtained and used to determine A according to equation (3.67). The initial experiments, as shown in figure 23, have demonstrated the potential of this method for enhancing the contrast between a lesion and its surrounding tissues [28]. In addition to isotropic hyperelastic materials, the effects of pre-stress on shear wave propagation in TI hyperelastic materials have also been studied [33,34,183-185]. For the constitutive law given by equation (3.21), the phase velocity of the SH mode shear wave can be obtained as equation (3.68), where I_1 is defined in equation (3.13). When no pre-deformation exists (i.e. λ_1 = λ_2 = λ_3 = 1), equation (3.68) reduces to equation (3.59). The hardening parameter c_2 can be measured by using the inverse method proposed in [34]. The experiments conducted in [33] determined the hardening parameter c_2 of beef muscles ex vivo (figure 24). Figure 25. Axial-shear strain images from FEM of (a) a firmly bonded inclusion and (b) a loosely bonded inclusion; the coefficient of friction was 0.01, the applied axial strain was 2%, and the inclusion was twice as stiff as the background in both cases [186].
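To tie together the transversely isotropic relations discussed above, the sketch below assembles the three TI parameters from shear wave speed measurements. It assumes the standard relations ρc² = μ_L for SH propagation along the fibres and ρc² = μ_T across them, together with equation (3.60) for E_L; the speeds are illustrative, not data from [33].

```python
import numpy as np

RHO = 1000.0  # kg/m^3, tissue density assumed equal to that of water

def ti_parameters(c_sh_along, c_sh_across, c_qsv_45, rho=RHO):
    """Recover (mu_L, mu_T, E_L) of an incompressible transversely isotropic
    tissue from three shear wave speed measurements.

    Assumes the standard relations rho*c^2 = mu_L for SH waves along the
    fibres, rho*c^2 = mu_T across the fibres, and the 45-degree qSV relation
    E_L = 4*rho*c_qSV45^2 - mu_T (equation (3.60) in the text).
    """
    mu_L = rho * c_sh_along**2
    mu_T = rho * c_sh_across**2
    E_L = 4.0 * rho * c_qsv_45**2 - mu_T
    return mu_L, mu_T, E_L

# Illustrative speeds of the order reported for skeletal muscle
mu_L, mu_T, E_L = ti_parameters(c_sh_along=4.0, c_sh_across=2.0, c_qsv_45=3.0)
print(f"mu_L = {mu_L/1e3:.1f} kPa,  mu_T = {mu_T/1e3:.1f} kPa,  E_L = {E_L/1e3:.1f} kPa")
```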
Discussion This paper provides an overview of both static elastography and dynamic elastography with focus on the mechanics principles underlying these methods. Static elastography is relatively easy to realize and can qualitatively differentiate between soft tissues with different stiffnesses because soft tissues with different elastic moduli will undergo different amounts of deformation under external or internal stimuli. In addition to the compression strain along the loading direction, other deformation information may be acquired in static elastography. For instance, the interfacial bonding conditions between a benign tumour and the surrounding soft tissue and those between a malignant tumour and surrounding soft tissue are usually different. Therefore, by tracking the deformation at the interface (e.g. the shear strains at the interface), it is possible to differentiate a malignant tumour from a benign one using the static elastography method [186], as illustrated in figure 25. Here, the challenge is to accurately evaluate shear strains at the interface, which deserves further efforts. In DEHS, a vibration source with a given frequency can be used to stimulate the soft tissue. Low-frequency (typically 100 Hz) shear waves generated in the soft tissue in the steady state are relatively easy to track using ultrasound imaging methods. However, a low-frequency shear wave corresponds to a relatively large wavelength. When the typical dimension of a soft tissue (e.g. the size of a lesion or inclusion) is comparable to or even smaller than the wavelength, its mechanical properties cannot be simply determined using an analytical solution such as equation (3.40). By contrast, shear waves generated in DETS are generally broadband, and include more information that can be used to infer the material parameters. The centre frequency is typically 1000 Hz; therefore, the DETS in theory has better resolution than DEHS. In this sense, the DETS methods may be more suitable for evaluating the mechanical properties of soft tissues with finite dimensions. For instance, for a multilayer system, e.g. human skin [187], when a harmonic vibrator is used to induce shear waves in the dermis layer, the shear wave velocity depends not only on the material parameters of the dermis layer, but also on the mechanical properties of the adjacent layers and the geometrical parameters of the composites [187]. However, FEA shows that when a DETS method (e.g. the SSI technique) is used in this case, the velocity of the interfered wavefronts mainly depends on the mechanical properties of the dermis layer, and the effects of other parameters in the system are rather weak [187]. However, it should be noted that the viscosity of soft biological tissues may strongly attenuate the high-frequency shear waves and can significantly reduce the signal-to-noise ratio during the propagation of transient waves and affects the accuracy of wave velocities measured in a DETS method. Although previous studies have demonstrated the usefulness of dynamic elastography methods for practical measurements, quantitatively measuring tissue mechanical properties using these methods remains challenging in many cases. Here, we take the GWE of arteries as an example. Real arterial walls have layered structures and GWE methods described in the literature are only able to determine the effective elastic modulus of the arterial wall [122]. Second, real artery tissues are anisotropic and contain distributed collagen fibres [142]. 
Although recent studies have demonstrated that determining the anisotropic and hyperelastic properties of a bulk anisotropic soft tissue is possible [33,34], assessing the anisotropic parameters of arterial walls using the shear wave elastography method is by no means trivial because the wave in the arterial wall is guided. Third, the modelled arterial wall is usually assumed to be a time-independent material, whereas a real artery may exhibit viscoelastic deformation. For viscoelastic soft tissues, the wavenumber k is no longer a real number but could be a complex number, reflecting the dissipation of the wave. Chan & Cawley [188] studied the dispersion curve of a viscoelastic plate. Their results showed that the lowest mode (i.e. the mode adopted in the inverse analysis) was essentially unaffected by the viscosity of the soft media. They also demonstrated that the high-order modes suffered from viscoelastic effects. Therefore, caution should be taken when adopting these higher-order modes. Fourth, in the literature, the outer region of the arterial wall was simplified to a non-viscous fluid to derive the theoretical solution in recent GWE methods. This assumption is only reasonable in some cases, such as carotid arteries, in which the elastic moduli of the arterial walls can be much greater than those of the perivascular tissues. Fifth, the effects of blood pressure in in vivo experiments cannot be ignored. In this case, a robust GWE method should consider guided waves in a pre-stressed soft tube. Finally, note that identifying the material parameters using either static elastography or dynamic elastography represents an inverse problem. As mentioned above, an inverse problem may suffer from the issues of solution existence, uniqueness and stability. Among these issues, the stability of the solution is usually the key because the lack of stability will make the solution of an inverse problem have nothing to do with the real solution. Recently, Jiang et al. [8] introduced the concept of the condition number in analysing the stability of nonlinear elastic parameter A determined using the SSI technique via acoustoelasticity theory. For the strain energy density function given by equation (3.15), the following condition number in closed-form was derived in their study the ROI [8]. Accordingly, the condition number measures the sensitivity of the identified solution of an inverse problem to data errors. The larger the condition number is, the more sensitive the identified solution to data errors will be. For example, when the condition number is 5, an error of 5% in the input data will lead to an error of 25% in the identified solution. Equation (4.1) reveals that sufficient compression (e.g. λ < 0.75) should be imposed on the soft tissue to decrease the condition number. Therefore, to develop a robust elastography method for characterizing the mechanical properties of soft tissues or other soft materials, the existence, unique and stability issues of the solution to the inverse problem must be addressed. However, this issue has not received sufficient attention in the literature regarding ultrasound elastography. Concluding remarks Ultrasound-based elastography has emerged as a highly useful technique for characterizing the mechanical properties of soft materials, including living soft tissues, because of the extensive experimental and theoretical research performed in recent years, which has improved understanding of this technique and its applications, particularly in clinics. 
From the viewpoint of continuum mechanics, recent findings have contributed to shaping a set of unanswered questions that require investigations in future studies of static and dynamic elastography methods. Researchers, particularly those in the mechanics community, deserve to pay attention to these important issues. Some of these questions include the following: (i) How can we improve the current elastography methods based on the knowledge from the field of continuum mechanics to quantitatively determine the mechanical properties of soft tissues in critical cases, such as when the soft tissues are anisotropic and/or have finite dimensions (e.g. tumours) that significantly influence the propagation of shear waves? (ii) What opportunities exist for developing robust GWE methods to characterize the mechanical properties of thin-walled soft tissues, including arteries and bladder, in vivo based on the knowledge of guided wave in pre-stressed thin-walled soft solids? (iii) Diseases may alter the structures and functions of soft tissues/organs and change their mechanical properties. How can we model the diseased tissues/organs and determine which mechanical parameters (e.g. elastic, hyperelastic, viscoelastic and poroelastic parameters) are sensitive to the diseases, and further inspire the development of new elastography techniques? (iv) What new fundamental science can be explored (e.g. the propagation of elastic waves induced by a moving vibration source in pre-stressed inhomogeneous living soft tissues across different length scales)? (v) What new techniques are becoming available that could expand applications of ultrasound elastography and provide new opportunities to characterize diverse soft materials far beyond biological soft tissues? Although the practical use of ultrasound elastography in quantitatively measurements of material parameters still faces challenges in many cases, based on the advances made in understanding this promising technique in recent years and the aforementioned opportunities for further study, one can reasonably predict that this technique has a bright future in a variety of fields, including not only medicine, but also biology, materials science, tissue engineering and soft matter physics. Note that this review gives emphasis to the mechanics theories involved in ultrasound elastography. Although understanding the responses of soft materials to various internal or external stimuli is by no means trivial, the knowledge obtained from continuum mechanics indeed helps yield some analytical solutions that can be used to interpret the experimental data. Finally, we conclude this review with an old aphorism: 'Simplicity is beauty'. Aleksandr Solzhenitsyn said, in his 1970 Nobel Prize speech, that 'Beauty will save the world'. By pursuing fundamental solutions in simple forms that reveal the correlation between experimental responses and material parameters within the framework of continuum mechanics, we are not merely pursuing 'beauty' but also providing fundamental solutions that contribute to understanding ultrasound elastography methods and facilitate their practical use, particularly in medicine. Competing interests. We declare we have no competing interests. Funding. We acknowledge support from the National Natural Science Foundation of China (grant nos. 11572179, 11172155, 11432008 and 81561168023).
Stability of weathered cut slope by using kinematic analysis Rock slope engineering involves the design of safe and stable rock slopes for a certain lifespan. Different rock slopes are made of materials of different strengths that react differently to external forces. Rock strength can decrease over time due to the weathering process. In addition, the presence of discontinuities can lead to structurally controlled slope failure. The study focuses on a large exposed rock slope in the Sri Jaya area, where the exposed granitic rock is cut by several sets of discontinuities. In addition to the discontinuities, the slope exhibits several weathering grades. The objective of this study is to map the weathering and discontinuities of the rock slope in order to determine its stability. Schmidt rebound hammer tests were carried out to determine the extent of the weathering grade of the slope material. The collected field data were then analysed using kinematic analysis to determine the potential mode(s) of failure. From the analysis, it is predicted that there is a significant potential for toppling failure and wedge failure to occur in the weathered rock material. Introduction Weathering is the breakdown of soils, rocks and minerals that are in contact with the Earth's atmosphere, water and air. Weathering of rocks takes place over long periods of time, with rocks on the Earth's surface experiencing the process faster than rocks buried underground. Weathering is one of the processes that lead to soil formation. There are different types of weathering that affect rocks, which include physical/mechanical, chemical and biological weathering. Physical/mechanical weathering breaks the rock down into smaller fragments. Weathering affects not only strong rocks but also weak masses, which include material weathered from originally fresh rock that now forms soil. Chemical weathering is more common and occurs faster in tropical regions such as Malaysia, due to heat and the abundant water from rain [1]. Weathering reduces the strength of rock, which can in turn affect cut slope stability. Several methods are available to assess the stability of rock slopes. Kinematic analysis is one of the conventional methods of slope stability analysis, alongside limit equilibrium analysis and rock fall simulators [2]. In basic terms it is a geometric analysis that examines the modes of slope failure that are feasible in a given rock mass, and it can be applied both to slopes that have already been excavated and to proposed rock slopes. The analysis was developed by [3], with the subsequent modifications by [4,5] commonly used in modern analyses. In a kinematic analysis, it is the orientation of the sets of discontinuities and the slope face, together with friction, that are examined to determine if certain modes of failure can occur. The analysis is commonly conducted with the stereographic representation of the planes and lines of interest. It is a popular choice of slope stability analysis among local engineers working on rock slopes [6]. This study sets out to investigate the effect of weathering of rocks on the stability of a cut slope. A large exposed rock slope in the Sri Jaya area was selected as a case study; several stages of weathering can be observed on its slope faces. The relationship between the weathering condition and the stability of the rock slope was studied. Fieldwork In this study, the main objective is to assess the stability of a weathered rock cut slope by using kinematic analysis.
The data needed for the analysis include the Schmidt rebound hammer test readings and discontinuity mapping (scanline mapping) of the rock slope. Fieldwork was carried out in order to determine the extent of the weathered rock and, ultimately, to study its effect on the potential failure of the slope. The study area is located in Sri Jaya, Pahang, where a large exposed cut slope is found (Figure 1). The weathering condition of the slopes means that there is potential for weakening of the slope, which could lead to failures. The rock mass along the slope consists of granitic rocks. Figure 1. View of the exposed cut slope at Sri Jaya. Schmidt rebound hammer test The Schmidt rebound hammer test is a simple test used to determine the surface hardness of the weathered material of the rock slope (Figures 2 and 3). The test was carried out at intervals of 5 m along a scanline on the slope face. At every interval, the test was repeated 10 times in order to obtain the average value of the rebound number, following the method suggested by [7]. The values can be used to determine the weathering grade of the rock. Discontinuity data The scanline method was carried out in order to obtain measurements of the dip and dip direction of the discontinuities for the purpose of kinematic analysis. Discontinuity mapping is a fieldwork-based data collection method and is one of the components used to calculate rock mass classifications in rock engineering. It is a reliable method to measure and describe discontinuities, which refer to fractures or breakages in the rock, such as joints, faults, beddings, and foliations. Discontinuity mapping is characterized by measurement of the number of discontinuity sets, size, spacing intercept, location, orientation and mean density. The discontinuity data from the mapping were used as input for software to analyse the rock slope condition, from which the potential failure of the rock slope could be predicted. The software used is Dips [8], which is capable of calculating the percentage of potential modes of failure of a slope. Table 1 shows the results of the Schmidt rebound number from the field mapping. Based on the results, it was found that most of the slope consists of rock material of weathering grade II (WG II). Based on the weathering classification of granite, it was determined that for WG II granite the average Schmidt rebound hammer value is more than 45 [9]. Rocks that do not give any rebound number are classified as weathering grade WG IV or lower. Kinematic analysis Data for the discontinuities of the rock slope were processed in the software Dips, which requires the orientation of the discontinuities (joints) and the slope face orientation. In the analysis, four modes of failure could be calculated: planar sliding, wedge sliding, flexural toppling and direct toppling. The kinematic analysis was carried out along panels of the slope faces, separated into four panels (panel 1 to panel 4); the results are summarized in Table 2. Due to the orientation of the joints relative to the way the slope surface was excavated, the potential for planar failure is low, as the joints are not oriented in a way that would facilitate the occurrence of planar sliding. However, with joints that dip towards the slope direction, together with the excavation of the slope to form high-level berms, there is significant potential for toppling failure to occur.
In addition, the intersection of two or more joint sets in the rock slope could also lead to potential wedge failure, where the rock strength is weakened along more than one joint set. However, kinematic analysis only provides the potential for slope failure based on discontinuity mapping of the slope. In order to further predict the stability of the slope based on the weathered material, more detailed and wider in-situ and laboratory testing of the weathered material is required [10]. This is to ensure that proper protection measures can be undertaken for the slope.
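To make the Schmidt rebound grading rule described above concrete, the short Python sketch below averages ten rebound readings per 5 m scanline station and assigns a weathering grade using only the two rules quoted from [9]: a mean rebound above 45 indicates WG II granite, and a station with no rebound at all is WG IV or lower. The station readings, and the WG III label for intermediate values, are illustrative assumptions rather than data or rules from the study.

```python
from statistics import mean

# Hypothetical rebound readings: 10 hammer blows per 5 m station along the scanline.
stations = {
    0:  [48, 50, 46, 52, 47, 49, 51, 45, 50, 48],
    5:  [38, 40, 36, 41, 39, 37, 42, 40, 38, 39],
    10: [],  # no rebound recorded at this station
}

def classify(readings):
    """Assign a weathering grade from the mean Schmidt rebound number.

    Thresholds follow the rules quoted in the text: a mean above 45 indicates
    WG II granite, while a station with no rebound at all is WG IV or lower.
    Intermediate values are labelled WG III here only as a working assumption.
    """
    if not readings:
        return None, "WG IV or lower (no rebound)"
    avg = mean(readings)
    if avg > 45:
        return avg, "WG II"
    return avg, "WG III (assumed for intermediate rebound values)"

for chainage, readings in stations.items():
    avg, grade = classify(readings)
    label = f"{avg:.1f}" if avg is not None else "-"
    print(f"{chainage:>3} m  mean rebound = {label:>5}  ->  {grade}")
```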
1,710.8
2021-02-01T00:00:00.000
[ "Geology" ]
NADA : New Arabic Dataset for Text Classification In recent years, Arabic Natural Language Processing, including text summarization, text simplification, text categorization and other natural-language-related disciplines, has been attracting more researchers. Appropriate resources for Arabic Text Categorization are becoming a big necessity for the development of this research. The few existing corpora are not ready for use; they require preprocessing and filtering operations. In addition, most of them are not organized based on standard classification methods, which leads to unbalanced classes and thus reduces the classification accuracy. This paper proposes a New Arabic Dataset (NADA) for the Text Categorization purpose. This corpus is composed of two existing corpora, OSAC and DAA. The new corpus is preprocessed and filtered using recent state-of-the-art methods. It is also organized based on the Dewey Decimal Classification scheme and the Synthetic Minority Over-Sampling Technique. The experiment results show that NADA is an efficient dataset ready for use in Arabic Text Categorization. Keywords—Data collection; arabic natural language processing; arabic text categorization; dewey decimal classification; synthetic minority over-sampling INTRODUCTION Data collection consists of gathering information to assess the outcomes and validate the research study. The accuracy of data collection is crucial to preserve the integrity of the research. Data collection is required in all research areas and studies, such as mathematics, physics, the humanities, business, computer science and many more. Arabic Text Categorization is one application of Natural Language Processing in Computer Science that needs a huge amount of text documents to perform classification. Access to freely available corpora is a desirable aim. Unfortunately, these corpora are not easily found or are not designed for Arabic Text Categorization, such as the Al-Dostor newspapers corpus [1]. In other words, the existing corpora ([2], [3] and [4]) need modification before usage, for example, increasing the number of classes, performing preprocessing techniques and providing the corpus in specific formats to facilitate the integration of the data. In fact, most of the existing Arabic corpora do not follow any technique for organizing the class hierarchy. This hierarchy helps illustrate the needed classes and keeps the corpus balanced to accomplish an accurate result. Moreover, some of the existing Arabic corpora are not dedicated to classification, because either there are no defined classes, such as the 1.5 billion words Arabic Corpus [5], or the existing classes are not well defined ([6], [7], and [8]). Furthermore, most of the available corpora are published as raw data, which requires applying linguistic pre-processing operations such as cleaning, tokenization, normalization and stemming before use. Consequently, researchers in this field face a fundamental problem in comparing the results of their proposed methods with those of the state-of-the-art techniques. This makes the validation step more difficult and time-consuming. So, a new Arabic corpus that overcomes the above limitations is extremely needed.
In this paper, we present NADA, a New Arabic Dataset built from two existing Arabic corpora and complemented with extra classes and documents. To cover classes from different domains, a standard classification scheme (the Dewey Decimal Classification scheme (DDC) [9]) is used to provide a logical hierarchy of the classes needed in document classification. In addition, to reach a high classification accuracy, the Synthetic Minority Over-Sampling Technique (SMOTE) [10] is applied to make the classes balanced. NADA is composed of 10 categories belonging to different domains, including Social science (e.g., economics and law), Religious science (e.g., Islamic religion), Applied science (e.g., health), Pure science (e.g., technology), Literature science, and Arts science (e.g., sport). After the data was assembled and organized, preprocessing and filtering methods were applied to make the data ready for use in ANLP and particularly in the ATC field. This paper is organized as follows. Section 2 introduces the Arabic language. Section 3 presents the Dewey Decimal Classification scheme. Section 4 surveys the existing Arabic corpora. Section 5 shows the formation of the NADA corpus. Section 6 displays the experiment results, and finally Section 7 concludes this work. II. ARABIC LANGUAGE Arabic is a complex language. It has diverse characteristics that make it different from other languages. The Arabic word contains diacritics placed above or below the letters rather than short vowels. However, these diacritics are usually left out in contemporary writing and are expected to be filled in by readers from their knowledge of the Arabic language [11]. Furthermore, in Arabic, many letters have a similar structure and are differentiated only by the existence and number of dots. For example, the letters b (ب), n (ن) and t (ت) have the same structure but differ in the location and number of dots. Moreover, the shapes of Arabic letters depend on the placement of the letters in the word; 22 Arabic letters have four shapes (isolated, word-initial, word-medial, and word-final). In Arabic, nouns and adjectives involve genders [12]. Another obvious complex characteristic of the Arabic language is the richness of its vocabulary. For example, the word "darkness" has 52 synonyms, "short" has 164, and "cloud" has 50 [12]. III. DEWEY DECIMAL CLASSIFICATION In order to arrange resources on shelves and facilitate the retrieval process, the Dewey Decimal Classification scheme (DDC) can be used. This scheme is mostly used in libraries. DDC is a hierarchical number system that organizes all resources into ten main categories [9]. Each main category is then divided into ten sub-categories, and so on. In this study, this scheme is used to help build NADA. IV. RELATED WORKS The first step in text classification studies is data collection. The collected data must be suitable for the classification purpose. Data collection is required in every language for which text classification or other NLP applications are performed. Many corpora can be found for the English language (for example, the Newsgroup English benchmark [13], the ACL Anthology Reference Corpus (ACL ARC) [14], the Reuters 21578 English corpus [15], and the Reuters Corpus Volume 1 (RCV1) [16]) as well as for other languages, such as the Chinese Souhu News corpus [17] and a Thai dataset [18].
In the Arabic language, state-of-the-art studies have presented a number of Arabic corpora, such as the Al-Nahar, Al-Jazeera, Al-Hayat and Al-Dostor newspaper corpora [1], the Hadith corpus [4], the Akhbar-Alkhaleej corpus [2], Arabic NEWSWIRE [3], the Quranic Arabic Corpus [4], the Watan-2004 corpus [6], Khaleej-2004 [19], the KACST Arabic corpus [20], the BBC Corpus [7], the CNN Corpus [8], the Open Source Arabic Corpora (OSAC) [21], and an Arabic corpus composed of the Watan-2004 and Khaleej-2004 corpora. Table 1 summarizes the existing corpora dedicated to ATC research. Even though there are freely available Arabic corpora used in Arabic processing projects, most of them are either not suitable for text classification, or they might be appropriate for classification but the data still needs more filtering, processing and format conversion steps, which can negatively affect the classification accuracy. On the other hand, a few commercial corpora are available, but at an extremely high cost. So, the need to develop new free corpora is critical in Arabic Text Categorization. V. NADA DATASET SETUP The NADA corpus is collected from two existing corpora, namely the Diab Dataset (DAA) corpus and the OSAC corpus. The DAA dataset has nine categories, each of which contains 400 documents. Each category has its own directory that includes all files belonging to this category. These files have already been preprocessed and filtered [22]. The documents in each class of the DAA corpus are considered in the NADA corpus. On the other side, the OSAC dataset [21] has six classes, each containing between 500 and 3,000 raw documents. Each category has its own directory that includes all files belonging to this category. The OSAC dataset is raw data that requires preprocessing. For this, each text file is pre-processed as follows: 1) the digits, numbers, hyphens, punctuation marks and all non-Arabic characters are removed; 2) some letters are normalized to unify the writing forms; 3) Arabic stop words like pronouns, articles, and prepositions are removed; 4) light stemming is applied to the dataset to remove prefixes and suffixes from the words. However, the Chen stemmer and the Khoja algorithm for extracting roots are not employed, because root stemming is usually not valuable for Arabic text classification tasks, due to the conflation of various words to the same root form [12]. Furthermore, to reduce the dimensionality of the dataset, the recently proposed Firefly-based feature selection [23] is used. The Firefly Algorithm is a well-known Artificial Intelligence technique applied here to select the relevant words from a given document. This technique is applied to each document to reduce its size. The processed and filtered documents are considered in the NADA dataset.
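The four preprocessing steps described above can be illustrated with a short Python sketch. The stop-word list, prefix and suffix lists, and normalization rules below are illustrative placeholders and are not the resources actually used to build NADA.

```python
import re

# Illustrative (not the authors') resources: a tiny stop-word list and a few
# common prefixes/suffixes for light stemming.
STOP_WORDS = {"في", "من", "على", "إلى", "عن", "هذا", "التي", "الذي"}
PREFIXES = ("وال", "بال", "كال", "فال", "ال", "و")
SUFFIXES = ("ات", "ون", "ين", "ها", "ية", "ه")

def normalize(word):
    # Unify common letter variants (alef forms, final ya / ta marbuta).
    word = re.sub("[إأآا]", "ا", word)
    word = word.replace("ى", "ي").replace("ة", "ه")
    return word

def light_stem(word):
    # Strip at most one matching prefix and one matching suffix,
    # keeping a minimum stem length of three characters.
    for p in PREFIXES:
        if word.startswith(p) and len(word) - len(p) >= 3:
            word = word[len(p):]
            break
    for s in SUFFIXES:
        if word.endswith(s) and len(word) - len(s) >= 3:
            word = word[:-len(s)]
            break
    return word

def preprocess(text):
    # 1) keep Arabic letters only (drops digits, punctuation, non-Arabic characters)
    text = re.sub(r"[^\u0621-\u064A\s]", " ", text)
    tokens = text.split()
    # 2) normalization, 3) stop-word removal, 4) light stemming
    return [light_stem(normalize(t)) for t in tokens if t not in STOP_WORDS]
```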
In this study, the DAA and OSAC datasets are partitioned into two parts to build the training and testing data for the classification purpose. By this step, the NADA corpus is constructed and becomes available for usage. This construction is based on the DDC scheme to make its classes well organized. Figure 1 displays the hierarchy of the NADA corpus; only the green classes and subclasses are considered in NADA. Furthermore, the SMOTE technique is used to balance the classes and thus increase the classification performance [10]. The data collection is summarized in Tables 2 and 3, and the corpus is provided in the following formats. ARFF file: an ASCII file that contains a group of instances with a set of attributes. These instances are the text scripts contained in the text files; each instance represents one text file. This file format is necessary to analyze and process the corpus using the WEKA tool [5]. Text files: each file contains Arabic script in a specific category. These text files are classified into 7 categories, as shown in Table 3. Sampled file: to avoid the impact of class imbalance on the classification results of the collected dataset, SMOTE [10] is used to balance the dataset classes. The impact of SMOTE is shown in Figure 2. VI. EXPERIMENTAL RESULTS After the CSV file is generated, it is converted into a sparse ARFF file format using the TextDirectoryToArff converter and the StringToWordVector converter in WEKA (version 3-7-13). To measure the performance of classifying NADA, the recall, precision and F1 measures are calculated and averaged using an SVM classifier. To apply the experiment, training and testing data are required. So, the entire dataset is gathered in one ARFF file. Then, the data is divided into two partitions using the percentage method, where the first partition is the training data, with 60% of the dataset, and the second partition is the testing data, with 40% of the dataset. According to the results in Table 6, the classification accuracy of NADA is 93.8792%, even though the classification accuracy of OSAC is 98.1758% (Table 4). The reason behind the degradation of NADA's classification accuracy is the low accuracy of DAA, which is 80.9087% (Table 5). This can be explained by the fact that DAA is not well preprocessed and/or filtered, which negatively affected the classification result. For the running time, Tables 4, 5 and 6 show the time taken to classify each dataset. The time required to classify NADA is 1467.62 seconds, which is about 24 minutes and 28 seconds. This time is higher than the time needed to classify the OSAC and DAA datasets. This is because the number of instances in the NADA dataset, which is 13066, is higher than those of the OSAC and DAA datasets, which are 3710 and 3600 instances respectively.
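The experiment above was run in WEKA. For readers working in Python, the following scikit-learn sketch reproduces the same pipeline shape (term-vector features, a 60/40 split, SMOTE balancing and an SVM with precision/recall/F1 reporting). It is an analogue under assumptions, not the authors' WEKA configuration, and it requires the separate imbalanced-learn package; here SMOTE is applied only to the training split.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.metrics import classification_report
from imblearn.over_sampling import SMOTE  # class balancing, as in the paper

def evaluate(docs, labels):
    """docs/labels are assumed to hold the preprocessed documents and their categories."""
    X = TfidfVectorizer().fit_transform(docs)        # term-weight features
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, labels, train_size=0.6, test_size=0.4, stratify=labels, random_state=0
    )
    X_tr, y_tr = SMOTE(random_state=0).fit_resample(X_tr, y_tr)  # balance the classes
    clf = LinearSVC().fit(X_tr, y_tr)                # SVM classifier, as in the experiments
    print(classification_report(y_te, clf.predict(X_te), digits=4))
```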
VII. CONCLUSION This research study was performed to meet the extreme need for Arabic corpora and to overcome the difficulties faced by ANLP researchers, especially in the ATC field, in finding an appropriate corpus. NADA is a New Arabic Dataset built from two existing Arabic corpora, the OSAC and DAA datasets. This corpus follows a standard classification scheme (DDC) to provide a logical hierarchical presentation of classes. The NADA corpus is composed of 10 categories, covering 5 classes from the first level of the DDC and some classes from the second level. To increase the classification performance, the SMOTE technique is applied to balance all the classes. This dataset has passed through preprocessing and filtering steps to reduce researchers' efforts in rebuilding an Arabic corpus. NADA is tested and validated using an SVM classifier and three evaluation measures. The experimental results show that NADA is an efficient dataset for the ATC purpose. This corpus can be extended by adding new classes and documents to increase its usage, especially in Big Data and Deep Learning. To conclude, NADA is a well-organized dataset ready for use for the ATC purpose and can be considered as a benchmark in this field of research and study. Fig. 1. NADA corpus based on the DDC hierarchy. Table 2 shows the categories and the number of documents of the OSAC and DAA datasets, and Table 3 displays the content of the new corpus. Table II. OSAC and DAA Arabic datasets.
2,821
2018-01-01T00:00:00.000
[ "Computer Science" ]
Petrographic study and its implication to the uniaxial strength of weathered volcanic rocks from Tawau, Sabah This paper discusses a petrographic study and its effect on the uniaxial strength of weathered volcanic rocks from Tawau, Sabah, Malaysia. The volcanic rocks consist of associated dacite, andesite and basalt of Pliocene to Quaternary age. In this study, the Murphy (1985) classification was used to determine the weathering grade of the volcanic rocks. The uniaxial strength values were obtained from the Point Load Test, via an estimated conversion, and from the Uniaxial Compressive Strength test. The microstructures and the identification of altered minerals were analysed using a scanning electron microscope (SEM) and a polarized microscope, respectively. The results of the analysis indicated that the uniaxial strength of the volcanic rocks decreased with increasing weathering grade, where the uniaxial strength decreased from 122.2 to 15.8 MPa for dacite, 143.4 to 10.1 MPa for basalt and 181.2 to 26.8 MPa for andesite. This result is due to the different percentages of quartz and feldspar minerals in the rock samples, as well as the formation of secondary minerals in the weathered rocks. The microstructure study showed that the appearance of micro-fractures with narrow apertures in the minerals also influenced the uniaxial strength of the rocks. INTRODUCTION The strength of rocks varies depending on the rock type, discontinuities and weathering. The effect of weathering on the engineering properties of rocks has been studied by previous researchers, and various weathering classifications have been proposed [1,2,3]. The weathering process consists of chemical, physical and biological dynamic processes with agents such as water, air, organisms and climate [4]. Chemical weathering is defined as a decomposition process of rocks caused by the reaction of the mineralogical composition with water, carbon dioxide and humidity [4]. Physical weathering, meanwhile, is a slaking and disintegration process caused by the force of water movements, temperature and changes in internal stress [5]. The continuous weathering process contributes to the decrease of the physical characteristics of rocks [2]. High humidity results in an extensive chemical weathering process and a decrease of the physical properties [6]. Earth materials can be categorized into five layers according to the weathering grade, which is not limited to the rock surface. Weathering grade can be classified from Grade II (slightly weathered) to Grade V (completely weathered) based on certain parameters such as color changes, strength index, rock-soil ratio (RSR) and micro-indices (micropetrography index, Imp, and micro-fracture index, Ifr) [7]. In tropical areas, the weathering profile can be observed from completely weathered material at the surface to fresh rock at the bottom of the profile. According to Fookes [8], Grades I and II are classified as fresh rock, Grades III and IV as a combination of rock and soil, whereas Grades V and VI are classified as soil. Irfan and Dearman [3], however, modified Fookes's classification so that it can be used for all types of rocks. Weathering processes affect the strength of rocks due to the formation of soil material and the increase of porosity [6]. The decrease in strength occurs when the bonds between mineral particles break apart, thus forming micro-cracks and new minerals during weathering [5]. Therefore, the objective of this research is to study the petrography and its implication for the uniaxial strength of weathered volcanic rocks collected from Tawau, Sabah.
The study area is located at Tawau, Sabah, one of the areas in the east of Sabah that has experienced active volcanic activity. The age of the volcanic rocks in the study area is estimated as Pliocene to Quaternary [9]. The Pliocene volcanic rocks are situated at Mt. Magdalena, Mt. Wullersdorf, Mt. Pock, and Mt. Lucia, while the Quaternary volcanic rocks can be found at Mt. Maria, Bombalai Hill, Tiger Hill and Mostyn Hill (Figure 1). The rock distributions are situated around Tawau town, from Apas-Balung in the eastern part to Brantian in the western part of the Tawau district. MATERIALS AND METHODS Three types of volcanic rocks, consisting of dacite, andesite and basalt, were collected from the study area; they are widely distributed around the Balung and Merotai areas of Tawau, Sabah (Figure 1). The active weathering process has resulted in a thick soil profile of up to 5 meters in most of the volcanic profiles. The colour is controlled by the mineral composition, in which the primary minerals are altered to secondary minerals throughout weathering Grades II (slightly weathered) to V (completely weathered). Petrography plays an important role, as volcanic rocks exhibit various mineral compositions and textures. Thus, it is crucial to observe how weathering can affect the uniaxial strength of volcanic rocks with different petrography. The classification of the rock samples was done based on the QAP diagram, which is used to classify volcanic rocks [10]. In this study, the weathering grade classification by Murphy [11] (Table 1) is used. Murphy's classification was modified to include the sound and feel of the square end of a rock hammer hitting the rock, though this depends on many external factors, including the strength of the person wielding the hammer and the weight of the hammer itself [2]. In-situ sampling was conducted, where volcanic rock samples of Grades II to IV of cubic size (20 cm x 20 cm x 15 cm) were collected, except for Grade V, where sampling was difficult to carry out due to its very weak material. Grade I could not be collected due to its depth and sampling conditions. All samples were then placed into sample bags and labelled before proceeding to laboratory analysis. The weathering degree can also be identified from the condition of the joints on the surface of the rocks. Grade II showed closed joints with minor colour changes, while in Grade III the joint opening increases and is filled with calcite and kaolinite; at this point half of the volcanic rock sample has degraded to a residual soil. In highly weathered volcanic rocks (Grade IV), the joint opening increased and the spacing decreased, with clay as the common filling. Grade V, meanwhile, showed nearly 90% of the material turned to residual soil with clay as the main mineral, due to hydrolysis of the abundant feldspars in the volcanic rocks. Table 1. Weathering grade classification by Murphy [11]. Fresh (I): No visible signs of weathering. Rock is fresh. Crystals are bright. The rock hammer rings and bounces back. Slightly Weathered (II): Discontinuities are stained or discolored and may contain a thin filling of altered material. Discoloration may extend into the rock from the discontinuity to a distance of 20% of the discontinuity spacing. The rock hammer rings and bounces back. Moderately Weathered (III): Slight discoloration extends from discontinuity planes for a distance of more than 20% of the discontinuity spacing. Discontinuities may contain filling of altered material. Partial opening of grain boundaries is observed. The hammer 'thuds'. Highly Weathered (IV): Discoloration extends throughout the rock, and the rock material is partly friable. The original texture of the rock has mainly been preserved, but separation of the grains has occurred. The hammer 'thuds', and fragments of rock and individual mineral grains on the surface can easily be broken or rubbed off by hand. Completely Weathered (V): The rock is totally discolored and decomposed and in a friable condition. The external appearance is that of a soil. Internally, the rock texture is partly preserved, but the grains have been completely separated. The pick end of the hammer easily enters the rock. Residual Soil (VI): Not included.
Uniaxial compressive strength analysis involved the Point Load Test (PLT) and the Unconfined Compression Test (UCT) [12] on intact rock samples. The PLT is an accepted rock mechanics testing method used for the calculation of an intact rock strength index. It is also a versatile, field-based index method capable of deriving strength values. The data obtained can be used to correlate the PLT index (Is50) with the uniaxial compressive strength (UCS) [13] and to propose appropriate Is50 to UCS conversion factors [12] (Table 2) for the volcanic rock samples. The changes of microstructures in the volcanic rocks throughout the weathering process were observed using a Carl Zeiss polarized light microscope and the scanning electron microscope (SEM) technique with a Philips XL40 at 60 psi pressure and 15 to 20 kV voltage. The observation of minerals includes the alteration of the mineral composition, the existence of micro-fractures and the increase of pore spaces. Mineral percentages were counted using a 10 mm x 10 mm gridding technique (Figure 2). Volcanic Rocks Classification Volcanic rock samples in the study area were collected based on hand specimen observation and previous references that showed the distribution of volcanic rock types. To verify the initial assumption and interpretation of the types of volcanic rocks collected, further analysis was done using the QAP diagram [10] to classify the volcanic rock samples. The percentages of quartz, alkali feldspar and plagioclase feldspar were counted (Table 3) using the gridding technique. The mineral percentages were then plotted in the triangular chart for the classification of the rock samples. Based on Figure 3, the volcanic rocks were classified as dacite (Sample A), andesite (Sample B) and basalt (Sample C). Dacite consists mostly of plagioclase feldspar (45%) with minor appearances of biotite and pyroxene, along with rounded quartz (40%) and phenocrysts. Dacite is intermediate in composition between andesite and rhyolite, and its groundmass is composed of plagioclase and quartz (Figure 4A). The petrographic features of andesite showed 70% plagioclase phenocrysts and 22% orthoclase, with small amounts of quartz (8%), pyroxene and amphibole in a porphyritic texture (Figure 4B). Meanwhile, the petrographic features of basalt showed the domination of plagioclase (90%) and the presence of orthoclase (9%), with minor amounts of clinopyroxene and olivine in an aphanitic texture (Figure 4C). Uniaxial Compression Strength Table 4 summarizes the results of the uniaxial compression strength tests on dacite, andesite and basalt. The results showed that the uniaxial strength of the rocks decreased with increasing weathering grade (Table 4). The existence of intergranular pores between minerals and of micro-fractures reduced the strength of the rock [14,15,16]. However, the reduction of uniaxial strength is non-linear with increasing weathering grade (Figure 5). Failure in the form of fractures resulted in uneven fracture planes in the rock samples.
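As a worked illustration of the Is50-to-UCS conversion mentioned above, the sketch below computes a size-corrected point load index and scales it by a conversion factor. The factor k = 22 and the example load are placeholders; the rock-type specific factors of the paper's Table 2 are not reproduced in this extract and should be used instead when available.

```python
def point_load_index(P_kN, De_mm):
    """Size-corrected point load strength index Is(50) in MPa.

    P_kN  : failure load in kN
    De_mm : equivalent core diameter in mm
    The size-correction exponent 0.45 follows the ISRM suggested method.
    """
    Is = (P_kN * 1000.0) / (De_mm ** 2)   # uncorrected index, MPa (N/mm^2)
    F = (De_mm / 50.0) ** 0.45            # size-correction factor
    return F * Is

def estimate_ucs(Is50, k=22.0):
    """Estimate UCS (MPa) from Is(50) using a conversion factor k (placeholder value)."""
    return k * Is50

# Example: a 50 mm specimen failing at 5.5 kN
Is50 = point_load_index(5.5, 50.0)
print(f"Is(50) = {Is50:.2f} MPa, estimated UCS ~ {estimate_ucs(Is50):.1f} MPa")
```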
Based on Figure 5, the relation shown by the graphs clearly exhibited a higher strength of andesite compared to dacite and basalt. Andesite showed the highest strength, 181.2 MPa, in Grade II, due to the existence of plagioclase phenocrysts and interlocking quartz. This contributed to a high frictional angle, which caused firmer particle contact and thus increased the strength of the particles in andesite [17]. Basalt, meanwhile, gained its 143.4 MPa because of its aphanitic texture, in which the fine grains are tightly packed, and dacite reached 122.2 MPa due to the dominance of quartz and glassy matrix in this grade. In Grade III, the increment of intragranular micro-fractures decreased the strength of andesite to 104.6 MPa. This, however, still classified andesite as a very strong rock [12] due to its porphyritic texture, where angular quartz in a fine-grained matrix gave a higher degree of bonding to the andesite (Figure 6). Basalt and dacite, meanwhile, showed significant drops of strength to 87.4 MPa and 73.4 MPa respectively, due to the formation of porosity (Figure 7), which caused the particles to move apart and fail easily. In Grade IV, the uniaxial strength of andesite continued to decrease to 87.7 MPa but was still classified as strong, while dacite and basalt showed moderately strong values of 39.4 MPa and 35.0 MPa respectively. This is due to the existence of clay minerals (57.9% to 60.2%) originating from the weathering of feldspars, which led to increased porosity. Clay minerals reduced the friction angle to 4° [17,18], which affected the initial frictional angle. The formation of micro-fractures caused particles to slide along the failure axis, thus decreasing the uniaxial strength of the volcanic rock samples in Grade IV. In Grade V, all volcanic rocks showed their lowest uniaxial strength, with 26.8 MPa, 15.8 MPa and 10.1 MPa for andesite, dacite and basalt respectively. Both basalt and dacite showed a similar strength classification of moderately weak, while andesite showed a moderately strong value in Grade V. This is due to the homogeneous, fine-grained texture with more than 65% clay domination (Figure 8). Rounded grains move easily against each other, resulting in a low internal friction angle. Basalt showed a higher strength than andesite due to the existence of kaolinite and montmorillonite, which played a role as a cohesive material between the grains. Slightly weathered rock (Grade II) showed discoloration and 50-60% of the fresh rock strength. In the moderately weathered grade, 50% of the rock structure starts to disintegrate and retains approximately 30% of the fresh rock strength. In the highly weathered grade, more than 50% of the structure disintegrates, with less than 15% of the fresh rock strength remaining [3]. Under the influence of weathering, the strength, density and volumetric stability of the rock are reduced, whilst deformability, porosity and weatherability are increased. This can lead to a significant reduction in rock strength [19]. CONCLUSION Chemical weathering has altered the rock-forming minerals in the andesite, dacite and basalt volcanic rocks. The weathered rocks formed secondary minerals and pore spaces between mineral grains. The presence of pore spaces, micro-fractures and secondary minerals due to the weathering processes affected the uniaxial compressive strength of the rocks. Andesite of Grades II to V showed a decrease of strength from 181.2 to 26.8 MPa; basalt decreased in strength from 143.4 to 10.1 MPa; whereas dacite strength decreased from 122.2 to 15.8 MPa.
The primary minerals in volcanic rocks, such as quartz and plagioclase, which are resistant to weathering, gave higher strength to the rocks. However, in weathered volcanic rocks the appearance of clay minerals derived from plagioclase reduces the strength of the samples. In conclusion, the weathering process is able to alter the rock-forming minerals and micro-fabrics of rocks and thus reduces the uniaxial strength of the volcanic rocks. Fig. 2. Gridding technique used to count the percentage of minerals existing in rock samples (not to scale). Fig. 5. Relation between uniaxial strength and weathering grade of volcanic rocks, showing that the weathering process decreases the strength. Fig. 6. Andesite showing the existence of (A) interlocking quartz, which contributed to a high frictional angle; continuous weathering produces (B) an abundance of micro-fractures and (C) porosity, which caused particles to slide along the failure axis. Fig. 7. Dominance of (A) quartz and glassy matrix gave high strength to dacite before it decreased due to (B) the increment of intragranular micro-fractures and (C) inter-pores at higher weathering grades. Fig. 8. Decrease in strength due to the existence of clay minerals, which reduced the friction angle and produced porosity in basalt. Table 3. Rock classification based on major mineral composition of the volcanic rock samples. Table 4. Effect of weathering grade on the uniaxial strength of the volcanic rock samples.
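For quick reference, the uniaxial strength values reported in the Results above can be tabulated to show how much of the Grade II strength each rock retains at higher weathering grades; the short sketch below does exactly that, using only the values quoted in the text.

```python
# Uniaxial strength (MPa) by weathering grade, as reported in the text.
ucs = {
    "andesite": {"II": 181.2, "III": 104.6, "IV": 87.7, "V": 26.8},
    "dacite":   {"II": 122.2, "III": 73.4,  "IV": 39.4, "V": 15.8},
    "basalt":   {"II": 143.4, "III": 87.4,  "IV": 35.0, "V": 10.1},
}

# Strength retained relative to the slightly weathered (Grade II) value,
# illustrating the non-linear decrease described above.
for rock, by_grade in ucs.items():
    ref = by_grade["II"]
    retained = ", ".join(f"{g}: {100 * v / ref:.0f}%" for g, v in by_grade.items())
    print(f"{rock:<9} {retained}")
```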
3,326.4
2014-08-01T00:00:00.000
[ "Geology" ]
Automatic Fault-Tolerant Control of Multiphase Induction Machines: A Game Changer : Until very recently, the fault tolerance in multiphase electric drives could only be achieved after fault localization and a subsequent modification of the control scheme. This scenario was profoundly shaken with the appearance of the natural fault tolerance, as the control reconfiguration was not required anymore. Even though the control strategy was highly simplified, it was still necessary to detect the open-phase fault (OPF) in order to derate the electric drive and safeguard its integrity. This work goes one step beyond and suggests the use of an automatic fault-tolerant control (AFTC) that also avoids the detection of the OPF. The AFTC combines the natural fault-tolerant capability with a self-derating technique, finally obtaining a hardware-free software-free fault tolerance. This achievement changes completely the rules of the game in the design of fault-tolerant drives, easing at the same time their industrial application. Experimental results confirm in a six-phase induction motor (IM) drive that the proposed AFTC provides a simple and safe manner to add further reliability to multiphase electric drives. Introduction The advantages of multiphase drives over their three-phase counterparts are nowadays well known, exploited and applied [1]. In addition to the possibility of activating exclusive modes of operation [2], such as the enhanced braking capability [3], the two most attractive features are likely the capability to enhance the efficiency and reliability of the electric drive [4]. The post-fault operation is especially critical in applications where security is a main concern (e.g., aircraft, electric vehicles), but it is also appreciated when the shut-down of the electric machine involves a significant economic impact (e.g., wind energy conversion systems) [5]. Standard three-phase machines can only achieve a satisfactory post-fault operation with the insertion of additional hardware [6]. Instead, multiphase machines provide a hardware-free fault tolerance taking advantage of their inherent redundancy [7]. Despite the lack of additional hardware, the enhanced reliability is obtained at the expense of a much higher control algorithm complexity. First of all, it is mandatory to detect and localize the open-phase fault (OPF). Secondly, the current references must be modified according to the fault scenario. Thereby, it is necessary to store different sets of current references and select the one that corresponds to the specific fault scenario [4]. Thirdly, the current controllers must change their structure. This stage varies depending on the control approach. For the sake of example, in field oriented control (FOC) the x-y proportional-integral (PI) current controllers are typically converted into proportional-resonant controllers (PR) in order to cope with the nonconstant nature of the x-y current references after the fault occurrence [7]. Finally, the drive needs to be derated to avoid over-currents and safeguard the integrity of the electric drive [4]. To sum up, the fault tolerance traditionally requires four stages: stage one (detection and localization), stage two (modification of the current references), stage three (control reconfiguration) and stage four (derating). It can be concluded that, although feasible, the improved reliability is obtained with a rather high software complexity. 
In this context, some recent works have suggested the use of a control scheme that remains valid both before and after the fault occurrence. This alternative approach skips the aforementioned stages two and three and has been baptised as natural fault tolerance, tested in six-phase systems with predictive strategies [8] and FOC [9], and also extended to nine-phase systems [10]. The core idea in these works is to maintain the x-y current control in open-loop mode in order to avoid the conflict with d-q controllers in post-fault situation. While this key feature is implemented in direct controllers using virtual voltage vectors [8], FOC simply deactivates the closed-loop x-y current controllers after the fault occurrence [9]. Although the higher simplicity of the natural approach was a tipping point in the design of fault-tolerant regulation strategies for multiphase drives, stages one and four were still mandatory to protect the machine and converter from eventual over-currents. Since the derating of the drive does not require the knowledge of the specific phases under OPF, a simplified version of stage one was suggested in [11]. However, the determination of the fault scenario and the subsequent derating were still required [11]. This work completes the simplification of the fault-tolerant control by suggesting a procedure for the self-derating of the multiphase electric drive. The current limits are set in a variable manner; therefore, the maximum current values are immediately changed after the fault occurrence. The proposed procedure guarantees that the stator copper losses are below rated values both in pre and post-fault situations. Moreover, since this procedure is automatic, stage four is no longer required. Similarly, stage one can also be omitted from an operational perspective: detection can be useful for diagnosis purposes, but not to adapt the post-fault operation [12]. By combining the natural fault-tolerant strategy with the self-derating procedure, the electric drive becomes fault-tolerant with no action at all. While the natural approach avoids stages two and three, the self-derating procedure skips stages one and four. This software-free regulation strategy will be referred from now on as automatic fault-tolerant control (AFTC). Compared to previous works, this proposal can be regarded as the first AFTC strategy because all other strategies require some kind of software modification, either in the control structure itself or in the setting of the operating limits. In few words, while multiphase machines have a potential hardware-free fault-tolerant capability, the suggested AFTC achieves this enhanced reliability in a software-free fashion. The result brings a highly attractive feature for industry because it provides simple means to empower the electric drive with a higher robustness. The paper is structured so that the background on FOC for six-phase drives is introduced in Section Two, the proposed AFTC strategy is described in Section Three, the experimental results are discussed in Section Four and the main conclusions are summarized in Section Five. Generalities of Six-Phase Electric Drives An asymmetrical six-phase induction motor (IM) and a dual three-phase two-level voltage source converter (VSC) were employed in this study ( Figure 1). 
Although it is possible to model the six-phase IM in phase variables (a 1 , b 1 , c 1 , a 2 , b 2 , c 2 ), it was more convenient for regulation purposes to employ the vector space decomposition (VSD) approach via the application of the current invariant generalized Clarke transformation: The distributed-winding IM could then be modelled in VSD variables [1], where the different components have a clearer meaning: α-β currents generate the flux/torque, whereas x-y currents are just parasitic currents that flow through the stator of the IM. As in three-phase drives, the α-β currents were typically transformed into a synchronous reference frame using the Park rotational transformation, whereas the x-y currents were transformed with the inverse of this transformation matrix: where θ s is the instantaneous position of the reference frame that is obtained from the measured stator currents and rotor parameters [1]: Natural Fault-Tolerant Indirect Rotor Field Oriented Control (IRFOC) In multiple three-phase machines with isolated neutral points, the standard indirect rotor field oriented control (IRFOC) strategy usually employs a control structure based on the utilization of one outer PI speed controller and 2n/3 PI stator current controllers. This implies that four VSD currents are under control in a six-phase machine: the d-current regulates the flux production, the q-current regulates the torque production, and the x -y currents are simply driven to zero in order to reduce stator copper losses ( Figure 2). When an OPF occurs, the phase current cannot flow through the damaged phase, appearing as a new restriction in the system. For example, if an OPF occurs in phase a 1 , the new restriction in the system will be [8]: It can be deduced from Equation (4) that α-β and x-y planes are no longer independent in post-fault situation, hence it becomes impossible to satisfy the x-current reference (set to zero) and the α-current reference (set to a nonzero value). Therefore, it becomes clear that α and x controllers are seeking incompatible goals. This conflict implies that x-y controllers will disturb α-β currents, ultimately affecting the torque and speed regulation of the electric machine. The traditional solution has been to modify the stator current references (setting i * x = −i * α ), using resonant controllers in the form of double PI regulators. Nevertheless, this requires the detection/ localization of the fault and a fast control reconfiguration in order to avoid undesirable transients after the fault occurrence [4]. An alternative solution to evade the conflict between α-β and x-y controllers is to simply deactivate the closed-loop regulation of the x-y currents [9]. The need to modify the current references (stage 2) obviously disappears, and the control reconfiguration (stage 3) can also be circumvented if the activation of the open-loop x-y control is naturally achieved. This can be accomplished if the closed-loop x-y control is eliminated in healthy operation, but this solution can lead to suboptimal solutions in the presence of machine asymmetries or nonideal effects [9]. For this reason, it is suggested in [9] to set a low saturation threshold for the x-y PI controllers ( Figure 2). This procedure maintains the x-y controllers in healthy operation and automatically deactivates them after the OPF occurrence. 
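Since the transformation matrices of Equations (1)-(3) are not reproduced in this extract, the sketch below builds one common current-invariant VSD matrix for an asymmetrical six-phase winding (two three-phase sets displaced by 30 degrees) together with a d-q rotation. It is an assumption about the general form of these transformations, used only to show how measured phase currents map to the alpha-beta, x-y and zero-sequence components discussed above; it is not the paper's exact definition.

```python
import numpy as np

# Phase angles of an asymmetrical six-phase winding (a1 b1 c1 | a2 b2 c2),
# with the second set displaced by 30 degrees. This is a common VSD
# construction, standing in for the matrix of Equation (1).
theta = np.array([0, 2, 4, 0.5, 2.5, 4.5]) * np.pi / 3  # rad

T_vsd = (1 / 3) * np.vstack([
    np.cos(theta),        # alpha
    np.sin(theta),        # beta
    np.cos(5 * theta),    # x
    np.sin(5 * theta),    # y
    [1, 1, 1, 0, 0, 0],   # zero-sequence, set 1
    [0, 0, 0, 1, 1, 1],   # zero-sequence, set 2
])

def to_vsd(i_phase):
    """Map the six measured phase currents to VSD components."""
    return T_vsd @ np.asarray(i_phase)

def park(alpha, beta, theta_s):
    """Rotate the alpha-beta currents into the synchronous d-q frame."""
    d = np.cos(theta_s) * alpha + np.sin(theta_s) * beta
    q = -np.sin(theta_s) * alpha + np.cos(theta_s) * beta
    return d, q
```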
Even though the natural fault-tolerant control avoids stages 2 and 3, it is still necessary to detect the fault and identify the fault scenario (stage 1) in order to derate the drive (stage 4) and safeguard the drive integrity [11]. The next section reveals how to get rid of these remaining stages, so that the fault-tolerant control becomes fully automatic. Automatic Fault-Tolerant Control (AFTC) After OPFs occur, the number of active phases is reduced. In order to maintain the same operating point as in the healthy situation, the rms value of the phase currents must increase. This rise of the post-fault phase currents results in higher stator copper losses, eventually causing severe damage if the winding temperature increases above the insulation class of the winding. For this reason, a derating is mandatory in order to avoid overheating of the motor [4]. Since the d-current is typically set to a constant value in the base-speed region, the derating can be done by simply defining a threshold for the q-current (termed i qmax in what follows) [13]. This constant value for the saturation of the q-current is valid in prefault operation because the x-y currents are regulated to zero. Nevertheless, after the fault occurrence the x-y currents are no longer null, Equation (4), and the value of i qmax becomes excessive. Traditionally, it is suggested to change the value of i qmax after the fault occurrence according to the specific fault scenario [13]. This procedure is completely logical in fault-tolerant control schemes with reconfiguration [1], because the fault scenario needs to be identified in any case. Consequently, the effort to change the value of i qmax is minimal. However, in natural fault-tolerant approaches the detection and localization of the faulty phases are not required for control purposes, hence the derating of the drive becomes the only reason to maintain stage 1 (fault localization). It is in this context where a self-derating would avoid not only stage 4, but also stage 1. Aiming to further simplify the fault tolerance, the value of i qmax needs to be variable without the need to detect or localize the fault. In other words, the saturation of the q-current should vary after the OPF even when the drive does not know that the fault has occurred. For this purpose, it is necessary to consider the rms value of the phase currents, calculated from VSD variables as [3]: where the x-y currents have a close-to-zero value in a healthy situation and the d-current is fixed at its nominal value, so variations of the rms value of the phase currents are solely due to the q-current. On the other hand, the maximum rms phase current is determined by the winding rated current (i rated). Setting the limit i s = i rated and replacing in Equation (5), an expression of the maximum q-current as a function of other VSD variables can be obtained: When an OPF occurs, the x-y currents immediately have a nonzero value because of the fault restriction, Equation (4), hence automatically reducing the maximum value of the q-current. As can be deduced from Equation (6), the higher the value of the x-y currents, the lower the threshold i qmax. Taking into account that the x-y currents rise after the OPF occurrence without any control action, because it is a physical restriction, it follows that the derating (i.e., the post-fault value of i qmax is lower than the healthy value) is applied in an automatic manner. Furthermore, the variable threshold from Equation (6) guarantees that the post-fault stator copper losses are below rated values.
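A compact sketch of the self-derating threshold follows. Because Equation (5) is not reproduced in this extract, the rms phase-current expression used here is only one plausible form for a current-invariant transformation and is flagged as an assumption in the code; what the sketch is meant to illustrate is the structure of the procedure, namely solving i s = i rated for the maximum q-current.

```python
import math

def i_q_max(i_d, i_x, i_y, i_rated):
    """Variable q-current saturation threshold (self-derating).

    Assumes the rms phase current can be written as
        i_s = sqrt((i_d**2 + i_q**2 + i_x**2 + i_y**2) / 2),
    which is only one plausible form of Equation (5); substitute the exact
    expression from the paper if it differs. Setting i_s = i_rated and
    solving for i_q gives the threshold below.
    """
    radicand = 2.0 * i_rated**2 - i_d**2 - i_x**2 - i_y**2
    return math.sqrt(max(radicand, 0.0))

# Healthy operation: x-y currents near zero -> full q-current headroom.
print(i_q_max(i_d=1.0, i_x=0.0, i_y=0.0, i_rated=3.2))
# After an OPF: non-zero x-y currents automatically shrink the threshold.
print(i_q_max(i_d=1.0, i_x=1.5, i_y=0.8, i_rated=3.2))
```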
The integration of the self-derating procedure into the natural IRFOC scheme can be done by including the variable saturation from Equation (6) at the output of the PI speed controller, as depicted in Figure 2. The variable nature of the maximum q-current value implies that the drive automatically sets different current limits in pre and post-fault situations in order to prevent any damage. It is worth highlighting that the PI control parameters, as well as the control structure, do not change after the fault occurrence. The addition of the self-derating procedure shown in Figure 2 completes the proposed AFTC and allows the software-free fault-tolerant control of the six-phase IM drive. Finally, it is worth noting that the self-derating was included as a saturation threshold for the PI speed controller, therefore this procedure can also be successfully applied to other current control schemes, such as model predictive control (MPC) or direct torque control (DTC) [14]. Figure 3 shows the employed test bench, where the six-phase IM was driven by two conventional two-level three-phase VSCs (Semikron SKS22F modules (Semikron, Nuremberg, Germany)). Parameters of the aforementioned six-phase IM have been obtained using ac-time domain and stand-still with inverter supply tests (see Table 1). The VSCs are supplied by a single 300 V DC power source and the control actions are performed by a digital signal processor (TMS320F28335 from Texas Instruments (TI, Dallas, TX, USA)). Phase currents and speed measurements were obtained using four hall-effect sensors (LEM LAH 25-NP (LEM, Bourg-la-Reine, France)) and a digital encoder (GHM510296R/2500 (Sensata, Attleboro, MA, USA)). The six-phase IM is loaded by coupling its shaft to a dc machine. A variable passive load (R passive) was connected to the dc machine and, consequently, the load torque was speed-dependent. On the other hand, the OPFs have been provoked using a controllable relay board implemented between the inverter and the machine. Experimental Results Test 1 is designed to verify the response of the self-derating algorithm when the operating point is achievable in the post-fault situation (Figure 4). The reference speed was fixed to 500 rpm and the load torque was equal to 1 Nm. At t = 2 s an OPF is forced in phase a 1 (Figure 4e), highlighting the post-fault restriction, Equation (4), in the system (Figure 4d). Despite the OPF occurrence, the speed and d-q current tracking was successfully done, obtaining a similar performance as in reconfigured control approaches [9]. The maximum q-current i qmax was reduced after the fault from 4.48 A to 4.18 A, as can be observed in Figure 4f. Nonetheless, its value remained higher than the actual q-current reference, therefore the saturation was not reached and the dynamics of the drive were not affected. As expected from Equation (4), the post-fault x-current is no longer null, reducing the value of i qmax as a result of the self-derating from Equation (5). In the case of test 1, the modification of the maximum achievable q-current did not affect the control performance, but in other operating conditions this saturation can be reached, as illustrated in the next test. In test 2, the operating point after the OPF is not reachable if the integrity of the system is to be safeguarded. Therefore, the automatic fault-tolerant approach forces the control to reduce the motor speed in order to satisfy the prevention requirements (Figure 5).
The reference speed was set to 600 rpm and the load torque was increased up to 4.6 Nm. As in test 1, an OPF was provoked at time t = 2 s in phase a 1 (Figure 5e), so that the current cannot flow through the damaged phase. When the OPF occurs, the maximum q-current i qmax was reduced from 4.43 A to 3.45 A, suffering a reduction of 22.12% (Figure 5c). After the derating, the value of i qmax dropped below the prefault value of the q-current, therefore saturation took place. The limitation of the q-current (Figure 5b) after the saturation reduced the torque production and deactivated the closed-loop speed control. The torque was then regulated in open-loop mode and the motor speed consequently dropped down to 458.5 rpm (Figure 5a). The response of the self-derating algorithm is depicted in Figure 5f, showing a fast reduction of i qmax after the fault occurrence. The automatic derating (i.e., reduction of i qmax) mirrors the increase of the x-y currents immediately after the OPF, as can be expected from the definition of the saturation threshold in Equation (6). In any case, regardless of the post-fault derating value, it can be observed in Figures 4 and 5 that the current and speed control after the OPF is done with a similar performance as in the reconfigured approach [9]. To sum up, the proposed AFTC strategy allows a satisfactory post-fault speed regulation of the six-phase drive both when the derating saturates the q-current (Test 2) and when the value of i qmax is not reached (Test 1). This enhanced reliability is provided with no additional hardware and with no control action at all. Conclusions Redundancy in multiphase systems can enhance the reliability of the electric drive without additional hardware. This attractive capability has been so far hindered by the mandatory need to add further complexity to the control stage. The proposed automatic fault-tolerant control (AFTC) shows however that it is possible to achieve a satisfactory post-fault performance with no action after the OPF occurrence. This software-free approach is obtained with the inclusion of a self-derating procedure into a natural fault-tolerant control strategy.
Experimental results confirm the self-derating capability of the AFTC, providing a similar post-fault performance as in reconfigured control approaches. The universal nature of AFTC allows the use of a single control scheme for pre and post-fault situations, hence avoiding the fault detection and control reconfiguration. The simplicity and universality of AFTC are key features that foretell a good prospect for industry application where reliability is a must.
4,649.4
2020-06-04T00:00:00.000
[ "Engineering" ]
TDMA Datalink Cooperative Navigation Algorithm Based on INS/JTIDS/BA : Position information is very important tactical information in large-scale joint military operations. Positioning with datalink time of arrival (TOA) measurements is a primary choice when a global navigation satellite system (GNSS) is not available. However, datalink members are randomly distributed, estimates based only on measurements between navigation sources and positioning users may lead to unsatisfactory accuracy, and the positioning geometry in altitude is poor. A time division multiple access (TDMA) datalink cooperative navigation algorithm based on INS/JTIDS/BA is presented in this paper. The proposed algorithm is used to revise the errors of the inertial navigation system (INS), the clock bias is calibrated via round-trip timing (RTT), and the altitude is determined with a height filter. In the TDMA datalink cooperative navigation algorithm, the errors are estimated using general navigation measurements, cooperative navigation measurements, and predicted states. The weighted horizontal geometric dilution of precision (WHDOP) of the proposed algorithm and the effect of the cooperative measurements on positioning accuracy are analyzed theoretically. We simulate a joint tactical information distribution system (JTIDS) network with multiple members to evaluate the performance of the proposed algorithm. The simulation results show that, compared to an extended Kalman filter (EKF) that processes TOA measurements sequentially and a TDMA datalink navigation algorithm without cooperative measurements, the TDMA datalink cooperative navigation algorithm performs better. Introduction In large-scale joint military operations, datalink members can be commanded and controlled as a unified whole with position information, and position information is crucial to combat operations. At present, positions of network members can be obtained through absolute navigation methods like GNSS or long-range navigation (LORAN) systems [1,2]. However, GNSS is vulnerable to jamming, which will affect the accuracy of localization [3]. LORAN is also easily affected by noise and cross-rate interference [4]. A joint tactical information distribution system (JTIDS) can provide advantages like high transmission power and high anti-jamming capability [5]. Most of all, JTIDS can share location information between network members, and JTIDS users can extract TOA information; therefore, when GNSS or LORAN is jammed, the JTIDS network is a primary choice [6]. However, JTIDS has some disadvantages when positioning. First, JTIDS members are randomly distributed in the horizontal plane, and an uneven distribution of members may result in unevenness in the distribution of network members' horizontal dilution of precision (HDOP) [7]. Network members with poor HDOP have low precision in latitude and longitude. Second, the coverage of a large effective zone of the JTIDS network is about 500 km in radius, but the height of network members is much smaller than the effective radius, and the distribution range of JTIDS members in the horizontal plane is much larger than the vertical difference, which results in a poor vertical dilution of precision (VDOP) and low estimate accuracy of members' vertical ranges.
In the general navigation algorithm of TDMA datalinks, joint units (JUs) process information of navigation sources and TOA measurements with a Kalman filter (KF) based on sequential processing, which can process TOA measurements sequentially and in a timely manner in the TDMA system, but the positioning performance is unsatisfactory when the positioning geometry is poor [8][9][10]. To enhance the stability and accuracy of positioning, an integrated navigation method based on INS and datalink information was presented in [11,12], but these works did not propose a good method to improve vertical accuracy. Therefore, a more practical positioning method for TDMA datalinks, such as JTIDS, is urgently needed. To improve the altitude observation accuracy effectively, many scholars choose to add measurements of sensors, such as a barometric altimeter (BA), in the vertical direction. Authors in [13] proposed loosely-coupled and tightly-coupled schemes with barometer information to improve the location accuracy of a low-cost INS/GNSS system under a harsh GNSS-degraded environment. To improve the height accuracy of flight, authors in [14] investigated the combination of data from GNSS, radar, and barometer sensors. With pseudo-range, Doppler information, and MEMS barometric information, authors in [15] proposed a method of positioning with two satellites. Authors in [16] presented a state estimation technique that fuses measurements of long-range stereo visual odometry, a global positioning system (GPS), and barometric and inertial measurement units, and they improved positioning performance for aggressive, intermittent-GPS, high-altitude micro-aerial vehicle (MAV) flight environments. To improve the positioning accuracy of network positioning users, the method of cooperative navigation has been proposed by many scholars. Cooperative navigation has received extensive interest in research fields such as wireless sensor networks, mobile networks, and unmanned aerial vehicles (UAVs) [17][18][19]. Existing approaches for cooperative navigation include factor graphs and sum-product algorithms [20], semidefinite programming [21], particle filters [22], Kalman filters [23], and so on. In the cooperative navigation method, positioning users help each other to determine their locations. Compared to a single positioning user, a group of cooperative positioning users may provide many navigational benefits, such as tolerance against individual user or sensor failures, distribution of sensors across a larger spatial area, and shared observations [24]. Cooperative navigation increases localization performance in terms of both accuracy and coverage [25]. Cooperative navigation is often used to improve the positioning accuracy of network members in defective positioning environments. In GNSS-denied environments, authors in [24] addressed a cooperative localization approach for a small group of unmanned aerial vehicles (UAVs); the proposed approach estimated each UAV's relative position inside the group using ranging measurements and its global position using magnetic anomaly measurements. A cooperative localization algorithm with TOA and received signal strength measurements was proposed in [26], and the proposed cooperative localization algorithm significantly improved the localization accuracy of mobile nodes that could not directly connect to a sufficient number of anchor nodes in a wireless sensor network.
Authors in [27] investigated the operational framework for cooperative localization of UAVs using GNSS, microelectromechanical systems, INS, and ultra-wide-band (UWB) sensors to improve accuracy in regions that lack GNSS, and they provided a comparison of distributed and centralized architectures and proved that centralized architecture generally provides higher localization accuracy compared with the distributed architecture. An incentive mechanism for cooperative localization was proposed in [28] to improve the localization accuracy of wireless network nodes in harsh environments due to poor coverage or signal blockage. A hybrid cooperative positioning method based on GNSS, network anchors, and cooperative measurements was proposed in [29], and the proposed method improved localization accuracy of network agents with cooperative measurements between them. In order to improve JTIDS network users' accuracy of the horizontal plane and vertical direction, the TDMA datalink cooperative navigation method based on INS/JTIDS/BA is proposed. In the proposed method, an estimator is decomposed into altitude and horizontal planes. In the vertical direction, a height filter based on a barometric altimeter (BA) is used to correct altitude errors of the inertial navigation system (INS) independently. In the horizontal plane, the TDMA datalink cooperative navigation algorithm uses general navigation measurements, cooperative navigation measurements, and predicted states to estimate latitude and longitude errors of INS. The rest of this paper is organized as follows: The second section overviews the basic principles of JTIDS navigation and introduces TDMA datalink cooperative navigation method of JTIDS. The third section introduces the integration architecture of the TDMA datalink cooperative navigation algorithm based on INS/JTIDS/BA and presents the RTT filter and altitude filter, and then explains the division method of estimate time slice and navigation slots, measurements, and WLS estimator of the proposed algorithm in detail. In the fourth section, the WHDOP of the TDMA datalink cooperative navigation algorithm and the effect of cooperative measurements on estimate errors are analyzed. In the fifth section, a simulation study is conducted to analyze and evaluate the proposed algorithm. The sixth section concludes the paper. Principles of JTIDS Navigation and TDMA Datalink Cooperative Navigation Method of JTIDS JTIDS is a synchronous, time division multiple access, spread spectrum communication system. As illustrated in Figure 1, members of JTIDS operate with different roles. The navigation controller (NC) establishes the relative coordinates. One user of the network runs time reference (TR), and other users will synchronize with TR directly or indirectly. Primary users (PUs) are permitted to synchronize with RTT protocol, and secondary users (SUs) are permitted to perform clock synchronization and navigation passively. Terminals with high absolute position accuracy are designated as position references (PRs) [6]. Principles of JTIDS Navigation Some slots of JTIDS are selected as navigation slots to transmit precise participant location and identification (PPLI) messages. PRs and terminals with high accuracy positions can transmit PPLI messages in their navigation slots as navigation sources. The structure of JTIDS messages is illustrated in Figure 2. The information of PPLI messages contains source terminal positions and speed, as well as position quality and time quality. 
Positioning users can extract general TOA navigation information from the synchronization header of the PPLI message, obtain the position information of the source terminals, and then estimate their positions with these pieces of information [30]. In the general JTIDS navigation method, each user's estimator processes this information and estimates its own position in a distributed manner, and users only use PPLI messages transmitted from the navigation sources with higher accuracy. TDMA Datalink Cooperative Navigation Method of JTIDS In JTIDS network positioning, users and navigation sources are randomly distributed in the horizontal plane, and the geometrical distribution of navigation sources may not meet the requirements of each positioning user; therefore, more measurements are needed. However, the number of navigation sources is fixed within a short period of time, so we take advantage of measurements between positioning users to improve their positioning accuracy; moreover, a height filter with BA measurements is designed to estimate the heights of users. In the TDMA datalink cooperative navigation method, several positioning users estimate their states together as a whole, and one centralized estimator is used to process the positioning users' information and estimate their INS latitude and longitude errors jointly. We call these positioning users cooperative members. As shown in Figure 3, in order to distinguish the measurements used in the horizontal-plane estimator, we define measurements transmitted between datalink navigation sources and cooperative members as general navigation measurements, and measurements transmitted between cooperative members as cooperative navigation measurements. In the TDMA datalink cooperative navigation method, the latitude and longitude errors of all cooperative members are the error states to be estimated, so if K cooperative members participate in the cooperative navigation calculation, the state vector of the estimator has 2K dimensions. The TDMA datalink cooperative navigation algorithm needs to process all cooperative members' general navigation measurements, cooperative navigation measurements, and predicted states. Cooperative measurements provide more constraint relationships between cooperative members, so the cooperative members' states converge together, leading to higher positioning accuracy. The Implementation of TDMA Datalink Cooperative Navigation Algorithm Based on INS/JTIDS/BA Because JTIDS network members' VDOP is poor, and considering the independence of the measurements, we decompose the estimator into altitude and horizontal-plane components. The TDMA datalink cooperative navigation algorithm is an integrated navigation system based on INS/JTIDS/BA. The architecture of the algorithm is shown in Figure 4, and the output of the RTT filter is used to correct the TOA measurements. In the vertical direction, the height filter processes BA measurements with an EKF independently [31], and the TDMA datalink cooperative navigation algorithm is used to estimate longitude and latitude errors. In the height filter, Xh denotes the state vector and Wh denotes the process noise vector; the covariance matrix Qh is calculated from the variance of the altitude velocity noise, denoted σ², and the discrete interval T. The state transition matrix is shown in Equation (3), where Re denotes the semi-major axis of the earth reference ellipsoid. A minimal numerical sketch of such a vertical channel is given below.
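The original Equations (1)–(3) are not reproduced in this extract, so the following Python sketch only illustrates the general structure of a BA-corrected vertical channel rather than the paper's exact filter: a two-state error model (altitude error and vertical velocity error, an assumed state definition) is propagated over the discrete interval T and corrected by the difference between the INS and BA altitudes. The matrices and all numerical values are placeholders, except the 500 ms interval and the 50 m BA error, which appear later in the simulation section.

```python
import numpy as np

# Illustrative two-state vertical channel: x = [altitude error (m), vertical velocity error (m/s)].
T = 0.5                      # discrete interval (s), matching the 500 ms estimate time slice
Phi = np.array([[1.0, T],    # assumed state transition matrix
                [0.0, 1.0]])
sigma_v = 0.1                # assumed 1-sigma vertical-velocity process noise (m/s)
Q = sigma_v**2 * np.array([[T**3 / 3, T**2 / 2],
                           [T**2 / 2, T]])
H = np.array([[1.0, 0.0]])   # the BA observes the altitude error (INS altitude minus BA altitude)
R = np.array([[50.0**2]])    # 50 m 1-sigma BA error, as preset in the simulation section

def height_filter_step(x, P, z):
    """One predict/update cycle of the vertical-channel Kalman filter."""
    x = Phi @ x                              # predict state
    P = Phi @ P @ Phi.T + Q                  # predict covariance
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (z - H @ x)                  # update with BA measurement
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros(2), np.diag([100.0**2, 1.0**2])
x, P = height_filter_step(x, P, z=np.array([35.0]))  # hypothetical 35 m INS-BA discrepancy
print(x)  # estimated altitude and vertical-velocity errors after one update
```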
RTT Filter The clock frequency difference is modeled as a first-order Markov process, and the clock offset is the integral of the frequency difference; the state of the RTT filter therefore comprises the clock offset and the frequency difference, together with the corresponding process noise vector. The state transition matrix is expressed in terms of βf, the correlation coefficient of the Markov process, and the covariance matrix Qk−1 is calculated from σ²fN, the variance of the clock frequency drift. The measurement equation of the RTT filter relates the state to Δb, the user's clock error obtained with RTT, where v is the measurement noise of RTT [33]. Cooperative Navigation Algorithm in the Horizontal Plane In the horizontal plane, the state we estimate consists of all cooperative members' INS longitude and latitude errors. We compute the difference between the TOA measurement and the calculated pseudo-range, and the measurement equation describes the relationship between this difference and the longitude and latitude errors. The error states are then estimated with general navigation measurements, cooperative navigation measurements, and predicted error states. Estimate Time Slice and Arrangement of Time Slots The architecture of the proposed cooperative navigation algorithm is centralized. One of the cooperative members runs the cooperative navigation algorithm as the computing center unit. The algorithm is executed once per short time slice Tp, and we assume that INS errors do not change much during each interval Tp, so the INS errors of the cooperative members can be treated as constant error states within this period. As shown in Figure 5, each time slice contains many navigation slots. Part of the navigation slots are occupied by cooperative members to transmit cooperative information messages (cooperative navigation slots), and the other navigation slots are occupied by navigation sources to transmit PPLI messages (general navigation slots). General navigation slots and cooperative navigation slots alternate within each estimate period. As shown in Figure 6, we give an example of three cooperative members within one estimate time slice. Navigation slot n, navigation slot n − 2, and navigation slot n − 4 are cooperative navigation slots, in which cooperative members c3, c2, and c1 transmit cooperative messages. Navigation slot n − 1, navigation slot n − 3, and navigation slot n − 5 are general navigation slots, in which navigation sources s1 and s2 transmit PPLI messages. The information used by the computing center unit is divided into two parts: one part is the measurements the computing center itself extracts from PPLI messages and cooperative navigation messages, and the other part is the information that the other cooperative members share with the computing center unit via cooperative navigation messages. The shared information of a cooperative member includes the measured values of its general TOA measurements, the positions of the navigation sources, the measured values of its cooperative TOA measurements, the INS output of the cooperative member, and its predicted error states. The computing center estimates the cooperative members' error states with weighted least squares (WLS) once enough information has been collected. The estimation results are broadcast, and these cooperative members then revise the latitude and longitude errors of their INS. The centralized algorithm needs a guarantee of network traffic; therefore, the proposed algorithm is more suitable for a small number of cooperative members. Measurement Model If slot n is a general navigation slot, navigation source s transmits a PPLI message.
The TOA measurement between navigation source s and cooperative member c in slot n is given in [34]: it consists of the actual distance r^n_(s-c) between source s and cooperative member c, the clock offsets Δt^n_c and Δt^n_s of member c and source s, respectively, with the clock-offset Δt̂^n_s of the navigation source estimated by the RTT filter (clock offsets are converted from seconds into meters), and the TOA measurement noise w^n_(c-s), modeled as a zero-mean white Gaussian process. The calculated distance between cooperative member c and source s in slot n uses Δt̂^n_c, the cooperative member's clock offset, which is approximately estimated by the RTT filter. The actual distance between cooperative member c and navigation source s in slot n is linearized by applying a Taylor series around the INS-indicated position. The INS errors are converted from ECEF rectangular coordinates to geodetic coordinates with Equations (A2)-(A4); the derivation is shown in Appendix A. Equations (A2)-(A4) are substituted into Equation (13), and the equation is simplified. Analysis of Measurement Errors The measurement noise variance of the general navigation measurement follows from the TOA noise and the clock-offset estimation errors. For a cooperative navigation slot, the calculated pseudo-range between cooperative members c1 and c2 uses Δt̂^n_c1 and Δt̂^n_c2, the cooperative members' clock offsets approximately estimated by the RTT filter. The actual distance between cooperative members c1 and c2 in slot n is linearized by applying a Taylor series around the INS-indicated positions, where v^n_(c1-c2) denotes the measurement error. Analysis of Measurement Errors The noise variance of the cooperative navigation measurement is obtained by assuming that the error sources are independent of each other, where σ²_(c1-c2) denotes the variance of the measurement noise, which can be approximately estimated from the covariance matrices of the height filter and the RTT filter. State-Transition Equation The error state vector has 7 dimensions. Each cooperative member predicts its own INS error state locally, and the prediction interval is the estimate time slice: X = [φe, φn, φu, ΔL, Δλ, ΔVe, ΔVn]^T (30). The state vector contains the attitude angle errors, the longitude and latitude errors, and the east and north velocity errors, but only the longitude and latitude errors are used in the WLS estimate. We assume that the current estimate time slice is k; in the state equation, Φ(k, k−1) denotes the state-transition matrix, W(k−1) denotes the process noise vector, and T denotes the discrete interval, whose length equals the short time slice Tp. The state equation is propagated according to the second-order damped error propagation equation of INS, from which the matrix A, the noise variance matrix of the state equation, and the variance of the error state are obtained. Least-Squares Estimation We assume that K cooperative members participate in the cooperative navigation calculation. We estimate all cooperative members' latitude and longitude errors in one WLS estimator, so the state vector has 2K dimensions, and the corresponding coefficients of the K members are stacked accordingly. The TDMA datalink cooperative navigation algorithm can then be solved with WLS. We assume that H is the measurement matrix after linearization; a small sketch of how the rows of H and the measurement residuals are assembled is given below.
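The paper linearizes the ranges with respect to latitude and longitude errors; the sketch below simplifies this to a local east-north plane in meters (an assumption made purely for readability) and shows how each general measurement contributes one row coupling a single member's two error columns, while each cooperative measurement couples two members with opposite signs. All function and variable names are hypothetical.

```python
import numpy as np

def unit_los(p_from, p_to):
    """2D horizontal unit line-of-sight vector from p_from to p_to (east, north in metres)."""
    d = p_to - p_from
    return d / np.linalg.norm(d)

def build_h_and_residuals(ins_pos, sources, gen_meas, coop_meas):
    """
    ins_pos  : dict member -> INS-indicated horizontal position, shape (2,)
    sources  : dict source -> known source position, shape (2,)
    gen_meas : list of (member, source, clock_corrected_range)
    coop_meas: list of (member_a, member_b, clock_corrected_range)
    Returns the stacked measurement matrix H (rows x 2K) and the residual vector dz.
    """
    members = sorted(ins_pos)                 # two error columns per cooperative member
    col = {m: 2 * i for i, m in enumerate(members)}
    H, dz = [], []
    for m, s, rng in gen_meas:                # general navigation measurements
        u = unit_los(sources[s], ins_pos[m])  # range grows along the line of sight
        row = np.zeros(2 * len(members))
        row[col[m]:col[m] + 2] = u
        H.append(row)
        dz.append(rng - np.linalg.norm(ins_pos[m] - sources[s]))
    for a, b, rng in coop_meas:               # cooperative navigation measurements
        u = unit_los(ins_pos[b], ins_pos[a])
        row = np.zeros(2 * len(members))
        row[col[a]:col[a] + 2] = u
        row[col[b]:col[b] + 2] = -u           # the two members' errors enter with opposite sign
        H.append(row)
        dz.append(rng - np.linalg.norm(ins_pos[a] - ins_pos[b]))
    return np.array(H), np.array(dz)
```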
The corresponding weight matrix is defined from the measurement noise variances. WHDOP of the Cooperative Navigation Algorithm We assume that εK denotes the error vector of the WLS estimate. With Equation (39), the relationship between εK and the measurement error vector ερ can be written down, and the covariance matrix of εK follows [36]; σ is introduced as a scaling constant to define the weights and is a user-equivalent range error denoting the statistical measurement error [7]. We then define the matrix G from the weighted geometry matrix, from which the WHDOP is obtained. The Effect of Cooperative Navigation Measurements on Cooperative Members' Positioning Accuracy We assume that hi is a cooperative navigation measurement vector between members c1 and c2, coupling their state vectors. Applying the Sherman-Morrison formula to the weighted normal matrix, we conclude that cooperative navigation measurements can improve the WHDOP and positioning accuracy of cooperative members c1 and c2. One cooperative navigation measurement's contribution to the reduction of the members' horizontal-plane variance is q_c1 and q_c2, which can be obtained from Equations (61) and (62); a small numerical sketch of this rank-one update is given after the simulation conditions below. Simulation Experiments and Analysis We simulated a JTIDS network with TDMA multi-access mode, and an observed member operated the TDMA datalink cooperative navigation algorithm to evaluate the performance of the proposed algorithm. We processed the BA, INS, and TOA data of the JTIDS JUs in a computer. We generated location, speed, and attitude data of the JTIDS members from preset real trajectories, and we simulated INS errors with the error propagation equation of INS. We added INS errors to the real navigation information to simulate INS information, and we added Gaussian white noise and clock error to the real distance between two network members to simulate TOA measurements. Simulation Conditions Within an area of about 100 km², we simulated a JTIDS datalink network of 8 members, all simulated as aircraft. Member 1 was preset as the NC of this JTIDS network and at the same time operated as the time reference; the other members synchronized their clocks with member 1. Members 1, 3, 5, and 7 positioned with a federated Kalman filter based on INS, GPS, TOA, and BA filters, whose structure is shown in Figure 7. Members 1, 3, 5, and 7 could reach a high localization accuracy in this JTIDS network, so they broadcast PPLI messages as navigation sources, and members 2, 4, 6, and 8 were cooperative members positioned with the TDMA datalink cooperative navigation algorithm. The basic slot was 7.8125 ms, and the slot interval between two adjoining navigation slots was 8 basic slots. Cooperative members carried out the cooperative navigation algorithm every 500 ms, which means that the algorithm was carried out every 8 navigation slots, and the estimate time slice Tp was 500 ms. Members 2, 4, 6, and 8 transmitted cooperative navigation information in their navigation slots, and navigation sources broadcast PPLI messages in theirs. First, we set members 2, 4, 6, and 8 as the observed members and compared, under the same conditions, the performance of the TDMA datalink cooperative navigation algorithm based on INS/JTIDS/BA, the EKF based on sequential processing, and the TDMA datalink WLS navigation algorithm without cooperative navigation measurements. Second, we set member 6 as the observed member, compared the performance of the proposed algorithm with different numbers of cooperative members, and then analyzed the effects of TOA measurement random errors and clock calibration accuracy.
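The following toy example illustrates the Sherman-Morrison argument above rather than the paper's specific Equations (61) and (62): a single cooperative measurement is added as a rank-one update to the weighted normal matrix, and both members' WHDOP decreases. The geometry and weights are made up for the example.

```python
import numpy as np

# Toy geometry for two cooperative members (east/north error columns, metres).
H = np.array([
    [ 0.8,  0.6,  0.0,  0.0],   # source -> member 1
    [-0.6,  0.8,  0.0,  0.0],   # source -> member 1
    [ 0.0,  0.0,  0.9, -0.44],  # source -> member 2
    [ 0.0,  0.0,  0.5,  0.87],  # source -> member 2
])
W = np.eye(len(H))              # equal weights for simplicity (sigma = 1 m)

A = H.T @ W @ H                 # weighted normal matrix
G = np.linalg.inv(A)

def whdop(G, member):
    """WHDOP of one member = sqrt of the trace of its 2x2 block of G."""
    i = 2 * member
    return np.sqrt(G[i, i] + G[i + 1, i + 1])

print("before:", whdop(G, 0), whdop(G, 1))

# Add one cooperative measurement between the two members (rank-one update).
h = np.array([0.7, 0.71, -0.7, -0.71])   # +LOS for member 1, -LOS for member 2
w = 1.0
# Sherman-Morrison: (A + w h h^T)^-1 = G - (w G h h^T G) / (1 + w h^T G h)
Gh = G @ h
G_new = G - w * np.outer(Gh, Gh) / (1.0 + w * h @ Gh)

print("after: ", whdop(G_new, 0), whdop(G_new, 1))   # both members' WHDOP decrease
```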
Performance Comparison of Algorithms The TDMA datalink cooperative navigation algorithm based on INS/JTIDS/BA, the EKF based on sequential processing, and the TDMA datalink WLS algorithm without cooperative navigation measurements are compared in this part. The cooperative members carry out the compared algorithms for horizontal positioning, and the same height filter is used at the same time. We preset the 1-sigma error of the BA to be 50 m and the 1-sigma random noise of the pseudo-range measurement to be 3 m. In contrast to the TDMA datalink cooperative navigation algorithm, cooperative navigation measurements are not used in the TDMA datalink WLS algorithm. The state equation of the EKF based on sequential processing is established according to the second-order damped error propagation equation of INS, and its measurement equation relates the TOA measurements to the INS error states. The cooperative members' longitude and latitude RMS errors for the compared algorithms are presented in Figures 9 and 10, and Figure 11 shows the RMS altitude error comparison between the TDMA datalink cooperative navigation algorithm, the sequentially processing EKF, and the TDMA datalink WLS algorithm. The cooperative members' longitude and latitude errors at every estimate moment are presented in Figures 12 and 13, respectively. From the analysis of the positioning results in horizontal plane positioning, the positioning precision of the proposed algorithm is better. A height filter based on the BA revises the altitude errors of INS in an independent dimension, so the accuracy of the horizontal plane has little effect on altitude accuracy. Effect of the Number of Members That Participate in Cooperative Navigation on Localization Accuracy We set member 6 as the observed member, let member 6 operate the TDMA datalink cooperative navigation algorithm with measurements from different numbers of cooperative members, and compared the performance in the different situations. As shown in Figure 14, in horizontal plane positioning, compared with the case calculated without cooperative navigation measurements, more cooperative members mean more cooperative navigation measurements and a better WHDOP in the horizontal plane, which leads to better accuracy. Furthermore, altitude accuracy is not affected by the accuracy of the horizontal plane, so it does not change much. Effect of TOA Measurement Random Error on Localization Accuracy The 1-sigma random error of the TOA measurements was set to 3 m the first time and then increased by 3 m at a time. The resulting RMS errors are shown in Figure 15. As shown in Figure 15, random errors of the TOA measurements affect both general navigation measurements and cooperative navigation measurements, so they mainly affect the accuracy of the horizontal plane. The vertical direction is estimated separately, so random errors of the TOA measurements have little influence on the accuracy of altitude. Effect of Clock Calibration Accuracy on Localization Accuracy We set member 6 as the observed member and changed the clock frequency drift of the cooperative members to analyze the effect of clock calibration accuracy on localization accuracy. As shown in Figure 16, clock errors become larger with the change in clock frequency drift. Clock errors mainly affect TOA measurements, so the positioning accuracy of the horizontal plane is more affected. The vertical direction is estimated with BA measurements, so the accuracy of altitude is not affected by clock calibration accuracy.
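The comparisons above are reported as RMS error curves over the simulation. The paper does not spell out how these curves are aggregated; one conventional choice, sketched below with placeholder numbers, is the per-epoch RMS taken across Monte Carlo runs.

```python
import numpy as np

def rms_error_per_epoch(errors):
    """
    errors: array of shape (n_runs, n_epochs) holding signed position errors (m).
    Returns the RMS error at each estimate epoch, aggregated over simulation runs.
    """
    errors = np.asarray(errors, dtype=float)
    return np.sqrt(np.mean(errors**2, axis=0))

# Hypothetical example: 100 Monte Carlo runs, 60 estimate epochs (30 s at Tp = 0.5 s)
rng = np.random.default_rng(0)
lon_err = rng.normal(scale=5.0, size=(100, 60))   # placeholder longitude errors in metres
print(rms_error_per_epoch(lon_err)[:5])
```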
Conclusions The TDMA datalink cooperative navigation algorithm based on INS/JTIDS/BA is proposed in this paper. Members of JTIDS calibrate their clocks via RTT, and altitude is estimated by a height filter independently. In the horizontal plane, a cooperative navigation algorithm is proposed to estimate the cooperative members' INS longitude and latitude errors. We analyze the effect of cooperative navigation measurements on localization errors in theory. We compare the proposed algorithm with the sequentially processing EKF algorithm and the TDMA datalink WLS algorithm without cooperative navigation measurements, and from the analysis of the positioning results we conclude that the proposed algorithm performs better under the same conditions. We also show the accuracy of the positioning results with different numbers of cooperative members participating in cooperative navigation and analyze the effects of the random error of TOA measurements and of clock calibration accuracy. This provides a theoretical basis for datalink positioning in TDMA systems. Conflicts of Interest: The authors declare no conflict of interest.
5,948.6
2021-03-25T00:00:00.000
[ "Computer Science" ]
Chemical speciation and fate of tripolyphosphate after application to a calcareous soil Adsorption and precipitation reactions often dictate the availability of phosphorus in soil environments. Tripolyphosphate (TPP) is considered a form of slow release P fertilizer in P limited soils, however, investigations of the chemical fate of TPP in soils are limited. It has been proposed that TPP rapidly hydrolyzes in the soil solution before adsorbing or precipitating with soil surfaces, but in model systems, TPP also adsorbs rapidly onto mineral surfaces. To study the adsorption behavior of TPP in calcareous soils, a short-term (48 h) TPP spike was performed under laboratory conditions. To determine the fate of TPP under field conditions, two different liquid TPP amendments were applied to a P limited subsurface field site via an in-ground injection system. Phosphorus speciation was assessed using X-ray absorption spectroscopy, total and labile extractable P, and X-ray diffraction. Adsorption of TPP to soil mineral surfaces was rapid (< 48 h) and persisted without fully hydrolyzing to ortho-P. Linear combination fitting of XAS data indicated that the distribution of adsorbed P was highest (~ 30–40%) throughout the site after the first TPP amendment application (high water volume and low TPP concentrations). In contrast, lower water volumes with more concentrated TPP resulted in lower relative fractions of adsorbed P (15–25%), but a significant increase in total P concentrations (~ 3000 mg P kg soil) and adsorbed P (60%) directly adjacent to the injection system. This demonstrates that TPP application increases the adsorbed P fraction of calcareous soils through rapid adsorption reactions with soil mineral surfaces. Electronic supplementary material The online version of this article (10.1186/s12932-017-0046-z) contains supplementary material, which is available to authorized users. Introduction Tripolyphosphates (TPP) have been commonly used as a phosphorus (P) source in slow release liquid fertilizers [1][2][3]. To be bioavailable to plant or microbial communities, TPP must first be hydrolyzed to phosphate monomers (ortho-P). Tripolyphosphate is believed to persist in the soil solution until undergoing hydrolysis, when it becomes bioavailable and reactive in the soil environment [4][5][6]. However, there is significant evidence that suggests TPP and other linear polyphosphates adsorb directly to metal oxide surfaces without having to first be hydrolyzed [7][8][9][10][11]. If TPP adsorbs directly to soil mineral surfaces, this could not only reduce TPP mobility in the soil solution but also reduce calcium phosphate (Ca-P) mineral precipitation. Calcium phosphate mineral formation immobilizes P from the soil solution, reducing the fraction of readily bioavailable P. Tripolyphosphate or linear polyphosphate applications to calcareous soils may be a novel way to improve P nutrient availability. Since linear polyphosphates must undergo hydrolysis (either biotic or abiotic) to ortho-P before precipitating as a mineral phase with either Ca or Fe (pH dependent), they can act as a slow release fertilizer [7]. In the soil environment TPP hydrolysis can often be biotically catalyzed by the phosphatase enzyme excreted from plants as root exudates or by microbes [12][13][14]. In a healthy soil environment, TPP has been thought to rapidly hydrolyze due to an abundance of exogenous phosphatase in the soil solution exuded to mobilize organic P [15]. 
However, this relies upon an active soil biological pool, as phosphatase only persists for a few days in a non-sterile environment [12,14]. Research has found that polyphosphate adsorption to mineral surfaces likely reduces enzyme-catalyzed hydrolysis [16,17]. In the absence of rapid hydrolysis by phosphatase, abiotic factors will play a role in hydrolyzing TPP, albeit at significantly slower rates. Under cool, alkaline environmental conditions, abiotic hydrolysis rates of TPP are slow, as both temperature and pH strongly affect this process [3,7,18]. For example, at temperatures below 25 °C, under sterile solution conditions, hydrolysis of TPP completely stalls, whereas at temperatures above ~ 50 °C the hydrolysis of TPP is rapid [3]. Both McBeath et al. [3] and Zinder et al. [18] found that solution pH has an inverse relationship with TPP hydrolysis. The half-life of TPP at pH 2.3 was 34 days, while at pH 5.4 it was found to be 174 days. Both papers hypothesized that soluble cations in solution can catalyze TPP hydrolysis. Tripolyphosphates are also capable of adsorbing directly to mineral oxide surfaces without first hydrolyzing to ortho-P [8,10]. Researchers have also shown [7] that the adsorption of TPP to mineral surfaces can catalyze the hydrolysis of TPP to pyrophosphate (pyro-P) and ortho-P. This provides evidence that TPP adsorption onto mineral surfaces is likely to play an important role in hydrolysis and thus the chemical fate of TPP in soils. Phosphate (PO4^3-) rapidly forms both adsorption complexes and precipitate phases, which can limit P availability. The speciation and chemical fate of P are directly dependent on the soil solution and geochemical conditions. At acidic pH, ortho-P adsorbs and forms surface precipitates on Al-oxide (i.e., berlinite and variscite) and Fe(III)-oxide (i.e., strengite) mineral surfaces [19,20]. The formation of these precipitates removes P from the soil solution and decreases the overall bioavailability of P [20]. At alkaline pH and in calcareous systems, ortho-P forms a variety of calcium phosphate (Ca-P) phases, with the solubility-limiting phases depending upon several factors including pH, the Ca:P ratio, and the presence of competing ions in solution such as NH4^+ and Mg^2+ [20][21][22][23]. The presence of NH4^+ and Mg^2+ can lead to the formation of more soluble phosphate minerals such as struvite (NH4MgPO4·6H2O), amorphous calcium phosphate (ACP), and dicalcium phosphate (brushite) [24,25]. The formation of ACP, brushite, and hydroxyapatite is also largely dependent on Ca:Mg:P ratios [22,23]. Higher Ca:P ratios favour the formation of crystalline and less soluble phases like hydroxyapatite [22,23], whereas the incorporation of even small amounts of Mg into the crystal structure of Ca-P minerals can poison the growth sites and prevent the formation of, or transition to, hydroxyapatite [21]. Several spectroscopic techniques are available to study P speciation in soils and geochemical systems. The most commonly used X-ray technique for determining P speciation in soils is X-ray absorption near edge structure (XANES) spectroscopy, which is sensitive to the average local bonding environment of P atoms [19,24,26]. A XANES spectrum of any sample is a weighted average of all P atoms measured, which has the potential to overlook minor species that contribute less scattering to the spectrum [24]. One can use reference spectra and linear combination fitting (LCF) to estimate P species [19,[27][28][29][30][31]; a schematic example of such a fit is sketched below.
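The study itself performs LCF with the DEMETER package (described later in the methods); the standalone Python sketch below only illustrates the underlying idea, namely expressing an unknown normalized spectrum as a non-negative weighted sum of reference spectra on a common energy grid. The spectrum and reference names are hypothetical.

```python
import numpy as np
from scipy.optimize import nnls

def lcf_fit(unknown, references):
    """
    Schematic linear combination fit of a normalized P K-edge XANES spectrum.
    references: dict name -> reference spectrum sampled on the same energy grid as `unknown`.
    Returns fractional contributions (non-negative, renormalized to sum to 1) and an R-factor.
    """
    names = list(references)
    A = np.column_stack([references[n] for n in names])
    weights, _ = nnls(A, unknown)               # non-negative least squares
    fit = A @ weights
    r_factor = np.sum((unknown - fit) ** 2) / np.sum(unknown ** 2)
    fractions = weights / weights.sum() if weights.sum() > 0 else weights
    return dict(zip(names, fractions)), r_factor

# Hypothetical use with pre-normalized spectra on a common energy grid:
# fractions, r = lcf_fit(soil_spectrum,
#                        {"adsorbed_P": ads_ref, "hydroxyapatite": hap_ref, "brushite": bru_ref})
```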
However, LCF has the risk of over-estimating the spectral contributions from P species with atoms that strongly scatter X-rays (i.e. Ca) in Ca-P minerals whereas species that contribute minimal structure (adsorbed P) may be underrepresented [24,28]. This issue is compounded at the P K-edge due to overlapping spectral features of many P species. For example, the challenges of determining the different types of TPP, pyro-P, and ortho-P adsorption complexes with XANES spectroscopy is highlighted by Hamilton and coworkers [7] where adsorbed TPP on goethite is spectrally identical to adsorbed pyro-P and adsorbed ortho-P. Unfortunately, the complex nature of soils and the combination of P species (adsorbed/mineral phases) present prevents the direct measurement of soil adsorbed TPP by techniques more suitable to identification of polyphosphates, namely Fourier Transform Infrared or Nuclear Magnetic Resonance spectroscopic methods [19]. Nonetheless, our recent P K-edge XANES study of a model system allows us to infer the speciation of adsorbed TPP based upon the known adsorption and precipitation mechanisms on a goethite surface in the presence of Ca 2+ [7]. The objectives of this study were (a) to determine the short-term chemical fate of TPP in soils and (b) to characterize the long-term fate and mobility of two TPP nutrient applications applied to a P-limited calcareous soil. To study the adsorption potential of TPP to soil minerals and the effect this has on mobility, TPP was applied to a P limited subsurface soil under short-term lab conditions and to a P limited field site to track the chemical fate of TPP under longer-term environmental conditions. The effectiveness of TPP as a P amendment will be gauged upon whether TPP adsorbs directly to soil mineral surfaces or whether ortho-P precipitation reactions dominate. The goals of this study are (1) to determine whether TPP will adsorb directly to soil mineral surfaces under short-term reaction conditions and (2) to determine the chemical fate and mobility of two TPP amendment applications to a calcareous P limited subsurface soil system. Site history and soil sampling The study site is a Federated Cooperatives Ltd (FCL) owned and operated fueling station that also historically served as a fertilizer storage facility. The onsite fueling station currently consists of a 4 pump/8 line gas bar with underground storage tanks (see Fig. 1 for the site and sampling schematic). Petroleum hydrocarbon contamination (PHC) originated from leaking bulk storage tanks, which have been replaced as part of an upgrade to the current residential fueling station. Groundwater is routinely monitored throughout the site to track the extent of hydrocarbon movement and nutrient concentrations. This site was chosen for TPP application because it is part of an active in situ bioremediation study and has been identified as being highly P limited, determined through P groundwater concentrations of < 0.3 mg P/L. This groundwater monitoring has identified that the PHC is not moving offsite. Tripolyphosphate nutrient amendments were applied through two underground perforated injection lines that were installed as part of a gravity fed amendment delivery system. The injection lines are at a depth of 1.22 m and they rely upon preferential flow paths to transport the nutrient solution to the hydrocarbon contaminated soil zone between 1.82 and 3.66 m. 
The first amendment application was performed prior to our involvement as part of an in situ bioremediation trial to improve nutrient conditions throughout the site; this first nutrient application consisted of urea (9.5 kg) and sodium tripolyphosphate (1.4 kg) diluted in 13,500 L of water. It was noted during this application that the study area of the site initially became saturated because the applied water volume exceeded the infiltration capacity of the site, resulting in some mounding of the site's groundwater. One year after the TPP application, soil cores (Fig. 1) were collected directly adjacent to the injection line as well as up- and down-gradient of the main injection line. After the first amendment application, no groundwater P was detected. A second amendment application occurred 3 years after the first, consisting of a larger TPP (102 kg) and urea N (9.5 kg) amendment spike diluted in 4500 L of water. A second set of sample cores was collected 1 year later along the same gradient illustrated in Fig. 1. Soils were sampled via coring using a push drill rig, collecting 2″ diameter soil cores to a depth of 4.26 m. The cores were immediately sealed, transported on ice, and frozen before subsampling to limit potential oxidation effects on soil mineralogy. Soil cores were subsampled by collecting ~ 30 g from each of the studied depths. These subsamples were freeze-dried, ground, and homogenized for elemental and spectroscopic analysis. Analysis of the soil cores focused on the 1.82 and 3.66 m depths. The rationale for choosing these depths was that the 1.82 m depth is close to but below the amendment injection system, whereas the 3.66 m depth is a sand lens that represents the leading edge of the hydrocarbon plume. Short-term adsorption of TPP Two soils (1.82 and 3.66 m) from the research site were used to determine the short-term sorption potential of TPP with soil minerals. The soils were suspended in 0.01 M NaCl background electrolyte solution and adjusted to pH 6.5 using 0.01 M H2SO4. All soil treatments were spiked (using either TPP or ortho-P) to a targeted loading of 10,000 mg P/kg of soil. The ortho-P source was K2HPO4 and TPP was applied as Na-TPP, both in double-deionized water. After P addition, the pH was adjusted as needed over 48 h to maintain pH 6.5. The soils were then filtered through 0.45 µm filter paper and triple washed with background electrolyte to remove entrained P. Reacted soil samples were freeze-dried and ground for XAS analysis to determine complexation mechanisms. XAS and XRD data collection and analysis X-ray absorption spectroscopic (XAS) and X-ray diffraction (XRD) measurements were conducted at the Canadian Light Source (CLS) synchrotron in Saskatoon, SK, Canada. The Canadian Light Source operates a storage ring at 2.9 GeV and between 150 and 250 mA. All P K-edge XANES measurements were collected at the SXRMB beamline (06B1-1) utilizing an InSb (111) monochromator in fluorescence mode under vacuum conditions with a 4-element Vortex detector. Concentrated reference standards were diluted with boron nitride to ~ 1 wt.% total P to minimize self-absorption effects (the simple mass balance behind this dilution is sketched after this paragraph). Soil samples were dried, ground to a uniform particle size with mortar and pestle, and applied to the beamline sample holder as a thin layer on carbon tape. The beam spot size was 1 × 3 mm, giving a bulk representation of the P speciation of each soil sample. See the supplemental information for the preparation conditions of the adsorption standards.
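The text only states that concentrated standards were diluted with boron nitride to ~1 wt.% total P; the exact recipe is not given. The short sketch below is just the mass-balance arithmetic behind such a dilution, using hydroxyapatite (~18.5 wt.% P) as an illustrative standard.

```python
def bn_dilution_mass(standard_mass_mg, p_weight_fraction, target_fraction=0.01):
    """
    Mass of boron nitride (mg) to mix with a concentrated P reference standard so that
    the final mixture contains roughly `target_fraction` (w/w) total P.
    """
    if not 0 < target_fraction < p_weight_fraction:
        raise ValueError("target fraction must be below the standard's P content")
    total_mixture = standard_mass_mg * p_weight_fraction / target_fraction
    return total_mixture - standard_mass_mg

# Hydroxyapatite Ca5(PO4)3OH is ~18.5 wt.% P, so 10 mg of standard needs ~175 mg of BN
print(round(bn_dilution_mass(10.0, 0.185), 1))   # -> 175.0
```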
The Ca and Mg phosphate mineral reference standards were synthesized by Hilger [32]. All other compounds were purchased and were reagent grade or better. All P XANES spectra were processed and linear combination fit (LCF) using the DEMETER software package [33]. Briefly, data were processed with background removal, calibration to an internal reference standard, alignment, and then merging of scans. The phosphorus reference spectra used in the LCF model fits are shown in Additional file 1: Figure S1. It is known that there is an inherent level of uncertainty in LCF of unknown XANES spectra, typically estimated at ± 10% or less [28,30]. To reduce the uncertainty and the reliance on the statistical output of the LCF model results, all available geochemical information was incorporated in selecting the reported LCF model. These conditions included soil pH, total and labile P concentrations, soil mineralogy, as well as groundwater Ca and Mg concentrations. The statistically based nature of LCF has difficulty distinguishing between reference compounds that have similar structure, such as calcium phosphate mineral species. The LCF results for all Ca-P mineral phases were reported as a single summed value for two reasons: (1) DEMETER fits multiple reference compounds to the same spectral features, and (2) the low P concentrations of these soils limited data quality, raising the concern that fitting multiple mineral phases with similar spectral features could increase LCF uncertainty. Linear combination fitting was performed with only one adsorbed P standard due to the similarities and lack of identifying spectral features between the "adsorbed ortho-P" and "adsorbed TPP" reference spectra. It was determined throughout the LCF analysis that either adsorbed P reference spectrum would provide an identical model fit result. The adsorbed P fraction of the LCF model fits is operationally defined as adsorbed TPP. This operational definition is based upon several factors: (1) adsorbed TPP is indistinguishable from adsorbed ortho-P (Additional file 1: Figure S1); (2) in the presence of high Ca concentrations, ortho-P would rapidly precipitate and not persist as adsorbed P in a calcareous soil environment, and groundwater modeling of the system indicated that even low concentrations of groundwater ortho-P would be oversaturated with respect to calcium phosphate mineral precipitation, so adsorbed ortho-P is not expected to be present as a phase; and (3) tripolyphosphate adsorbs directly to mineral surfaces without first hydrolyzing to ortho-P [7,8,10]. Tripolyphosphate has been shown under lab conditions to remain adsorbed to mineral surfaces without hydrolyzing for several months at pH 8.5 [7]. Tripolyphosphate hydrolysis in cold climates and slightly alkaline soils (temp. < 5 °C) could potentially take several years to occur naturally given limited microbial activity; however, surface-catalyzed hydrolysis may be an important mechanism resulting in adsorbed TPP hydrolysis [3,7,18]. X-ray diffraction measurements were completed at the CMCF-BM (08B1-1) beamline utilizing an energy of 18 keV and a wavelength of 0.6888 Å. The beamline employs a Rayonix MX300-HE wide-area detector to collect XRD data over a 2θ range of 2°-37°. Soils were ground to a uniform particle size with mortar and pestle and then loaded into a polyimide tube for analysis. Data processing was completed with the GSAS-II software package [34].
Phase identification of all XRD spectra was completed with X'Pert HighScore Plus (PANalytical), with Rietveld refinements completed using the GSAS and EXPGUI software package [35]. All crystallographic information used during the Rietveld refinements was taken from the mineral phases identified with X'Pert HighScore Plus. Soil extractions and analysis Total elemental concentrations of all samples were determined with X-ray fluorescence (XRF) using a ThermoFisher Scientific ARL OPTIM'X X-ray analyzer. Dried soil samples were ground to a uniform particle size with mortar and pestle for XRF analysis. Elemental concentrations were determined using the OPTIQUANT software package, which provides ± 10% accuracy when converting counts per second into mg/kg elemental concentrations. X-ray fluorescence elemental analysis was selected because it is a non-destructive technique, while a single measurement provides the concentrations of all the elements within each sample. Phosphorus concentrations were verified for accuracy by microwave soil digestions (US EPA Method 3051), with P concentrations measured using the colourimetric (molybdenum blue) method with a SEAL Analytical Inc. AutoAnalyzer 1 (AA1). The labile P fraction was operationally defined as the sum of P extracted by the sequential extraction steps of double-deionized H2O (DDI) and 0.5 M Na-bicarbonate solution [36]. The extraction procedure used a soil:solution ratio of 1:80 (w/v) for each sequential extraction step, with the supernatant filtered through a 0.45 µm filter and analyzed for P with an AutoAnalyzer 1. Soil pH was determined using a 0.01 M CaCl2 solution and a soil to solution ratio of 1:10 (w/v) [37][38][39]. The soil-solution slurry was mixed via end-over-end shaking for 0.5 h and then left to settle for 2 h before pH measurement. Short-term TPP adsorption A number of researchers have shown that TPP rapidly adsorbs to metal oxide surfaces [7][8][9][10][11], but the mechanism of TPP sorption onto soils has not been previously determined. Our experimental results demonstrate (Fig. 2) that TPP directly adsorbs to our study soils without first hydrolyzing to ortho-P. The P XANES indicate that, after 48 h of reaction, TPP has formed an adsorption complex consistent with the adsorbed TPP reference standard. In contrast, the XANES features of the 48 h ortho-P treatment show that ortho-P precipitated as a Ca-P phase, based upon diagnostic spectral features (noted by dashed lines). This strongly suggests that TPP can adsorb directly to soils without first hydrolyzing to ortho-P in the soil solution; if hydrolysis occurred in solution, then Ca-P precipitates would also form in the TPP samples. It is possible that adsorbed TPP will slowly hydrolyze on these mineral surfaces, with the hydrolysis rates dependent on enzyme activity and geochemical conditions [3,7,18]. The 3.66 m TPP-spiked soil does contain slight spectral features associated with the presence of Ca-P mineral species, but this is likely due to lower TPP adsorption to this sandy soil, resulting in a larger spectral contribution of the soil's initial P (~ 800 mg P/kg of crystalline calcium phosphate mineral species) for this sample, rather than rapid TPP hydrolysis. Long-term field speciation and fate of TPP Based upon the short-term laboratory results, we hypothesized that TPP adsorption will affect both TPP mobility and chemical fate in soils.
The application of TPP to a P-limited field site will help determine the extent of TPP distribution/filtration and provide an indication of how long TPP can remain adsorbed in a natural system without hydrolysis and precipitation reactions occurring. Phosphorus XANES and linear combination (LC) model fits for the horizontal and vertical hydrological gradients from the amendment injection line, sampled 1 year after the first TPP application, are displayed in Fig. 3. The results of the LCF analysis, including all soil geochemical information, can be found in Table 1. The slight pre-edge feature in the "2a and 7b" XANES spectra (Figs. 3, 4) likely arises from scattering peaks from diffracting minerals that could not be fully normalized out in the lowest-concentration samples, and is not the result of Fe phosphate mineral formation. The low-concentration TPP amendment did not increase soil P concentrations. Elemental analysis revealed (Table 1) that P concentrations are similar both directly adjacent to and below the amendment injection line. Notably, there was no increase in total P along the vertical gradient closest to the injection system, which would have been expected simply based upon proximity. Labile extractable P concentrations are low relative to both total P concentrations and the percentage of adsorbed P throughout all soils. As the adsorbed P fraction of the LCF models is most likely due to adsorbed TPP, this suggests that adsorbed TPP is not readily extractable or desorbed by either H2O or Na-bicarbonate. Similarly to the ortho-P treatment of Fig. 2, the high concentrations of Ca and the relative abundance of carbonate minerals (Additional file 1: Figure S2) favour the formation of a Ca-P surface precipitate if the adsorbed P fraction were an adsorbed ortho-P molecule. The soils closest to the amendment injection line had the highest fraction of adsorbed P. This was expected, since the vertical-gradient soils were in closest proximity to the amendment injection line. Based upon the widespread distribution of adsorbed P, despite the soils being high in clay, the amendment is likely traveling through preferential flow paths from the injection point to the sand lens at 3.66 m before proceeding through the sand lens. The adsorbed P fraction of the up-gradient soils provides evidence that nutrient amendment was also being forced to these locations. The best explanation for this is that the amendment solution was mounding during this initial nutrient application, saturating the infiltration capacity of the soils and driving nutrient solution to up-gradient positions. The 1.82 m down-gradient soil had the lowest fraction of adsorbed P; this is likely due to a lack of amendment flow to this area of the site. The second amendment application consisted of a more concentrated TPP solution with a smaller water volume than the first application. Phosphorus speciation results from 1 year after the second, concentrated TPP application are presented in Fig. 4 (XANES spectra) and Table 2 (LCF results and geochemical information). With the increase in TPP concentration, only one soil position experienced an increase in total P; this soil was located directly adjacent to the injection system. The concentration increased from ~ 800 to ~ 3000 mg P/kg soil. Soils further away from the injection system have P concentrations largely consistent with soils from the first TPP application.
Nonetheless, labile extractable P was higher after the second application, typically ~ 80 mg P/kg versus ~ 15-20 mg P/kg. This fraction increased site-wide even though total P was largely unchanged. One explanation for this increase could be the hydrolysis of adsorbed TPP from the previous TPP application. This ortho-P could have either remained in an adsorbed form or precipitated as a soluble Ca-P species. Either species may be susceptible to desorption or dissolution by the extraction used to measure labile P. Soils in closest proximity to the injection line had the highest relative fractions of adsorbed P. However, TPP amendment movement appears to have been limited and did not reach the up-gradient soils. This is expected, as the lower water volume was unlikely to fully saturate the study area and thus would not force amendment to up-gradient positions. The small relative fraction of adsorbed P in the 1.82 m up-gradient sample is likely residual adsorbed P from the first amendment application. The increase in adsorbed P down-gradient indicates TPP can be both mobile and reactive with soil minerals. Although TPP adsorption to soil minerals reduces its expected mobility in soils, there is evidence of TPP distribution throughout the studied area, as noted by increases in the relative fraction of adsorbed P. Effectiveness of TPP as a P amendment in calcareous soils The adsorption and persistence of TPP between application and sampling (~ 1 year) in a calcareous soil system is an important finding. The persistence of TPP and adsorbed P in this soil environment indicates the biotic hydrolysis of TPP may be limited. While phosphatase was not directly measured in this study, potential reasons phosphatase activity could be low include: (1) reduced microbial populations as a result of PHC toxicity, (2) a lack of root exudates in subsurface soils due to a history of paved surface cover, and (3) even if phosphatase is present in soils, some research indicates adsorbed TPP may not be readily susceptible to phosphatase-catalyzed hydrolysis [16,17]. Tripolyphosphate application increases adsorbed P, which appears to be stable in this soil environment for the full year between application and sampling. In the absence of enzyme-catalyzed hydrolysis of TPP, the abiotic hydrolysis of TPP in solution and soils is expected to be slow or non-existent, specifically at the low temperatures consistent with this site (< 5 °C) [3,18]. The alkaline nature of these soils further reduces the abiotic hydrolysis rates, as TPP hydrolysis is significantly faster in acidic conditions [3,7,18]. However, even though the hydrolysis rate is expected to be slow, there is still evidence that hydrolysis is occurring: there is an increase in labile extractable ortho-P between sampling points, and there is a reduction in adsorbed P in the up-gradient soil after the second soil core sampling. High Ca concentrations and adsorption to mineral surfaces may both catalyze TPP hydrolysis and may be responsible for the hydrolysis occurring under these typically unfavorable hydrolysis conditions [7,18]. Tripolyphosphate is capable of strongly adsorbing to minerals in either a flat or a terminal configuration [8,10]; neither form of adsorbed TPP appears to be readily desorbed from soil mineral surfaces based upon the labile extraction results of this study. This was exemplified by the 2.43 m soil having the highest P concentration (~ 3000 mg P/kg soil) and the highest fraction of adsorbed P, but labile P concentrations similar to the surrounding soils.
While adsorbed TPP may not be readily desorbed, a key finding is that it does not form Ca-P mineral phases until after hydrolysis; the formation of Ca-P minerals has been shown to significantly reduce microbial P bioavailability [29]. It is expected that adsorbed TPP would be readily available to microbial communities, as they would likely contain the phosphatase enzymes capable of hydrolyzing and cleaving P from linear poly-P [29,40]. However, while research suggests that adsorbed ortho-P is bioavailable to microbes, there is no direct evidence to date that indicates whether microbial populations are capable of scavenging adsorbed TPP from mineral surfaces. Further study is required to determine whether adsorbed TPP is bioavailable. However, adsorbed ortho-P has been shown to be a preferred species for increasing potential soil P bioavailability, as it is an accessible species for microbial uptake [29]. The distribution of adsorbed P at this study site appears to be dependent on water volume/site saturation, as illustrated in Fig. 5. However, both the highest relative fraction of adsorbed P and the highest total P concentration resulted from the concentrated TPP application, although with a smaller zone of influence than the first application. It was expected that the low loading of TPP would be less mobile in soils, with most TPP rapidly adsorbing to mineral surfaces. In contrast, higher loadings of TPP were expected to result in the highest relative fraction of adsorbed P and elevated total P concentrations site-wide, because once the adsorption sites of a mineral surface have been saturated, the remaining dissolved TPP should be free to move with groundwater flow, resulting in TPP distribution. Increasing total P concentrations through TPP application may be restricted by the overall adsorption capacity of the mineral surfaces; soils may require multiple applications to allow TPP time to hydrolyze. The high sorption affinity of TPP on mineral surfaces reduces the risk of TPP moving offsite or into untargeted areas and causing unintended P-related ecosystem damage. Conclusions Liquid TPP amendments have proven to be an effective P source for facilitating and maintaining adsorbed P on soil mineral surfaces in Ca-rich environments. This research has shown that TPP will rapidly (< 48 h) adsorb on soil surfaces and persist primarily as adsorbed P in a calcareous soil environment. While these results are consistent with a number of short-term laboratory complexation studies of TPP adsorption and hydrolysis on metal oxides, this is one of the first studies to measure TPP complexation onto soils. However, the bioavailability of adsorbed TPP is unclear and warrants further study to determine whether microbes are capable of utilizing this P source from mineral surfaces. Tripolyphosphate adsorption presents a challenge to distributing TPP throughout a subsurface soil profile because adsorption impedes TPP transport. It was found that the movement of dilute concentrations of TPP is dependent on groundwater flow and appears to rely upon large water volumes to transport amendment throughout the site. When concentrated TPP applications with decreased water volume were utilized, they resulted in higher relative fractions of adsorbed P and localized total P increases, but decreased site coverage of adsorbed P. Applying high concentrations of TPP with large volumes of water may be a more effective strategy for increasing the concentration and distribution of adsorbed P throughout this PHC-contaminated site.
6,847.4
2018-01-08T00:00:00.000
[ "Chemistry", "Environmental Science" ]
Controlling Isomerization of Photoswitches to Modulate 2D Logic‐in‐Memory Devices by Organic–Inorganic Interfacial Strategy Abstract Logic‐in‐memory devices are a promising and powerful approach to realize data processing and storage driven by electrical bias. Here, an innovative strategy is reported to achieve the multistage photomodulation of 2D logic‐in‐memory devices, which is realized by controlling the photoisomerization of donor–acceptor Stenhouse adducts (DASAs) on the surface of graphene. Alkyl chains with various carbon spacer lengths (n = 1, 5, 11, and 17) are introduced onto DASAs to optimize the organic–inorganic interfaces: 1) Prolonging the carbon spacers weakens the intermolecular aggregation and promotes isomerization in the solid state. 2) Overly long alkyl chains induce crystallization on the surface and hinder the photoisomerization. Density functional theory calculation indicates that the photoisomerization of DASAs on the graphene surface is thermodynamically promoted by increasing the carbon spacer lengths. The 2D logic‐in‐memory devices are fabricated by assembling DASAs onto the surface. Green light irradiation increases the drain–source current (I ds) of the devices, while heat triggers a reversed transfer. The multistage photomodulation is achieved by well‐controlling the irradiation time and intensity. The strategy based on the dynamic control of 2D electronics by light integrates molecular programmability into the next generation of nanoelectronics. Materials and reagents All the chemicals and reagents were directly used without further purification. The Dow Chemical Company. General methods Density functional theory (DFT) simulations were performed using CP2K (http://www.cp2k.org) [1] based on the mixed Gaussian and plane-wave scheme [2] and the Quickstep module. [3] The calculations used the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional [4] and the molecularly optimized short-range Double-Zeta-Valence plus Polarization basis set [5] with Goedecker-Teter-Hutter pseudopotentials [6] (DZVP-MOLOPT-SR-GTH). The plane-wave energy cutoff was 400 Ry, and Grimme's D3 dispersion correction with Becke-Johnson damping (D3BJ) [7] was applied. The calculation was performed at the Gamma point only, without symmetry constraints. Structural optimization was performed using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) optimizer until the maximum force fell below 0.00045 Ry/Bohr (0.011 eV/Å). The finite displacement method was used for the phonon calculation, with an incremental displacement of 0.01 Bohr (0.0053 Å). HOMO and LUMO diagrams were drawn via the VMD (1.9.3) software, where the isovalue of the HOMO was 0.02 a.u. and that of the LUMO was 0.015 a.u. 1H and 13C nuclear magnetic resonance (NMR) spectra were recorded on a Bruker Avance 400 MHz spectrometer. UV/vis transmittance and absorption spectra were measured on a Shimadzu UV-2600. To determine how much excitation light was absorbed by the films of DASAs, the transmittance spectra were transformed into absorbance spectra by equation (1), A = -log10(T) = log10(I0/It), where A, T, I0, and It represent the absorbance, transmittance, intensity of incident light, and intensity of transmitted light, respectively. The films of DASA-6C, DASA-12C, and DASA-18C show similar absorbance in the visible-light region, which is in good accordance with the thickness results. X-ray diffraction (XRD) data were collected in the angular range of 2θ = 2°-90° with a Bruker D8 Advance X-ray diffractometer.
The elemental mapping was performed on a field-emission scanning electron microscope (SEM) (Carl Zeiss GeminiSEM 300) equipped with an energy-dispersive X-ray spectrometer (EDS). Samples were scattered on silicon wafers and metal-sprayed. The morphology of the film surface was determined with a transmission electron microscope (TEM, FEI Tecnai F20). Samples were prepared on a carbon-coated copper grid. The morphology of the film surface was also determined with an atomic force microscope (AFM, Bruker Multimode 8). Samples were spin-coated on the silicon wafer and the graphene device. Drain-source current versus gate voltage (Ids-Vg) curves for the 2D logic-in-memory devices were tested by Pro Plus FS-Pro. The LED light source with an emission wavelength of 520 nm was purchased from Zhongjiao Jinyan Systems. The output intensity of the LED was controlled by an LED controller (Zhongjiao Jinyuan Systems) and calibrated by a Laserpoint calibrator (A-02-D12-BBF). Synthesis All DASAs and their intermediates were synthesized according to a modified strategy based on previous reports [8]. Synthesis of DASA-2C Scheme S1. Schematic illustration of the synthesis of DASA-2C. Photoisomerization of DASAs in solutions Due to the push-pull nature of DASAs, the absorption spectra shift in different solutions (Figures S1-S4). Photoisomerization of DASAs in the solid state The isomerization of DASAs in the solid state was investigated by formation of films on the surface of glass substrates via spin-coating at 1000 rpm for 30 s ([DASAs] = 0.01 M). A splitting and widening n-π* absorption band at ~540-650 nm was observed for DASA-2C in the solid state, which is attributed to the strong intermolecular π-π aggregation (Figures S8-S9). The n-π* absorption band gradually narrows with prolonging the carbon spacers for DASA-6C and DASA-12C, indicating that the intermolecular π-π aggregation is inhibited (Figures S10-S13). On the other hand, the n-π* absorption band splits and broadens again on further prolonging the carbon spacers for DASA-18C, which might be attributed to crystallization on the surface (Figures S14-S15). Mechanical properties of DASAs films The thicknesses of the DASA films were determined by AFM, and the information is summarized in Figure S16 and Table S1. However, due to the poor film-forming property of DASA-2C, which generates plenty of fragments on the surface, its thickness is not provided. On the other hand, the films of DASA-6C, DASA-12C and DASA-18C exhibit similar thicknesses, ranging between ~70 and ~100 nm. In the contact-mechanics relations used here, a is the contact radius of probe and sample, R is the tip radius, w is the adhesion energy, F is the loading force, δ is the indentation depth, and E* is the reduced Young's modulus; ν and E are Poisson's ratio and Young's modulus, respectively. Since the Young's modulus of the tip is much larger than that of the sample, the reduced modulus is determined essentially by the sample. Force-distance curves of the DASA films on silicon wafer substrates are summarized (Figures S17-S20). Fabrication of 2D logic-in-memory devices The 2D logic-in-memory devices were fabricated via the following procedure: (1) Monolayer graphene/h-BN/sublayer graphene were picked up by polymer films composed of polycarbonate (PC)/poly-dimethylsiloxane (PDMS) and then deposited on silicon substrates. (3) Electrodes were deposited on the heterostructure surface using photomask lithography and a metal thermal evaporation/lift-off process.
Light-controlling the 2D logic-in-memory devices Deposition of DASAs on the surface of the 2D devices shifts the Ids-Vg curves, indicating the interfacial effect and intermolecular interaction between graphene and DASAs (Figures S22-S25). The 2D logic-in-memory devices exhibit negatively shifted Vg and gradually increased Ids upon irradiation when using DASA-6C, DASA-12C, and DASA-18C as the photoswitches (Figures S26-S27). As expected, the Ids-Vg curves of the devices with DASA-2C do not shift after irradiation, due to the strong intermolecular π-π stacking (Figure S29).
1,500.8
2023-03-11T00:00:00.000
[ "Chemistry" ]
Web Scraping Using R The ubiquitous use of the Internet in daily life means that there are now large reservoirs of data that can provide fresh insights into human behavior. One of the key barriers preventing more researchers from utilizing online data is that they do not have the skills to access the data. This Tutorial addresses this gap by providing a practical guide to scraping online data using the popular statistical language R. Web scraping is the process of automatically collecting information from websites. Such information can take the form of numbers, text, images, or videos. This Tutorial shows readers how to download web pages, extract information from those pages, store the extracted information, and do so across multiple pages of a website. A website has been created to assist readers in learning how to web-scrape (https://practicewebscrapingsite.wordpress.com/). This website contains a series of examples that illustrate how to scrape a single web page and how to scrape multiple web pages. The examples are accompanied by videos describing the processes involved and by exercises to help readers increase their knowledge and practice their skills. Example R scripts have been made available at the Open Science Framework. All the R scripts for the examples and the PowerPoint slides used in the videos can be accessed or downloaded from the Open Science Framework, at https://osf.io/6ymqg/. The website was specifically designed to help readers learn about the process of web scraping and to provide a safe environment for practicing web scraping. The introductory video provides an overview of web scraping, the web-scraping tools that we use in this Tutorial, and good web-scraping practices. Example 1 shows readers how to download, extract, and store information from a single web page. Examples 2 and 3 explain how to download, extract, and store information while using links built into a website to move across multiple web pages. Example 4 shows how to download, extract, and store information while moving across web pages by manipulating URLs. We encourage readers to watch each example video while following along with the example R script and then to take the time to complete the accompanying exercise before moving on to the next example. Learning Objective and Assumed Knowledge The learning objective of this Tutorial is to teach readers how to automatically collect information from a website. In particular, after completing this Tutorial, readers should be able to download a web page, should know how to extract information from a downloaded web page, should be able to store extracted information, and should understand different methods of moving from page to page while web scraping. An understanding of R and RStudio is helpful but not required. The Tutorial has been designed so that novices to web scraping and readers with little to no programming experience will find the material accessible and can begin to develop their own scraping skills. Readers who already have R and RStudio installed and have a basic understanding of the R language may wish to skip the next three sections and proceed directly to the discussion of the four steps involved in web scraping. Installation of R, RStudio, and SelectorGadget All the programs you will need to web-scrape are free to download and use. First, you will need to download R (R Core Team, 2019) from https://cran.rstudio.com/ and install it on your computer. Second, we recommend downloading and installing RStudio (https://www.rstudio.com/).
All the code for this Tutorial will be run in the script window of RStudio. You can create new scripts in RStudio by clicking on "File," then "New File," and then "R Script." Finally, you will need SelectorGadget (Cantino & Maxwell, n.d.), which can be downloaded at https://selectorgadget.com/. If you do not use Chrome as your Web browser, you will need to download it (https://www.google.com/chrome/) before downloading SelectorGadget. For more information about how to download these programs, see the introductory video on the website accompanying this Tutorial (https://practicewebscrapingsite.wordpress.com/). Packages and Functions in R R is an incredibly versatile programming language capable of performing many different tasks, including web scraping, statistical analysis, and data visualization. The reason for its versatility is that it has a large community of users who create software, in the form of packages, that other users can use. A package is a collection of functions designed to perform a task. For example, in this Tutorial, we use the rvest package (Wickham, 2019), which contains a variety of functions that can be used to web-scrape. A function is code that modifies or manipulates some input to produce a desired output. For example, to calculate a mean, one can use the mean function by providing it with a vector (column) of numbers (i.e., mean(numbers)). A function often takes additional instructions, known as arguments, that adjust how it modifies or manipulates the input. For example, the mean function can be modified by using the na.rm argument to specify whether missing values are to be included (true or false; e.g., mean(numbers, na.rm = TRUE)). In order to use functions contained in a package, you first need to install and load that package. Installing and Loading R Packages Downloading and installing packages for use in R is a two-step process. First, one needs to download the package by using the function install.packages("package name"). Next, the package must be loaded into R by using the function library(package name). For the rest of this Tutorial, you will need to download and install the rvest package by typing and running the following code in RStudio: install.packages("rvest") library(rvest) Note that once you have installed a package, you will never need to download it again. However, every time you start a new session of RStudio, you will need to run the library function to load the packages you will be using in that session. Downloading a web page To download a web page, use the read_html function and supply it with the URL of that page (i.e., read_html("address of website")). Example 1 on the website involves collecting the titles, main text, and picture links for three articles stored on a single web page. In order to download this page, type in and run the read_html function with the URL of the Example 1 web page, assigning the result to an object (a consolidated sketch of this step, together with the extraction step, is given after the extraction code below). The read_html function downloads the web page at the address given, and the less-than and hyphen notation (<-) tells the software to assign that information to an object, in this case an object called Example1. In technical terms, Example1 is a Document Object Model because it holds all the data of the web page and preserves the structure of the information held on that web page. What this line of code does is analogous to taking a physical book and converting the information into a digital book available on an e-reader. The book on the e-reader will contain the same number of chapters, pages, and paragraphs, with the same text in each of those sections.
Similarly, the read_html function collects and preserves the structure of the information held on the web page. This is important because it allows you to extract just the information you are interested in. Extracting information from a web page Writing the code to extract information from a web page involves two steps: specifying the location of the information to be collected and then specifying what information at that location should be extracted. A good analogy is using a textbook to obtain a famous quote by an author. First, you turn to the chapter and page where that author is mentioned, and then you find the quote so that you can copy the famous words by the author. Step 1 involves the html_nodes function, to which two pieces of information must be added: the object holding the downloaded web page and the address to the information you wish to extract (i.e., html_nodes(web page, "address to information")). In order to generate the address to the information you want, use SelectorGadget. When SelectorGadget is installed on your computer, there is a SelectorGadget icon in the top right of the Chrome window. While viewing the web page you are interested in, click on this icon to open SelectorGadget and then select the information that you wish to extract. For example, to get the address to the article titles in Example 1, click on the icon for SelectorGadget and then select the titles (see Fig. 1). Look down the page and make sure that only the information you wish to extract is highlighted green or yellow. If additional information that is not required is highlighted, click on that to unselect it. When only the right information is highlighted, copy and paste the address SelectorGadget generates into the html_nodes function. Thus, at the end of Step 1, you have written code that indicates where the information you wish to extract is stored. The next step is to pass along this information to a function that will perform the extraction. Step 2 involves the pipe operator (%>%), which takes the output from one function and passes it to another without the need to store it. In this case, the pipe operator is added to the code in order to pass along the information in the html_nodes function. The pipe operator is followed by one of three commands, depending on the type of information to be extracted. If you want to extract text, use the html_text function (i.e., html_text()). If you want to extract links from the web page, use the html_attr function with the additional href argument (i.e., html_attr("href")). Or if you want to collect the address of images to download later, use the html_attr function with the additional argument src (i.e., html_attr("src")). The following code will extract the titles, text, and address to the pictures of the three articles stored on the web page in Example 1: html_nodes(Example1, "strong") %>% html_text() html_nodes(Example1, ".Content") %>% html_text() html_nodes(Example1, "#post-25 img") %>% html_attr("src") Storing information collected while web scraping There are several ways to store extracted information. The best approach will depend on the type and amount of data you are extracting. For simplicity, in this Tutorial, we describe how to store information in vectors. This process changes depending on whether you are scraping a single page or multiple pages. We begin by explaining how to store the information from a single page.
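For orientation, the download and extraction steps described above can be combined into one short Example 1 script. This is only a minimal sketch: the URL of the Example 1 page is an assumption based on the address pattern of the Tutorial website, while read_html, html_nodes, html_text, html_attr, and the selectors ("strong", ".Content", "#post-25 img") are exactly those introduced above.
library(rvest)
# Download the Example 1 web page (assumed URL) and keep it as a Document Object Model
Example1 <- read_html("https://practicewebscrapingsite.wordpress.com/example-1/")
# Extract the article titles, the main text, and the addresses of the pictures
html_nodes(Example1, "strong") %>% html_text()
html_nodes(Example1, ".Content") %>% html_text()
html_nodes(Example1, "#post-25 img") %>% html_attr("src")
Running these lines prints the extracted titles, text, and image addresses to the console; the next paragraphs show how to keep them by assigning them to vectors.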
Fig. 1. Screenshot illustrating the use of SelectorGadget to extract the titles on the web page in Example 1 (only the first title is shown here). SelectorGadget has identified "strong" as their address, and this address can then be used to extract the titles. To store the information extracted by the Example 1 code just presented, assign (<-) this information to vectors called Title, Text, and Images by expanding the code as follows: Title <- html_nodes(Example1, "strong") %>% html_text() Text <- html_nodes(Example1, ".Content") %>% html_text() Images <- html_nodes(Example1, "#post-25 img") %>% html_attr("src") A video demonstration of how to extract and store information from a single page is available at our website (see "Example 1: Scraping a Single Webpage"). Storing information when scraping over multiple pages is a little more complicated because as one moves over each web page, extracting information and storing it to a vector, the information captured from the previous page will be overwritten. To avoid this problem, use the following three-step process: First, initialize an empty vector. Second, extract the information from a web page using the html_nodes and html_text functions and store this information in a second vector. Third, add the information captured in the second vector to the initially empty first vector. The following code will extract and store the titles in Example 2: Title <- c() Heading <- html_nodes(Example2, ".entry-title") %>% html_text() Title <- c(Title, Heading) As the web scraper goes over each new page, the information in the Heading vector will be overwritten with new information and added to all the previously extracted titles stored in the Title vector. Our website has a video demonstration of this technique (see "Example 2: Scraping Multiple Web Pages"). Scraping across multiple pages There are a variety of methods for scraping across a website, and often the way the website is designed will determine the approach to use. To keep things simple, we outline two common approaches used by web scrapers to move across webpages: following links in a webpage to other pages and manipulating the Web address. To follow links, you need to download a webpage containing links to all the other pages to be visited and then extract and store those links. The following code from Example 2 shows how to store the titles from multiple pages of a website using the links stored on a web page: #Initialize the empty vector Title <- c() #Download the web page Example2 <- read_html("https://practicewebscrapingsite.wordpress.com/example-2/") #Extract and store the links BlogPages <- html_nodes(Example2, ".Links a") %>% html_attr("href") #Use each of the links, stored in i, to visit a web page by means of a for loop, which repeats a block of code for (i in BlogPages) { Heading <- read_html(i) %>% html_nodes(".entry-title") %>% html_text() Title <- c(Title, Heading) } The i in the for loop becomes each one of the links stored in BlogPages, so the code between the two curly brackets ({}) is repeated for each link. Notice that i is passed to the read_html function to download each new web page, from which information is extracted and then stored. It is the act of passing i to the read_html function that allows the scraper to navigate over multiple pages. To scrape webpages by manipulating URLs, you need to identify a part of the URLs that systematically changes over the web pages. You then need to artificially manipulate the URL in your code to move over the different pages.
Example 4 illustrates this process for a case in which the URL changes by the page number specified (i.e., https://practicewebscrapingsite.wordpress.com/example-4-page-0/, https://practicewebscrapingsite.wordpress.com/example-4-page-1/). This example requires that you generate a sequence of numbers to represent the different page numbers. Use the seq function, which takes arguments of the first number (0), the last number (1), and the increment of change (1). This information is saved to Pages: Pages <- seq(0, 1, 1) Next, use a for loop to iterate over the pages. The i will become the page numbers 0 and 1, so the code within the for loop will run twice: for (i in Pages) { Sys.sleep(2) WebpageURL <- paste("https://practicewebscrapingsite.wordpress.com/example-4-page-", i, "", sep="") } The Sys.sleep() function in this code inserts a pause in the running of the code, to avoid putting undue stress on the website server. In this example, a 2-s pause has been inserted. The paste function generates the URLs by taking the part of the URL that does not change and adding that to the number stored in i. The sep argument is left blank so that the unchanging and changing parts of the URL will be joined without any separation. The generated URL is then stored in WebpageURL and passed to the read_html function, which downloads the new web page. Information from this new web page can then be extracted and stored. Good Practices and the Ethics of Web Scraping Before scraping a website, it is a good idea to check if it offers an application program interface (API) that allows users to quickly collect data directly from the database behind the website. If it does offer an API that contains the information you need, it would be easier to use the API. Also, although the methods presented here should help you scrape many websites, some sites may display information in unusual formats that make them more difficult to scrape. It is worth checking whether you can download and extract information from a single page before building a complete web scraper for a website. When web scraping, it is a good idea to insert pauses between downloading web pages, as this helps spread out the traffic to the website. Web scrapers may be banned from a website if they put undue stress on it. In Example 4, we used the Sys.sleep function to insert 2-s delays before downloading web pages. Before starting your own web-scraping project it would be a good idea to check your institutional review board's policy on web scraping; you might need to make an ethics application, or your target data might be classified as archival data that do not require an ethics application. As a general rule of thumb, any information stored behind a username and password is considered private and ought not to be web scraped. Summary In this Tutorial, we have introduced readers to what web scraping is and why it is a useful data-collection tool for psychologists. We have provided a basic explanation of the R environment and how to download and install R packages. Readers should feel confident in their ability to conduct the four key steps of web scraping: downloading web pages, extracting information from downloaded pages, storing that extracted information, and using Web links or manipulating URLs to navigate across multiple web pages. We strongly recommend that readers work through the examples and exercises provided on the accompanying website to further build their knowledge of web scraping and gain more experience with this method. Action Editor Alex O.
Holcombe served as action editor for this article. Author Contributions A. Bradley is the guarantor. A. Bradley created the website and videos. Both authors drafted the manuscript, provided feedback to each other, and approved the final version of the manuscript. Declaration of Conflicting Interests The author(s) declared that there were no conflicts of interest with respect to the authorship or the publication of this article.
4,072.6
2019-07-30T00:00:00.000
[ "Computer Science" ]
Translation of UML Statecharts to UPPAAL Automata for Verification of Real-time Systems — In this paper we present a tool to transform UML statecharts to UPPAAL automata. The tool allows one to check temporal properties against statecharts modeling a real-time system. We give the constraints on statecharts, the tool description, and the results of testing it on a well-known traffic control example. INTRODUCTION Usually verification tools work with models written in specialized languages intended for convenient application of verification algorithms. On the other hand, during the design stage systems are often modeled with universal modeling languages (such as UML) or industry-specific modeling languages. UML statechart diagrams are an example of universal models describing the behavior of systems communicating with the environment via shared memory and message passing. Real-time systems are often modeled with such diagrams. Since the cost of correcting an error increases over the course of system development, by verifying the properties of the system as early as possible one improves its quality and simplifies its development. In this paper we present a tool for converting UML statechart diagrams to timed automata used in the UPPAAL verification system [1,2]. In section 2 we define the syntax of expressions we use in UML diagrams. The algorithm is discussed in section 3. Experimental results obtained with the algorithm are given in section 4. II. UML STATECHARTS Unified Modeling Language (UML) is used to design a wide range of systems implemented in various languages and in different environments. Therefore, the authors of the UML standard deliberately avoid defining the syntax and semantics of the language completely [5, ch. 13]. The language defines a metamodel comprised of syntactical constraints on all models in UML notation. Generally it is only possible to say whether the model is syntactically correct. The behavior of a correct model might be undetermined in some cases: guards, actions and triggers can be defined in a natural language which tolerates different interpretations. The authors of the language suggest creating a separate profile for every class of systems without changing the general notation. However, in the case of statechart diagrams, creating the profile does not solve interpretation problems. To prove the properties formally it is necessary to define a strict syntax and semantics of all used primitives of statecharts. In this study, additional constraints on the structure of the diagrams and the syntax of expressions are imposed, and thus the ambiguity is avoided. Simple states are the same as in the standard UML metamodel. There are two types of composite states: sequential and parallel. Automata residing in a parallel state are executed simultaneously. Composite states have special entry and exit states. Some states are marked with logical formulae called invariants; a system can reside in such a state only while its invariant is satisfied. Each transition between states may be provided with a guard, an action, and a synchronization. Guards express requirements that must be satisfied to enable the transition. Actions are the operations performed after the transition is fired.
The syntax of guards, invariants and actions is similar to the syntax of the C language. There are three types of variables: an integer type over a certain range (e.g., int [4..9] x = 5;), the boolean type (e.g., bool b = false;), and the clock type (e.g., clock c;). All variables must be defined in the comments section of the UML model. Expressions admitted in guards include all types of comparison as well as logical NOT, AND and OR operations. Actions may contain assignment statements including complex arithmetic expressions and the C-style ternary operator '? :'. Invariants have the same syntax as guards do, though the expression must be marked with the keyword 'assume()'. There are two additional expressions in the syntax. The boolean expression 'in(S)', borrowed from the STATEMATE language [3], denotes that the state S is active in the system. The operation of random assignment, written as 'x=random();', non-deterministically gives a value to an integer or boolean variable admitted by the type. The syntax and the meaning of macros are similar to the ones in the C language. They are defined in the comments section along with the variables. The macro '#define X Y' replaces all occurrences of X with Y before other stages of the translation. Examples of expressions can be found in Figure 9. The operation of sending a signal is identical to the hardware-like message broadcast [3]. Every signal must be defined in the model. When signal S is sent by a transition (denoted by the synchronization section), the automaton marks signal S as sent, and on the next step all the automata that can activate a transition receiving signal S (written as '!!S') must do so. If none of the automata can receive the signal, it is considered lost. For instance, in Figure 10 the system moves from state AHome to state AToGreen only on receiving a signal AtoG. III. TRANSLATION OF UML STATECHARTS TO UPPAAL AUTOMATA The UML to UPPAAL translator works with UML statecharts in the widespread XMI format. When a file is parsed and an internal representation is constructed, the translation is performed in two phases. First, the statechart is transformed to the intermediate form, a hierarchical timed automaton (HTA) [4], and then this automaton is translated to a network of timed automata (NTA) according to an algorithm similar to the one introduced in [4]. Since the structure of statecharts differs significantly from the structure of hierarchical timed automata, an additional step of transformation of UML statecharts should be carried out before translating them to UPPAAL. Firstly, during the parsing of UML, the expressions that do not belong to the UPPAAL model language are translated. All macro substitutions take place before parsing the guards and actions. The 'in(S)' expression in guards is replaced by checking the value of a special flag variable which is unique for each 'in(S)' statement. Further, all references to automata are replaced by their unique copies. If one automaton is nested into another one, it is inserted as well. Name collisions at this step are avoided: if the names of two states in two nested automata coincide, then one of the states is renamed, and if two variables with the same name are declared in different scopes (e.g.
in two automata referenced in the third one), then one of them is renamed. As a result a single hierarchical UML statechart is formed. The next step is to modify composite states (Figures 2-3). In HTA, only transitions between simple states, entry and exit states are allowed, so it is necessary to change the arcs which start or end in composite states to match them with the corresponding entries and exits. Adding several new entries or exits might be necessary. In HTA, transitions into a composite state are allowed if they end in its entry state; similarly, a transition out of a composite state into its parent is possible if it starts in an exit state. All other transitions must begin and end inside the same composite state, i.e., the source and target states remain in the same composite state. However, in UML statecharts it is possible to perform transitions to a deeply nested state; hence it is important to add all exits and entries in between. Finally, guards, actions, and synchronizations should not be present on transitions ending in exit states according to the HTA definition. In such cases, a new state, like tmp in Figure 2, is added and the guards, actions and synchronizations are assigned to the transition ending in the new state. An NTA consists of processes, variables, channels and clocks. A process is a timed automaton which has finite sets of locations and transitions. Some locations are marked with invariants, and some transitions are supplied with guards, actions, and synchronizations. Invariants, guards, actions, and synchronizations are similar to those in HTA. Three kinds of locations are possible: ordinary, urgent, and committed. When an urgent location is active in the NTA, no time can advance, and if the location can be deactivated, it is left at once. Committed locations are similar to urgent locations, but they have the highest priority in deactivation. Each channel has its own type, either broadcast or handshake. Broadcast channels are similar to those in HTA. A handshake channel is used to synchronize the execution of exactly two transitions in the NTA. The translation from HTA to NTA is as follows. Before state translation, variables, channels and clocks are copied from HTA directly to NTA. According to the translation algorithm, auxiliary variables and channels are added. Some of them are mentioned below. Every composite state S in HTA corresponds to a process P(S) in NTA. Every such process has an initial location 'idle' which corresponds to inactivity of a composite state. Consider a parallel composite state S in HTA. A special location 'active' is created in P(S). The 'active' location can be reached from the 'idle' location by performing a sequence of transitions via committed locations 'start(X)', one for each composite state X nested in S. The first transition in the sequence carries a synchronization 'activate(S)?' that activates P(S). Other transitions in the sequence carry synchronizations 'activate(X)!' for every nested state X. Also, there is exactly one transition from the state 'active' to the state 'idle' that carries a synchronization 'deactivate(S)?' deactivating P(S). When P(S) is activated, the whole sequence of transitions is executed with no time advancing, every nested state is activated, and then P(S) reaches the 'active' location which corresponds to activity of all states nested in S.
Consider a sequential composite state S in HTA. A process P(S) includes locations 'active(X)' for every state X nested in S as well as committed locations 'start(X)' for every composite state nested in S. Locations 'start(X)' and 'active(X)' are connected via a transition decorated with a synchronization 'activate(X)!'. The 'idle' location is connected with either a location 'start(X)' in the case of a composite state X or with a location 'active(X)' in the case of a basic state X via a transition with synchronization 'activate(S)?'. When P(S) is activated, it activates exactly one of its nested states and reaches one of the 'active(X)' locations, which corresponds to the activity of X. To deactivate a state X nested in S, the process P(S) uses a set of deactivation sequences of committed locations. Transitions in each sequence carry synchronizations 'deactivate(Y)!' for every composite state Y nested at any level in X. Thereby, when a deactivation sequence is executed, all inner states which can be deactivated simultaneously in HTA are deactivated in NTA. If S has to be deactivated as well, the final location of the sequence is connected to the 'idle' location. Otherwise it is connected to one of the 'start(X)' or 'active(X)' locations. To initialize the NTA defined above, an additional process 'Kickoff' is created. This process is a sequence of committed locations which ends with an ordinary location. Transitions in this process carry special synchronizations 'init(X)!' for every initial state X of HTA. Special initial transitions are also added into other processes to reach a correct initial state. IV. EXPERIMENTAL RESULTS To be certain that the implementation of our translation algorithm is correct and well suited for composition with UPPAAL, we tested it on several case studies. The simple examples were used to make sure that the output of the algorithm satisfies the expectations and to check the behavior on various sample cases. Some more complex tests were aimed at simulating the whole process of verification of a system defined by a UML statechart diagram. Below we present the results of our experiments with the model of the traffic lights control system described in [4]. A. Simple tests An example of a simple test is given in Figures 4-5. B. Traffic lights example The traffic lights control system consists of two traffic lights on a crossroad. The lights are controlled by a processor supplied with some sensors. Lights on the street and on the avenue change colors in the customary way to let cars pass by in both directions. Further, in case an ambulance car arrives from any direction, the lights must turn green in that direction in order to let the ambulance pass as soon as possible. The UML diagrams for this system are shown in Figures 10-11. The first diagram contains state loops for the lights and the ambulance and a reference to the diagram of the light controller. The lights are changed in the usual order (green to yellow to red) according to the signals of the light controller. The ambulance appears non-deterministically and passes through the street crossing. The light controller normally sends signals to the lights to switch their colors in order. When the ambulance appears, the system exits the normal cycle and enters the AmbulanceArriving composite state, where the light colors are changed arbitrarily in order to turn green the light on the street where the ambulance is waiting.
In [4] the authors constructed a UPPAAL model for this system manually to verify its properties. We used our translator and obtained the model automatically. The following properties were tested. A[] !deadlock This property guarantees the absence of deadlocks. E<> stg==1 && avg==1 This property means that there exists a trace where both lights are green at the same time, and it was proved to be false. At the same time the seemingly contrary property is also not fulfilled, because there can be a situation where one light is red and the other one is yellow. Ambulance_process_proc.Approaching_active_in_Ambulance --> Ambulance_process_proc.Home_active_in_Ambulance The Home state for the ambulance car is reachable from the Approaching state, which basically means that the ambulance will always eventually pass the crossing. CONCLUSIONS Experiments with our tool testify that translation of UML statecharts to UPPAAL timed automata is possible. We reproduced the results that were obtained manually in [4] with our automatic translation and showed that the tool is applicable to models of relatively simple real-time systems with parallel interacting processes. Further work includes a formal proof of the correctness of the algorithm based on [3] and testing the tool on practical examples of real-time systems. Figure 9. Example of a UML diagram containing all syntactic features.
3,180.8
2012-01-01T00:00:00.000
[ "Computer Science", "Engineering" ]
Evaluation of target dose inhomogeneity in breast cancer treatment due to tissue elemental differences Monte Carlo simulations were run to estimate the dose variations generated by the difference arising from the chemical composition of the tissues. CT datasets of five breast cancer patients were selected. The mammary gland was delineated as the clinical target volume (CTV), as well as CTV_lob and CTV_fat, being the lobular and fat fractions of the entire mammary gland. Patients were planned with the volumetric modulated arc therapy technique, optimized in the Varian Eclipse treatment planning system. CT, structures and plans were imported in PRIMO, based on the Monte Carlo code PENELOPE, to run three simulations: AdiMus, where the adipose and muscle tissues were automatically assigned to the fat and lobular fractions of the breast; Adi and Mus, where adipose and muscle, respectively, were assigned to the whole mammary gland. The specific tissue density was kept identical to the CT dataset. Differences in mean doses in the CTV_lob and CTV_fat structures were evaluated for the different tissue assignments. Differences generated by the tissue composition and estimated by Acuros dose calculations in Eclipse were also analysed. From the Monte Carlo simulations, the dose in the lobular fraction of the breast, when adipose tissue is assigned in place of muscle, is overestimated by 1.25 ± 0.45%; the dose in the fat fraction of the breast with muscle tissue assignment is underestimated by 1.14 ± 0.51%. Acuros showed an overestimation of 0.98 ± 0.06% and an underestimation of 0.21 ± 0.14% in the lobular and fat portions, respectively. The reason for this dissimilarity is that the two calculations, Monte Carlo and Acuros, manage the range of CT numbers and the material assignments differently, with Acuros using an overlapping range in which two tissues are both present in defined proportions. Although not clinically significant, the dose deposition difference in the lobular and connective fat fractions of the breast tissue leads to improved knowledge of the possible dose distribution and homogeneity in the breast radiation treatment. Background Breast cancer is one of the most widespread cancer diseases, treated with different modalities. Adjuvant radiotherapy, after surgery, has been proven to increase breast cancer specific survival [1]. However, the radiation treatment might increase cutaneous, cardiac and pulmonary toxicity, reducing the quality of life of the patients [2]. In 2002, after the introduction of the intensity modulated technique in breast cancer radiotherapy, Vicini et al. [3] evaluated the possible predicting factors for developing acute skin toxicity. A significant correlation (p = 0.005) in univariate and multivariate analysis was reported with dose homogeneity, in particular with the breast volume receiving 105% and 110% of the prescription dose (45 Gy delivered in 1.8 Gy/fraction in their work). The fractionation schemes have changed in recent years, and hypofractionation is today widely used, with or without a simultaneous integrated boost. Such shorter schedules, mostly over 3 weeks, do not increase the toxicity relative to the previous conventional schedule over 5 weeks [4][5][6][7]. However, the statistical significance of the Vicini et al. data, although based on only 95 patients, suggested the importance of keeping the dose homogeneity in the breast as good as possible. Similarly, in 2015, Mak et al.
[8], in a study on 280 patients, reported that the breast tissue volumes treated to more than 105% and 110% of the prescribed dose were found to be predictors of long-term breast pain on univariate analysis, with the V110% remaining significant also in a multivariate analysis, with an odds ratio of 1.01 per cm3, p = 0.007. With the clinical implementation of the most advanced dose calculation algorithms, namely type 'c' [9] algorithms such as Monte Carlo, the specific tissue anatomy in terms of its chemical composition can be properly taken into account to better estimate the physical dose distribution (and ultimately the dose homogeneity in the target). In particular, for breast cancer treatment, it is known that the mammary gland consists of lobules of connective tissue, separated by fat tissue, with the glandular fraction assumed to be about 40% of the whole breast. The female whole breast composition, including both glandular and fat fractions, according to the ICRP Publication 89 [10], presents a lower carbon and a higher oxygen fraction than fat. This might be consistent with the association of the lobular fraction with muscle tissue, which has a lower carbon and a higher oxygen component than adipose tissue. The breast tissue composition in the two different fractions of lobular and fat compartments would in principle lead to different energy depositions (and dose) that could be better managed by dose calculation processes able to distinguish among different elemental compositions of tissues, like Monte Carlo simulations, or algorithms such as Acuros [11]. The aim of the present work is to estimate the dose variations generated by the difference in tissue chemical composition and not coming from the optimization process, which could compensate for dose differences when attempting to deliver a homogeneous dose in the breast target (both lobular and fat fractions). Monte Carlo simulations were used herein, as well as Acuros as a clinically implemented dose calculation algorithm. Treatment plan calculations Five left breast cancer patients were selected from the institutional database. They were considered a representative sample of the clinical practice. CT datasets were acquired in the supine position with adjacent 2 mm slices. The clinical target volume (CTV) was contoured on the CT dataset to encompass the whole mammary gland, and cropped 4 mm inside the skin. Additional structures were delineated: CTV_lob and CTV_fat, being the lobular and fat CTV volumes, respectively. These two structures were contoured using a CT ranger, discriminating the two tissues at HU = -59 (CTV_fat where HU < -59, CTV_lob where HU ≥ -59; HU: Hounsfield Units). The ratio between the lobular and the fat volumes within the CTV was 0.21 ± 0.13 (range 0.11-0.40). All the patients were planned with the volumetric modulated arc therapy (VMAT) technique, in its RapidArc form, on a 6 MV beam from a Varian TrueBeam linac equipped with a Millennium-120 multileaf collimator (Varian Medical Systems, Palo Alto, CA, USA). The arc geometry consisted of two partial arcs, with the gantry spanning from ~300° to ~170° and the collimator rotated by ~±15°, set according to the breast shape and patient anatomy. The total dose prescription was 40.5 Gy in 15 fractions as the mean CTV dose. All the plans were generated with the Varian Eclipse treatment planning system, optimized with the Photon Optimizer (PO) algorithm (version 13.6) and calculated with Acuros XB (version 13.6).
The same dose calculation algorithm was used to compute the dose distribution at least once during the plan optimization process (intermediate dose), to improve the optimization result according to an accurate dose estimation, in particular regarding the target dose homogeneity. Monte Carlo simulations Patient CTs, structures and plans were exported in DICOM format from Eclipse and imported in PRIMO (version 0.3.1). PRIMO is free computer software (http://www.primoproject.net) that simulates clinical linacs and estimates absorbed dose distributions in patient CT datasets (as well as in water phantoms) [12]. It combines a graphical user interface and a computation engine based on the Monte Carlo code PENELOPE [13][14][15]. A program for fast Monte Carlo simulation of coupled electron and photon transport, DPM, is also integrated [16] and was used in the current work. The linac head was simulated by using the phase-space files made available by the linac vendor (Varian Medical Systems) for research purposes. Those phase-spaces were simulated in a Geant4 Monte Carlo environment and distributed according to the IAEA format [17]. In the current work, a phase-space for the TrueBeam linac, 6 MV flattened beam quality, with 49.5e+09 histories was used. Inside the patient, the transport parameters (to balance the trade-off between speed and accuracy) are predefined for DPM simulations as 50 and 200 keV cut-off energies for photons (bremsstrahlung) and electrons (collision), respectively. A variance reduction technique (splitting in CT with a factor 100) was used to reduce the computing time, which otherwise would be unacceptable if a direct approach were used. With this method, the average statistical uncertainty of all CT voxels accumulating more than 50% of the maximum absorbed dose, reported by PRIMO at two standard deviations, was around 1% (range over all the simulations 0.99-1.08%). Tissue density and HU management The same curve to convert HU to mass density was used in the PRIMO and Acuros based systems. The material assignment based on the CT number was set in PRIMO as similar as possible to the Acuros setting in Eclipse. Full compatibility of the two assignments is not viable, since Acuros assigns adjacent materials in a smooth way, allowing an overlapping HU range where the previous and next materials are linearly combined from one to the other. The materials used are summarized in Table 1. The specific chemical compositions as configured in the two systems, PRIMO and Acuros, are not identical in their defaults, with the hydrogen fraction in PRIMO being higher than the corresponding fraction set for Acuros for most human tissues. To exclude a systematic error that could arise from this difference, the contribution of the various elements was modified in PRIMO for adipose and muscle tissues, to be more compatible with the Acuros materials. Figure 1 shows the elemental compositions of adipose and muscle tissues according to the PRIMO and Acuros defaults. The Acuros values were hence used in this work. One of the patients of this study was simulated with the two chemical compositions for adipose and muscle tissues, according to the PRIMO and Acuros defaults. With the PRIMO defaults, the doses to muscle and adipose tissues were estimated to be higher than with the Acuros defaults by about 0.12% and 0.03%, respectively. Those differences, although considered negligible, were excluded from the computation by changing the PRIMO tissue composition material defaults.
Patient doses with Monte Carlo simulations For each of the five cases, three different Monte Carlo simulations were computed in PRIMO, assigning different materials to the muscle and adipose HU ranges, while keeping the original density: -AdiMus: as standard, muscle and adipose tissues were assigned to the muscle and adipose HU ranges, respectively; -Adi: the adipose tissue material was assigned to the HU range including both adipose and muscle ranges; -Mus: the muscle tissue material was assigned to the HU range including both adipose and muscle ranges. Mean doses to CTV, CTV_lob and CTV_fat were computed for all the simulations. The dose difference generated by the chemical composition of the specific tissue, lobular or fat, was estimated as the difference of the mean doses of CTV_lob between the Adi and AdiMus simulations, and as the difference of the mean doses of CTV_fat between the Mus and AdiMus simulations. Those values give the possible dose estimation error when a different material chemical composition (adipose for lobular tissue, or muscle for fat tissue) is used for calculations, while the surrounding tissue dose is computed with the correct tissue assignment. Calculations were based on the mean dose of the whole structure. Uncertainties were reported at two standard deviations for all the voxels in each specific structure. To include also the positional dose difference, the 3D gamma evaluation available in the PRIMO software was analysed. The gamma index [18] was evaluated between the AdiMus simulation (the best approximation of the true patient) and the Adi or Mus simulations for CTV_lob and CTV_fat, respectively (i.e. assigning the "erroneous" material to the two portions, respectively). For the gamma criteria, the distance to agreement (DTA) was set to 2.5 mm, equal to the simulation grid, as well as to half of this value, 1.25 mm; the delta dose was varied from 0.5 to 3.0% of the maximum dose. No threshold dose value limited the evaluation, which was performed only inside the target (close to the prescription dose level). However, the analysis was restricted to the points with reference dose having uncertainty below 70%. For one patient, two additional simulations were run, assigning the cartilage and the cortical bone tissues to the HU range of the CTV, keeping the original density. This would emphasize the importance of properly assigning the correct tissue (elemental composition) to the HU ranges. Comparison with Acuros calculations Comparison of the PRIMO computed results was performed with Acuros calculations, as implemented in Eclipse (version 13.6). Acuros explicitly solves the linear Boltzmann transport equation, while Monte Carlo methods (such as PENELOPE in PRIMO) generate a stochastic solution by simulating a large finite number of particles. In principle, the two methods should lead to the same solution. However, non-negligible approximations are used in radiotherapy planning practice. One of the most crucial is the material composition and assignment to predefined HU ranges, which is not modifiable in Acuros. This prevented calculations in settings similar to the above-described Monte Carlo simulations (AdiMus, Adi, Mus). Nonetheless, to evaluate the dose difference generated by the elemental composition of tissues estimated by Acuros, dose calculations were performed also with AAA (Anisotropic Analytical Algorithm) implemented in Eclipse. The two algorithms used the same machine configuration data, and are based on the same concepts of the beam source model [19].
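The dose-difference quantity described above is not written out explicitly in the text; one way to formalize it, for the lobular fraction (the fat fraction is analogous, with the Mus simulation in place of Adi), is

\Delta D_{\mathrm{lob}}\,(\%) \;=\; \frac{\bar{D}^{\,\mathrm{Adi}}_{\mathrm{CTV\_lob}} \,-\, \bar{D}^{\,\mathrm{AdiMus}}_{\mathrm{CTV\_lob}}}{\bar{D}^{\,\mathrm{AdiMus}}_{\mathrm{CTV}}} \times 100,

where \bar{D} denotes the mean dose of the indicated structure in the indicated simulation, and the denominator is the CTV mean dose from the AdiMus simulation, i.e. the standard planning and prescription condition referred to in connection with Table 2.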
AAA does not take into account the specific tissue composition, and inhomogeneities are managed by rescaling the density according to HU, with no differentiation in the energy deposition for different materials (no medium differentiation). The differences arising in Acuros due to the chemical composition of the tissues were evaluated through the differences of the mean doses in CTV_lob and CTV_fat between Acuros and AAA calculations, once the two plans were renormalized to the same mean dose to the CTV. This is clearly a very crude approximation to isolate the medium composition effect on the calculated dose. HU in lobular and fat breast portions The analysed patients presented a mean HU of -14 ± 10 and -103 ± 3 in the lobular and fat portions of the CTV, respectively. The standard deviations of the HU distributions inside CTV_lob and CTV_fat were 26 ± 2 and 21 ± 9, respectively. Of note are the quite stable HU values in the lobular and fat portions of the breast among patients. In Fig. 2 the average (over the analysed patients) HU histograms are presented (Fig. 2: average histograms over all the patients of the HU distributions of CTV_lob and CTV_fat), where the two peaks are well separated, although an overlap is present, due most probably to the inaccuracy of the structure contours (CTV_lob was defined as the CTV voxels with HU larger than -59). Monte Carlo simulations A cumulative dose-volume histogram example for one of the selected patients is presented in Fig. 3. Here, the CTV, CTV_lob and CTV_fat are presented for the AdiMus, Adi, and Mus simulations. As expected, the AdiMus and Adi simulations estimated the same dose distributions in CTV_fat, while in CTV_lob this happens for the AdiMus and Mus simulations. Table 2 reports the percentage dose differences between the mean dose of the specific CTV portions of the test simulation and the CTV mean dose from the AdiMus simulations. The AdiMus CTV mean dose can be considered the standard condition for planning and dose prescription. The reported errors are the average statistical uncertainties in each specific structure, at 2 standard deviations, propagated over all the patients. The possible dose overestimation in the lobular breast region, relative to the prescribed dose, when adipose tissue is assigned there, is 1.25 ± 0.45% (considering the difference of the mean doses from the AdiMus and Adi simulations in the lobular fraction). Conversely, the possible dose underestimation in the fat region of the breast if muscle tissue is assigned is 1.14 ± 0.51% (the difference of the mean doses from the AdiMus and Mus simulations in the fat fraction). In the case of the cartilage and bone assignments, a dose underestimation of 0.6% and 2.8%, respectively, was evaluated in the lobular fraction, and of 1.8% and 4.1% in the fat fraction. All those differences are generated solely by the difference in elemental composition of the tissues, since the specific density of each voxel is allocated from the HU value. The gamma evaluation analysis is summarized in Fig. 4, where the percentage of points fulfilling the criteria is shown for CTV_lob and CTV_fat comparing the AdiMus vs. Adi and AdiMus vs. Mus simulations, respectively. From those graphs, a large amount of the structure volume is shown not to fulfil the criteria below a dose difference compatible with the difference estimated just above, between 1 and 1.5%. The computed gamma evaluation presented an agreement for DTA = 2.5 mm and a delta dose of 0.5% exceeding 90-95% of the CTV_lob and CTV_fat volumes for the AdiMus vs. Mus and AdiMus vs.
Adi comparisons, respectively (that is, between the simulations with muscle in CTV_lob, and adipose in CTV_fat, not shown in Fig. 4). This is consistent with the average uncertainty of the simulations, around 1% at two standard deviations. Acuros calculations Concerning the clinical use of tissue differentiation in Acuros, the results showed a dose overestimation by AAA (where no chemical composition is taken into account) in the lobular portion of the breast of 0.98 ± 0.06%, and an underestimation of 0.21 ± 0.14% in the fat portion. Interesting to note is the better homogeneity between doses in the lobular and fat regions of the CTV found for the Acuros-calculated plans, while the AAA recalculation presented an overdose to the lobular region of about 1%. The reason for the increased homogeneity in the Acuros-calculated plan resides in the optimization process, which used the Acuros calculation as the intermediate dose to refine the optimization and improve the target dose homogeneity. If the optimization process uses a less accurate dose calculation algorithm for the intermediate dose estimation (AAA), in these specific cases of breast planning, the lobular portion of the breast will be underdosed by about 1%. Discussion In this work, we analysed the dosimetric aspects of whole breast irradiation arising from the special anatomy of the mammary gland, composed of two different tissues, the lobular and the fat connective tissue. From the Monte Carlo data, there is a dose difference of more than 1% coming only from the chemical composition of the two different components. Such a difference most probably is not clinically significant, and is well within the accuracy required of dose calculation systems. However, this systematic effect might produce an underdosage of this magnitude in the lobular fraction of the breast, which is indeed the core of the mammary gland. The works of Vicini et al. [3] and the more recent one of Mak et al. [8] reported a significant correlation of the radiation effects, in terms of acute skin toxicity and long-term breast pain, with the breast volume receiving more than 105% or 110% of the prescription dose, whatever the dose fraction size. This correlation points to the need to deliver a homogeneous dose in the breast, and in this frame a difference of 1-1.5% in the dose homogeneity could be of interest. However, the dose distributions calculated in the mentioned studies were affected by some systematic error due to the lack of knowledge of the tissue composition and related energy deposition, since none of those studies used such advanced calculation algorithms. A more accurate estimation of the dose distribution in the breast compartments could help the understanding of the correlation between toxicity and dose homogeneity. The investigation of the dose effect of different breast compartments was already reported in 2011 [20]. In this study, the plans were optimized with an inverse planning process, using intermediate dose calculations performed with the Acuros algorithm. This allowed a better homogeneity of the dose distribution inside the whole breast according to the same dose calculation algorithm. Since Acuros calculations are more accurate than AAA in inhomogeneity management, also thanks to the inclusion of the medium composition, the use of advanced calculations leads to more refined knowledge of the dose distribution, possibly improving the radiation treatment by modulating the dose according to the clinical effects on toxicity or outcome.
In the current work, we started from a pure Monte Carlo simulation, which is generally considered the gold standard for dose estimation. However, true Monte Carlo calculations are today not easily available in routine clinical practice, owing to the excessively long calculation time. A problem that cannot be solved even with Monte Carlo simulations is the approximation of the chemical composition and the relative fractions of the different atomic components of human tissues. The human body is considered to be composed of only six different media: air, lung, adipose, muscle, cartilage and bone, assuming that tissue presenting HU within a certain range (from a CT dataset, that is, a result of absorption) has exactly a defined proportion of chemical components, as published for example in ICRP Publication 89 [10]. This approximation obviously does not fully reflect the real anatomy and, as a consequence, the dose estimation is affected by it, even when using the gold standard. An attempt to mitigate this issue was implemented in Acuros, using overlapping HU ranges between two adjacent tissues. On the one hand, this feature prevents a pure dose calculation comparison between full Monte Carlo and Acuros. On the other hand, it probably better reflects the small differences among human tissues, although keeping all the approximations and uncertainties. In the specific case of the breast, ICRP Publication 89 reported a difference in the carbon and oxygen fractions between breast tissue (as a whole) and fat tissue, with the former tending to be more similar to muscle tissue. However, the lobular fraction falls within the muscle medium in the HU ranges used for the calculations, while it is not exactly muscle, and its specific chemical composition might be different. These considerations on human tissue compositions bring us to one of the limitations of the current work. We analysed only the small variations within the breast tissue and their dosimetric consequences, i.e. the interface between adipose and muscle densities and compositions. What would be important to evaluate is the calculation accuracy, or perhaps the understanding of the human tissue composition, at the other, more complex interfaces: air to lung, and cartilage to bone. For those two couples of tissues the distinction is much more complex, and more detailed studies in the specific anatomies would be advisable. Conclusion A dose deposition difference between the lobular and connective fat fractions of the breast tissue was estimated with Monte Carlo simulations and Acuros calculations. Although not clinically significant, such a difference leads to an improved knowledge of the possible dose distribution and homogeneity in breast radiation treatment. Abbreviations AAA: anisotropic analytical algorithm; Adi: simulation with adipose assignment in both adipose and muscle CT number ranges; AdiMus: simulation with adipose and muscle assignments in adipose and muscle CT number ranges; CT: computed tomography; CTV: clinical target volume; CTV_fat: connective fat fraction of CTV; CTV_lob: lobular fraction of CTV; DTA: distance to agreement; HU: Hounsfield Unit; Mus: simulation with muscle assignment in both adipose and muscle CT number ranges; VMAT: volumetric modulated arc therapy Availability of data and materials Data supporting the findings of this work are available within the article.
5,373.2
2018-05-15T00:00:00.000
[ "Medicine", "Physics" ]
Mapping suitability for open-loop ground source heat pump systems: a screening tool for England and Wales, UK The UK Government expects that, by 2020, 12% of the UK’s heat demand will come from renewable sources, and is providing incentives to help achieve this. Open-loop ground source heat pumps (GSHP) could make a substantial contribution. A web-based screening tool has been developed that highlights areas where conditions may be suitable for installing commercial-scale (>100 kW heating or cooling demand) open-loop GSHP systems in England and Wales. In addition to the basic requirements for open-loop GSHP (i.e. the availability of a sufficiently productive aquifer within a reasonable depth beneath the surface) the tool provides information on existing abstractions, water chemistry and the location of protected areas. Validation and tool application show that it produces reliable results and provides an effective method for the initial assessment of subsurface conditions and suitability for GSHP installations. Hence, the tool can help to reduce uncertainty at the early planning stage, and also to promote GSHP technology to a variety of audiences. Ground source heat pump (GSHP) systems exchange heat with the subsurface to provide space heating or cooling. Groundwaterbased open-loop systems exchange heat directly with groundwater and can be more efficient than closed-loop systems owing to the water generally maintaining a constant temperature, whereas in closed-loop systems the ground is affected by heat extraction or injection. They could make a substantial contribution to meeting the UK's heating or cooling demands while reducing CO 2 emissions, but this depends on overcoming obstacles to GSHP uptake. Two of these obstacles are the lack of public awareness of GSHP technology (Enviros Consulting Limited 2008;Roy & Caird 2013) and the higher uncertainty (compared with conventional heating or cooling systems) regarding the economic viability of a planned scheme owing to unknown (hydro)geological conditions at the installation site. To address these issues, the British Geological Survey (BGS) (with support from the Environment Agency (EA) and advisors from the GSHP industry) is developing methods for identifying favourable (hydro)geological conditions for the installation of GSHP systems at the local administration or regional scale. Developed in a geographic information system (GIS), the results are made available as simple-to-use, web-based tools intended for use in first-pass assessments of the potential of a given locality for GSHP installation and/or for use in resource assessments. This paper presents the development of the open-loop GSHP screening tool for England and Wales, which maps hydrogeological and economic factors relevant for groundwater-based open-loop GSHP installations. Data sources The screening tool has been developed for England and Wales at a scale of 1:500000 and is freely available on the BGS website (http://www.bgs.ac.uk/research/energy/geothermal/ gshp.html). It is based on national datasets available from the collaborators in this study or sourced under an Open Government licence from Natural England and Natural Resources Wales. Some layers, such as the protected area map, were derived by combining existing maps and reattributing them to fit the purpose of this tool. The bedrock aquifer map and the underlying data layers have been specifically created as part of this project, based on the evaluation and mapping of aquifer productivity at the national scale. 
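The peak-load and flow-rate figures used in the assumptions listed in the next section follow from a simple energy balance on the abstracted groundwater, Q = q / (ρ · c_p · ΔT). The sketch below is a back-of-the-envelope check assuming standard water properties; it is illustrative only and not part of the published tool.

```python
# Back-of-the-envelope check of the flow-rate assumption for a >=100 kW scheme.
# Assumed water properties (not from the paper): rho ~ 1000 kg/m3, cp ~ 4186 J/(kg K).
RHO = 1000.0   # kg m-3
CP = 4186.0    # J kg-1 K-1

def required_flow_l_per_s(peak_load_kw: float, delta_t_k: float) -> float:
    """Water flow needed to carry peak_load_kw at a temperature differential delta_t_k."""
    q_w = peak_load_kw * 1e3                   # W
    flow_m3_s = q_w / (RHO * CP * delta_t_k)   # m3 s-1
    return flow_m3_s * 1e3                     # l s-1

for dt in (5.0, 10.0):   # the 5-10 K range of assumption (3)
    print(f"100 kW at dT = {dt:.0f} K -> {required_flow_l_per_s(100, dt):.1f} l/s")
# roughly 4.8 l/s at 5 K and 2.4 l/s at 10 K, i.e. the 2-5 l/s range of assumption (4)
```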
(The term 'bedrock' is used by BGS to refer to deposits of approximately Pliocene age and older. It includes unconsolidated sediments such as Palaeogene sands and the Crag, which is Pliocene to Pleistocene in age.) The data layers are briefly described below. A more detailed description of the tool and the underlying mapping method has been given by Abesser (2012). Simplifications and assumptions The tool was developed based on the following assumptions. (1) The tool is used for the initial screening of subsurface conditions for schemes with peak heating or cooling loads of 100 kW or more. (2) The area that is evaluated is 0.25 km2 or larger (i.e. appropriate to the scale at which the tool and maps were developed). (3) The temperature differential ∆T (K) resulting from the heat exchange lies between 5 K (which is a typical value for many heat pump systems; Banks 2012) and 10 K (the maximum ∆T recommended by the Environment Agency for discharge to groundwater; Environment Agency 2011). (4) The minimum required water flow rates Q (l s −1 ) for schemes with peak loads q ≥ 100 kW are 2-5 l s −1 . (5) Installation of multiple abstraction wells is viable for open-loop GSHP schemes >100 kW to achieve the required operational yields. (6) All aquifers with estimated yields of >1 l s −1 are considered a suitable groundwater source for open-loop GSHP schemes with peak loads ≥100 kW, as more than one borehole could be utilized. Bedrock aquifer map The primary requirement for groundwater-based open-loop systems is the availability of a suitable aquifer that can yield the required volume of water and instantaneous flow rate. This layer is illustrated in Figure 1. It shows the areas where suitable bedrock aquifers are present at the surface (at outcrop or beneath superficial deposits) (the term 'superficial deposits' is used by BGS to refer to Quaternary deposits; that is, Pleistocene age and younger) or at depth (i.e.
concealed by younger bedrock formations that are generally, but not always, less permeable) and classifies these according to their potential to provide the following levels of productivity (yields): no suitable aquifer (yield <1 l s −1 ), moderate aquifer at outcrop (yield 1-6 l s −1 ), good aquifer at outcrop (yield >6 l s −1 ), concealed aquifer at depth (yield >1 l s −1 ) or combinations of an aquifer at outcrop and a concealed aquifer at depth. Examples of aquifers included in the yield categories 1-6 l s −1 and >6 l s −1 are shown in Table 1. This layer includes only the main hydrogeological units ( Table 2) that form important concealed aquifers at depth, with the maximum depths to which these formations are considered to form aquifers. These are 400 m for the Chalk, Lower Greensand and Sherwood Sandstone and 150-200 m for the remaining formations (UKTAG 2011). However, in the overall suitability assessment (as described in the section 'Implementation of the thematic layers in the web-based screening tool') aquifers beneath 300 m are considered to be 'less suitable', as high costs associated with borehole drilling and completion as well as possible water quality problems would probably render a GSHP installation at depths >300 m uneconomic. The tool does not include aquifers that potentially provide deeper geothermal resources; for example, the Sherwood Sandstone Group of Humberside and the Hampshire Basin. Depth to source map Drilling, completion and pumping costs are important considerations when assessing the viability (and economics) of open-loop GSHP installations. This layer estimates the drilling depth required to reach the uppermost (i.e. nearest the surface) potential aquifer. This does not necessarily coincide with the depth to the potentiometric surface, but in some areas represents the thickness of superficial deposits or less permeable rock formations that overlie the aquifer. Depth values are grouped into eight categories ranging from (1) <50 m to (8) Protected areas map A number of protection zones are defined in England and Wales to protect groundwater sources or to preserve wildlife, geology or landscape. GSHP schemes located within a protection zone may require additional permissions and/or planning consents from the authorities managing the protection. This layer (Fig. 3) shows the distribution of protection zones in England and Wales. It combines GIS datasets from the EA, Natural Resources Wales and Natural England into eight categories covering the various possible combinations of Source Protection Zone (SPZ), Site of Special Scientific Interest (SSSI) and National Park. Groundwater quality data (point data) Open-loop GSHP systems exchange heat directly with the groundwater and hence they are susceptible to problems induced by poor groundwater quality. The principal concerns are corrosion and scaling or fouling. This dataset provides empirical indices and concentration thresholds that estimate (1) the tendency of the groundwater to deposit or dissolve calcium carbonate (Langelier saturation index, LSI; Ryznar stability index, RSI) (Rafferty 1999), and (2) the corrosiveness of the groundwater (Larson-Skold corrosive index, LSCI) (Larson & Skold 1958). These indices are often used conjunctively and interpreted according to the guidelines in Table 3. They were calculated using in situ groundwater temperatures and, hence, represent the temperature at which the water would be delivered from the borehole. 
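For readers unfamiliar with the indices mentioned above, the sketch below shows how the Langelier, Ryznar and Larson-Skold indices are typically computed. It is a minimal illustration, not the tool's code: the saturation pH (pHs) is taken as a known input rather than derived from temperature, TDS, hardness and alkalinity, and the example sample values are invented; interpretation in the tool follows the guidelines in Table 3.

```python
def langelier_si(ph: float, ph_s: float) -> float:
    """LSI = pH - pHs; > 0 suggests a scaling tendency, < 0 a dissolving/corrosive one."""
    return ph - ph_s

def ryznar_si(ph: float, ph_s: float) -> float:
    """RSI = 2*pHs - pH; values around 6-7 are roughly neutral, higher values corrosive."""
    return 2.0 * ph_s - ph

def larson_skold(cl_meq: float, so4_meq: float, alk_meq: float) -> float:
    """Ratio of aggressive anions (Cl-, SO4 2-) to carbonate alkalinity, all in meq/l."""
    return (cl_meq + so4_meq) / alk_meq

# hypothetical groundwater sample
ph, ph_s = 7.2, 7.5
print("LSI :", langelier_si(ph, ph_s))        # -0.3 -> slightly undersaturated
print("RSI :", ryznar_si(ph, ph_s))           #  7.8
print("LSCI:", larson_skold(1.2, 0.8, 4.0))   #  0.5
```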
This dataset also includes concentrations of dissolved iron (Fe), hence indicating the predisposition of the water for iron (hydr)oxide precipitation (encrustation). Data are grouped into waters with dissolved Fe concentrations less than or more than 500 µg l -1 to indicate a low or high tendency for iron (hydr)oxide formation. Existing licensed abstractions (point data) In most countries, the operation of open-loop GSHP systems and the associated groundwater abstraction and reinjection is regulated under water resources legislation. In England and Wales, groundwater abstraction is regulated by the Environment Agency and Natural Resources Wales, respectively, and any abstraction over 20 m 3 day −1 (equivalent to a continuous rate of 0.23 l s −1 ) requires a licence. This dataset comprises point data (single abstraction licences) and is derived from the EA's National Abstraction Licensing Database (NALD). The dataset shows the maximum daily licensed quantity (as of 12 August 2011) that is permitted to be abstracted by the licence from one or several sources, and covers all aquifers. It is included to provide an indication of the rates and volumes that can be abstracted within the area of interest (from one or more boreholes) but also highlights areas where large abstractions exist and, hence, where water availability may be limited, reducing the likelihood of a permit being issued. Implementation of the thematic layers in the web-based screening tool All thematic layers were developed in ArcGIS 9.3.1 and integrated into a WebGIS viewer to create the web-based screening tool. The function of the web viewer is to provide the screening map interface through which the underlying thematic layers can be explored (without allowing direct access to the data). The screening map (Fig. 4) is derived from the bedrock aquifer map (see above) and the depth to source map (see above). It shows areas that are 'favourable' or 'less favourable' for the installation of open-loop GSHP systems (>100 kW). Areas are considered 'favourable' where one (or more) productive bedrock aquifer (i.e. with borehole yields ≥1 l s -1 ) is present within 300 m of the ground (topographic) surface. In some areas, aquifers are present at depths of more than 300 m, but these are shown as 'less favourable' in this tool as the high costs associated with drilling, borehole installation and possibly pumping and poor water quality would render a GSHP installation probably uneconomic. Furthermore, aquifers generally become less productive with increasing depth compared with those nearer the surface. Clicking on the map opens a table that displays details of the underlying data layers and allows access to the thematic maps (Fig. 5). Information on groundwater chemistry and existing licensed abstraction volumes in the vicinity are shown in the table (where they exist) but these cannot be accessed directly owing to restrictions relating to data confidentiality and security. Instead, the table displays all data values (up to a maximum of 10) that occur within a search radius of 600 m around the chosen location. These can refer to sampling points or abstractions from different aquifers and depths, and can include multiple boreholes forming part of the same abstraction licence. Testing and application of the screening tool The performance of the screening tool was tested by applying the tool to locations where commercial-scale GSHP systems are known to be operational. 
For each location, predictions of the overall viability were obtained, and predicted minimum depth ranges were compared with existing data from aquifer tests and borehole records. The list of existing licence abstractions (as returned by the tool for each location) was also checked to ensure that the GSHP abstraction was itself shown by the tool. A total number of 99 locations were tested and all found to lie within areas mapped as 'favourable' by the tool. However, two of these schemes are known to have experienced thermal interference between the boreholes. The resulting reduction in efficiency caused the operation of these schemes to become unsustainable and led to their abandonment after only a few months of operation. Such problems of 'thermal interference' are, however, due to issues with borehole spacing, scheme size and management (Younger 2014), rather than due to the fundamental unsuitability of the (hydro)geology. Information on the depth to water and/or depth to aquifer was available for 73 locations and generally compares well with the depth ranges estimated by the tool. Only 72 (out of 99) of the locations were identified as licensed abstractions by the tool. Abstractions at the remaining 27 locations may have been licensed after the tool was created or, in a few cases, the schemes abstract from superficial deposits (which are not considered in this tool). The screening map layer is available for downloading (in web map services format) on the BGS website and can easily be incorporated into existing GIS projects. As well as being used for point assessments (e.g. to assess suitability at single map locations), the tool can also be applied in regional-scale (administrative-scale) assessments. In England, for example, local authorities need to quantify the naturally available renewable energy resource within their geographical boundary (SQWenergy 2010). The utility of the tool to support such regional or area resource assessments has been tested for a pilot area, the West Midlands (13000 km 2 ). This study estimated that about 56% of the area is suitable for open-loop installations with a capacity of 100 kW or more. For England and Wales as a whole, the estimate is higher, with 67% of the total area being mapped as favourable. This estimate is based on a minimum yield requirement of 1 l s −1 (assumption (6)). Assuming a minimum yield requirement of 6 l s −1 reduces the estimated favourable area to 52% for the West Midlands and to 57% for England and Wales. Discussion and conclusions The GSHP screening tool has been developed at the 1:250000 scale for use at the 1:500000 scale. This provides a 500 m ground resolution, which is similar to that of other regional-or national-scale tools (Bezelgues et al. 2010). The scale was selected to reflect the purpose of the tool (i.e. to be used as a screening tool, not for site-specific assessments) and the reliability of the underlying data. The tool maps the most relevant hydrogeological and economic requirements for GSHP installation, namely the presence of a sufficiently productive aquifer within a reasonable depth beneath the surface. As such, it identifies areas where it is worth carrying out more detailed site-specific investigations to prove the hydrogeological and economic viability of a scheme at the early planning stage. 
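The sensitivity of the regional estimates above to the minimum-yield assumption amounts to counting grid cells that pass the screening criteria. A minimal sketch over a hypothetical raster is shown below; the array names, cell values and grid size are invented for illustration and do not correspond to the tool's actual datasets or results.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical 100 x 100 raster of estimated borehole yields (l/s) and depths to aquifer (m)
yield_l_s = rng.gamma(shape=2.0, scale=2.0, size=(100, 100))
depth_m = rng.uniform(0.0, 500.0, size=(100, 100))

def favourable_fraction(min_yield_l_s: float, max_depth_m: float = 300.0) -> float:
    """Fraction of cells with a sufficiently productive aquifer within max_depth_m."""
    mask = (yield_l_s >= min_yield_l_s) & (depth_m <= max_depth_m)
    return mask.mean()

for q in (1.0, 6.0):   # the two yield thresholds discussed in the text
    print(f"min yield {q:.0f} l/s -> favourable fraction {favourable_fraction(q):.0%}")
```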
Although reducing the uncertainty associated with unknown subsurface conditions, the tool does not provide definitive answers at the site scale and cannot replace more detailed desk studies or sitespecific investigations (Banks 2011). A limitation of the tool is that it considers only the major hydrogeological units to form useable aquifers at depth (Table 2). Therefore some areas underlain by concealed aquifers at relatively shallow depths are excluded (e.g. Permian sandstones below Aylesbeare Mudstone in SW England) even though, locally, they can provide an important resource. This is because the subsurface extent of these formations is not generally known and hence their distribution has not been mapped. A more detailed presentation of the subsurface geology (including geological volumes and units) is currently being developed as part of a 3D national geological model (NGM) (British Geological Survey 2014). This will provide the necessary data required for more detailed mapping of concealed aquifers. The tool does not consider superficial deposits, even though, at some locations, they can form moderately productive aquifers (O'Dochartaigh et al. 2011) and have potential for supporting medium-to large-scale groundwater heating and cooling applications (Birks et al. 2013). However, the inherent heterogeneity of superficial deposits means that their properties as aquifers (e.g. permeability, thickness and lateral extent) can change significantly over short distances even within the same lithological unit. Maps of superficial deposits and their thicknesses are available but these tend to be classified by their mode of origin (e.g. 'Glacial Deposits', 'River Terrace Deposits' or 'Blown Sand') rather than lithology. Permeability within these classes can vary hugely (Bricker & Bloomfield 2014;MacDonald et al. 2012), and their productivity will also depend on the lithology of the deposit, areal extent (which is often small) and saturated thickness, making it difficult to distinguish between deposits that form aquifers and those that do not. There are also currently no superficial deposits maps available at the scale of 1:250000. Validation of the tool against locations of existing open-loop GSHP schemes and borehole data shows that the tool produces reliable results. It also highlights two important issues: (1) the fact that the tool cannot address the sustainability of a scheme (which depends on spacing, mode of operation and load balance); (2) the need for regular updates of the thematic data layers. Sustainability is not explicitly addressed within this tool, and the tool uses yield rather than specific capacity data. However, proximity to, and hence the risk of interference from, existing abstractions can be inferred from the abstraction licence dataset, which shows existing abstractions near the location of interest. Intergranular aquifers generally have smaller zones of influence and hence interference effects are likely to be more limited, but in fractured aquifers, local fractures may provide pathways for rapid groundwater flow between boreholes, diminishing the sustainability of this and/or neighbouring schemes (Gropius 2010). This is more likely to be a problem in aquifers that are utilized excessively by a large number of users and, naturally, have a higher risk of thermal and hydraulic interference (Ferguson & Woodbury 2006;Fry 2009). 
Thermal interference from closed-loop systems can, theoretically, also affect the performance of open-loop systems but is unlikely to be a problem unless the closedloop scheme is very large. It should be noted that closed-loop schemes are currently unregulated in England and Wales. The need for updating applies in particular to the abstraction licence data used within this tool. This is a dynamic dataset with new licences being added constantly and expired licences being removed. Considering the expected rise in uptake of open-loop GSHP technology in the UK, it is important that this dataset is kept up-to-date to ensure that the tool remains relevant for users. The tool provides information on the economic viability and deployment constraints for open-loop GSHP installations. A recent review of the regional assessments of renewable energy capacity in England concluded that such information is required to improve existing resource assessments and to support the development of local (and regional) energy plans (Stoddart & Turley 2012). Hence, it can be expected that this tool will play an important role in future assessments and target setting for shallow geothermal resources in England (and Wales). Developed for use at the 1:500000 scale, the tool does not give definite answers at the site scale and cannot replace more detailed sitespecific investigations. Even so, it does provide (1) a valuable instrument for the initial assessment of the suitability of an area, (2) data and information relevant for (regional and local) renewable resource assessments and (3) a tool to communicate where suitable subsurface conditions for the installation of open-loop GSHP systems may exist. As such, the tool can reduce uncertainty at the early planning stage and also help to promote GSHP technology to a variety of audiences.
4,841.2
2014-09-15T00:00:00.000
[ "Engineering" ]
The entire lifetime of a distinct double-diffusive staircase in crater Lake Nyos, Cameroon Lake Nyos, a deep crater lake located in the north-west of Cameroon, was permanently stratified below 50 m depth due to subaquatic sources supplying warm, salty and CO2-enriched water into the deepest reaches. The high CO2 content in these source waters caused the 1986 limnic eruption. The deep inflowing water is denser than the hypolimnetic water and maintains the stability of the water column, which is double-diffusively stratified. During the dry season in Feb 2002, cooling triggered the formation of a double-diffusive (DD) staircase, a sequence of homogeneously mixed layers separated by distinct stable interfaces. The initiation of the staircase was slightly below the permanent chemocline at ~ 50 m depth, from where the staircase expanded vertically in a diffusion-type manner for ~ 750 days to a maximal vertical extension of ~ 37 m. The staircase pattern caused the upward heat fluxes to increase, which depleted the driving temperature gradient. Subsequently, the density ratio increased and reduced the upward heat flux divergence until DD progressively weakened and finally the staircase structure eroded. Based on 39 CTD profiles, we describe the DD phenomenon, explain the three distinct phases of this unique DD event, which lasted for ~ 850 days, and discuss the vertical extension of the DD zone in relation to the rates of new layer formation and layer decay. To our knowledge, this is the only observation over the entire lifespan—“from birth to death”—of a DD event in a natural water body. In the early 2000s, Lake Nyos was double-diffusively stratified and developed a staircase of up to 27 layer-interface pairs. Double-diffusive layering went through three phases (build-up, steadiness, and decay) and was active for ~ 850 days. Upward heat flux divergence drove the formation of new layers, which was in balance with layer decay for more than one year. Introduction The present publication is written in honour of recently retired Professor Peter Davies. The first author remembers, with great joy, gratitude and high respect, his enthusiasm for the Gerhard Jirka EFM Summer School [22], in which the first author had the good fortune to be involved (Fig. 1). Peter was fascinated by our contributions on small-scale observations from the natural environment of lakes. For this reason, we decided to focus in this publication on a unique example of lake double diffusion (DD). The specific novelty of this contribution is the documentation of the entire lifespan, from birth to death, of a DD staircase in a natural water body. The location of this unique event is Lake Nyos, the well-known 209 m deep Cameroonian crater lake. Lake Nyos gained notoriety for its limnic carbon dioxide (CO2) eruption in 1986, causing a human tragedy [3,5]. The origin of the high CO2 abundance is the deep subaquatic sources, supplying warm, salty and CO2-enriched water into the hypolimnion. At the time of our observations in the early 2000s, the lake was permanently density-stratified below 50 m depth with temperature (T), salinity (S) and CO2 concentrations increasing with depth (Fig. 2).
As all three of these constituents affect the water density, Lake Nyos featured an exceptional density structure. While the fast-diffusing T destabilised the density profile, the slow-diffusing S and CO2 had a stabilizing effect (Fig. 2). This setting of largely different molecular diffusivities can lead to localised density instabilities known as double diffusion. Therefore, Lake Nyos is a particularly unusual example of the family of DD-type stratifications. Double-diffusive stratification is generally rare in lakes, because usually T decreases and S increases with depth, leading to classical diffusively stable water columns. However, prominent exceptions of DD-developing stratification were found above former salt depositions, such as in Lake Vanda [9,10] and several other lakes in the Antarctic Dry Valley [34,35], or over volcanic ground, such as in Lake Kivu [18,33]. Another set of DD-type lakes can be found on the Norwegian and Canadian west coasts, where former estuarine fjords were transformed to lakes after the last ice age and still hold ancient saltwater in their deepest realms. Powell Lake in British Columbia is the most prominent representative of this category [24,36]. (Fig. 2: Embedding of the DD zone (grey, 53 to 95 m depth) within the vertical stratification and the degassing of Lake Nyos. Left: the contributions of temperature T (red), salinity S (blue) and CO2 (black) to the density profile; the peculiarity of this DD stratification, besides the compensating effects of temperature and salinity, is the strong contribution by CO2. Right: self-siphoning pipe flow and subsequent subsidence of ~ 0.48 cm day−1.) With three and four density-relevant water constituents, there are examples of lakes showing triple-diffusion [23], such as the lake in the present study, and quadruple-diffusion, such as Lake Kivu [33]. More DD-type lake examples are reviewed in Wüest et al. [42]. The second type of DD, the so-called finger regime [28], is even rarer in lakes. The constellation of saltier and warmer water layers residing on top of fresher and cooler layers is not a setting favoured by geochemical and geothermal processes. However, evaporation in salt lakes may cause such vertical profile structures, for example in the Dead Sea, where during summer an especially saline and warm surface water forms, which triggers DD salt fingering through the pycnocline [1]. DD is a widely researched fluid dynamics phenomenon occurring in oceans and lakes [11,21]. The most important feature of DD is the enhancement of the vertical transport of water constituents, relative to molecular diffusion, by transforming gradual density gradients into staircases of convectively mixed layers separated by thin stable interfaces. The mechanical energy for this process stems from the extraction of potential energy from the stratification. Such staircases can develop when two agents that diffuse at different molecular rates contribute in opposing ways to the stability of the vertical density profile [14,33,38,39]. The fact that temperature was destabilizing, whereas S and CO2 were stabilizing the density profile (Fig. 2), made Lake Nyos a probable candidate for the development of a DD staircase of a triple-diffusive type. From early laboratory experiments [37], we know that the density ratio R_ρ (stabilising density gradients divided by destabilizing density gradients; see definition in Eq.
2 below) and the interface T step are the two dominant parameters for generation and maintenance of staircase structures and the heat fluxes [13], which drive the convective turbulence in the mixed layers. The most detailed analysis, over thousands of layer-interface pairs in Lake Kivu, confirmed the so-called 4/3 flux law for the low range of R ρ but deviated for larger values of R ρ [30]. The close agreement with the Kelley [13] parameterisation was also found in the earlier report from Lake Nyos, where it was possible to quantify the heat fluxes from the DD layering as well as from the heat budget based on measured T profiles [26]. Important for the present study are the observations made on the layer-interface pairs of Lake Kivu, that the most frequently observed R ρ values were at ~ 4, while R ρ > 8 was extremely rare [30]. This finding was confirmed with corresponding Direct Numerical Simulations [32]. In Powell Lake, three DD-active depth sections were found with R values ranging from 1.6 to 6 within the layering zones and larger values outside the layers [24]. All these lake observations are consistent with typical ranges of R ρ in natural DD environments [14]. It will be interesting to analyse how R ρ restricted the dynamic development of the DD staircase in Lake Nyos. Almost all reports on DD staircases in natural waters stem from stratifications that are in a quasi-steady equilibrium. The boundary conditions, such as background stratification and its supporting fluxes in and out of the DD staircase region are usually long-term sustained and subsequently, the driving forces and the stimulated convection are balanced and vary only slowly with time. In this publication, we focus on the limited duration of an unsustained (one-time "run-down") stratification, where a DD staircase could develop temporarily until the boundary conditions for the zone of the mixed layers had changed to a degree that the staircase pattern could not be maintained. The energy source for forming mixed layers and enhancing the vertical fluxes through the staircase stems from potential energy, which is delivered by the warm subaquatic water inflow. The fact, that the DD event had limited duration, means practically that the extraction of potential energy due to DD was too large in relation to the potential energy supply from the subaquatic sources. We describe the initiation and dynamic development of the staircase and discuss the formation and dissolution of layers. Of special interest for other natural systems are the conditions at the time when the staircase collapsed, raising the question why the DD process did not adjust to a sustainable steady state functioning. The complete start-to-end observation of this unique DD-event was pure luck. The dates of the fieldwork were originally chosen for the analysis of the degassing project [7,8] and the coincidence with the entire lifetime of the simultaneous DD-event was discovered as a by-product. Lake Nyos Lake Nyos is world-renowned for its limnic eruption in Aug 1986, when it released a large cloud of carbon dioxide (CO 2 ), asphyxiating ~ 1700 people [5,15]. Most convincing is the explanation that CO 2 of volcanic origin had continuously accumulated to saturation concentrations. It was suddenly released by an unknown trigger (such as baroclinic displacement or rockfall), which led to local supersaturation, from where subsequent bubble formation invaded the entire lake. 
Lake Nyos is a small crater lake situated on 1091 m above sea level in the north-west of Cameroon, with a maximum depth of 209 m, a surface area of 1.58 km 2 and a volume of 0.15 km 3 . At the time of the presented measurements (2002)(2003)(2004)(2005), the water column consisted of three major layers, separated by two chemoclines at ~ 50 and ~ 170 m depth (Fig. 2). The usual seasonal mixing and stratification was restricted to the top 50 m layer (Fig. 2), whereas the hypolimnetic layers below were permanently stratified. In those deep waters, T, S and CO 2 increased with depth ( Fig. 2), which led to double-diffusively stable density stratification. The DD zone, which is the focus of this publication, evolved in the range between 53 and 95 m depth. The fact that CO 2 already accumulated substantially since the 1986 eruption [16], led to the decision to degas the lake by using self-syphoning over deep vertical pipes ( Fig. 2; [7,8,27]) to avoid future disasters. As the intake was close to the deepest location and the degassed lake water was sprinkled onto the surface, the profiles of the DD zone were only partly affected by the degassing operation (Fig. 2). Over the 1200 days of data collection, the DD staircase zone was drawn down by ~ 6 m ( Fig. SI-1). Measurements The data for this publication stem from a mooring from Nov 2001 to Dec 2002 and 39 CTD profiles, of which some are of high vertical resolution. On the mooring, T was measured every 10 or 15 min at 0, 10,20,41,62,103,144,175,185,195 and 200 m depth with VEMCO minilogs, as well as a SeaBird SBE-39 and a RBR TR-1050 for references. The CTD profiles were collected with a Sea-Bird SBE-19 on eight dates (Table 1) over 1200 days between 19 Jul 2002 (first profile, day no. 166) and 27 Oct 2005 (last profile, day no. 1362). The three slowest recorded CTD profiles had a vertical resolution of < 4 cm. Given the short period considered, the CO 2 profile from Kusakabe et al. [16] was used for the density calculation. This approximation led to negligible additional errors in the water column stability N 2 (z) and the density ratio R ρ , which are both defined below (Eqs. 4 and 5). Data analysis In Lake Nyos, the density profile depends on T [°C], S [g kg −1 ], and CO 2 [mmol L −1 ]. As the lake ionic composition is different to oceanic water, the meaning of salinity in this publication is different from the standard oceanic definition. Here, equivalent to ocean water, salinity S [g kg −1 ] stands for the total concentration of dissolved solids including non-ionic silica. Therefore, the conversion to density, using the haline contraction coefficient, will also be slightly different from ocean water. To determine S, the T-dependent in-situ measured conductivity C T [μS cm −1 ] was transformed first to the T-independent conductivity C 25 [μS cm −1 ] at T = 25 °C. The C T to C 25 conversion depends on the specific ionic composition, taken from Evans et al. [3]. According to the procedure provided in Wüest et al. [40], we can use the following polynomial transformation for the Lake Nyos ionic composition: where T is the in-situ temperature in °C. Again, for this specific ionic composition, the relation between C 25 and S can be approximated by [25] which includes all major ions, such as the charged HCO 3 − , and CO 3 −2 , and the non-ionic silica, but not H 2 CO 3 , the uncharged dissolved aqueous form of CO 2 . 
For one profile (Jul 2003), the conductivity cell was not working properly and therefore the S-gradients have been used from the two neighbouring profiles but the absolute S is not reported. The concentration of H 2 CO 3 for Dec 2002 was calculated from alkalinity and pH. The dissociation constants K 1 (T) and K 2 (T), were calculated conditional for T and the resulting activity of H 2 CO 3 was corrected with the activity coefficient for the ionic strength to result in H 2 CO 3 concentration [26]. The concentrations of H 2 CO 3 in the considered DD zone, from 53 to 95 m depth were approximated, assuming that the relation between dissolved CO 2 and conductivity had not changed. Finally, the water density was calculated as a function of T and the two dissolved substances S and H 2 CO 3 using: For the T-dependent ρ(T) [kg m −3 ] we followed Chen and Millero [2]. The haline contraction coefficient S = 0.760·10 -3 kg g −1 was calculated according to Wüest et al. [40] and for H 2 CO 3 , CO2 = 1.25 10 -5 L mmol −1 = 0.284 10 -3 L g −1 [19] was used. Although H2CO3 would be the logic symbol for the corresponding density coefficient, we use the established nomenclature CO2 [25,30]. The water column stability N 2 follows the usual definition where g = 9.81 m s −2 and z is the depth, positive upward. The density ratio R ρ [−], expressing the quotient of the stability of the slow-diffusing water constituents divided by the instability of the fast-diffusing T, we estimated by defining: (1) where α and Γ are the thermal expansivity and the adiabatic gradient, respectively. As shown below, the considered CTD profiles developed staircase structures typical for DD stratification. The staircase consists of a series of homogeneously mixed layers separated by stable interfaces. To define the boundaries between layers and interfaces, we plotted the profiles and identified the upper and lower limits by eye. For this manual procedure, the uncertainty of the respective thicknesses is typically one scan of the CTD profile or a few cm in absolute distance. Observations On eight occasions, from 19 Jul 2002 to 27 Oct 2005, 39 CTD profiles were collected over the full depth range of Lake Nyos. Figure 3 provides an overview of the evolution of T and S over the entire 1200 days of observation (day no 166 to day no 1362; Table 1). The surface layer, which was well-mixed during the dry seasons and never dipped below 55 m depth, is not discussed any further, as our focus is solely on the development of the upper hypolimnetic stratification. Figure 3 reveals that the changes in the hypolimnion were tiny overall. Given the low mechanical energy input to this wind-protected hole-like water body and given the strong Fig. 3). Based on the meteorological forcing, we assume that the rate of vertical expansion of the DD zone was similar during the previous few weeks, and conclude that DD layering started on 3 Feb 2002 at ~ 53 m depth. Realistically, these estimates have uncertainties of a few days and up to 1 m in the vertical. We ignore both errors in view of the dimension and duration of the following vertical expansion of the DD zone over almost three years. In the following analysis (Table 1), we define this date as time zero and 53 m depth as the upper bound of the DD zone. Expansion of the DD staircase Following the manual procedure presented in Sect. 2.3, we identified all upper and lower boundaries of the mixed layers from all eight CTD profiles. 
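To make the quantities used above concrete, the sketch below computes density, water column stability N² and density ratio R_ρ from a CTD profile, assuming the standard forms used in the lake double-diffusion literature (a multiplicative density correction for dissolved substances and a gradient-ratio definition of R_ρ). The contraction coefficients are the values given in the text; α, Γ, the linearised ρ(T) and the two-point example profile are assumptions for illustration only.

```python
import numpy as np

# Assumed functional forms (standard in the lake DD literature):
#   rho(T,S,CO2) = rho_T(T) * (1 + beta_S*S + beta_CO2*CO2)
#   N^2          = -(g/rho) * d rho/dz                      (z positive upward)
#   R_rho        = (beta_S*dS/dz + beta_CO2*dCO2/dz) / (alpha*(dT/dz - Gamma))
G        = 9.81       # m s-2
BETA_S   = 0.760e-3   # per g/kg, haline contraction coefficient (from the text)
BETA_CO2 = 1.25e-5    # per mmol/L, CO2 contraction coefficient (from the text)
ALPHA    = 2.3e-4     # K-1, thermal expansivity near 23 degC (assumed value)
GAMMA    = 1.0e-5     # K m-1, adiabatic gradient, nearly negligible here (assumed)

def rho_of_t(t_degc):
    # Linearised stand-in for the Chen & Millero equation of state (assumption).
    return 997.5 * (1.0 - ALPHA * (t_degc - 23.0))

def staircase_diagnostics(z, t, s, co2):
    """z in m (positive upward), t in degC, s in g/kg, co2 in mmol/L."""
    rho = rho_of_t(t) * (1.0 + BETA_S * s + BETA_CO2 * co2)
    dz = np.diff(z)
    n2 = -(G / rho[:-1]) * np.diff(rho) / dz
    r_rho = ((BETA_S * np.diff(s) / dz + BETA_CO2 * np.diff(co2) / dz)
             / (ALPHA * (np.diff(t) / dz - GAMMA)))
    return rho, n2, r_rho

# two-point toy profile across the DD zone (values invented but of plausible magnitude)
z   = np.array([-95.0, -53.0])
t   = np.array([23.4, 22.9])
s   = np.array([0.9, 0.6])
co2 = np.array([74.0, 60.0])
rho, n2, r_rho = staircase_diagnostics(z, t, s, co2)
print(n2, r_rho)   # ~7e-5 s-2 and ~3.5, i.e. the orders of magnitude discussed in the text
```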
Because of the 70-times larger molecular diffusivity of T compared to S, we can expect that the mixed layers show edges that are more distinct in the S profile, as shown in detail in Sommer et al. [31]. For some profiles, when the conductivity cell was not working optimally, we used the T profiles to identify layers and interfaces. Figure 5 shows four examples out of the eight profiles with some layers indicated. The complete data sets of the manual analysis are provided in the Supporting Information Tables SI-1 (Table 1; Fig. 5c). In the following, we call this period of active expansion of the DD zone the quasi-steady phase. With this term, we express the steady character of this period, consisting of continuous and balanced generation and degeneration of layers, as compared to the unsteady phases of build-up at the beginning and decay at the end of the DD event. After the maximal extension between Mar and Aug 2004, the number of layers decayed, the identifiability of the layers increasingly worsened and the DD zone shrank quickly. In Mar and Oct 2005, only 4 and 2, respectively, poorlydefined layers remained (Table 1; Fig. 5d). Evolution of the staircase characteristics The characteristic staircase properties evolved over time differently during the three phases of DD layering (Fig. 6). The average layer thickness H L increased from ~ 0.5 to ~ 1 m during the active quasi-steady phase and subsequently expanded quickly while the staircase decayed (Table 1; Fig. 6b). Interface thicknesses were only estimable during the quasi-steady phase. No trend could be detected and the average thickness was ~ 25 cm ( Table 1). The temperature gradient, which is the key prerequisite for the driving force of DD convection, showed the strongest trend (Fig. 6c). After the initiation of DD (Feb. 2002), T∕ z dropped drastically followed by a gradual decline from the first to the last CTD profile, when T∕ z decreased continuously by a factor of ~ 4 (Fig. 6c), while the DD zone expanded downwards into deeper regions of lower stratification and lower T-gradients. At the end of the DD event, T∕ z taken over the entire DD zone dropped to the level of T∕ z at 95 m depth (Fig. 6c), demonstrating the loss of driving force for DD. Similarly, as the stratification decreased towards greater depth, the water column stability N 2 dropped from the uppermost to the deepest layers by an order of magnitude over the DD zone (Tables SI-2 to SI-5; Supporting Information). For the susceptibility of DD layering, the density ratio R is the most important parameter. The initiation of DD in 53 m depth occurred when R was in the range of ~ 2 [26,36]. The accuracy is unknown, as R was not directly measured at this particular moment. Tables SI-1 to SI-8 show that the variations of R values between different parts in the staircase did not exhibit strong trends. Although, the values cover a range of up to 7 over the entire DD event, the histogram for the individual layer-interface pairs demonstrates that ~ 50% of the R values ranged between 2 and 4 (Fig. 6d). The values of R , estimated over the full DD zones, varied only in the quite narrow range of 3 to 4 among the different profiles (Table 1). In addition, there is a consistent vertical structure with larger R values near the upper bounds and lower R near the lower bounds of the DD zones (Tables SI-2 to SI-6). Consistent with the cooling of the DD zones (Figs. 
3 and 6a), the upward heat fluxes into the DD zones were typically a factor ~ 5 lower than the fluxes out of the DD zones (Tables SI-2 to SI-6; Fig. SI-3, Supporting Information). The divergence of the heat flux, with ~ 0.1 W m −2 in-flux and ~ 0.5 W m −2 out-flux, was largest at the beginning of the DD layering. As the DD zone moved further down to deeper regions with weaker T-gradients, the divergence decreased, but out-fluxes were always larger than in-fluxes. Therefore, the heat flux was not only driving convective mixing in the staircase layers, it simultaneously cooled the DD zone as well. Discussion In this section, we discuss the evolution of the staircase characteristics along the three phases of the DD event. We relate the developments to the energetics for layer formation and the boundary conditions during expansion and collapse of the DD zone. Of special interest is the self-destructing character of the unsustained temperature stratification. Adequacy of the one-dimensional approach The presented CTD profiles were all taken near the deepest location, close to the center of the lake. Although the subaquatic water inflow of ~ 18 L s −1 is not well known [3,25], the water residence time in the hypolimnion is of the order of hundred years, whereas horizontal spreading in such a small lake takes only days to weeks [6]. On 21 Mar 2004, we collected several profiles in the lake, including some at the vertical sidewall, which confirmed that the DD layering extended nearest to the wall, indicating that boundary mixing had no effect on the layer-interface pairs. For the following discussion, we therefore adhere to only one-dimensional considerations. DD zone subduction by degassing The T-S profiles changed overall only little during the DD event (Fig. 3). As shown by the T-S diagram in Fig. SI-1 (Supporting Information), the observed modifications were (i) due to downwelling caused by the degassing operation (Fig. 2) and (ii) due to DD-induced vertical fluxes). Whereas the decrease in S were almost entirely due to downwelling, the T changes were due to both downwelling and DD-induced upward heat fluxes (Fig. SI-1). The downwelling at depth z is given by the pipe flow divided by the cross-sectional area at z. Over the entire period of the 1200 days, the chemocline subduction, at the level S = 0.5 g kg −1 , was ~ 6 m, corresponding to an average pipe flow of ~ 45 L s −1 , which is realistic for the early degassing phase [7,8,27]. During the period of the quasi-steady expansion of the DD zone (Dec 2002 to Mar 2004), the subduction was ~ 2.25 m, which fits well to the observation that the uppermost (first) mixed layer migrated downwards from 52.9 to 55.0 m depth ( Table 1). We conclude that the T-S profiles in the DD zone have been vertically dislocated by subduction, but the vertical structure within the DD zone was only slightly stretched. As the cross-sectional areas of the lake were 0.81 and 0.64 km 2 in 53 and at 95 m depth, respectively, the downdraught were only 20% different. In other words, the DD zone was dislocated, yet with only 1% internal distortion over the 40 m vertical layer. Divergence of the upward heat fluxes The upward heat flux through the stable stratification is the key driver for staircase formation. While S and CO 2 stabilize the water column, only the heat flux could cause local density instabilities and provide the necessary buoyancy flux to mix layers. 
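The subduction argument above can be checked with simple arithmetic: the downwelling velocity is the pipe discharge divided by the lake's cross-sectional area at the depth of interest. The sketch below uses only the numbers quoted in the text (45 L s−1, 0.81 and 0.64 km², 1200 days) and is merely a consistency check, not part of the original analysis.

```python
# Consistency check of the degassing-induced subsidence quoted in the text.
PIPE_FLOW_M3_S = 45e-3    # ~45 l/s average discharge through the degassing pipe
AREA_53M_M2 = 0.81e6      # cross-sectional area of the lake at 53 m depth
AREA_95M_M2 = 0.64e6      # ... and at 95 m depth
SECONDS_PER_DAY = 86400.0

for name, area in (("53 m", AREA_53M_M2), ("95 m", AREA_95M_M2)):
    w = PIPE_FLOW_M3_S / area   # downwelling velocity, m/s
    print(f"{name}: {w * SECONDS_PER_DAY * 100:.2f} cm/day, "
          f"{w * SECONDS_PER_DAY * 1200:.1f} m over 1200 days")
# at 53 m depth this gives ~0.48 cm/day and ~5.8 m over 1200 days,
# consistent with the ~6 m chemocline subduction reported above
```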
It is not possible to accurately estimate the upward heat fluxes, as the temperature changes due to the subsidence (cooling) interfere with the changes by DD (cooling first and warming later). As those T changes are of the same magnitudes, the heat balance remains uncertain (Fig. SI-3; Supporting Information). Therefore, we use earlier results to estimate the DD-induced heat fluxes, relaying on empirical relations [13,17,37], which have been confirmed by observational data from Lake Nyos [26] and Lake Kivu [30]. Although these estimates cannot be precise because of the continuous transformation of the DD-layering, these empirical heat fluxes agree well with observations for Lake Nyos conditions with R from 2 to 7 [30]. We approximate the heat flux F h within the DD zone using the formulations by [13] as well as by Turner [37] and Linden and Shirtcliffe [17] where C p is the heat capacity of water, ΔT is half of the temperature step between neighbouring layers, H is the layer thickness, including half of the interfaces, and D T and are the thermal diffusivity and the kinematic viscosity, respectively. The resulting heat flux values in Tables SI-2 to SI-5 are shown in Fig. SI-3 together with heat budget estimates over the entire quasi-steady period (Dec 2002 to Mar 2004) for the lower region of the DD zone. Although these estimates are not directly comparable, as they mirror different The same comparison at the lower bound of the DD zone indicates an even larger flux enhancement by DD. We can conclude that DD increased the upward heat flux by approximately one order of magnitude. The difference between the heat flux into the DD zone at the lower bound (~ 0.1 W m −2 ) and the flux out at the upper end (~ 0.5 W m −2 ) led to a typical heat divergence of ~ 0.01 W m −3 (average over 53 to 95 m depth). This heat extraction corresponded to a cooling of ~ 0.08 K yr −1 , which is orders of magnitudes stronger than the changes below the DD zone. This cooling is well visible in the T-S diagram of Fig. SI-1, where the T decrease during the quasi-steady phase was even slightly larger in the deepest region of the DD zone. Heat flux induced turbulence A heat flux of ~ 0.5 W m −2 generates a buoyancy flux of ~ 2.7 × 10 -10 W kg −1 . This value is much higher than typical turbulent dissipation rates in the interior of hypolimnia of even wind-exposed lakes [4,29,41]. The comparison with Lake Kivu, the undoubted marvel of lake DD with more than 300 layers in its deep staircase [30,33], reveals that the buoyancy flux in Lake Nyos is about 5-times stronger, despite the large surface and corresponding long wind fetch of Lake Kivu. Given the topographically well-protected small surface of Lake Nyos and given its deep hole-like structure, the mechanical excitations (currents or baroclinic oscillations) in the hypolimnion were negligible and background turbulence had no effect on the water column. This was confirmed in Mar 2004 by CTD profiles taken directly at the rock wall, which showed identical layering as in the open water and even boundary mixing had obviously no effect on the DD layering. An alternative indication for the level of turbulence in a water column of stability N 2 is offered by quantifying the activity of turbulent mixing. In order to sustain active mixing, the turbulent dissipation has to exceed νN 2 by at least an order of magnitude [12]. 
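The buoyancy flux quoted above follows from the standard conversion J_b = g·α·F_h/(ρ·c_p), and the active-mixing criterion just stated compares the available dissipation with roughly 10·ν·N². The sketch below reproduces the orders of magnitude used in this section; α and ν are assumed representative values for water near 23 °C and are not taken from the paper.

```python
G = 9.81          # m s-2
ALPHA = 2.3e-4    # K-1, thermal expansivity near 23 degC (assumed)
RHO_CP = 4.18e6   # J m-3 K-1, volumetric heat capacity of water
NU = 1.0e-6       # m2 s-1, kinematic viscosity (assumed)

F_H = 0.5         # W m-2, upward heat flux at the top of the DD zone (from the text)
N2 = 1.0e-3       # s-2, stability near the upper bound of the DD zone (from the text)

buoyancy_flux = G * ALPHA * F_H / RHO_CP   # W kg-1
mixing_threshold = 10.0 * NU * N2          # W kg-1, rough criterion for active mixing

print(f"buoyancy flux    ~ {buoyancy_flux:.1e} W/kg")    # ~2.7e-10 W/kg
print(f"mixing threshold ~ {mixing_threshold:.1e} W/kg")  # ~1e-8 W/kg
# the DD-driven buoyancy flux sits about two orders of magnitude below the
# threshold, consistent with negligible background turbulence in the hypolimnion
```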
At the upper bound of the DD zone, with stabilities of N² ≈ 10−3 s−2 (Tables SI-1 to SI-8), active mixing would therefore require a dissipation of at least 1.5 × 10−8 W kg−1. The above-mentioned buoyancy flux of ~ 2.7 × 10−10 W kg−1 is thus two orders of magnitude away from active turbulence. However, before DD enhanced the vertical heat fluxes, the energetics in the water column was even lower than during the DD event. Therefore, we can safely assume that before the onset of DD in Feb 2002, the vertical heat flux was at a molecular level only. Range of density ratios For the establishment of staircase structures, the relative stability of the water column, expressed by the density ratio R_ρ, is the critical parameter. The susceptibility to DD convection rises with decreasing R_ρ. For the smallest value of R_ρ = 1, the water column stability would drop by definition to N² = 0. Such a configuration would not be stable, and the lowest possible R_ρ for DD layering is R_ρ > 1. When DD was initiated in Feb 2002, R_ρ was ~ 2, which is realistic, as even smaller values had been observed in natural waters [20]. For large R_ρ, the temperature gradient becomes too small and the heat flux too weak to drive the convection required for mixed-layer formation [37]. Interestingly, the distribution of R_ρ was rather stable during the quasi-steady expansion (Tables 1 and SI-2 to SI-5) and clustered close to R_ρ ≈ 3 (Fig. 6d). There was, however, a vertical structure with up to 50% higher values at the upper bound compared to lower values near the lower bound of the DD zone (Tables SI-2 to SI-5). This is consistent with the observation that layers disappeared near the upper end and new layers formed at the lower end, which led to a downward migration of the DD zone. According to Tables SI-2 to SI-5, R_ρ values varied around 2 to 3 at the depth of new layer formation, which agrees with the model by Toffolon et al. [36], where R_ρ values dropped after new layer formation and oscillated around 2 (Fig. 10c in Toffolon et al. [36]). After the event reached its maximal vertical expansion, R_ρ increased in the lower zone and led to the collapse of the DD event. Rates of new layer formation and layer decay During the quasi-steady expansion of the DD zone, the number of observed layers remained constant, within an uncertainty of about 1 to 2, at 27 ± 1. As the layers found in those four profiles did not keep their identity from one profile to the next, the constancy of the number implies that for every newly formed layer at the base of the staircase, a layer must have eroded or two layers must have merged. The new layers formed during the quasi-steady phase at the lower end of the DD zone with average layer and interface thicknesses of ~ 80 and ~ 35 cm, respectively (Table 1; Fig. 6b). Given the vertical expansion of ~ 3.5 cm day−1 during this phase (Table 1), it takes ~ 32 days to form a new layer-interface pair of 1.15 m thickness (80 + 35 cm). We can conclude that during the quasi-steady expansion, the rates of layer generation and layer dissipation were identical at ~ 1 per month. This observation is consistent with the heat flux divergence ∂F_h/∂z presented above within the T-stratification at the lower bound of the DD zone. To cool a layer of thickness H_L within a T-gradient ∂T/∂z to a homogeneous temperature, a heat content (per unit area) of 1/2 · ρ · C_p · H_L² · ∂T/∂z needs to be extracted, following purely geometrical arguments.
This amount of heat can be set equal to the cooling effect of the flux divergence ∂F_h/∂z acting for a duration τ_F over the layer H_L. The time scale τ_F to generate a new layer of thickness H_L is then given by τ_F = C_p · H_L · (∂T/∂z) / (2 · ∂F_h/∂z). For typical values at the lower DD zone (Tables SI-2 to SI-5), τ_F evaluates to roughly one month, consistent with the observed expansion (Table 1), during which it took on average ~ 32 days to form a new layer-interface pair of 1.15 m thickness (see above). Although the exact numerical agreement is coincidental, the correspondence itself is a logical consequence of the geometrical setting. New layer formation is in competition with layer decay due to diffusive smoothing by D_T. The diffusive decay time τ_D for a layer of thickness H_L is given by H_L² = 2 · D_T · τ_D. For an average H_L ≈ 0.8 m, the expected decay time was τ_D ≈ 25 days. This comparison confirms that during the quasi-steady expansion the two time scales (τ_F, τ_D) were almost equal, and layer merging / decay were in balance with new layer formation. We can expect that layer merging contributed to the observed increase of the average layer thickness (Fig. 6b). Collapse of the DD layering Interestingly, the vertical widening of the DD zone followed a diffusion-type expansion (Fig. 6b) with an apparent diffusivity of ~ 1.0 × 10−5 m² s−1. This implies that the expansion was faster at the beginning and slowed down over time. We explain this observation by the fact that the stronger temperature gradients and the higher heat fluxes in the shallower reaches of the DD zone led to larger flux divergences, which caused faster cooling of the DD zone. This stronger heat export in turn led to faster generation of new layers and a more rapid expansion. As the DD zone expanded deeper towards weaker T-gradients (Table 1; Fig. 6c) and weaker heat fluxes (Tables SI-2 to SI-5), the resulting flux divergence decreased. In addition, below 95 m depth, the R stratification was less favourable for DD staircase formation [26]. In combination, a critical point was reached when the time scale for new layer formation, τ_F (Eq. 8), exceeded the time scale of decay, τ_D. By then, the intensity of DD forcing was weakened, layers eroded faster than new ones were formed, and the DD event eventually ended as the layering collapsed after ~ 850 days (Fig. 6b). A key feature of this type of DD is the efficient upward removal of heat while dissolved substances remain in the deep reaches. This is the consequence of the DD flux laws for heat and for salt, which express that the T-related density flux is larger than the opposing S-related density flux [17,37]. For typical density ratios R, as observed in Lake Nyos, the positive heat-driven buoyancy flux is about 7-times stronger than the opposing negative salt-induced buoyancy flux [37]. In effect, this discrepancy of the vertical fluxes of heat and salt intensifies density stratification and increases the water column stability. In Lake Kivu, this phenomenon [33] allows the accumulation of methane and CO2 to extremely high concentrations. The same can be expected for CO2, which was intruding into Lake Nyos until oversaturation was reached in the past. The practical consequence of DD is a "fortunate" conservation of methane energy in Lake Kivu, but it led to a human disaster in Lake Nyos. Considering that after such a DD event ended in the past, heat could have accumulated again in the deep waters, another cycle of DD layering could follow. What we have observed may therefore not be a unique phenomenon for Lake Nyos, but it was unique that we had the fortune to observe it by chance.
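For a quick numerical check of the two competing time scales discussed above, the formation time driven by the heat-flux divergence (Eq. 8) and the diffusive decay time, the sketch below evaluates both. The volumetric heat capacity, diffusivity, temperature gradient and flux divergence used here are hypothetical order-of-magnitude placeholders, not the tabulated values from this study.

```python
SECONDS_PER_DAY = 86400.0
RHO_CP = 4.18e6    # volumetric heat capacity of water [J m^-3 K^-1] (assumed)
D_T    = 1.4e-7    # molecular thermal diffusivity [m^2 s^-1] (assumed)

def tau_formation(H_L, dT_dz, dF_dz):
    """Time to cool a layer of thickness H_L, embedded in a gradient dT/dz,
    to homogeneity by a heat-flux divergence dF/dz (Eq. 8 of the text)."""
    return RHO_CP * H_L * dT_dz / (2.0 * dF_dz)

def tau_decay(H_L):
    """Diffusive smoothing time of a layer, from H_L^2 = 2 * D_T * tau_D."""
    return H_L**2 / (2.0 * D_T)

# Hypothetical lower-DD-zone values (order of magnitude only)
tf = tau_formation(H_L=0.8, dT_dz=0.02, dF_dz=0.01) / SECONDS_PER_DAY
td = tau_decay(H_L=0.8) / SECONDS_PER_DAY
print(f"tau_F ~ {tf:.0f} days, tau_D ~ {td:.0f} days")
```

With these placeholder values both time scales come out at a few tens of days, consistent with the ~1 per month rates of layer formation and decay reported above.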
Conclusions By analysing 39 fine-scale CTD profiles and temperature records from moored thermistors, both collected in Lake Nyos over a period of four years, we have been able to observe a complete lifecycle of a double-diffusive (DD) staircase. To our knowledge, this is the first report of the entire lifetime of a DD event in a natural water body, which lasted ~ 850 ± 50 days. Based on the data analysis we conclude as follows: (1) The DD lifetime consisted of (i) a build-up period of less than 300 days, during which up to 27 layer-interface pairs were formed, (ii) a quasi-steady period of ~ 450 days of vertical expansion of the DD zone, during which the number of pairs remained constant, and finally (iii) a decay phase of a few dozen days when the molecular diffusion of heat eroded the DD layering. (2) The DD zone was initiated at a depth of 53 m in Feb 2002 and expanded to a maximal vertical extension down to ~ 95 m depth after ~ 850 days. The DD zone widened vertically proportionally to (time)^1/2, indicating a diffusive type of expansion with an "apparent diffusivity" of 1.0 × 10−5 m² s−1, orders of magnitude larger than molecular diffusivities. During this expansion, the layers grew up to ~ 1.3 m thick, while the DD zone cooled off. (3) During the build-up phase, the new layer generation rate exceeded the decay rate and the number of layers increased rapidly. During the period of quasi-steady expansion, the number of layers and interfaces remained constant, indicating that the rate of new layer generation was equal to the rate of layer decay. Differences between subsequent CTD profiles indicated that the rates of formation and decay were ~ 1 per month. New layers formed at the lower bound of the DD zone, which continuously expanded downward. Embedded in the vertical profiles of increasing temperature and salinity with depth, the new layers were warmer and saltier. (4) The rates of new layer formation and decay can be explained by the vertical divergence ∂F_h/∂z of the upward heat flux F_h, which was found in good agreement with the DD flux laws of Kelley [13]. The formation time scale was proportional to the temperature gradient ∂T/∂z and inversely proportional to the divergence ∂F_h/∂z. This relationship is consistent with the observation that during the build-up phase the time scale for generation was shorter than the decay time by diffusion. DD enhanced the vertical heat fluxes and led to a strong divergence in the lower DD zone. Subsequently, the generation time scale shortened at the lower bound of the DD zone and the DD-enhanced heat flux sustained the downward expansion. (5) With the expansion of the DD zone, the vertical gradient ∂T/∂z decreased and the density ratio R increased. The decreasing divergence ∂F_h/∂z reduced the buoyancy flux for convective mixing in the layers. As a result, the time scale for new layer generation exceeded the decay time scale. In consequence, the staircase structure gradually eroded and within a few weeks DD layering could not be recognized anymore. (6) Whereas the initiation of the DD event and the subsequent staircase formation could be identified to within less than one week of uncertainty, the end of the lifetime after ~ 850 ± 50 days was not well defined, as the distinct well-mixed homogeneous layers and sharp interfaces faded away over several months.
(7) As an overall effect, the DD-enhanced heat fluxes removed heat out of the DD zone into the layer above and thereby weakened the temperature-induced convective instability and killed the DD event itself.
9,149
2022-07-18T00:00:00.000
[ "Environmental Science", "Geology" ]
An Alkylphenol Mix Promotes Seminoma Derived Cell Proliferation through an ERalpha36-Mediated Mechanism Long chain alkylphenols are man-made compounds still present in industrial and agricultural processes. Their main use is domestic and they are widespread in household products, cleansers and cosmetics, leading to a global environmental and human contamination. These molecules are known to exert estrogen-like activities through binding to classical estrogen receptors. In vitro, they can also interact with the G-protein coupled estrogen receptor. Testicular germ cell tumor etiology and progression are proposed to be stimulated by lifelong estrogeno-mimetic exposure. We studied the transduction signaling pathways through which an alkyphenol mixture triggers testicular cancer cell proliferation in vitro and in vivo. Proliferation assays were monitored after exposure to a realistic mixture of 4-tert-octylphenol and 4-nonylphenol of either TCam-2 seminoma derived cells, NT2/D1 embryonal carcinoma cells or testis tumor in xenografted nude mice. Specific pharmacological inhibitors and gene-silencing strategies were used in TCam-2 cells in order to demonstrate that the alkylphenol mix triggers CREB-phosphorylation through a rapid, ERα36-PI3kinase non genomic pathway. Microarray analysis of the mixture target genes revealed that this pathway can modulate the expression of the DNA-methyltransferase-3 (Dnmt3) gene family which is involved in DNA methylation control. Our results highlight a key role for ERα36 in alkylphenol non genomic signaling in testicular germ cell tumors. Hence, ERα36-dependent control of the epigenetic status opens the way for the understanding of the link between endocrine disruptor exposure and the burden of hormone sensitive cancers. Introduction Over the last 50 years, the incidence of male reproductive disorders such as hypospadias, cryptorchidism, hypofertility and testis cancer has dramatically risen. For instance, testicular germ cell tumor (TGCT) has become the leading cause of cancer in men aged 15 to 45 years from industrialized countries. Among malignant tumors of the testis, 95% are testicular germ cell tumors, which are classified into two main categories based upon histologic, molecular and epigenetic traits: seminoma and nonseminoma [1]. Both derive from a common precursor cell type called carcinoma in situ (CIS) [2] which is believed to originate from misdifferentiated primordial germ cells or gonocytes in response to altered hormone signaling [3]. CIS cells appear during fetal life and then enter a period of dormancy in infancy until after puberty when TGCT emerge [4]. This prepubertal dormancy suggests a hormone sensitive mechanism for TGCT development and tumor progression at puberty. A wide range of published data dealing with TGCT geographic incidence variation, epidemiological studies performed on migrant men, and exposure model analyses in vivo and in vitro strongly suggest the participation of endocrine disrupting compounds (EDCs) in both initiation and progression of testis cancer. In occidentalized countries, this hypothesis also emerge to explain the burden of testis cancer associated pathologies such as cryptorchidism, hypospadias, hypofertility, as well as increased incidence of other hormone sensitive cancer (breast, prostate, ovary, endometrium) [5][6][7][8]. 
Among the great diversity of compounds potentially able to alter hormone signaling, plasticizers are of great concern because of (i) the ubiquitous and persistent environmental and human contamination, (ii) their presence in everyday life used cosmetics, food, drinking water, home cleansers and (iii) their ability to trigger estrogenic signaling [9]. These molecules such as bisphenol A (BPA), 4-nonylphenol (NP) or 4-tert-octylphenol (OP) belong to the alkylphenol family and are still used in various industrialized processes. Once released in the environment, they become persistent pollutants that are poorly eliminated by liver detoxification enzymes in mammals and can enter cells, especially in body fat due to their lipophilic properties [10,11]. BPA and NP are also able to cross the seminiferous tubules basal lamina, alter the testis-blood barrier and trigger differentiating germ cell sloughing and apoptosis by disrupting Sertoli/germ cell attachment and communication [12]. Moreover, this class of EDCs appears to promote the development and progression of estrogen-dependent cancers [13]. BPA was also reported to promote mitogenic effect in JKT-1 seminoma derived cells [14,15]. Binding experiments indicate that alkyphenols could mimic estrogen mitogenic signaling since BPA, NP and OP display a relative binding affinity to17b-estradiol (E 2 ) in the range of 0.1% for the nuclear receptor ERa66 and 50% for the transmembrane g-protein coupled estrogen receptor (GPER) [9,16]. Besides, nothing is known about alkylphenol binding to ERa36, a novel 36 kDa NH2-term truncated form of the canonical human estrogen receptor ERa66, retaining the DNA-binding, partial dimerisation and an altered ligand-binding domains [17]. ERa36 was previously shown by us and others to mediate estrogen non conventional mitogenic signaling in TCam-2 seminoma cells, triple-negative breast cancers cells and endometrial cancer cells [18][19][20]. Depending on the cell lines tested, ERa36 acts as a membrane located ER, sometimes collaborates with either GPER or EGFR, and triggers several kinase-dependent pathways, such as MAPK, STAT5 or PKA [20][21][22][23]. In an attempt to understand the mechanisms underlying the deleterious effects of EDCs on neoplastic germ cells, we aimed to decipher the alkylphenol-dependent transduction pathways in TCam-2 cells, a unique seminoma cell line [24]. The test compounds, 4-tert-octylphenol (4-t-OP) and 4-nonylphenol (4-NP) were mixed based on their realistic concentration ratio (1:30) in food from Raecker and colleagues [25]. The resulting mix was called M4 and used at concentrations that mimic human environmental exposure. First, we show that M4 increases the TCam-2 seminoma cell proliferation rate in vitro by triggering the stimulation of ERa36 dependent mitogenic pathways. Second, we confirm the M4 stimulates NT2/D1 embryonal carcinoma cell proliferation in vitro as well as tumor growth in NT2/D1 xenografted nude mice. Finally, we demonstrate that alkylphenol signaling pathway ends on target genes involved in epigenetic modifications. Ethic Statement Animals used in the present research have been treated humanely according to institutional guidelines (Directive EU/ 63/2010), with due consideration to the alleviation of distress and discomfort. 
Protocol for animal handling and experiments was approved by the ''Lorraine Regional Committee for Animal Experiments'' and carried out by competent and authorized persons (personal authorization number 54-89 issued by the Department of Vetenary Services) in a registered establishment (establishment number C-54-547-03 issued by the Department of Veterinary Services). Animal experiment was planned in respect to the 3R rule: minimal number of mice necessary and sufficient to reach statistical significance was calculated a priori. For each animal, several organs (testis, liver, tumors, blood.) were harvested at the end of treatment for further analysis. Animals were housed in cages sized and filled with appropriate litter respectfully to European ethic guidelines, with free access to tap water and food ad libidum, in filtered atmosphere to avoid pathogen contamination. Males were reared in individual cages to avoid fight stress and aggressiveness. Animals were housed for 3 weeks before the beginning of any experiment. Tumor grafts were performed rapidly under sodium pentobarbital anesthesia in a warm separate room. Tumor grafts, radiotherapy and resection were performed under general anesthesia All s.c. injections and animal handling were performed by the same technician in a separate room and all efforts were made to minimize suffering. At the end of treatment, mice were anesthetized with 8 mg/kg xylazine and 90 mg/kg ketamine injection, blood was collected by cardiac puncture and animals sacrificed by cervical dislocation. Nude Mouse Xenograft Model Pathogen-free, 5-7 week-old male athymic NMRI-nu (nu/nu) mice were purchased from Janvier Laboratories (Le-Genest-St-Isle, France). Animals were housed in solid-bottomed plastic individual cages with free access to tap water and standard food ad libidum. Primary tumors were obtained after intra-testicular injection of 1610 7 NT2/D1 cells. Six weeks later, tumors reached the ethic volume (0.5 cm 3 ). Primary tumors were harvested and cut into 1 mm 3 pieces in order to be grafted sub-cutaneously in Nude male mice. Six males bearing bilateral NT2/D1 grafts were daily inoculated s.c. with either vehicle, or M4 (1.0 mg/kg bw or 10 mg/kg bw), 5 days per week. Nude mice were pre-treated for 2 weeks before graft, tumor pieces were grafted and treatment was applied for 4 additional weeks ( Figure S1). The low dose was relevant to daily children intake (0.8 mg/kg bw/day) [25]. The tumor-take rate ranged from 95-100%. Mice weight ( Figure S1) and tumor volume were monitored twice a week by caliper measurement of the length, width, and height and were calculated using the formula V = D*d 2 /2. At the end of treatment, tumors were removed, weighed, and fixed in 4% formalin for histological and immunohistochemical characterization ( Figure S1). Cell Culture TCam-2 and NT2/D1 cells were respectively maintained in RPMI1640 (GIBCO) and DMEM/F12 (1:1, GIBCO) supplemented with 10% fetal calf serum (FCS, Invitrogen) and 2 mM Lglutamine and cultured in a 5% CO 2 containing atmosphere at 37uC. Briefly, cells were plated in 10% FCS containing medium for 24 h and then starved for 24 h in 0.5% charcoal-stripped FCScontaining medium without phenol red. Treatments were performed on 0.5% charcoal-stripped FCS cultured cells, plated at a density of 8610 4 cells per well in 6-well plates. In case of inhibitor use, the corresponding compound was added to the medium 30 minutes before M4 or E 2 treatment. 
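As a side note on the xenograft monitoring described above, the caliper formula V = D*d^2/2 used for tumor volume can be illustrated with a minimal sketch; the measurements in the example are hypothetical, not data from this study.

```python
def tumor_volume(length_mm: float, width_mm: float) -> float:
    """Approximate tumor volume from caliper measurements using V = D * d^2 / 2,
    with D the longest diameter and d the perpendicular width (result in mm^3)."""
    return length_mm * width_mm ** 2 / 2.0

# Hypothetical caliper readings for one subcutaneous graft
print(tumor_volume(length_mm=10.0, width_mm=7.0))  # ~245 mm^3
```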
Cell Proliferation Assay Cells were seeded in 96-well plates at a density of 1×10^3 cells/well, in 0.2 ml medium supplemented with 10% FCS and 2 mM L-glutamine. They were washed with PBS (GIBCO) once they had attached and then incubated in phenol red-free medium containing 0.5% charcoal-stripped FCS for 24 h. Cells were then submitted to the indicated treatments for 48 h. At the end of the treatment, cells were counted by using an inverted microscope. Each treatment was replicated six times. For proliferation assays, the DMSO dilution retained was the same as that of the M4 dose having the strongest effect (1 nM M4 corresponding to 10^−5 % DMSO). However, none of the DMSO doses tested displayed a proliferation-stimulating effect compared to untreated cells. Real-time PCR Analysis Reverse transcription and real-time PCR analyses were performed as previously described [19]. The large ribosomal protein (RPLPO) encoding gene was used as a control to obtain normalized values. Primers are listed in Table S1. Assays were performed at least in triplicate, and the mean values were used to calculate expression levels, using the ΔΔC(t) method with reference to RPLPO housekeeping gene expression. When treatments were performed, the variation of expression was measured as treated versus DMSO-treated cells (control). RNA Interference The small-interfering RNA (siRNA) duplexes for targeting GPER (Table S1) and the scrambled control (SR-CL000-005) were purchased from Eurogentec (Angers, France). TCam-2 cells (8×10^4) were plated into 6-well plates, in 2 ml of medium supplemented with 10% FCS and 2 mM L-glutamine the day before transfection. Cells were transiently transfected with either GPER (200 nM of the duplex) or scrambled siRNAs by using the Oligofectamine™ Reagent (Invitrogen) according to the manufacturer's instructions. After 24 h, cells were washed with PBS and the medium was replaced with phenol red-free RPMI supplemented with 0.5% charcoal-stripped FCS and 2 mM L-glutamine. 24 h later, cells were treated in phenol red-free, 0.5% charcoal-stripped FCS RPMI and harvested for further analyses. Efficacy of RNA interference is presented in Figure S2A. Transient Transfection and Establishment of Stable Cell Line TCam-2 cells were transfected with the empty expression vector or the ERα36-specific shRNA expression vector kindly provided by Dr Wang ZY (Creighton University Medical School, Omaha, USA) using the ExGen500 in vitro transfection reagent (Euromedex, France) as described previously [19]. Efficacy of shRNA knock-down is shown in Figure S2B. Statistical Analysis Data were summarized as the mean ± s.e.m. Proliferation data from each dose group were compared by analysis of variance (one-way ANOVA) followed by the Bonferroni multiple comparison procedure with SPSS software (SPSS Inc., Chicago, USA). Real-time PCR and western blot results were analyzed as follows: variance analysis of treated versus control cells was performed using Dunnett's test for multiple comparisons. Differences for which P was less than 0.05 were considered statistically significant. The M4 Alkylphenol Mix Stimulates Testicular Germ Cell Tumor Growth in vitro and in vivo To test whether an alkylphenol mix such as M4 could act as a proliferation inducer in seminoma-derived (TCam-2) and embryonal carcinoma (NT2/D1) cell lines, the growth rate of TCam-2 and NT2/D1 cells was determined by counting the number of cells exposed to different concentrations of M4.
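Stepping back to the Real-time PCR Analysis paragraph above, the ΔΔC(t) normalization it describes can be sketched in a few lines. The Ct values below are hypothetical, and the 2^(−ΔΔCt) fold-change form (the Livak variant) is an assumption about the exact calculation used.

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression by the delta-delta-Ct method:
    normalize the target Ct to the reference gene (e.g., RPLPO), then to the control condition."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values for a DNMT3 transcript vs. the RPLPO housekeeping gene
print(fold_change_ddct(26.5, 18.0, 25.0, 18.2))  # < 1 indicates down-regulation after treatment
```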
After a 24 h serum deprivation, cells were treated for 48 h with M4 decimal dilutions starting from 1.0 mM to 0.01 pM and counted by using an inverted microscope. M4 stimulated TCam-2 and NT2/D1 proliferation whatever the dose tested ( Figure 1A). The dose-response curves of these cells to M4 exhibited non-monotonic or biphasic pattern. When compared to vehicle exposure, a maximum proliferation increase was observed for cells treated in the nanomolar range, which corresponds to environmental doses. Therefore, the dose of 1.0 nM was retained for further analyses. To assess the effect of in vivo M4 exposure in male (androgenic; low estrogenic) hormonal context, a NT2/D1 derived germ cell tumor xenograft model was established (see material and methods section for details). Figure 1B shows that, at the end of treatment, tumor weight was significantly higher in M4 (1 mg/kg bw) versus vehicle injected mice. These data confirmed that an exposure to a low dose of M4, which corresponds to human daily intake [25] stimulates embryonal carcinoma growth in vivo. Notably, It is noteworthy that NT2/D1 cells knocked down for ERa36 are not viable. They can be selected and isolated after shRNA transfection but do not divide and therefore cannot be amplified for in vivo injection or used in vitro for proliferation assays. We also tried twice to obtain sh36-TCam-2 derived tumors after intra testicular injection of 1610 6 , 5610 6 , 1610 7 or 2610 7 cells but we never observed any tumor take, even 10 weeks after injection. M4 Triggers CREB Phosphorylation Through an ERa36 Dependent Pathway Since we previously demonstrated that E 2 and E 2 BSA both trigger CREB phosphorylation and in TCam-2 cells through GPER-ERa36 dependent mechanisms [19], we tested the potential estrogenicity of M4 by assessing phosphorylated CREB level. Western blot analysis clearly indicated an increase of CREB phosphorylation level (Figure 2A) after a 20 minute exposure to 1.0 nM M4. Several membrane receptors such as EGFR or GPER have been previously described to trigger estrogen-like signaling in various cancer cell lines [19,26]. However, CREB phosphorylation induction was still observed in scrambled siRNA, EGFR-targeted transfected cells, suggesting that EGFR is not required for M4 signaling in TCam-2 seminoma-derived cells (data not shown). In order to check GPER involvement in M4 signaling, we used GPER agonists or antagonists in several contexts: scrambled siRNA transfected cells and their GPER-targeted counterparts were exposed to vehicle, 1.0 nM M4, 1.0 nM E 2 , 100 nM G1 (a GPER agonist) or 100 nM G15 (a GPER antagonist) for 20 minutes. M4, G1 and G15 appeared to be powerful inducers of CREB phosphorylation whereas E 2 displayed lower efficiency in control cells ( Figure 2B). In GPER-knocked down cells, M4, G1 and G15 could still stimulate CREB phosphorylation, even if this effect was milder, demonstrating that GPER activity was not fully required. As previously demonstrated, GPER knockdown pre-vented E 2 -dependent CREB phosphorylation, which suggests that the mechanisms involved in M4 signaling do not fully mimic those of estrogens [19]. Since G1 and G15 can act as ERa36 agonists and stimulate non genomic signaling pathways [21,27] the results presented in Figure 2B also suggest that ERa36 in addition to GPER could trigger M4 dependent CREB phosphorylation. 
ERa36 Mediates M4 Induced Cell Proliferation and CREB Phosphorylation in TCam-2 Cells Normal human germ cells from the testis, malignant germ cells and their derived cell lines TCam-2 or NT2/D1 do not express the long form of ERa, ERa66. Nevertheless, they express the ERa36 isoform, which is necessary for mitogenic response to estrogens [19]. To address the hypothesis of ERa36 involvement in M4 proliferative effects, we performed dose-response experiments in the neo-TCam-2 and sh36-TCam-2 stable cell lines, which contain respectively, an empty plasmid or the corresponding vector expressing ERa36 targeted shRNA [19]. Figure 3A shows that a 48 h exposure to 1.0 nM M4 could stimulate neo-TCam-2 cell proliferation whereas this mitogenic effect was not observed in sh36-TCam-2 cells. Besides, CREB phosphorylation level did not increase in M4-treated ERa36-knocked down TCam-2 cells ( Figure 3B). Both results strongly suggest that ERa36 is required for M4 signaling in seminoma-derived cells. Neo-TCam-2 and sh36-TCam-2 cells were pre-treated with antagonists for several signaling pathways for 30 minutes before M4 exposure: a PKC inhibitor (BIM), the PKA inhibitor H89 and the PI3K inhibitor wortmanin. BIM did not prevent M4 induced CREB phosphorylation whereas H89 totally blocked it as well as its basal level, even in the non-M4 treated cells (data not shown). Wortmanin seemed to be the only antagonist able to prevent an M4 specific effect in neo-TCam-2, suggesting that M4 triggers ERa36-PI3kinase-dependent CREB phosphorylation in TCam-2 cells ( Figure 3B). The key role of PI3K-dependent signaling was further confirmed in both TCam-2 and NT2/D1 cells since wortmanin pre-treatment also impaired M4-enhanced cell proliferation ( Figure 3C). Noteworthy, wortmanin, by itself, seemed to trigger a mild stimulation of NT2/D1 proliferation at high doses (1 mM to 10 mM; data not shown). Such an effect was not observed in TCam-2 cell line. M4 Represses DNA-methyltransferase Expression Through ERa36 Dependent Mechanisms in TCam-2 Cells In order to determine the transcriptional profile of TCam-2 cells exposed to M4, we performed a microarray analysis of gene expression after a 60 minutes or a 24 h treatment. TCam-2 cells were cultured for 24 h in 0.5% FCS containing medium in the presence of vehicle or 1.0 nM M4. Total RNA was extracted for global analysis of gene expression on Nimblegen microarray. Venn diagram presented in Figure 4 indicates that 1124 transcripts and 633 transcripts were up or down regulated (absolute variation factor $2 in duplicate RNA samples, P,0.05) after 1 hour or 24 hours M4 exposure, respectively. Among them, 264 genes were similarly regulated in both conditions and the corresponding list was analyzed for functional classes and networks with the Ingenuity software (Tables S2, S3, S4, [28]). As expected, main networks and biological functions associated to the list of M4 regulated genes referred to cancer, developmental disorder, cell growth and proliferation (Tables S2 and S3). Moreover, the predicted upstream regulators were all related to estrogens (Table S4). Among the functional classes of genes whose expression is significantly up-or down-regulated (top list provided in Table S5), we focused on those involved in epigenetic modifications which seemed related to PI3K/CREB and estrogen receptor signaling in Ingenuity sorting ( Figure S3). 
Indeed, two of the three Dnmt3 genes display predicted CREB response element half-sites (TGACG/CGTCA) in their promoter region (Dnmt3A: −684; −783; −1468; −2644; −3605; Dnmt3L: −2297; −2884 relative to the transcription start site) and are therefore good candidates for CREB-dependent expression control. Namely, the Dnmt3 gene family displayed a mild but reproducible downregulation after both 60 minute and 24 h M4 exposures (Table 1). The results from the microarray analysis were confirmed by quantitative RT-PCR (Figure 5A) and western blot (Figure 5B) analyses. Indeed, M4 triggered a downregulation of DNMT3 expression in TCam-2 and neo-TCam-2 cells, whereas such a repression was not observed in sh36-TCam-2 cells or after a 30 minute wortmanin pre-treatment (Figure 5C). A similar downregulation of Dnmt3A and Dnmt3L gene expression was observed in NT2/D1 cells (Figure 6A). Hence, M4- and PI3K-dependent repression of DNMT3B and DNMT3L was further observed at the protein level (Figure 6B). This suggests that the ERα36-dependent M4 signaling ending at target genes involved in DNA-methylation status could be a common feature of both seminoma and embryonal carcinoma cells. Discussion Although numerous chemicals are now known or suspected to have endocrine disruption effects, a relevant classification based on a comprehensive understanding of their mode of action and targets is still lacking. More confusing is the wide variety of cocktails detected in the environment when trying to decipher dose-response consequences of lifelong human exposure. In the present study, we chose to focus on a well-defined mix of alkylphenols, M4, composed of 4-tert-octylphenol and 4-nonylphenol (1:30 ratio). Despite the burden of recent research on BPA, which belongs to the same chemical family and exerts various estrogenic effects, 4-tert-octylphenol and 4-nonylphenol are still neglected. High doses of tert-octylphenol or nonylphenol, ranging from 25 to 200 mg/kg bw, were previously shown to significantly decrease sperm count and quality in male mice, and to affect uterine weight, vaginal opening and reproductive ability in female rats [29,30]. However, both molecules have never been associated in a realistic mixture mimicking daily human contamination from household products, cosmetics and food. Here, estrogen-like mechanisms of action were addressed in a model of TGCT lacking the long form of the ERα receptor (ERα66). As in the case of 17β-estradiol or its BSA-coupled counterpart, M4 doses ranging from 10 nM to 0.1 nM stimulated both seminoma and embryonal carcinoma cell proliferation in a non-monotonic dose-response manner. Moreover, we observed a positive impact of M4 exposure on tumor growth in a TGCT xenograft nude mouse model after a treatment corresponding to human intake (1 mg/kg bw) [25]. No stimulating effect was detected after exposure to the higher dose (10 mg/kg bw), suggesting (i) that tumor growth in xenografted mice could respond to M4 in a non-monotonic way, as observed for in vitro cell proliferation, or (ii) that a mild toxicity could appear after exposure to high doses of the mix. Taken together, these results suggest that alkylphenol exposure may, on the one hand, alter normal germ cell multiplication and differentiation during development [12,31] through mutagenic or clastogenic mechanisms at high doses [32,33], as described by others, and on the other hand elicit neoplastic germ cell proliferation at low doses, as shown in this study.
Therefore we investigated the rapid non-genomic transduction pathways potentially involved. Whereas estradiol and BPA were previously shown to bind and exert such mitogenic effects through GPER in both SKBR3 breast cancer cells and JKT-1 seminoma derived cells [15,34], we demonstrated that M4 acts mainly via an ERa36-dependent pathway. Indeed, we evidence here that M4 triggers PI3K activity and CREB phosphorylation. Nevertheless, preliminary data indicate that both GPER and ERa36 may activate downstream signaling such as src phosphorylation and thus modulate the expression of M4 target gene subclasses as well as cell proliferation (A. Chesnel, personal communication). Since GPER was shown to partially govern ERa36 expression in our TCam-2 model [19] and may collaborate with ERa36 for estrogenic activities in other cancer cell lines [26,35], it would be relevant to test the participation of ERa36 in alkylphenol response in hormone-sensitive cancers such breast or prostate cancers. The microarray analysis performed in order to describe the gene expression pattern of M4-treated TCam-2 cells, indicated that several epigenetic modification enzymes encoding genes are affected. We focused on M4-dependent down-regulation of DNMT3 expression because (i) other estrogeno-mimetic such as genistein and resveratrol or anti-androgenic compounds such as vinclozolin that are present in food have been previously demonstrated to modulate tumor suppressor gene expression through epigenetic mechanisms [36,37], (ii) DNMT3 proteins have been shown to be involved in germ cell proliferation and differentiation control during a developmental window when neoplastic germ cells (CIS) are believed to emerge [38], (iii) polymorphism of these genes is clearly associated with gastric and breast cancer, as well as ovarian endometriosis susceptibility [39][40][41]. Indeed, ''Ingenuity sorting'' clearly classifies DNMT3 downregulation into functional networks involved in cancer progression and cell proliferation downstream estrogen receptors and estrogens (Tables S2, S3, S4). Moreover, several studies point out the key role of DNA methylation in testicular tumor initiation, progression and resistance to chemotherapy [42][43][44][45][46], highlighting the importance for examining carefully which upstream compounds or regulation factors are able to modulate DNMT expression and activity. Hence, our results indicate that either wortmanin treatment or ERalpha36 knockdown can impair M4-dependent Dnmt3 repression while ERalpha36 expression appears to be necessary for M4-dependent enhanced proliferation. CREB target gene database detects CREB response elements half-sites in Dnmt3A and Dnmt3L promoters and further suggest that both gene could be a target for the PI3K/CREB dependent pathway [47]. Moreover, Hervouet and coworkers [48] demonstrated that DNMT3B and 3A can physically interact with several transcription factors involved in proliferation control, such as CREB, FOSB, KLF12, EGR1 or JUN, which were proposed to direct methylation on specific gene promoter sequences. DNMT3 can also regulate each other expression through promoter methylation [38]. Finally, since ERalpha36 promoter is located into the first intron of ESR1 gene, balanced expression of either ERalpha66 or ERalpha36 could be regulated by differential methylation. 
This point was already addressed by others who demonstrated that downregulation of DNMT3A and DNMT3B led to regulation of ESR1 or ESR2 via promoter DNA aberrant methylation in acute myeloid leukemia, endometriosis, prostate and ovarian cancer [49][50][51][52]. Endocrine disruptors such as alkylphenols are also suspected to alter germ cell epigenetic reprogramming during fetal and perinatal development, thus triggering long-term disruption of gene expression which, in turn, could be a main risk factor for hormone-dependent cancers. Anway and colleagues [37] also found DNMT3A and DNMT3L isoforms to be repressed in the testis after embryonic exposure to the endocrine disruptor vinclozolin. This commonly used fungicide suspected to have antiandrogenic effects triggered transgenerational epigenetic reprogramming associated with increased adult onset diseases, namely prostate disease, testis abnormalities, and tumor development [53]. Therefore, it could be relevant to address the effects of delayed consequences of a M4 exposure. Surprisingly, DNMT3L expression was clearly detected at both mRNA and protein level in TCam-2 seminoma-derived cells by using commercially available antibody contrary to previous work on human biopsies using homemade polyclonal antibody and indicating that DNMT3L expression was restricted to embryonal carcinoma [54]. In the germ cell lineage, DNMT3L is involved in de novo retrotransposon methylation and appears to be a signature of prospermatogonia stage [55]. Therefore, the expression of DNMT3L could be a hallmark of undifferenciated stage. DNMT3A and DNMT3B are usually described as enzymes responsible for the establishment of specific CpG dinucleotides methylation essential for embryonic development and gene repression at the time of implantation [56]. However, a growing number of evidences support the hypothesis for their contribution in the maintenance of DNA methylation [57]. DNMT3L also participates in a complex coupling H3K4 methylation and DNMT3A-dependent DNA methylation, thus modifying chroma- tin accessibility [58]. Therefore, M4-dependent downregulation of these enzymes could trigger DNA hypomethylation and chromatin opening, thus leading to aberrant gene expression pattern, which can be maintained transgenerationally. These observations are of particular interest in the field of testicular germ cell carcinogenesis since CIS are proposed to originate in non-differentiated primordial germ cells displaying a gene expression pattern of pluripotency. This could be also relevant in the context of testis tumor growth since lifelong M4 exposure may maintain a population of non-differentiated/highly proliferative cancer cells in the tumor. Taken together, these data suggest that M4 exposure may elicit a positive feedback loop beginning at ERa36 activation, triggering PI3K-dependent CREB phosphorylation, ending on Dnmt3 repression which, in turn, could stimulate and maintain a high level of ERa36 expression and rapid proliferation. Epigenetic regulation of ESR1 locus remains to be carefully examined in various cell contexts in order to address such a hypothesis. Figure S1 Characterization of testis tumor xenograft model in nude mice. Germ cell tumor xenograft models were first established after intra testicular injection of 1610 7 TCam-2 or NT2/D1 cells in 0,9% NaCl in nude mice. Tumors developed to approximately 0.5 cm 3 in 6 weeks. 
MRI imaging was used (Spectro-imageur Bruker Biospec Avance 24/40; 2.4 teslas magnetic field) to confirm the presence of tumors into the scrotum. Tumors were harvested and seminoma or embryonal carcinoma identity was attested by histological and immunohistochemical analyses. The slices presented are hematoxylin/eosin/ safran colorations of TCam-2 derived or NT2/D1 derived tumors. Tumor tissue was harvested and subcutaneously (s.c.) grafted in the inguinal pit of male nude mice. Because NT2/D1, but not TCam-2 derived tumors developed, we focused on the NT2/D1 model to examine the effects of M4 on tumor growth. Nude mice were s.c. implanted with 1-2 mm 3 tumor pieces harvested from previously grown (0.5 cm 3 ) NT2/D1 tumor (third passage). For alkylphenol assay, M4 or vehicle treatment was injected five days per week subcutaneously in male nude mice 2 weeks before, and 4 weeks after tumor graft in order to mimic everyday life contamination (see text for details). Bars
6,587
2013-04-23T00:00:00.000
[ "Biology", "Chemistry" ]
Biomarker Categorization in Transcriptomic Meta-Analysis by Concordant Patterns With Application to Pan-Cancer Studies With the increasing availability and dropping cost of high-throughput technology in recent years, many-omics datasets have accumulated in the public domain. Combining multiple transcriptomic studies on related hypothesis via meta-analysis can improve statistical power and reproducibility over single studies. For differential expression (DE) analysis, biomarker categorization by DE pattern across studies is a natural but critical task following biomarker detection to help explain between study heterogeneity and classify biomarkers into categories with potentially related functionality. In this paper, we propose a novel meta-analysis method to categorize biomarkers by simultaneously considering the concordant pattern and the biological and statistical significance across studies. Biomarkers with the same DE pattern can be analyzed together in downstream pathway enrichment analysis. In the presence of different types of transcripts (e.g., mRNA, miRNA, and lncRNA, etc.), integrative analysis including miRNA/lncRNA target enrichment analysis and miRNA-mRNA and lncRNA-mRNA causal regulatory network analysis can be conducted jointly on all the transcripts of the same category. We applied our method to two Pan-cancer transcriptomic study examples with single or multiple types of transcripts available. Targeted downstream analysis identified categories of biomarkers with unique functionality and regulatory relationships that motivate new hypothesis in Pan-cancer analysis. INTRODUCTION The revolutionary advancement of high-throughput technology in recent years has generated large amounts of omics data of various kinds (e.g., genetics variants, gene expression and DNA methylation, etc.), which improves our understanding of human disease and enables the development of more effective therapies in personalized medicine (Richardson et al., 2016). As more studies are conducted on a related hypothesis, meta-analysis, by combining evidence from multiple studies, has become a popular choice in genomic research to improve upon the power, accuracy, and reproducibility of individual studies (Ramasamy et al., 2008;Begum et al., 2012;Tseng et al., 2012). One of the main purposes of transcriptomics studies is to identify genes or RNAs that express differently between two or more conditions (e.g., diseased patients vs. healthy controls), also known as differential expression (DE) analysis or candidate biomarker detection. Many meta-analysis methods have been developed or applied to DE analysis, including combining p-values (Fisher, 1992) or effect sizes (Choi et al., 2003) and rank-based approaches (Hong et al., 2006). One may refer to Tseng et al. (2012) for an overview of the major meta-analysis methods in transcriptomic studies and Ma et al. (2019) for an overview of available software tools. Yet, a majority of conventional meta-analysis methods only generate a list of differentially expressed genes with strong aggregated evidence without further investigating in what studies are the genes differentially expressed. Study or population heterogeneity always exists and has been critical to biomarker detection (Di Camillo et al., 2012). 
For example, The Cancer Genome Atlas (TCGA) consortium completed a Pan-Cancer Atlas of multi-platform molecular profiles spanning 33 cancer types in an effort to provide insights into the commonalities and differences across tumor lineages (Weinstein et al., 2013;Hoadley et al., 2018). When metaanalysis is performed on Pan-cancer transcriptomic studies, we expect to see both DE genes common in all tumor types as well as genes differentially expressed in some tumor types but not others. Biomarker categorization according to their DE patterns across studies is demanding in genomic studies for three reasons. First, biomarkers that share unique cross-study DE patterns are potentially involved in related functions (Berger et al., 2018). Such unique categories of genes with similar function can be used to generate new biological hypotheses. Second, biomarker categorization can make high dimensional genomic data more tractable. For example, in cancer transcriptomic studies, which frequently detect thousands of DE genes, downstream analysis methods such as pathway enrichment analysis or network analysis cannot be applied directly. By partitioning the original large set of DE genes into smaller subsets, biomarker categorization facilitates more focused downstream analysis. Third, RNA sequencing (RNAseq) technology has led to an explosion of transcriptomic studies profiling both coding (i.e., mRNA) and noncoding RNAs (i.e., miRNA, rRNA, lncRNA, etc.) (Di Bella et al., 2020). Joint analysis of different RNA types with the same cross-study DE patterns can improve understanding of their regulatory relationships, which may lead to inferences about the underlying mechanisms of complex human diseases like cancer. Li and Tseng (2011) first proposed an adaptively weighted Fisher (AW-Fisher) method for biomarker categorization that assigns a binary weight of 0 or 1 to each study and searches for the pattern of weights that minimizes the aggregate statistics for each gene. Though the method incorporates statistical significance by combining two-sided p-values across studies, it does not take into account the direction of regulation (e.g., up-regulated or down-regulated). Other methods incorporate biomarker categorization within the Bayesian framework and combine one-sided p-values or Bayesian posterior probabilities (Ma et al., 2017;Huo et al., 2019) but not the magnitudes of effect sizes. In practice, biological significance (i.e., large effect size) and statistical significance (i.e., small p-value) do not always occur in tandem (depending on sample size and variance) though they are equally important in interpreting study results (Sullivan and Feinn, 2012;Solla et al., 2018). In this paper, we propose a novel meta-analysis method to detect and categorize biomarkers by simultaneously considering concordant pattern (i.e., direction of regulation), biological and statistical significance across studies. In addition, we develop a permutation test to assess the uncertainty of the proposed statistics and to control the false discovery rate (FDR). When only coding genes are included, after categorization we perform downstream pathway enrichment analysis with topological information on each category of genes for more biological insights ( Figure 1A). In the presence of diverse RNAs, we jointly analyze all RNA species in the same category using miRNA/lncRNA target enrichment analysis and lncRNA-mRNA and miRNA-mRNA causal regulatory network analysis ( Figure 1B). 
We show by simulation that our method detects both concordant and discordant biomarkers and assigns the correct weights. We apply our method to two Pan-cancer transcriptomic data examples: (1) Pan Gynecologic cancer (Pan-Gyn) data with coding genes only; (2) Pan Kidney cancer (Pan-Kidney) data that include mRNA, miRNA as well as lncRNA. The identified biomarker categories show unique functionality and informative regulatory relationships and could suggest new hypotheses about mechanisms underlying exclusive and shared features of different cancer types. Tseng et al. (2012) reviewed the major types of meta-analysis methods for DE gene detection in microarrays and classified the methods into four main classes: combining p-values, combining effect sizes, combining ranks, and direct merging. We will discuss selected meta-analysis methods from the first two classes that are relevant to our proposed method. Popular Meta-Analysis Methods Combining P-Values Fisher's method (Fisher, 1992) The conventional Fisher's method combines the log-transformed p-values from each study with the statistic T_Fisher = −2 Σ_{k=1..K} log(p_k), which follows a χ² distribution with 2K degrees of freedom under the null hypothesis (i.e., genes not differentially expressed in all studies), where K is the number of studies and p_k is the p-value of study k, 1 ≤ k ≤ K. Stouffer's method (Stouffer, 1949) Stouffer's method proposes an inverse normal transformation of the p-values, with the statistic T_Stouffer = (1/√K) Σ_{k=1..K} Φ^{−1}(1 − p_k), which follows a standard normal distribution under the null, where Φ^{−1}(x) is the inverse cumulative distribution function of the standard normal distribution. (Figure 1 caption: (A) The scenario with mRNA (or coding genes) only. The heatmap shows the gene expression of all samples from three studies. Rows refer to genes sorted by the specified weight category, columns refer to samples, and solid white lines are used to separate different conditions (control vs. case). Colors of the cells correspond to scaled expression level; green/red indicates lower/higher expression. Pathway enrichment analysis is applied to genes belonging to the same weight category, with topological information, to visualize the cross-study DE patterns at the molecular level. (B) The scenario with diverse RNA species (e.g., mRNA, miRNA, and lncRNA). The three heatmaps show the expression of the different types of transcripts for all samples from three studies, sorted by weight category. In the presence of multiple types of RNA species, we perform integrative analysis on all the transcripts belonging to the same weight category together. Possible downstream analysis includes miRNA/lncRNA target enrichment analysis and lncRNA-mRNA and miRNA-mRNA causal regulatory network analysis.) Adaptively weighted Fisher's method (AW-Fisher) (Li and Tseng, 2011) Fisher's method does not differentiate DE in a single study from DE in multiple studies as long as the aggregate contribution to the final statistic remains the same. To overcome this and better explain the between-study heterogeneity, Li and Tseng (2011) introduced the AW-Fisher method as a modification of the original Fisher's method. The AW-Fisher method considers U(w) = −2 Σ_{k=1..K} w_k log(p_k) for each gene, where w = (w_1, . . . , w_K) and each w_k is a binary weight of 0 or 1 assigned to study k. Denoting by p_U(w) the p-value when the weight w is given, the AW-Fisher statistic is defined as T_AW = min_w p_U(w), where the optimal weight
ŵ = (ŵ_1, . . . , ŵ_K) that minimizes the p-value indicates the subset of studies that contribute to the aggregate statistic and naturally categorizes the biomarkers. There is no closed-form distribution for the AW-Fisher statistic under the null, so permutation tests and importance sampling are used to obtain the p-value and control the FDR. Combining Effect Size Fixed effect model (FEM) and random effect model (REM) (Choi et al., 2003) The fixed effect model (FEM) combines effect sizes across all studies for each gene using a simple linear model, y_k = µ + ε_k with ε_k ~ N(0, s_k²), where µ is the overall mean and the within-study variance s_k² represents the sampling error conditioned on study k. The combined point estimate of µ is a weighted average of study-specific effect sizes, where the weights are equal to the inverse of s_k². FEM will prioritize concordant genes with the same directionality across all studies. When strong between-study heterogeneity exists and the underlying population effect size is assumed to be unequal across studies, an REM is given hierarchically as y_k = θ_k + ε_k with ε_k ~ N(0, s_k²) and θ_k ~ N(µ, τ²), where the between-study variance τ² represents the additional source of variability between studies. A homogeneity test can be performed to test whether τ² is zero or not and determine the appropriateness of FEM or REM. Like FEM, REM also prioritizes concordant genes but with more flexibility across studies. Neither FEM nor REM produces biomarker categorization results. Remarks P-value combination methods are powerful for detecting genes that have non-zero effects in at least one study (the HS_B alternative hypothesis setting, as in Chang et al. (2013)), without considering the magnitudes and directionality of effects across studies. Thus, p-value methods cannot distinguish concordant genes (i.e., upregulated or downregulated in all studies) from discordant genes (i.e., upregulated in some studies but downregulated in others). In contrast, effect size combination methods take directionality into account but favor only concordant genes. Even so, discordant genes can still be of interest, for example in Pan-cancer analysis, to understand between-tumor heterogeneity. We therefore propose a new meta-analysis method that incorporates both p-value and effect size combination, and considers concordant pattern as well as biological and statistical significance simultaneously to assist biomarker detection and categorization. Here we introduce our method, namely BCMC (Biomarker Categorization in Meta-analysis by Concordance). New Meta-Analysis Method for Biomarker Detection and Categorization Suppose there are K transcriptomic studies, and each study k (1 ≤ k ≤ K) measures the gene expression of n_k samples and G genes. We use gene expression as an example to introduce our method, though the method can readily analyze other types of transcripts such as miRNA and lncRNA. Our objective in meta-analysis is to detect candidate genes differentially expressed between the case (e.g., patients diagnosed with disease) and control (e.g., healthy subjects) groups in multiple studies and to categorize the detected genes by their DE patterns across studies. We first perform DE analysis using popular methods such as limma (Ritchie et al., 2015) for microarray or DESeq2 (Love et al., 2014) for RNA-seq in each study and obtain the summary statistics, including effect size estimates (log2 fold change or LFC_gk) and p-values (p_gk), for each gene g (1 ≤ g ≤ G) in each study k.
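Given the per-study p-values p_gk just described, the classical p-value combination rules reviewed above (Fisher and Stouffer) can be sketched in a few lines. This is a generic illustration with hypothetical p-values, not code from the authors.

```python
import numpy as np
from scipy import stats

def fisher_combine(pvals):
    """Fisher's method: T = -2 * sum(log p_k); chi-square with 2K df under the null."""
    pvals = np.asarray(pvals, dtype=float)
    t = -2.0 * np.sum(np.log(pvals))
    return stats.chi2.sf(t, df=2 * len(pvals))

def stouffer_combine(pvals):
    """Stouffer's method: T = sum(Phi^{-1}(1 - p_k)) / sqrt(K); standard normal under the null."""
    pvals = np.asarray(pvals, dtype=float)
    z = stats.norm.ppf(1.0 - pvals)
    return stats.norm.sf(np.sum(z) / np.sqrt(len(pvals)))

# Hypothetical one-sided p-values for one gene across K = 3 studies
p_k = [0.01, 0.20, 0.03]
print(fisher_combine(p_k), stouffer_combine(p_k))
```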
Effect sizes and p-values represent biological and statistical significance, respectively, and can be treated as DE evidence for single studies. The smaller the p-value and the larger the magnitude of the effect size, the more likely a gene is a DE gene in that study. In meta-analysis, concordance (i.e., a gene having the same sign of effect size in different studies) is regarded as an additional piece of DE evidence. We define the gth gene as being up-regulated in the kth study when LFC_gk > 0 (i.e., having higher expression in the case group) and as being down-regulated when LFC_gk < 0 (i.e., having higher expression in the control group). When integrating multiple transcriptomic studies, DE genes may be altered in study-specific patterns. For example, some genes are differentially expressed in all studies while others are only differentially expressed in a specific subset of studies. Meta-analysis methods also have different groups of targeted biomarkers, as reflected by different statistical hypothesis settings. The null hypothesis for each gene in meta-analysis is commonly defined as H_0 : θ_g1 = · · · = θ_gK = 0, where θ_gk represents the true effect of gene g in study k. Depending on the types of targeted biomarkers, three alternative hypotheses have been proposed in the meta-analysis literature (Birnbaum, 1954; Tseng et al., 2012; Song and Tseng, 2014). The first setting (HS_A) aims to detect DE genes that have non-zero effect in all studies, i.e., θ_gk ≠ 0 for all k. The second setting (HS_B) aims to detect DE genes that have non-zero effect in at least one study, i.e., θ_gk ≠ 0 for some k. The third setting (HS_r) aims to detect DE genes that have non-zero effect in at least r studies, i.e., Σ_{k=1..K} I(θ_gk ≠ 0) ≥ r. As we show next, our method generally follows the HS_r setting with r = 2 (i.e., we detect DE genes that have non-zero effect in at least two studies). To detect DE genes and categorize them by cross-study DE patterns, we propose two aggregate statistics for each gene, T^+_g(w^+_g) and T^−_g(w^−_g), that combine DE evidence across up-regulated studies and down-regulated studies, respectively, where w^+_gk and w^−_gk are binary weights of 0 or 1 assigned to the kth study for the gth gene, indicating whether a study is selected for inclusion in the aggregate statistic or not, LFC_gk is the log2 fold change and p_gk the corresponding p-value for gene g in study k obtained from single-study DE analysis. For the gth gene, T^+_g(w^+_g) aggregates the information of the single-study summary statistics (including both p-value and effect size) over up-regulated studies (i.e., those studies with LFC_gk > 0), while T^−_g(w^−_g) aggregates that over down-regulated studies (i.e., those studies with LFC_gk < 0). The binary weights are used to indicate which studies to include in the aggregate statistics, and the optimal weights that maximize the statistics are searched for each gene. In the proposed aggregate statistics, we simultaneously account for concordant patterns (where LFC_gk and LFC_gk′ have the same sign), biological significance (estimated through the product of the LFC_gk) and statistical significance (estimated through the sum of −log10(p_gk)). This encourages combining studies with the same directionality to find the best evidence for DE, which is consistent with the purpose of meta-analysis to identify more reproducible genes in multiple studies. Similar statistics have been proposed for concordant and discordant analysis of orthologous genes between a pair of species (Domaszewska et al., 2017).
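The exact expressions for T^+ and T^− did not survive extraction. The sketch below implements one plausible pairwise form consistent with the verbal description above (product of log-fold-changes times summed −log10 p-values over concordant study pairs, averaged over the selected pairs). It is an illustrative assumption, not the authors' published formula, and the function and variable names are hypothetical.

```python
import itertools
import numpy as np

def pairwise_concordance_stat(lfc, pval, weights):
    """Aggregate DE evidence over selected study pairs with the same sign of effect.
    lfc, pval, weights: arrays of length K (weights are 0/1 study-selection indicators).
    Returns 0 when fewer than two studies are selected."""
    idx = [k for k in range(len(lfc)) if weights[k] == 1]
    if len(idx) < 2:
        return 0.0
    pairs = list(itertools.combinations(idx, 2))
    total = 0.0
    for k, l in pairs:
        total += lfc[k] * lfc[l] * (-np.log10(pval[k]) - np.log10(pval[l]))
    return total / len(pairs)

# Hypothetical gene: clearly up-regulated in studies 1-3, essentially flat in study 4
lfc  = np.array([1.2, 0.8, 1.5, 0.1])
pval = np.array([1e-4, 1e-3, 1e-5, 0.6])
best = max(
    (pairwise_concordance_stat(lfc, pval, w), w)
    for w in itertools.product([0, 1], repeat=4)
    if all(not (wk and lfc[k] < 0) for k, wk in enumerate(w))  # only up-regulated studies for T+
)
print(best)  # the maximizing weight vector indicates which studies drive the up-regulated evidence
```

Under this assumed form, the optimal weights select studies 1 to 3 and drop study 4, which matches the intended behaviour of excluding studies that contribute little evidence.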
From the formula, we can see that the proposed statistic is essentially a weighted average over all study pairs with effect sizes in the same direction. A weighted average over all studies, instead of over study pairs, is an alternative approach, but it tends to exclude studies with moderate effect sizes or p-values (see a toy example in Supplementary Table 1). By default, we set w^+_gk = 0 for studies with LFC_gk < 0 and w^−_gk = 0 for studies with LFC_gk > 0 to avoid conflict between the two statistics. When no studies are up-regulated or down-regulated for a particular gene, we suppress the corresponding T^+_g or T^−_g to zero and assign zero weights. The statistic aggregates over study pairs, so at least two studies need to be selected for it to be meaningful; when only one study is up-regulated or down-regulated, we also suppress the corresponding T^+_g or T^−_g to zero. We then search for the optimal weights to identify the subset of studies that maximizes each of the two aggregate statistics. Such optimal weights describe the DE pattern of each gene across studies and provide a natural categorization of all genes with potential biological interpretation. The corresponding maximum statistics are defined as R^+_g = max_{w^+_g ∈ W} T^+_g(w^+_g) and R^−_g = max_{w^−_g ∈ W} T^−_g(w^−_g), where W is the pre-defined search space of weights with the aforementioned restrictions. The resulting optimal weights are denoted as w^{+*}_g and w^{−*}_g. The biomarkers are then categorized according to the distribution of the optimal weights among studies by merging the information of w^{+*}_g and w^{−*}_g into the final weights w^*_g. For example, concordantly up-regulated genes are those with w^{+*}_gk = 1 in all studies. Note that the proposed statistics can describe both up-regulated and down-regulated patterns in the same gene, thus also allowing the detection of discordant genes. In case both patterns exist and we want to find a dominant pattern for a discordant gene, we can further define R_g = max(R^+_g, R^−_g) and use the corresponding optimal weights. To assess the uncertainty of R^+_g and R^−_g and determine DE in meta-analysis, we develop a permutation-based test to calculate the p-value and FDR-adjusted p-value (also known as q-value) of the statistics. We permute the group labels (i.e., case or control group) in each study B times and calculate the maximum statistics in each permuted dataset. For each gene, we obtain two p-values corresponding to R^+_g and R^−_g, respectively, of the form p_g = (1 + Σ_{b=1..B} I(R^{(b)}_g ≥ R_g)) / (B + 1), where R^{+(b)}_g and R^{−(b)}_g are the maximum statistics for the gth gene in the bth (1 ≤ b ≤ B) permutation. The value of one is added to both the numerator and the denominator to avoid zero p-values. After the p-values are generated, we further estimate the proportion of null genes π_0 as π̂_0 = #{g : p_g ∈ A} / (G · ℓ(A)), where ℓ(A) is the length of the interval A; normally we choose A = [0.5, 1], so that ℓ(A) = 0.5, to estimate the null proportion, following the guidance in the previous methods and the literature of FDR (Storey, 2002; Storey and Tibshirani, 2003; Li and Tseng, 2011). In most cases, the density of p-values beyond 0.5 is fairly flat, implying most null p-values are located in this region. In practice, depending on the problem, other common choices of A = [0.05, 1] or A = [0.025, 1] can also be applied. The optimal A can be empirically determined by minimizing some loss function; we do not discuss this further here and refer readers to Storey (2002) and Storey and Tibshirani (2003) for more details. Then, q-values can be calculated from the p-values and π̂_0 following Storey (2002). Likewise, the p-value and q-value of the dominant pattern statistic R_g (i.e., p_g(R_g) and q_g(R_g)) can be obtained in the same way. In real data applications, we determine DE in meta-analysis using the permuted p-value or q-value for the dominant pattern.
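A minimal sketch of the permutation-based p-value, the π_0 estimate, and Storey-style q-values described above follows. The per-gene form of the permutation p-value and the q-value step follow the standard Storey (2002) recipe and are assumptions where the extracted text is incomplete; the simulated statistics are hypothetical.

```python
import numpy as np

def permutation_pvalues(observed, permuted):
    """observed: (G,) maximum statistics; permuted: (B, G) statistics from B label permutations.
    One is added to numerator and denominator to avoid zero p-values."""
    B = permuted.shape[0]
    exceed = (permuted >= observed[None, :]).sum(axis=0)
    return (exceed + 1) / (B + 1)

def estimate_pi0(pvals, a_low=0.5):
    """Proportion of null genes: fraction of p-values in A = [a_low, 1], divided by the length of A."""
    return np.mean(pvals >= a_low) / (1.0 - a_low)

def storey_qvalues(pvals, pi0):
    """Storey-style q-values: q_(i) = min over j >= i of pi0 * G * p_(j) / j."""
    G = len(pvals)
    order = np.argsort(pvals)
    q_sorted = pi0 * G * pvals[order] / np.arange(1, G + 1)
    q_sorted = np.minimum.accumulate(q_sorted[::-1])[::-1]  # enforce monotonicity in the ranks
    q = np.empty(G)
    q[order] = np.minimum(q_sorted, 1.0)
    return q

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 1.0, size=1000)
obs[:50] += 10.0                                 # 50 hypothetical signal genes
perm = rng.gamma(2.0, 1.0, size=(500, 1000))     # B = 500 label permutations
p = permutation_pvalues(obs, perm)
q = storey_qvalues(p, estimate_pi0(p))
print((q < 0.05).sum(), "genes called at q < 0.05")
```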
Note that the p-values and q-values of a zero R+_g or R−_g are equal to one.

Downstream Analysis on Each Identified Category of Biomarkers

Each transcriptomic study was carefully assessed for inclusion in the meta-analysis using objective criteria or systematic quality control methods (Kang et al., 2012). When only mRNA expression data are available for the K selected transcriptomic studies, we applied our meta-analysis and identified multiple categories of mRNAs at chosen BCMC p-value or q-value cutoffs, each with a unique DE pattern across the studies. DE analysis is useful for narrowing down targets, but focusing on single-gene changes across datasets is not sufficient. We still need to investigate further whether mRNAs belonging to the same category share a unifying biological theme. For each unique category of mRNAs, we then performed pathway enrichment analysis to gain more insight into their specific functions (section "Pathway Enrichment Analysis of mRNA Expression"). When expression data of mRNA, miRNA, and lncRNA are all available, we applied our meta-analysis method to each type of transcript separately and then analyzed each unique category of differentially expressed mRNA, miRNA, and lncRNA (those with the same weight, or same cross-study DE pattern) together. Specifically, we performed miRNA/lncRNA target gene enrichment analysis (section "miRNAs/lncRNAs Target Gene Enrichment Analysis") and lncRNA-mRNA and miRNA-mRNA causal regulatory network analysis (section "LncRNA-mRNA and miRNA-mRNA Causal Regulatory Network Analysis").

Pathway Enrichment Analysis of mRNA Expression

For each category of mRNAs with a unique DE pattern across the studies, we looked for biological pathways that are enriched in that category more than would be expected by chance. The enriched pathways for each category suggest the biological functions associated only with specific study subsets and help generate new hypotheses. The p-value for the enrichment of a pathway was calculated using Fisher's exact test (Upton, 1992), and multiple testing was corrected by the Benjamini-Hochberg (BH) procedure (Benjamini and Hochberg, 1995). Multiple popular pathway databases were used, including Gene Ontology (GO) (Ashburner et al., 2000), Kyoto Encyclopedia of Genes and Genomes (KEGG) (Kanehisa et al., 2017), Oncogenic Signaling Pathways (Sanchez-Vega et al., 2018), and Reactome (Fabregat et al., 2016). Pathways in each database were carefully selected for their relatedness to the problem of interest, and small pathways (e.g., pathway size < 10) were filtered out for lack of power. For pathways with topological information available (e.g., pathways in KEGG), we applied the R package "Pathview" (Luo and Brouwer, 2013) to display the study-specific information (e.g., weights, effect sizes, etc.) on the relevant pathway topology graphs.

miRNAs/lncRNAs Target Gene Enrichment Analysis

Going beyond the traditional central dogma, non-coding RNAs such as micro-RNAs (miRNAs) and long non-coding RNAs (lncRNAs) play important regulatory roles in mRNA expression (Bartel, 2004; Hubé and Francastel, 2018). To understand whether miRNAs/lncRNAs target mRNAs in the same category with a unique cross-study DE pattern, we analyzed each unique category of mRNA, miRNA, and lncRNA with the same cross-study DE pattern together and performed miRNA/lncRNA target gene enrichment analysis on each category.
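A minimal sketch of the over-representation test used in both the pathway and the miRNA/lncRNA target enrichment analyses above (one-sided Fisher's exact test per gene set, small sets dropped, BH adjustment). The input structures `category_genes`, `pathways`, and `background_genes` are hypothetical placeholders, not the actual database interfaces.

```python
import numpy as np
from scipy.stats import fisher_exact

def enrichment(category_genes, pathways, background_genes, min_size=10):
    """One-sided Fisher's exact test of each gene set against a gene category.

    `pathways` is a dict mapping a set name to a set of gene symbols; sets
    smaller than `min_size` within the background are dropped for lack of power.
    """
    bg = set(background_genes)
    cat = set(category_genes) & bg
    results = []
    for name, genes in pathways.items():
        pw = set(genes) & bg
        if len(pw) < min_size:
            continue
        a = len(cat & pw)                 # in category and in gene set
        b = len(cat) - a                  # in category only
        c = len(pw) - a                   # in gene set only
        d = len(bg) - a - b - c           # in neither
        _, p = fisher_exact([[a, b], [c, d]], alternative="greater")
        results.append((name, p))
    # Benjamini-Hochberg adjustment of the raw p-values
    results.sort(key=lambda x: x[1])
    m = len(results)
    adj = [min(1.0, p * m / (i + 1)) for i, (_, p) in enumerate(results)]
    adj = np.minimum.accumulate(adj[::-1])[::-1].tolist()
    return [(name, p, q) for (name, p), q in zip(results, adj)]
```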
Specifically, for each unique category, we first used the miRTarBase database (Chou et al., 2018) and the LncRNA2Target v2.0 database (Cheng et al., 2019) to obtain the common target genes of each miRNA and lncRNA in the category. We then looked for miRNAs/lncRNAs whose target genes are enriched in the gene list of the same category more than would be expected by chance. The p-value for the enrichment of a miRNA/lncRNA was calculated using Fisher's exact test (Upton, 1992), and multiple testing was corrected by the BH procedure (Benjamini and Hochberg, 1995).

LncRNA-mRNA and miRNA-mRNA Causal Regulatory Network Analysis

In addition to target gene enrichment analysis, we are also interested in investigating the causal regulatory relationships among the various types of transcripts in the same category using network analysis. For each unique category of mRNA and lncRNA with the same cross-study DE pattern, we followed the MSLCRN pipeline to perform module-specific lncRNA-mRNA regulatory network analysis (Zhang et al., 2019). The MSLCRN pipeline starts by using WGCNA (Langfelder and Horvath, 2008) to construct lncRNA-mRNA co-expression networks and identify modules that contain both lncRNAs and mRNAs. For each lncRNA-mRNA module, parallel IDA (Le et al., 2016) is then applied to learn the causal structure and estimate the causal effects of lncRNAs on mRNAs. IDA consists of two main steps. It first uses a parallel version of the PC algorithm (Spirtes et al., 2000; Kalisch and Bühlmann, 2007; Le et al., 2016), a commonly used approach for learning the causal structure of a Bayesian network, to obtain the directed acyclic graphs (DAGs) for each module. Then, the causal effects of lncRNAs on mRNAs (i.e., the lncRNA → mRNA directed edges in the DAG) are estimated by applying do-calculus (Pearl, 2000), a causal calculus that uses Bayesian conditioning to generate probabilistic formulas for the causal effect. Lastly, the module-specific causal regulatory networks are integrated to form the global lncRNA-mRNA causal regulatory network and visualized using Cytoscape (Shannon et al., 2003). In constructing the regulatory network, we apply cutoffs on the absolute values of the causal effects to assess the regulatory strengths and confirm the regulatory relationships. More details on the use of MSLCRN to infer causal regulatory networks can be found in Zhang et al. (2019). Module-specific miRNA-mRNA causal regulatory networks can be obtained in a similar way using the same tool.

SIMULATION

We conduct simulation studies to evaluate the performance of our method in biomarker detection and categorization compared to AW-Fisher (Li and Tseng, 2011) and the FEM and REM methods (Choi et al., 2003). Only power is assessed for the FEM and REM methods since they do not categorize biomarkers. We assume a total of G = 2000 genes expressed in K = 5 studies; each study has a total sample size of n = 100, evenly split into control and case groups (n_case = n_control = n/2 = 50). The details of how the data are simulated are described below:

1. We generate 800 genes organized into 40 gene clusters (20 genes in each cluster) and another 1,200 genes that do not belong to any cluster. The cluster index for each gene g (1 ≤ g ≤ 2000) is randomly sampled.

2. For genes in cluster c (1 ≤ c ≤ 40) and study k (1 ≤ k ≤ 5), we first generate a covariance matrix from an inverse Wishart distribution, Σ_ck ∼ W^{-1}(Φ, 60), where the scale matrix is Φ = 0.5·I_{20×20} + 0.5·J_{20×20}, I is the identity matrix, and J is the matrix with all elements equal to one.
Then, we standardize Σ_ck into a correlation matrix Σ̃_ck so that all diagonal elements equal one.

3. We sample the baseline gene expression levels of the 20 genes in cluster c for sample i in study k as (X_{g_c1,ik}, . . . , X_{g_c20,ik})^T ∼ MVN(0, Σ̃_ck), where 1 ≤ i ≤ n and 1 ≤ k ≤ K. For the 1,200 genes that are not in any cluster, we sample the baseline gene expression level independently from N(0, σ_k²), where 1 ≤ k ≤ 5 and σ_k ∼ Unif(σ − 0.2, σ + 0.2) with σ = 2.

4. Denote by δ_gk ∈ {0, 1, −1} whether gene g is non-DE, up-regulated, or down-regulated in study k. We assume the first 800 genes to be DE genes divided into four mutually exclusive parts: (1) concordantly up-regulated genes (N = 225): randomly sample δ_gk such that Σ_k I{δ_gk = 1} ≥ 2 and Σ_k I{δ_gk = −1} ≤ 1.

5. To simulate the effect size for DE genes in each study (when δ_gk ≠ 0), we sample from a uniform distribution µ_gk ∼ Unif(1, 3). The gene expression level is taken to be the baseline value X_gik for control samples, and X_{g,(i+n/2),k} + µ_gk · δ_gk for the corresponding case samples, where 1 ≤ g ≤ 2000, 1 ≤ i ≤ n/2, and 1 ≤ k ≤ 5.

To assess power and biomarker categorization performance, we focus on the DE genes in the first three categories, i.e., genes with concordant patterns in at least two studies (N = 600). We also simulate an additional scenario with smaller sample size and variance (n = 20 and σ = 1); the results are included in the Supplement (Supplementary Figure 1 and Supplementary Table 2). Figure 2 shows the number of true DE genes detected among the top genes ranked by p-value for each method. BCMC is more powerful than AW-Fisher and FEM/REM, detecting more true DE genes among the top-ranked genes. Table 1 summarizes the number of true DE genes detected, as well as those detected with the correct weight pattern, in each of the three categories of DE genes identified by each method. BCMC and FEM detect more true DE genes than AW-Fisher for concordant genes. Due to their model restrictions, FEM and REM fail to detect most discordant genes. AW-Fisher is as powerful as BCMC in detecting discordant genes; however, it ignores the directionality of effects and thus assigns incorrect weights to genes with both up-regulated and down-regulated patterns (it fails to distinguish w = −1 from w = 1). Our method detects these discordant DE genes while at the same time assigning the correct weights to categorize them.

REAL DATA APPLICATION

Gene Expression Analysis in Pan-Gynecologic (Pan-Gyn) Studies

We applied our method to the gene expression data of the TCGA Pan-Gyn studies, including high-grade serous ovarian cystadenocarcinoma (OV), uterine corpus endometrial carcinoma (UCEC), cervical squamous cell carcinoma and endocervical adenocarcinoma (CESC), uterine carcinosarcoma (UCS), and invasive breast carcinoma (BRCA) (Berger et al., 2018). Berger et al. (2018) identified 23 genes (e.g., BRCA1, PTEN, TP53) that were mutated at higher frequency across all Pan-Gyn cancers than in non-Gyn cancers, highlighting the similarities across the Pan-Gyn cohort. We focused on 19 of these genes and split the samples in each study into a mutation "carrier" group and a mutation "non-carrier" group depending on whether a subject harbored mutations in at least one of these genes (Supplementary Figure 2).
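A small sketch of the carrier/non-carrier grouping rule just described: a sample is a carrier if it harbors a mutation in at least one of the 19 genes. The binary mutation matrix here is randomly generated purely for illustration; real TCGA mutation calls would replace it.

```python
import numpy as np

# Hypothetical binary mutation matrix: rows = samples, columns = the 19 genes
# (1 = non-silent mutation present). Sample IDs and gene order are placeholders.
rng = np.random.default_rng(1)
mut = rng.binomial(1, 0.05, size=(300, 19))

# A sample is a "carrier" if it harbors a mutation in at least one of the 19 genes
carrier = mut.any(axis=1)
print(f"carriers: {carrier.sum()}, non-carriers: {(~carrier).sum()}")
```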
Since no or very few samples were assigned to the mutation carrier group for UCS (N_mutation = 0) and UCEC (N_mutation = 8), we excluded those two studies and restricted our meta-analysis to three gynecologic cancer types (i.e., number of studies K = 3): OV (mutation carrier vs. non-carrier: 217/90), BRCA (692/408), and CESC (109/197). The purpose is to detect genes that are differentially expressed between the mutation carrier and non-carrier groups and to categorize them according to their cross-study DE patterns. We found that overall survival differed significantly between the two groups for each cancer type (Supplementary Figures 3-5). This implies that biomarkers differentially expressed between these two groups can have potential prognostic value related to mutational processes and serve as optimal therapeutic intervention targets (Helleday et al., 2014; Lawrence et al., 2014). The RNA-seq data in Transcripts Per Million (TPM) values for each cancer type were downloaded from LinkedOmics (Vasaikar et al., 2018). We first merged the three datasets by matching gene symbols and removed genes with mean TPM < 5. A total of 9,900 mRNAs remained and were log2-transformed for analysis. We performed DE analysis with limma (Ritchie et al., 2015) and obtained the p-value and LFC from each of the three studies. We then performed meta-analysis using BCMC and the other methods. All methods detected thousands of DE genes at both q-value cutoffs (for BCMC, the q-value for the dominant pattern was used, so we focused on concordant genes only), which is common in Pan-cancer studies (Table 2). It therefore becomes imperative to partition these DE genes into smaller subsets by cross-study DE patterns before performing downstream analysis. BCMC categorized these DE biomarkers (q < 0.05) into eight groups according to the optimal weight assignments, each displaying a unique expression pattern across the different studies (Figure 3 and Supplementary Table 3). We then merged genes with equal |w*_g| into the same group (i.e., genes with w*_g = (0, 1, 1) and those with w*_g = (0, −1, −1) are merged into the same group, allowing both up-regulated and down-regulated genes in the same pathway) and performed pathway enrichment analysis on each of the four merged groups using four pathway databases: GO (Ashburner et al., 2000), KEGG (Kanehisa et al., 2017), Oncogenic (Sanchez-Vega et al., 2018), and Reactome (Fabregat et al., 2016). The top 100 pathways enriched by each category have little overlap, partly validating our motivating speculation that the different categories of biomarkers may play different functional roles (Figure 4). For example, the top pathways for |w*_g| = (1, 0, 1) (i.e., DE in OV and CESC but not in BRCA) are mainly involved in cell junction and adhesion related functions (Supplementary Table 4 in Supplementary File 1). The top pathways for |w*_g| = (1, 1, 0) (i.e., DE in OV and BRCA but not in CESC) are mainly involved in immune and defense responses. Figure 5 shows the topology of one example KEGG pathway, "Antigen processing and presentation", enriched by the genes with |w*_g| = (1, 1, 0). The highlighted DE genes show strong DE signals (signed LFC) in OV and BRCA but not in CESC. These genes colocalize and interact with each other as a functional unit inside the pathway.
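A small sketch of how genes can be merged by the absolute optimal-weight pattern |w*_g|, as done above (e.g., (0, 1, 1) and (0, −1, −1) fall into the same group). The input mapping of genes to weight vectors is a hypothetical format, not the BCMC package output.

```python
from collections import defaultdict

def categorize_by_abs_weight(weights):
    """Group genes whose optimal weight vectors share the same |w*| pattern.

    `weights` maps gene -> tuple of optimal weights in {-1, 0, 1}, one per study
    (hypothetical input format).
    """
    groups = defaultdict(list)
    for gene, w in weights.items():
        pattern = tuple(abs(x) for x in w)
        groups[pattern].append(gene)
    return groups

# toy example with three studies (OV, BRCA, CESC order assumed)
toy = {"GENE1": (0, 1, 1), "GENE2": (0, -1, -1), "GENE3": (1, 1, 0)}
for pattern, genes in categorize_by_abs_weight(toy).items():
    print(pattern, genes)
```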
These unique gene sets of different cross-cancer DE patterns and the associated pathways enriched help gain more insights into the homogeneous and heterogenous molecular mechanism of different Gynecologic cancer and assist the development of useful diagnostic and therapeutic strategies common or specific to cancer types. Understanding commonality and difference in drug targets can also guide the drug repurposing strategy in cancer drug development (Li et al., 2021). Integrative Analysis of mRNA, lncRNA, and miRNA in Pan-Kidney Studies We also used BCMC to perform integrative analysis of three different types of transcripts (mRNA, lncRNA, and miRNA) in the TCGA Pan-Kidney cohort including kidney chromophobe (KICH), kidney renal clear cell carcinoma (KIRC), and kidney renal papillary cell carcinoma (KIRP). LncRNA and miRNA have been found playing important regulatory roles on gene expression in kidney cancers (Linehan et al., 2010;Linehan, 2012;Ricketts et al., 2018). The integrative analysis of these multi-omics data provides additional insights into the biological mechanism underlying the multiple histologic subtypes of kidney cancers. We aimed to detect the differentially expressed biomarkers (mRNA, miRNA, or lncRNA) that drive the progression of kidney cancer by comparing samples from early pathologic stage (stage I and II) to late stage (stage III and stage IV) for three kidney cancer types (i.e., number of studies K = 3) and investigating the regulatory relationships among these biomarkers. Number of subjects in the two pathologic stages of each kidney cancer available in mRNA, miRNA and lncRNA expression data were summarized in Supplementary Table 5. We downloaded mRNA (in Reads Per Kilobase of transcript per Million mapped reads or RPKM) and miRNA (in Reads Per Million mapped reads or RPM) sequencing data from LinkedOmics (Vasaikar et al., 2018) and lncRNA sequencing data (in RPM) from The Atlas of Noncoding RNAs in Cancer (TANRIC) (Li et al., 2015) for all the three kidney cancer subtypes. We first merged the three subtypes by matching RNA symbols/IDs. We then separately filtered each of the three types of biomarkers by removing mRNAs with mean RPKM < 5, lncRNAs with mean RPM < 0.1, and miRNAs with mean RPM = 0, followed by log 2 transformation. A total of 15,332 mRNAs, 2,415 lncRNAs and 719 miRNAs remained for analysis. We performed DE analysis by limma (Ritchie et al., 2015) in each study and then meta-analysis to categorize biomarkers according to cross-study DE patterns for each RNA species. For different types of RNA belonging to the same category, we further performed miRNA target gene enrichment analysis and lncRNA-mRNA causal regulatory network analysis to understand their complex interacting relationships in kidney cancer. Both BCMC and AW-Fisher methods detected thousands of differentially expressed biomarkers (including mRNA, lncRNA, and miRNA) at both q-value cutoffs with high proportion of overlap (Table 3). Biomarkers detected by BCMC tend to have both significant p-values and large effect sizes in the studies indicated by optimal weights (Supplementary Figure 6). These biomarkers (q < 0.05) were partitioned into eight categories by different weight patterns (Supplementary Table 6). We merged biomarkers with the same | w * g | into the same group. We focused on the group with | w * g | = (1, 1, 1) to understand the common multi-omics regulatory among all histologic subtypes of kidney cancer and performed downstream analysis. 
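A minimal sketch of the per-RNA-type filtering and log2 transformation described above (mean RPKM < 5 for mRNA, mean RPM < 0.1 for lncRNA, mean RPM = 0 for miRNA). The pseudocount and the pandas data layout are assumptions, not stated in the paper.

```python
import numpy as np
import pandas as pd

def filter_and_log(expr, min_mean, pseudocount=1.0):
    """Drop low-abundance features by row mean, then log2-transform.

    `expr` is a features x samples DataFrame of RPKM/RPM values; the pseudocount
    is an assumption added only to avoid log2(0).
    """
    kept = expr.loc[expr.mean(axis=1) >= min_mean]
    return np.log2(kept + pseudocount)

# usage sketch with hypothetical DataFrames `mrna`, `lncrna`, `mirna`
# mrna_log = filter_and_log(mrna, 5)
# lncrna_log = filter_and_log(lncrna, 0.1)
# mirna_log = filter_and_log(mirna, np.nextafter(0, 1))   # keep features with mean RPM > 0
```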
In the miRNA target gene enrichment analysis, we found that the target gene sets of two DE miRNAs, miR-655 and miR-326, were enriched in the DE gene list of the same group (p < 0.05; Supplementary Table 7 in Supplementary File 1), implying a potential regulatory relationship between different biomarker types that is consistent across all kidney cancer subtypes. The gene ATAD2, targeted by miR-655, was reported as a prognostic marker for kidney disease. In the causal network analysis, we identified two lncRNA-mRNA regulatory networks (Supplementary Figure 8 and Supplementary Table 8). Figure 6 shows the network with two hub lncRNAs; the hub lncRNA ENSG00000267449 and several mRNAs belonging to the ribosomal protein family in the same network were found to be consistently differentially expressed in all three subtypes, implying their potentially joint role in promoting the development of kidney cancers (Zhou et al., 2015; Dolezal et al., 2018). These results demonstrate the power of our method to detect biomarkers of different types in Pan-cancer meta-analysis and to categorize them into functionally relevant groups by DE pattern, which could suggest commonalities and differences in the underlying mechanisms of multiple cancer types.

FIGURE 4 | Venn diagram of the top 100 pathways enriched by each of the four categories [|w*_g| = (0, 1, 1), (1, 0, 1), (1, 1, 0), and (1, 1, 1); corresponding to OV, BRCA, and CESC, respectively] for the Pan-Gyn study example.

FIGURE 5 | Visualization of the topology plot of the KEGG pathway "Antigen processing and presentation" enriched by the genes with |w*_g| = (1, 1, 0) (corresponding to OV, BRCA, and CESC, respectively) for the Pan-Gyn example. Each box that represents a gene is split into three parts to represent the three studies. Colors indicate the signed LFC of the mapped DE genes in the three studies.

FIGURE 6 | One example lncRNA-mRNA regulatory network identified from biomarkers with |w*_g| = (1, 1, 1) (corresponding to KICH, KIRC, and KIRP, respectively) for the Pan-Kidney example. Circles represent lncRNAs (highlighted in green) and diamonds represent mRNAs (highlighted in purple). The arrows indicate the network relationships between lncRNAs and mRNAs.

DISCUSSION

In this paper, we proposed a novel meta-analysis method for candidate biomarker detection in multiple transcriptomic studies that further categorizes biomarkers by concordant patterns as well as by biological and statistical significance across studies. Numerous downstream analysis tools, including pathway analysis and causal network analysis, are applied to each category of biomarkers with either single or multiple types of RNA species. Simulations and real data applications to two Pan-cancer multi-omics studies showed the advantage of our method in classifying differentially expressed biomarkers into classes with unique biological functions and relationships that can be further investigated in future studies. Meta-analysis is a set of statistical methods and tools that combine multiple related studies to improve power and reproducibility over a single study. In recent years, we have witnessed the development of many useful meta-analysis methods applied to genomic studies for different biological purposes (Choi et al., 2003; Shen and Tseng, 2010; Li and Tseng, 2011; Huo et al., 2016, 2020; Kim et al., 2016, 2018; Zhu et al., 2017; Ma et al., 2019; Zeng et al., 2020).
Genomic data are usually high dimensional, and between-study heterogeneity is large due to both technological and cohort effects. In addition to improving power, post-hoc categorization of biomarkers into smaller subsets by cross-study patterns for subsequent analysis is important in genomic meta-analysis. Our meta-analysis method, which aggregates over both p-values and effect sizes, is a fast and intuitive solution for this purpose. Compared to other popular meta-analysis methods that include biomarker categorization, our method considers concordant patterns and biological and statistical significance simultaneously. By calculating the statistics separately for the up-regulated and down-regulated parts, we can detect both concordant genes that have consistent patterns across all studies and discordant genes that are up/down-regulated in some studies while down/up-regulated in others. Both kinds of genes can be of interest in Pan-cancer analysis. For example, high expression of some genes might worsen the prognosis of all cancer types, while high expression of other genes might worsen the prognosis for some cancers but be beneficial for other cancer types. Our method also applies to the scenario in which more than one RNA species is present and proposes to jointly analyze different types of biomarkers within the same category for more biological insight. As more omics data are accumulated in the public domain, similar strategies can be applied for integrative analysis, for example with epigenomic (e.g., DNA methylation, histone modification), proteomic, and metabolomic data. Unique features of each omics data type need to be addressed and will be considered as a future direction for extending our method. Like most other two-stage meta-analysis methods, our method is based on summary measures such as p-values and log2 fold changes from each study. In addition, the method assigns a single optimal weight to each gene without quantifying the uncertainty in the weight assignment. A more comprehensive Bayesian hierarchical model could be applied to raw data and summary measures to better capture the stochasticity and provide soft weight assignments. Our method requires DE genes to be concordant in at least two studies to be detected, consistent with the purpose of meta-analysis in prioritizing more reproducible biomarkers. As the number of studies becomes large, the likelihood of a gene being differentially expressed in only one study decreases. Thus, we expect the method to perform well as the number of studies increases. Since the method relies on summary measures, increasing the number of studies will not materially increase the computational burden. Additionally, the use of more sophisticated parallel computing techniques will improve the speed of the permutation tests. An R package called "BCMC" is available at https://github.com/kehongjie/BCMC to implement our method.

DATA AVAILABILITY STATEMENT

Publicly available datasets were analyzed in this study. These data can be found here: https://github.com/kehongjie/BCMC.

AUTHOR CONTRIBUTIONS

ZY and HK developed the method, performed the analysis, and wrote the manuscript. TM supervised the project and took
9,942.8
2021-07-02T00:00:00.000
[ "Biology" ]
EURASIP Journal on Applied Signal Processing 2005:5, 604–610 c ○ 2005 Hindawi Publishing Corporation An Improved Overloading Scheme for Downlink CDMA An improved overloading scheme is presented for single-user detection in the downlink of multiple-access systems based on OCDMA/OCDMA (O/O). By displacing in time the orthogonal signatures of the two user sets that make up the overloaded system, the cross-correlation between the users of the two sets is reduced. For random O/O with square-root cosine rollo ff chip pulses, the multiuser interference can be decreased by up to 50% (depending on the chip pulse bandwidth) as compared to quasiorthogonal sequences (QOS) that are presently part of the downlink standard of CDMA2000. This reduction of the multiuser interference gives rise to an increase of the achievable signal-to-interference-plus-noise ratio for a particular channel load. INTRODUCTION In any synchronized multiple-access system based on codedivision multiple access (CDMA), the maximum number of orthogonal users equals the spreading factor N. In order to be able to cope with overloading of a synchronized CDMA system (i.e., with a number of users K = N + M : N < K ≤ 2N), several schemes have been proposed in literature.Apart from the trivial random spread system (PN) [1], one can look for signature sets that are "as orthogonal as possible."A popular measure for the quasiorthogonality of a signature set is the total squared correlation (TSC) [2]; signature sets that minimize TSC among all possible signature sets are called Welch bound equality (WBE) sequences [3,4].A third approach consists of the OCDMA/OCDMA (O/O) systems [5,6,7,8,9,10,11,12], where a complete set of orthogonal signature sequences are assigned to N users ("set 1 users"), while the remaining M users ("set 2 users") are assigned another set of orthogonal sequences.The motivation behind the latter proposal is that the interference levels of the users are decreased considerably as compared to other signature sequence sets (e.g., random spreading), since each user suffers from interference caused by the users of the other set only. WBE sequences have some very interesting properties: they maximize both the sum capacity [3,4] and achieve the network capacity [13] for synchronous systems based on CDMA.Unfortunately, two major drawbacks of WBE sequences seriously complicate their application to cellular systems: (1) they give rise to an unscalable system,1 and (2) the chips of the sequences can be taken binary only if K is a multiple of 4 [14,15].As a result, in spite of their superior performance, they are not considered for implementation in cellular systems and they have to be replaced by suboptimal signature sets that do not suffer from the above mentioned drawbacks, for example, the O/O system. 
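As a small illustration of the total squared correlation (TSC) measure and the Welch bound mentioned above, the sketch below compares an orthogonal Walsh-Hadamard signature set (K = N) with a random binary set. The comparison setup is our own illustration and is not taken from the cited works.

```python
import numpy as np
from scipy.linalg import hadamard

def tsc(S):
    """Total squared correlation of unit-norm signature rows S (K x N)."""
    G = S @ S.T                       # Gram matrix of pairwise inner products
    return float(np.sum(G ** 2))      # sum over all (i, j), including i = j

N, K = 16, 16
wh = hadamard(N) / np.sqrt(N)                                 # orthogonal Walsh-Hadamard set
rng = np.random.default_rng(0)
rnd = rng.choice([-1.0, 1.0], size=(K, N)) / np.sqrt(N)       # random binary signatures

print("Welch bound K^2/N :", K ** 2 / N)
print("Walsh-Hadamard TSC:", tsc(wh))    # meets the bound (orthogonal set)
print("Random TSC        :", tsc(rnd))   # typically lies above the bound
```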
In [5,6,7,8,9,10,11], the potential of various O/O types with multiuser detection [16] was investigated, while [12] evaluates the downlink potential with single-user detection of a particular type of O/O: "quasiorthogonal sequences (QOS)."Especially the latter application is of practical interest since alignment of the different user signals is easy to achieve in the downlink (as opposed to the uplink), while single-user detection is the obvious choice for detection at the mobile stations.The QOS, discussed in [12], are obtained by assigning orthogonal Walsh-Hadamard sequences [17] to the set 1 users, while each set 2 user is assigned a Walsh-Hadamard sequence, overlaid by a common bent sequence with the window property [12].These QOS minimize the maximum correlation between the set 1 and set 2 users, which was the incentive to add these QOS to the CDMA2000 standard, so that overloaded systems can be dealt with [18]. Up to now, the chip pulses of all users are perfectly aligned in time in all considered O/O systems.However, an additional degree of freedom between the set 1 and the set 2 users has been overlooked: one can actually displace the set 2 signatures with respect to the set 1 signatures, without destruction of the orthogonality within each set.In this contribution, we investigate the impact of this displacement on the cross-correlation between the set 1 and set 2 users, and the resulting favorable influence on the downlink performance.In Section 2, we present the system model, along with the conventional QOS system.In Section 3, we introduce a new type of O/O with the displaced signature sets, and we compute the cross-correlation among the user signals.In Section 4, we assess the downlink performance in terms of maximum achievable signal-to-interference-plus-noise ratio (SINR) as a function of channel overload.Finally in Section 5, conclusions are drawn and some topics for future research are identified. CONVENTIONAL OCDMA/OCDMA: QOS Consider the downlink of a perfectly synchronized single-cell CDMA system with spreading factor N and K users.Since all signals are generated and transmitted at the same base station, this signal alignment is easy to achieve, and the total transmitted signal S 1 (t) is simply the sum of the signals s k (t) of all users k (k = 1, . .., K): In this expression, (i) and a k (i) are the signature sequence, the (real-valued) amplitude, and the data symbol of user k in the symbol interval i, respectively.We restrict our attention to BPSK modulation (i.e., a k (i) ∈ {1, −1}) with normalized binary signature sequences β (i) √ N} N .The extension to QPSK modulation and complex-valued signatures is straightforward; (ii) p c (t) is a real, unit-energy chip pulse.We restrict our attention to a square-root Nyquist pulse with bandwidth (1 + α)/T c , and chip period T c [19].The associated pulse, obtained after matched filtering of p c (t), is a Nyquist pulse φ c (t) with rolloff α.Note that φ c ( jT c ) = δ j . We focus on a single-path channel with complex-valued gain γ k (i) from the base station to user k in symbol interval i.In order to obtain a decision statistic z k (i) for the detection of databit a k (i), the received signal is applied to a matched filter p c (−t), followed by a sampling at the chip rate on the time instants (iN + j)T c ( j = 0, . 
.., N − 1).The resulting samples are correlated with the signature sequence β (i) k , and finally a normalization is carried out by multiplying the result by γ * k (i)/|γ k (i)|2 .Since φ c (t) is a Nyquist pulse, and due to the perfect alignment of the user signals, the observable 2) , where σ 2 k is the power spectral density of the noise at the receiver input of user k. In order to assure that all users meet a predefined qualityof-service constraint, power control is applied in the downlink.Power control can be achieved by updating the amplitudes of the users once every L symbol intervals, based on (an estimate of) the variance of the sum of noise and interference over a time span of L symbol intervals, if LNT c is smaller than the minimum coherence time of the channels between the base station and the users. 2To meet the quality-of-service constraint for user k, it is necessary that this variance remains lower than the predefined threshold.Since the channel gain is essentially constant over these L symbol intervals, the variance of interest is given by (k = 1, . .., K) where x denotes the (constant) value of x over the considered time span of L symbols that starts at time index i 1 , and is the cross-interference between user k and user j over the considered time interval.The variance μ2 k is related to the signal-to-interference-plus-noise ratio SINR k of user k by SINR k = Ã2 k / μ2 k .From expression (3), it is obvious that the squared crosscorrelations between the signatures of the users should be as small as possible in order to restrict the multiuser interference (MUI).As long as K remains smaller than N, taking orthogonal signatures, as is done in IS-95 [22] and CDMA2000 [18], is optimum because they yield ρ k, j = 0 for k = j.If K exceeds N (i.e., an overloaded system), the O/O system tries to eliminate as much intracell interference as possible, by taking orthogonal signatures for N users (set 1 users), and by taking another set of orthogonal signatures for the remaining M users (set 2 users).This eliminates the interference of (N − 1) and (M − 1) users in the detection of the set 1 and set 2 users, respectively.Indexing the set 1 users as the first N users and the set 2 users as the last M users, (3) turns into ( The signatures of the set 1 users span the complete vector space of dimension N, implying that From this expression, it is immediately seen that the maximal correlation between the set 1 and the set 2 users will be minimized if and only if all of these correlations are equal to 1/N: In CDMA2000 [18], condition (7) is met (approximately) by means of QOS, where the signatures of the users do not change from one symbol interval to the next: the signatures of the set 1 users are the Walsh-Hadamard sequences WH (k) N (k = 1, . . 
., N) of order N, and the signatures of the set 2 users are obtained by overlaying the same Walsh-Hadamard sequences by means of a (quasi-)bent sequence Q ∈ {1, −1} N [12]: For N = 2 2n , this signature set has the property that and ( 7) is met with equality.For N = 2 2n+1 , however, (7) cannot be met with equality (for binary signature sequences), and the best one can do is to use a quasibent sequence as scrambling sequence, so that IMPROVED OCDMA/OCDMA In the conventional O/O systems, there is an additional degree of freedom that has not been exploited.Indeed, in order to make sure that the first N user signals are orthogonal, they have to be perfectly aligned in time, and the same is true for the set 2 user signals.However, the set 1 user signals do not need to be aligned with the set 2 user signals to provide this property.Hence, the displacement τ (τ ∈ [0, NT c )) of the set 2 users with respect to the set 1 users is an additional degree of freedom.Adopting the same notation s k (t) as in (1) for the signal of user k, the total transmitted signal S 2 (t) is now given as We focus on random O/O [8], where the signatures of the set 1 and set 2 users are obtained by overlaying the Walsh-Hadamard vectors in every symbol interval i with the respective scrambling sequences P (i) 1 and P (i) 2 that are chosen completely at random and independently out of {1, −1} N in every symbol interval: The decision statistics z k (i) for the detection of databit a k (i) are obtained by applying the received signal to a matched filter p c (−t), sampling at the chip rate on the instants (iN + j)T c (set 1 users) or (iN + j)T c + τ (set 2 users), followed by a correlation with the signature sequence β (i) k and a normalization.We consider the contribution z j k (i) of set-2 user j to the decision variable z k (i) of set-1 user k in symbol interval i: where ρ (i,s) k, j (τ) is defined as Hence, the cross-interference Rk,j between set-1 user k and set-2 user j is given by and is an approximately Gaussian random variable.However, its expected value is multiplied by a factor λ ≤ 1 as compared to QOS (see Appendix A), implying a reduction of the average cross-interference: The variance ψ2 k, j = E[( Rk,j − λ/N) 2 ] of Rk,j is dependent on the length L of the time interval.Figure 1 shows the relative spread ψk,j /E[ Rk,j ] obtained by simulations as a function of L for N = 16, 32, 64, and 128, when p c (t) is a square-root cosine rolloff pulse with rolloff α = 0.25.A detailed observation of the plots of Figure 1 brings to light that the relative spread can be expressed as where Ω (16), Ω(32), Ω(64), and Ω(128) are identified as 0.89, 0.94, 0.97, and 0.99, respectively.So, as compared to the original QOS system, the expected value of the MUI of all set 1 and set 2 users is decreased by a factor 1/λ that is dependent on the rolloff α and on ∆.For ∆ = 0, the chip pulses of all users are perfectly aligned, and λ = 1, whether the symbol boundaries of the set 1 and the set 2 users are aligned or not. The function λ(α, ∆) is the interference function of the pulse p c (t).This interference function was introduced in another context (PN-spread asynchronous communication) in [23].If the excess bandwidth of p c (t) is less than 100%, it was shown in [23] that λ(α, ∆) can be written as a function of the Fourier transform P c ( f ) of p c (t): According to this expression, λ is minimal for ∆ = T c /2.This is illustrated in Figure 2 for a cosine rolloff chip pulse, where λ is plotted as a function of ∆, for α = 0, 0.1, 0.2, . 
.., 1.In Appendix B, we derive the following relationship between the optimal value of λ and the rolloff of the square-root cosine rollof chip pulse: So, it is obvious that we can obtain important decreases in MUI by displacing the chip pulses of the set 2 users by half a chip period as compared to the chip pulses of the set 1 users.This decrease can be up to 50% for α = 1, and amounts to 12.5% for a practical rolloff value of 0.25. DOWNLINK PERFORMANCE In order to assess the performance of the considered O/O signature sets in the downlink, we focus on a single-cell scenario, where each user suffers from intracell interference and white thermal noise.We assume that the cell is circular and that all users are within a range r ∈ [100 m, 1500 m] from the base station, scattered uniformly over this cell.It is assumed that all |γ k | 2 (k = 1, . .., K) are independent, with probability density function (pdf) illustrated in Figure 3.It consists of three contributions [20,21]: where (i) n is the path loss exponent, which is taken here as n = 3; (ii) η log is a lognormally distributed loss term that accounts for the large scale shadowing effects.We take the large-scale fading margin at 10 dB; (iii) η Rayleigh is a loss term that accounts for the small scale Rayleigh fading.If we impose on all users a common quality-of-service constraint, so that the SINR k for all users k has to be at least κ, then, as long as a solution exists, the minimum (optimum) power solution corresponds to the case where all SINR k are exactly κ [24,25]: with (A) k = Ã2 k , (d) k = δk for k = 1, . .., K, and (R) i, j = Ri,j if i and j are from a different orthogonal set, while (R) i, j = 0 if i and j are from the same set.It is well known [24,25,26,27] that (21) has a positive solution A, if and only if the Perron-Frobenius eigenvalue of (κ • R) is smaller than 1.Moreover, solving (21) by means of a Jacobi iteration converges monotonically to the optimal solution, if and only if a positive solution to (21) exists.As a consequence, it is sufficient to try to solve (21) by means of the Jacobi iteration (n = 0, . .., +∞) starting from any positive power vector A (0) .If the iterations converge to a solution, we obtain at the same time the optimum power vector.If the iterations diverge, the quality-ofservice constraint κ cannot be met by any (positive) power vector A for all users at the same time.Note, however, that for any R, (21) always has a positive solution for a range of values κ ∈ (−∞, κ max ], where κ max is the maximum achievable SINR for this setting.We replace R in ( 21) by E[R], which implies ignoring the statistical fluctuation of R. Taking (16) into account, the corresponding value of κ max is given by and will be denoted as the achievable SINR for "static" R. Simulations have been performed for QOS and the improved O/O systems with square-root cosine rolloff chip pulses with rolloff α = 0.25, 0.5, 0.75, and 1, spreading factor N = 64, and a number of users K = 65, . 
.., 101, in order to determine κ max .The time shift between the two orthogonal sets of the improved O/O system is taken as τ = (N +1)•T c /2.The number of symbol intervals with fixed channel and amplitude characteristics is L = 20.The achievability of a particular SINR value κ for all users has been determined by means of the Jacobi iterations of ( 22), for a fixed value of d and A (0) , with a maximum of 50 iterations.For improved O/O, the entries of R are random variables, and we determined the minimum value κ (min) max (K) of κ max that was obtained over a wide range of random realizations of R for K users.In Figure 4, we illustrate the obtained values of κ (min) max (K) for improved O/O and QOS.We also added the upper bound for synchronous CDMA systems (achieved for WBE sequences), along with the achievable SINR corresponding to static R. It is immediately seen that κ (min) max is higher for any of the considered improved O/O systems as compared to QOS.For a practical rolloff value α = 0.25, the maximum achievable SINR is about 0.6 dB higher than for QOS over the entire range of K.This gain in κ max rises to about 1.3 dB, 2 dB, and 3 dB for α = 0.5, 0.75, and 1, respectively.In addition to this, improved O/O with α = 1 and 0.75 performs better than the upper bound for synchronous transmission for a number of users higher than 82 and 91, respectively.For α = 0.5, improved O/O almost achieves the upper bound for K = 100.Further, we note that the achievable SINR is only slightly less than the value corresponding to static R.This indicates that the statistical fluctuation of R has only a minor effect on achievable SINR, which hence can be approximated by the simple expression (23).Alternatively, one can tabulate the maximum acceptable channel overload (K max − N)/N as a function of the required SINR for the considered O/O systems, as is done in Table 1.Once more, we notice the superiority of the improved O/O systems.Although we have presented numerical results for N = 64 only, these results should be representative also for larger values of N, since the distribution of Ri,j is only slightly dependent on N (as illustrated in Figure 1). 
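A rough numerical sketch of the feasibility check described above: the expected cross-interference matrix of an O/O system has entries λ/N between users of different orthogonal sets and zero within a set, and a target SINR κ is achievable exactly when the Jacobi iteration A ← κ(RA + d) converges (equivalently, when the Perron-Frobenius eigenvalue of κR is below one). The value λ = 1 − α/2 is consistent with the reductions quoted in the text for the half-chip displacement (50% at α = 1, 12.5% at α = 0.25); the noise terms and the 3 dB target are placeholders.

```python
import numpy as np

def expected_crosscorr_matrix(N, K, lam):
    """E[R] for an O/O system: lam/N between users of different sets, zero within a set."""
    R = np.zeros((K, K))
    R[:N, N:] = lam / N
    R[N:, :N] = lam / N
    return R

def kappa_feasible(kappa, R, d, iters=500, tol=1e-9):
    """Jacobi iteration A <- kappa * (R @ A + d); it converges iff kappa is achievable."""
    A = np.ones(len(d))
    for _ in range(iters):
        A_new = kappa * (R @ A + d)
        if np.max(np.abs(A_new - A)) < tol:
            return True, A_new
        A = A_new
    # equivalent spectral check: the Perron-Frobenius eigenvalue of kappa*R must be < 1
    return np.max(np.abs(np.linalg.eigvals(kappa * R))) < 1, A

N, K, alpha = 64, 80, 0.25
lam = 1 - alpha / 2            # half-chip displacement (improved O/O), assumed relation
R = expected_crosscorr_matrix(N, K, lam)
d = np.full(K, 0.01)           # hypothetical normalized noise terms
print(kappa_feasible(10 ** (3 / 10), R, d)[0])   # is a 3 dB SINR target achievable?
```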
CONCLUSION AND TOPICS FOR FUTURE RESEARCH

In this paper, we extended the idea of perfectly aligned oversaturated O/O systems to O/O systems where the set 1 and set 2 user signals are displaced in time. We found that such a displacement reduces the multiuser interference between the set 1 and set 2 users by up to 50% (depending on the chip pulse bandwidth) for randomized O/O signature sets with square-root cosine rolloff chip pulses. Hence, as compared to the quasiorthogonal sequences that are presently applied in the downlink of the CDMA2000 system, one can achieve higher feasible SINR values for the improved O/O systems that even go beyond the upper bound for synchronous systems. We conclude that the improved O/O systems are a promising and superior alternative to QOS for channel overloading in the downlink of systems based on CDMA. In this paper, the simulations focused on square-root cosine rolloff chip pulses, but one can expect to be able to decrease the MUI even further by a proper selection of the (band-limited) chip pulses. For instance, it was shown in [23] that the square-root brick-wall rolloff chip pulse minimizes λ(α, T_c/2) over all pulses with excess bandwidth α/T_c. Another possibility is to add an extra degree of freedom to the system: one can actually try to optimize at the same time the chip pulses (possibly different for different user sets) and the time shifts for the users of the two sets. Both topics are left for future research.

Figure 4: κ_max^(min) as a function of K for the considered O/O systems (N = 64).

Table 1: Maximum acceptable channel overload for a required SINR of the considered O/O systems.
4,813.2
2004-01-01T00:00:00.000
[ "Computer Science" ]
T-systems and Y-systems for quantum affinizations of quantum Kac-Moody algebras The T-systems and Y-systems are classes of algebraic relations originally associated with quantum affine algebras and Yangians. Recently the T-systems were generalized to quantum affinizations of a wide class of quantum Kac-Moody algebras by Hernandez. In this note we introduce the corresponding Y-systems and establish a relation between T and Y-systems. We also introduce the T and Y-systems associated with a class of cluster algebras, which include the former T and Y-systems of simply laced type as special cases. The T-systems are generalized by Hernandez [27] to the quantum affinizations of a wide class of quantum Kac-Moody algebras studied in [15,56,34,46,47,26]. In this paper we introduce the corresponding Y-systems and establish a relation between T and Y-systems. We also introduce the T and Y-systems associated with a class of cluster algebras, which include the former T and Y-systems of simply laced type as special cases. It will be interesting to investigate the relation of the systems discussed here to the birational transformations arising from the Painlevé equations in [50,51], and also to the geometric realization of cluster algebras in [5,23]. The organization of the paper is as follows. In Section 2 basic definitions for quantum Kac-Moody algebras U q (g) and their quantum affinizations U q (ĝ) are recalled. In Section 3 the T-systems associated with the quantum affinizations of a class of quantum Kac-Moody algebras by [27] are presented. Based on the result by [27], the role of the T-system in the Grothendieck ring of U q (ĝ)-modules is given (Corollary 3.8). In Section 4 we introduce the Y-systems corresponding to the T-systems in Section 3, and establish a relation between them (Theorem 4.4). In Section 5 we define the restricted version of T-systems and Y-systems, and establish a relation between them (Theorem 5.3). In Section 6 we introduce the T and Y-systems associated with a class of cluster algebras, which include the restricted T and Y-systems of simply laced type as special cases. In particular, the correspondence between the restricted T and Ysystems of simply laced type for the quantum affinizations and cluster algebras is presented (Corollaries 6.20, 6.21, 6.25, and 6.26). Quantum Kac-Moody algebras and their quantum affinizations In this section, we recall basic definitions for quantum Kac-Moody algebras and their quantum affinizations, following [26,27]. The presentation here is a minimal one. See [26,27] for further information and details. Definition 2.1 ( [14,33]). The quantum Kac-Moody algebra U q (g) associated with C is the C-algebra with generators k h (h ∈ h), x ± i (i ∈ I) and the following relations: Quantum affinizations In the following, we use the following formal series (currents): We also use the formal delta function δ(z) = r∈Z z r . 15,34,27]). The quantum affinization (without central elements) of the quantum Kac-Moody algebra U q (g), denoted by U q (ĝ), is the C-algebra with generators x ± i,r (i ∈ I, r ∈ Z), k h (h ∈ h), h i,r (i ∈ I, r ∈ Z \ {0}) and the following relations: When C is of finite type, the above U q (ĝ) is called an (untwisted) quantum affine algebra (without central elements) or quantum loop algebra; it is isomorphic to a subquotient of the quantum Kac-Moody algebra associated with the (untwisted) affine extension of C without derivation and central elements [15,4]. 
(A little confusingly, the quantum Kac-Moody algebra associated with C of affine type with derivation and central elements is also called a quantum affine algebra and denoted by U q (ĝ).) When C is of affine type, U q (ĝ) is called a quantum toroidal algebra (without central elements). In general, if C is not of finite type, U q (ĝ) is no longer isomorphic to a subquotient of any quantum Kac-Moody algebra and has no Hopf algebra structure. The category Mod(U q (ĝ)) Let U q (h) be the subalgebra of U q (ĝ) generated by k h (h ∈ h). The following theorem is a generalization of the well-known classification of the simple finitedimensional modules of the quantum affine algebras by [7,8]. Theorem 2.5 ([46,47,26]). We have L(λ, Ψ) ∈ Mod(U q (ĝ)) if and only if there is an I-tuple of polynomials (P i (u)) i∈I , We call (P i (u)) i∈I the Drinfeld polynomials of L(λ, Ψ). In the case of quantum affine algebras, λ is also completely determined by the Drinfeld polynomials by the condition λ(α ∨ i ) = deg P i . This is not so in general. (b) Choose any λ satisfying (λ, α ∨ i ) = 0 (i ∈ I) and also set P i (u) = 1 (i ∈ I). The corresponding module L(λ, Ψ) is written as L(λ). The module L(λ) is one-dimensional; it is trivial in the case of the quantum affine algebras. Kirillov-Reshetikhin modules The following is a generalization of the Kirillov-Reshetikhin modules of the quantum affine algebras studied by [37,38,3,8,43,9]. Definition 2.7 ( [27]). For any i ∈ I, m ∈ N, and α ∈ C × , set the polynomials (P j (u)) j∈I as and P j (u) = 1 for any j = i. [27] in order to make the identification to the forthcoming T-systems a little simpler. T-systems Throughout Sections 3-5, we restrict our attention to a symmetrizable generalized Cartan matrix C satisfying the following condition due to Hernandez [27]: where D = diag(d 1 , . . . , d r ) is the diagonal matrix symmetrizing C. In this paper, we say that a generalized Cartan matrix C is tamely laced if it is symmetrizable and satisfies the condition (3.1). As usual, we say that a generalized Cartan matrix C is simply laced if C ij = 0 or −1 for any i = j. If C is simply laced, then it is symmetric, d a = 1 for any a ∈ I, and it is tamely laced. With a tamely laced generalized Cartan matrix C, we associate a Dynkin diagram in the standard way [35]: For any pair i = j ∈ I with C ij < 0, the vertices i and j are connected by max{|C ij |, |C ji |} lines, and the lines are equipped with an arrow from j to i if C ij < −1. Example 3.1. (1) Any Cartan matrix of finite or affine type is tamely laced except for types A (1) 1 and A (2) 2ℓ . (2) The following generalized Cartan matrix C is tamely laced: The corresponding Dynkin diagram is For a tamely laced generalized Cartan matrix C, we set an integer t by t = lcm(d 1 , . . . , d r ). For a, b ∈ I, we write a ∼ b if C ab < 0, i.e., a and b are adjacent in the corresponding Dynkin diagram. Let U be either 1 t Z, the complex plane C, or the cylinder C ξ := C/(2π √ −1/ξ)Z for some ξ ∈ C \ 2π √ −1Q, depending on the situation under consideration. The following is a generalization of the T-systems associated with the quantum affine algebras [43]. Definition 3.2 ([27] ). For a tamely laced generalized Cartan matrix C, the unrestricted Tsystem T(C) associated with C is the following system of relations for a family of variables where T and E[x] (x ∈ Q) denotes the largest integer not exceeding x. 1. This is a slightly reduced version of the T-systems in [27, Theorem 6.10]. See Remark 3.7. 
The same system was also studied by [54] when C is of affine type in view of a generalization of discrete Toda field equations. 2. More explicitly, S For example, for d b = 1, and so on. 3. The second terms in the right hand sides of (3.2) and (3.3) can be written in a unified way as follows [27]: T-system and Grothendieck ring Let C continue to be a tamely laced generalized Cartan matrix. The T-system T(C) is a family of relations in the Grothendieck ring of modules of U q (ĝ) as explained below. 1. For a pair of ℓ-highest weight modules V 1 , V 2 ∈ Mod(U q (ĝ)), there is an ℓ-highest weight module V 1 * f V 2 ∈ Mod(U q (ĝ)) called the fusion product. It is defined by using the udeformation of the Drinfeld coproduct and the specialization at u = 1. Therefore, the Grothendieck ring R(C) of the modules in Mod(U q (ĝ)) having finite compositions is well defined, where the product is given by * f . Let R ′ (C) be the quotient ring of R(C) by the ideal generated by all L(λ, Ψ) − L(λ ′ , Ψ)'s. In other words, we regard modules in R(C) as modules of the subalgebra of U q (ĝ) generated by Proof . It follows from [27,Corollary 4.9] that R ′ (C) is generated by the fundamental modules L(Λ i , Ψ). The q-character morphism χ q defined in [27] induces an injective ring homomorphism We set C t log q := C/(2π √ −1/(t log q))Z, and introduce alternative notation W In terms of the Kirillov-Reshetikhin modules, the structure of R ′ (C) is described as follows: (1) The family W generates the ring R ′ (C). (3) The proof is the same with that of [32, Theorem 2.8] by generalizing the height of T As a corollary, we have a generalization of [32, Corollary 2.9] for the quantum affine algebras: Y-systems 4.1 Y-systems Definition 4.1. For a tamely laced generalized Cartan matrix C, the unrestricted Y-system Y(C) associated with C is the following system of relations for a family of variables 0 (u) −1 = 0 if they occur in the right hand sides in the relations: Remark 4.2. 1. The Y-systems here are formally in the same form as the ones for the quantum affine algebras [42]. However, p for Z p,m (u) here may be greater than 3. and so on. There are p 2 factors in Z Relation between T and Y-systems Let us write both the relations (3.2) and (3.3) in T(C) in a unified manner where M (a) m (u) is the second term of the right hand side of each relation. Define the transposition Proof . This can be proved by case check for d a > 1 and d a = 1. For any commutative ring R over Z with identity element, let R × denote the group of all the invertible elements of R. Theorem 4.4. Let R be any commutative ring over Z with identity element. (2) We modify the proof in the case of quantum affine algebras [32, Theorem 2.12] so that it is applicable to the present situation. Here, we concentrate on the case U = 1 t Z. The modification of the proof for the other cases U = C and C/(2π √ −1/ξ)Z is straightforward. Case 1. When C is simply laced. Suppose that C is simply laced. Thus, d a = 1 for any a ∈ I and t = 1. For any Y satisfying Y(C), we construct a desired family T in the following three steps: Step 1. Choose arbitrarily T . (4.9) Repeat it and define T (a) 1 (u) (a ∈ I) for the rest of u ∈ Z by (4.9). where T (a) 0 (u) = 1. Claim. The family T defined above satisfies the following relations in R for any a ∈ I, m ∈ N, u ∈ Z: . (4.13) Proof of Claim. This ends the proof of Claim. Now, taking the inverse sum of (4.12) and (4.13), we obtain (4.3). Therefore, T satisfies the desired properties. Case 2. When C is nonsimply laced. 
Suppose that C is nonsimply laced. Then, in Step 2 above, the factor M da (u) for a and b with a ∼ b, d a > 1, and d b = 1. Therefore, Step 2 should be modified to define these terms together. For any Y satisfying Y(C), we construct a desired family T in the following three steps: Step 1. Choose arbitrarily T (a) Substep 1. (4.14) where T (a) (2) There is a (not unique) ring homomorphism There is another variation of Theorem 4.4. Let T × (C) (resp. Y × (C)) be the multiplicative subgroup of all the invertible elements of T(C) (resp. Y(C)). Clearly, T × (C) is generated by T (1) There is a multiplicative group homomorphism . (2) There is a (not unique) multiplicative group homomorphism Restricted T and Y-systems Here we introduce a series of reductions of the systems T(C) and Y(C) called the restricted T and Y-systems. The restricted T and Y-systems for the quantum affine algebras are important in application to various integrable models. We define integers t a (a ∈ I) by t a = t d a . Proof . The calculation is formally the same as the one for Theorem 4.4. We have only to take care of the boundary term which formally appears in the right hand sides of (4.1) and (4.2) for m = t a ℓ − 1. Since , d a = 1, the right hand side of (5.1) is 1 under the boundary condition T T and Y-systems from cluster algebras In this section we introduce T and Y-systems associated with a class of cluster algebras [17,19] by generalizing some of the results in [19,29,10,36,32,11]. They include the restricted T and Y-systems of simply laced type in Section 5 as special cases. Systems T(B) and Y ± (B) We warn the reader that the matrix B in this section is different from the one in Section 2 and should not be confused. Definition 6.1 ([16]). An integer matrix B = (B ij ) i,j∈I is skew-symmetrizable if there is a diagonal matrix D = diag(d i ) i∈I with d i ∈ N such that DB is skew-symmetric. For a skewsymmetrizable matrix B and k ∈ I, another matrix B ′ = µ k (B), called the mutation of B at k, is defined by The matrix µ k (B) is also skew-symmetrizable. The matrix mutation plays a central role in the theory of cluster algebras. We impose the following conditions on a skew-symmetrizable matrix B: The index set I admits the decomposition I = I + ⊔ I − such that if B ij = 0, then (i, j) ∈ I + × I − or (i, j) ∈ I − × I + . (6.2) Furthermore, for composed mutations µ + = i∈I + µ i and µ − = i∈I − µ i , Note that µ ± (B) does not depend on the order of the product due to (6.2). Lemma 6.2. Under the condition (6.2), the condition (6.3) is equivalent to the following one: For any i, j ∈ I + , The same holds for i, j ∈ I − . For Y-systems, it is natural to introduce two kinds of systems. Theorem 6.6. Let R be any commutative ring over Z with identity element. For any family Then, Y satisfies Y + (B). Similarly, define a family Y by Proof . By Remark 6.5, it is enough to prove the first statement only. Then, Note that for j in the right hand side of (6.5), j ∈ I ∓ by (6.2). By putting (6.6)-(6.8) into (6.5), the right hand side of (6.5) is which is the left hand side of (6.5). Examples Let us present some examples of T(B) and Y ± (B). Definition 6.7. A symmetrizable generalized Cartan matrix C = (C ij ) i,j∈I is said to be bipartite if the index set I admits the decomposition I = I + ⊔ I − such that if C ij < 0, then (i, j) ∈ I + × I − or (i, j) ∈ I − × I + . Example 6.8 ( [17,19]). Let C be a bipartite symmetrizable generalized Cartan matrix, which is not necessarily tamely laced. Define the matrix B = B(C) by otherwise. 
(6.9) The rule (6.9) is visualized in the diagram: Then, B is skew-symmetrizable and satisfies the conditions (6.2) and (6.3). The corresponding T(B) and Y − (B) are given by where j ∼ i means C ji < 0. These systems are studied in [17,19]. When C is bipartite and simply laced, they coincide with T 2 (C) and Y 2 (C) (for U = Z) in Section 5. When C is bipartite, tamely laced, but nonsimply laced, they are different from T 2 (C) and Y 2 (C), because the latter include factors depending on u + α (α = 0) in the right hand sides. Example 6.9 (Square product [29,10,36,32,11]). Let C = (C ij ) i,j∈I and C ′ = (C ′ i ′ j ′ ) i ′ ,j ′ ∈I ′ be a pair of bipartite symmetrizable generalized Cartan matrices with I = I + ⊔ I − and I ′ = I ′ + ⊔ I ′ − , which are not necessarily tamely laced. For i = (i, i ′ ) ∈ I × I ′ , let us write i : The rule (6.10) is visualized in the diagram: Since it generalizes the square product of quivers by [36], we call the matrix B the square product B(C) B(C ′ ) of the matrices B(C) and B(C ′ ) of (6.9). Proof . Let diag(d i ) i∈I and diag(d ′ i ) i∈I ′ be the diagonal matrices skew-symmetrizing C and C ′ , respectively, and let D = diag(d i d ′ i ′ ) (i,i ′ )∈I×I ′ . Then, the matrix DB is skew-symmetric. The condition (6.2) is clear from (6.11). To show (6.4), suppose, for example, that i = (i, i ′ ) : (++) and j = (j, j ′ ) : (−−). Then, B ik B kj = 0 only for k = (i, j ′ ) or k = (j, i ′ ); furthermore, B ik , B kj ≥ 0 (resp. ≤ 0) for k = (i, j ′ ) (resp. k = (j, i ′ )), and B ik B kj = C ij C ′ i ′ j ′ for both. Thus, (6.4) holds. The other cases are similar. The corresponding T(B) and Y + (B) are given by where j ∼ i and j ′ ∼ i ′ means C ji < 0 and C ′ j ′ i ′ < 0, respectively. These systems slightly generalize the ones studied in connection with cluster algebras [29,10,36,32,11]. When C is bipartite and simply laced, and C ′ is the Cartan matrix of type A ℓ−1 with I ′ + = {1, 3, . . . } and I ′ − = {2, 4, . . . }, T(B) and Y + (B) coincide with T ℓ (C) and Y ℓ (C) in Section 5. (The choice of I ′ ± is not essential here.) As in Example 6.8, when C is bipartite, tamely laced, but nonsimply laced, and C ′ is the Cartan matrix of type A ℓ−1 , they are different from T ℓ (C) and Y ℓ (C). Example 6.11. Let us give an example which does not belong to the classes in Examples 6.8 and 6.9. Let B = (B ij ) i,j∈I with I = {1, . . . , 7} be the skew-symmetric matrix whose positive components are given by The matrix B is represented by the following quiver: With I + = {2, 3} and I − = {1, 4, 5, 6, 7}, the matrix B satisfies the conditions (6.2) and (6.3). T(B) and Y ± (B) as relations in cluster algebras The systems T(B) and Y ± (B) arise as relations for cluster variables and coefficients, respectively, in the cluster algebra associated with B. See [19,36] for definitions and information for cluster algebras. T(B) and cluster algebras We start from T-systems. Let ε : I → {+, −} be the sign function defined by ε(i) = ε for i ∈ I ε . For (i, u) ∈ I × Z, we set the 'parity conditions' P + and P − by where we identify + and − with 1 and −1, respectively. For ε ∈ {+, −}, define T • (B) ε to be the subring of T • (B) generated by those T i (u) with (i, u) satisfying P ε . Then, we have Let A(B, x) be the cluster algebra with trivial coefficients, where (B, x) is the initial seed [19]. 
We set x(0) = x and define clusters x(u) = (x i (u)) i∈I (u ∈ Z) by the sequence of mutations (6.12) The ring A T (B, x) is no longer a cluster algebra in general, because it is not closed under mutations. Y ε (B) and cluster algebras We present a parallel result for Y-systems. A semifield (P, +) is an abelian multiplicative group P endowed with a binary operation of addition + which is commutative, associative, and distributive with respect to the multiplication in P [19,31]. (Here we use the symbol + instead of ⊕ in [19] to make the description a little simpler.) Definition 6.16. For ε ∈ {+, −} and a skew-symmetrizable matrix B satisfying the conditions (6.2) and (6.3), letỸ ε (B) be the semifield with generators Y i (u) (i ∈ I, u ∈ Z) and the relations Y ε (B). LetỸ • ε (B) be the multiplicative subgroup ofỸ ε (B) generated by Y i (u) and 1 + Y i (u) (i ∈ I, u ∈ Z). (We use the notationỸ to distinguish it from the ring Y in Definition 4.5.) Let A(B, x, y) be the cluster algebra with coefficients in the universal semifield Q sf (y), where (B, x, y) is the initial seed [19]. To make the setting parallel to T-systems, we introduce the coefficient group G(B, y) associated with A(B, x, y), which is the multiplicative subgroup of the semifield Q sf (y) generated by all the elements y ′ i of coefficient tuples of A(B, x, y) together with 1 + y ′ i . We set x(0) = x, y(0) = y and define clusters x(u) = (x i (u)) i∈I and coefficient tuples y(u) = (y i (u)) i∈I (u ∈ Z) by the sequence of mutations (6.13) Definition 6.17. The Y-subgroup G Y (B, y) of G(B, y) associated with the sequence (6.13) is the multiplicative subgroup of G(B, y) generated by y i (u) and 1 + y i (u) (i ∈ I, u ∈ Z). Proof . This follows from the exchange relation of a coefficient tuple y by the mutation µ k [19]: Theorem 6.19. The groupỸ • Proof . Let us show thatỸ • + (B) + ≃ G Y (B, y). Let f : Q sf (y) →Ỹ + (B) be the semifield homomorphism defined by Then, due to Lemma 6.18 (2), it can be shown by induction on ±u that we have f : y i (u) → Y i (u) for any (i, u) satisfying P + , and f : y i (u) → Y i (u − 1) −1 for any (i, u) satisfying P − . By the restriction of f , we have a multiplicative group homomorphism f ′ : G Y (B, y) →Ỹ • + (B) + . On the other hand, by Lemma 6.18 (2) again, a semifield homomorphism g :Ỹ + (B) → Q sf (y) is defined by Y i (u) → y i (u) ±1 for (i, u) satisfying P ± . By the restriction of g, we have a multiplicative group homomorphism g ′ :Ỹ • y). Then, f ′ and g ′ are the inverse to each other by Lemma 6.18 (1). Therefore,Ỹ • y). The other cases are similar. Restricted T and Y-systems and cluster algebras: simply laced case The restricted T and Y-systems, T ℓ (C) and Y ℓ (C), introduced in Section 5 are special cases of T (B) and Y ± (B), if C is simply laced. Therefore, they are also related to cluster algebras. Bipartite case Suppose that C is a simply laced and bipartite generalized Cartan matrix. Then, we have already seen in Examples 6.8 and 6.9 that T ℓ (C) and Y ℓ (C) coincides with T (B) and Y ε (B) for some B and ε. Therefore, we immediately obtain the following results as special cases of Theorems 6.15 and 6.19. The slight discrepancy of the signs between ℓ = 2 and ℓ ≥ 3 is due to the convention adopted here and not an essential problem. Nonbipartite case Let us extend Corollaries 6.20 and 6.21 to a simply laced and nonbipartite generalized Cartan matrix C. The Cartan matrix of type A (1) 2r is such an example. 
In general, a generalized Cartan matrix C is bipartite if and only if there is no odd cycle in the corresponding Dynkin diagram. Without loss of generality we can assume that C is indecomposable; namely, the corresponding Dynkin diagram is connected. Definition 6.22. Let C = (C ij ) i,j∈I be a simply laced, nonbipartite, and indecomposable generalized Cartan matrix. We introduce an index set I # = I # + ⊔ I # − , where I # + = {i + } i∈I and I # − = {i − } i∈I , and define a matrix C # = (C # αβ ) α,β∈I # by 2, α = β, C ij , (α, β) = (i + , j − ) or (i − , j + ), 0, otherwise. We call C # the bipartite double of C. It is clear that C # is a simply laced and indecomposable generalized Cartan matrix; furthermore, it is bipartite with I # = I # + ⊔ I # − . Example 6.23. Let C be the Cartan matrix corresponding to the Dynkin diagram in the left hand side below. Then, C # is the Cartan matrix corresponding to the Dynkin diagram in the right hand side. Here is another example. Proposition 6.24. Let C = (C ij ) i,j∈I be a simply laced, nonbipartite, and indecomposable generalized Cartan matrix, and C # be its bipartite double. (2) For ℓ ≥ 3, T • ℓ (C) is isomorphic to A T (B, x) with B = B(C # ) B(C ′ ) by the correspondence T Corollary 6.26. Let C and C # be the same ones as in Proposition 6.24. Concluding remarks One can further extend Corollaries 6.20, 6.21, 6.25, and 6.26 to the tamely laced and nonsimply laced case by introducing T and Y-systems associated with another class of cluster algebras 1 . Therefore, we conclude that all the restricted T and Y-systems associated with tamely laced generalized Cartan matrices introduced in Section 5 are identified with the T and Y-systems associated with a certain class of cluster algebras. The following question is left as an important problem: What are the T and Y-systems associated with nontamely laced symmetrizable generalized Cartan matrices?
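As a small computational aside (not part of the original text), the sketch below illustrates two constructions discussed above: the matrix mutation of Definition 6.1 and the bipartite double of Definition 6.22. The displayed mutation formula did not survive extraction here, so mutate() uses the standard Fomin-Zelevinsky rule for skew-symmetrizable matrices, which may differ in sign convention from the one intended in Definition 6.1; bipartite_double() follows the explicit rule stated in Definition 6.22.

def mutate(B, k):
    """Standard matrix mutation mu_k(B) of a skew-symmetrizable integer matrix B."""
    n = len(B)
    Bp = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == k or j == k:
                Bp[i][j] = -B[i][j]
            else:
                # b_ij + (|b_ik| b_kj + b_ik |b_kj|) / 2; the bracket is always even
                Bp[i][j] = B[i][j] + (abs(B[i][k]) * B[k][j] + B[i][k] * abs(B[k][j])) // 2
    return Bp

def bipartite_double(C):
    """Bipartite double C# of Definition 6.22: indices 0..n-1 play the role of i+,
    indices n..2n-1 the role of i-; off-diagonal entries couple opposite signs only."""
    n = len(C)
    Cs = [[0] * (2 * n) for _ in range(2 * n)]
    for a in range(2 * n):
        Cs[a][a] = 2
    for i in range(n):
        for j in range(n):
            if i != j:
                Cs[i][n + j] = C[i][j]   # (i+, j-) entry
                Cs[n + i][j] = C[i][j]   # (i-, j+) entry
    return Cs

# Example: the Cartan matrix of the odd 3-cycle (type A_2^(1)) is nonbipartite;
# its bipartite double is a 6-by-6 bipartite generalized Cartan matrix.
C = [[2, -1, -1], [-1, 2, -1], [-1, -1, 2]]
for row in bipartite_double(C):
    print(row)

# Example: one mutation of a skew-symmetric exchange matrix of type A_3.
B = [[0, 1, 0], [-1, 0, -1], [0, 1, 0]]
print(mutate(B, 1))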
6,368.6
2009-09-25T00:00:00.000
[ "Mathematics" ]
DETERMINATION OF INITIAL COMMUTATION ANGLE OFFSET OF PERMANENT MAGNET SYNCHRONOUS MACHINE-AN OVERVIEW AND SIMULATION Identification of initial commutation angle belongs to the basic routines at the commissioning of industrial drives with permanent magnet synchronous machine. This paper deals with problem of commutation angle offset determination. Two methods are described and simulated in detail. The first method is based on application of a DC voltage and the second one is based on the use of the current controllers. Both methods use rotor movement to reach defined position. Simulation and experimental results are included providing mutual comparison of these methods. DETERMINATION OF INITIAL COMMUTATION ANGLE OFFSET OF PERMANENT MAGNET SYNCHRONOUS MACHINE -AN OVERVIEW AND SIMULATION 1. INTRODUCTION Permanent magnet synchronous machine (PMSM) is highly dynamic motor with compact dimensions, mostly used in robotic applications or machine tools.There are several common ways to control PMSM; the most common are v/f control in open loop; field oriented control (FOC) and direct torque control (DTC) in closed loop resp.However, FOC is the most popular in applications requiring high dynamics and precision.Before they are being used in a standard operation, industrial PMSM drives must successfully pass the commissioning routines. During these routines the following parameters are identified and calculated: • commutation angle (CA) offset, • machine parameters, • parameters of the current controllers, • parameters of the speed controller, • parameters of the position controller. CA is the angle between the vector of stator magnetic flux position and the vector of magnetic flux created by permanent magnets.If the industrial drive is used for the first time, these vectors are arbitrarily (randomly) placed, thus the angle between them is unknown.Identification of CA is crucial for the FOC of PMSM because the value of CA is necessary for the Park transformation [5]- [8].Furthermore, if the CA is kept close to the 90 degrees (electrically), PMSM provides the maximum torque at given stator current. This paper deals with the determination of CA using data from position sensor.If absolute position sensor is used and the value of CA is once obtained, the actual motor electrical position is stored into the EEPROM memory as CA offset and for the next operation this value is loaded from memory, so CA is known immediately.If the position sensor does not have absolute tracks, determination of CA has to be repeated every time during commissioning. There are various methods, how to determine the CA.Some of them are briefly described in [1], [2], [3] or [4].In this paper, two approaches of CA determination are completely described, modelled in Simulink environment, and experimentally verified.Results of mutual comparison of presented approaches can be found in conclusion. 
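Since the introduction emphasizes that the commutation angle enters the control through the Park transformation, a minimal sketch of the standard amplitude-invariant Clarke and Park transforms is given below. This is generic textbook FOC code, not the authors' implementation; scaling and sign conventions vary between drives.

import math

def clarke(ia, ib, ic):
    """Three-phase currents -> stationary alpha-beta components (amplitude invariant)."""
    i_alpha = (2.0 / 3.0) * (ia - 0.5 * ib - 0.5 * ic)
    i_beta = (2.0 / 3.0) * (math.sqrt(3.0) / 2.0) * (ib - ic)
    return i_alpha, i_beta

def park(i_alpha, i_beta, theta):
    """Stationary alpha-beta components -> rotor-frame d-q components for angle theta."""
    i_d = i_alpha * math.cos(theta) + i_beta * math.sin(theta)
    i_q = -i_alpha * math.sin(theta) + i_beta * math.cos(theta)
    return i_d, i_q

# Example: an error in the commutation angle offset rotates the measured current
# vector in the d-q frame, which is why the offset must be identified.
ia, ib, ic = 1.0, -0.5, -0.5                   # current vector aligned with phase A
print(park(*clarke(ia, ib, ic), theta=0.0))    # ~ (1.0, 0.0)
print(park(*clarke(ia, ib, ic), theta=0.3))    # rotated by the angle error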
MATHEMATICAL MODEL OF PMSM Mathematical model of PMSM is described in rotor reference frame as (see [6], [9]): where R, L d and L q are the per-phase armature resistance and the d-axis and q-axis inductances, respectively; ψ PM is the permanent-magnet flux, p is the number of pole pairs, J is total moment of inertia, T e and T L are electromagnetic and load torque, respectively; ω is rotor angular speed, ϕ is rotor angular position and i d , i q are the d-axis and q-axis component of the stator current, respectively.Considering rotor reference frame, inputs are voltages u d and u q and outputs are currents i d and i q , rotor angular speed ω and rotor position ϕ.However, in the real world, the machine is supplied by 3-phase voltage and phase currents are measured.Therefore, the machine model was coupled with reference frame transformations. DESCRIPTION OF METHODS Methods, described in this section, are based on rotor movement from an unknown initial position to zero position by applied voltage and during this routine, the value of CA is considered to be zero.Afterwards, CA is a signal obtained from position sensor. Method I -Determination of CA by DC voltage The first method is based on application of constant DC link voltage on the machine in such a way, that positive voltage of a DC link is connected to A phase, and negative voltage of DC link is connected to B and C phase, as is depicted in Fig. 1, where vector of a stator current is I S ; vector of a rotor flux by permanent magnets is denoted as ψ; i a , i b , i c are phase currents, αβ is stator 2-phase reference frame; dq is rotor reference frame and ρ is value of CA offset. At the moment of DC voltage application, rotor flux vector is arbitrarily placed, according to actual rotor position Fig. 2(a).Application of DC voltage causes constant current flow in all phases and places stator current vector in α-axis.Therefore, rotor is forced to move towards stator flux caused by stator current vector until both flux vectors are aligned, as in Fig. 2(b) .Thus, CA is now considered as zero.There is also a possibility to align machine electrical position with q-axis by following connection: phase A is floating, phase B is connected to positive voltage and phase C is connected to zero voltage Fig. 3.In this case, electrical position after reference frames alignment is ρ = π/2 as rotor flux lies in β -axis (see Fig. 4(b)).The simulation scheme for this method is in Fig. 8 3.2.Method II -Determination of CA by using current controller The second method uses PI current controllers for d and q current components in closed loop similar to current control loop in FOC.In this case, comparing to previous method, current components are controlled.The difference between normal FOC current loop and this method lies in transformation angle used for current feedback.Here, assumed transformation angle (or CA) remains zero during whole procedure (see simulation scheme in Fig. 9).Desired current d-component value is set to reference value (usually motor nominal current value or less) and current q-component reference is set to zero.Both references cause that stator current vector is placed to α-axis (as transformation angle is held to zero), similarly to Method I. 
Considering an angle between stator current vector and rotor flux vector is no-zero, rotor starts to move until both vectors are aligned and no torque is produced.At this point, actual position is stored as a commutation offset, current setpoints are set to zero and transformation angle for current feedback is switched from the constant value to measured one. It is also possible to set desired current d-component to zero and q-component to the reference value.In this case, stator current vector is placed into β -axis.After the rotor movement, when stator and rotor flux are aligned, the commutation angle is ρ = π/2.In fact, there is no significant difference in using alignment with α or β -axis that clearly determines the right manner.However, from the practical point of view, current d-component is usually held on zero value in FOC motor control, so it is more convenient to maintain this approach. SIMULATION RESULTS Both methods were simulated with same motor model in Matlab/SIMULINK.A standard mathematical model of a PMSM machine in dq reference frame is used, as described in Section 2. It was assumed that PMSM is loaded with static friction during movement, therefore appropriate load torques were applied with Signal builder block.Note that precise simulation of static load torque is a complicated problem beyond the scope of this paper, and so only simplified static load torque value was applied.For this reason, some differences between simulation and experimental results occurred. Simulation results of the Method I is depicted in Fig. 5(a).Value of 1.3 rad was used as a random initial rotor electrical position of PMSM.Voltage vector, that was applied on the machine in the simulation, is placed into αaxis (voltage β -component is zero), thus stator current and flux is in α-axis as well.It can be observed, that at the t = 60 ms, when phase currents become constant, rotor is pulled into the zero position.The same simulation conditions were applied also on the simulation of Method II, depicted in Fig. 5(b). Both described methods were also simulated with voltage applied in β -axis, simulations results are in Fig. 5(c) for Method I and in Fig. 5(d) for Method II.The same simulation conditions were used, the only difference was in initial position.In this case, initial electrical position was set to 0.3 rad.It can be observed, that simulation responses of both methods are similar to responses on voltage applied in α-axis.The final rotor position is ρ = π/2 and rotor flux is aligned with β -axis.Response time of Method II is slightly larger in comparison to Method I, but both methods are still fast enough, considering the fact that CA determination runs only once. APPLICATION ISSUES AND EXPERIMENTAL RESULTS Experimental setup in Fig. 6 consists from PMSM fed by voltage source inverter (VSI), controlled by Texas Instruments floating point DSP.Actual position of PMSM is measured by sine/cosine incremental encoder with 2048 lines per revolution and with absolute tracks.Encoder signals are firstly evaluated by electronic interface to obtain suitable signals for DSP.Parameters of PMSM in experimental setup are in Table 1.Method I, based on DC voltage and described in the paper, is frequently used in industrial power converters, and can be easily implemented on DSP.Therefore, it was experimentally verified.Commutation angle given by described method is used in FOC control algorithm and implemented just on the same experimental setup. 
It is possible to determine the CA offset with Method I by switching on the corresponding transistors in the VSI. However, if the full DC-link voltage is applied directly to a machine with a small phase resistance, it leads to a very high current and damage to the machine is very likely. In the experiments, the voltage applied to the machine by the VSI was modulated with PWM in order to keep it at a permissible value. Thus, the maximum allowed motor current was not exceeded. Reference values for the phase voltages are computed as in the simulation scheme in Fig. 8, i.e. the voltage vector is placed on the α-axis by defining the voltage α-component, whereas the β-component remains zero, and these references are transformed to 3-phase quantities by the inverse Clarke transformation. The main algorithm is described by the simplified flowchart in Fig. 7. Experimental results were compared to the simulation results from Fig. 5(a); the comparison is shown in Fig. 10. It can be observed that the experimental results correspond with the simulation results. Differences are caused by friction and static load torque, which are not considered in the model. CONCLUSION The paper presents an overview and detailed explanation of two solutions to a practical problem of commutation angle offset determination in PMSM control. The presented methods are based on rotor movement with limited current. Therefore, they should only be used without load on the motor output. Otherwise, the results may not be satisfactory and may lead to undesired behaviour such as currents inappropriate to the given load, which causes an unwanted increase of machine temperature and makes it impossible to reach the reference torque. The advantage of Method I is its simplicity and effectiveness. On the other hand, Method II brings controlled currents during the procedure, but it is more complicated and needs to switch the feedbacks after CA offset determination. Fig. 8: Simulation scheme of Method I (determination of CA by constant DC voltage); Fig. 9: Simulation scheme of Method II.
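As a small illustration of the reference generation described above for Method I, the sketch below places a PWM-limited voltage vector on the α-axis and converts it to three phase references with the standard inverse Clarke transformation. It is a generic sketch under assumed per-unit scaling, not the authors' DSP code; u_max is a placeholder limit.

import math

def inverse_clarke(u_alpha, u_beta):
    """Stationary alpha-beta voltage vector -> three phase voltage references."""
    ua = u_alpha
    ub = -0.5 * u_alpha + (math.sqrt(3.0) / 2.0) * u_beta
    uc = -0.5 * u_alpha - (math.sqrt(3.0) / 2.0) * u_beta
    return ua, ub, uc

u_max = 0.1            # assumed per-unit voltage limit keeping the current permissible
ua, ub, uc = inverse_clarke(u_alpha=u_max, u_beta=0.0)
print(ua, ub, uc)      # phase A positive, phases B and C equal and negative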
2,584.4
2014-12-01T00:00:00.000
[ "Engineering" ]
Lactobacillus isolates from healthy volunteers exert immunomodulatory effects on activated peripheral blood mononuclear cells As probiotics in the gut, Lactobacilli are believed to play important roles in the development and maintenance of both the mucosal and systemic immune system of the host. This study was aimed to investigate the immuno-modulatory function of candiate lactobacilli on T cells. Lactobacilli were isolated from healthy human feces and the microbiological characteristics were identified by API 50 CHL and randomly amplified polymorphic DNA (RAPD) assays. Anti-CD3 antibody activated peripheral blood mononuclear cells (PBMCs) were treated by viable, heat-killed lactobacilli and genomic DNA of lactobacilli, and cytokine profiles were tested by ELISA. Isolated lactobacilli C44 and C48 were identified as L. acidophilus and L. paracacei, which have properties of acid and bile tolerance and inhibitor effects on pathogens. Viable and heat-killed C44 and C48 induced low levels of proinflammatory cytokines (TNF-α, IL-6 and IL-8) and high levels of IFN-γ and IL-12p70 in PBMCs. In anti-CD3 antibody activated PBMCs, viable and heat-killed C44 increased Th2 cytokine levels (IL-5, IL-6 and IL-10), and simultaneously enhanced Th1 responses by inducing IFN-γ and IL-12p70 production. Different from that of lactabacillus strains, their genomic DNA induced low levels of IL-12p70, IFN-γ and proinflammatory cytokines in PBMCs with or without anti-CD3 antibody activation. These results provided in vitro evidence that the genomic DNA of strains of C44 and C48, especially C44, induced weaker inflammation, and may be potentially applied for treating allergic diseases. INTRODUCTION Lactobacilli are the major members of probiotics, which are defined by the FAO/WHO as "live microorganisms which when administered in adequate amounts confer a health benefit on the host" [1] . Mem-bers of these genera are commensal bacteria in human intestine and have a long history of safe uses. It is well documented that various lactobacillus species contribute to the health of the host, which is known to modulate immune responses [2] . The most interesting thing is their property in regulating the polarization of naive immune system by skewing it away from T helper 2 (Th2) toward Th1 responses, and thus promoting cell mediated immunity [3] , which will lead to the application in prevention and treatment of allergic diseases. There has been a significant increase in the prevalence of allergic diseases over the past 2 to 3 decades such as atopic dermatitis, atopic eczema, and allergic rhinitis. Among factors possibly contributing to the increase in the prevalence of allergic diseases, modification of the intestinal flora or lack of microbial exposure during childhood has been proposed. Th2cytokines increase the production of IgE and stimulate mast cells and eosinophils, whereas Th1-cytokines, such as interferon (IFN)-γ, may suppress IgE synthesis and stimulate the expression of secretory IgA [4][5][6] . Although there were substantial in vivo evidence from animal models and clinical trials on Th2 cytokine inhibitory effects of lactobacilli [3,7,8] and in vitro evidence that lactobacilli stimulated antigen presenting cells (APC) that trigger T cell polarization [9,10] , few direct cellular witnesses from T cells have been provided in vitro. As probiotics, a candidate lactobacillus should survive passage through the gastrointestinal tract and transiently colonize the host epithelium [1] . 
The most important property for survival is the tolerance of highly acidic conditions present in the stomach and the concentrations of bile salts in the small intestine. Besides, probiotic lactobacilli are able to inhibit, displace and compete with pathogens, and enhance mucosal barrier activity, although these abilities are strain-dependent. In the present study, two lactobacillus strains, L. acidophilus and L. paracasei, were selected from bacteria isolated from healthy volunteers to determine the effects of lactobacilli on T cell polarization in vitro, including the capacity to induce immune responses in peripheral blood mononuclear cells (PBMCs). Isolation and biochemical characterization of candidate lactobacilli Fecal samples were provided by two healthy volunteers who did not take any probiotic-based supplements. A 10 -1 dilution was prepared by sterile Maximal Recovery Diluent (MRD, Oxoid UK) and serial dilutions to 10 -7 were generated. Dilutions including 10 -4 , 10 -5 , 10 -6 and 10 -7 were plated onto Man-Rogosa-Sharpe (MRS) agars (Oxoid, Basingstoke, Hampshire, UK) using modified Miles & Misra plating technique (10×10 μL) and allowed to dry and then were incubated anaerobically at 37°C for 72 hours [13] . All presumptive lactobacillus colonies were subcultured onto MRS agar (Oxoid) and incubated anaerobically at 37°C for 48 hours. All catalase negative, Gram positive bacilli were identified. For experimental use, strains were cultured anaerobically at 37°C in MRS broth (Oxoid) to early stationary phase, using three successive subcultures (1% v/v inoculation; 12-15 h). Carbohydrate fermentation profile was obtained by API 50 CHL tests according to the manufacturer specification (BioMérueux, France). The Apiweb R identification software was used to interprete carbohydrate fermentation results. L. acidophilus and L. paracasei were selected because they are the major lactobacillus strains and may have potential immunoregulatory effects on T cell polarization by inducing IL-12 and IFN-γ secretion from dendritic cells (DC) [11] and macrophages [12] . Molecular characterization of candidate lactobacilli Genomic DNA of candidate lactobacilli were extracted using Bacterial Genomic DNA Kit (Sigma, St. Louis, MO), overnight cultured bacteria were lysed by lysis solution at 55°C for 10 minutes, mixed with ethanol and then centrifuged at 6,500 g for 1 minute, washed and finally eluted with elute solution. DNA genotypes were analyzed using the randomly amplified polymorphic DNA (RAPD) fingerprinting method [14] with RAPD Ready-to-Go beads (GE Healthcare, UK) and random primer (MWG Biotech, Germany) at a final volume of 25 μL, including 2.5 μL of PCR buffer, 5 μL of Q-solution, 2 μL of 25 mmol/L MgCl 2 , 2 μL of 10 mmol/L dNTPs mixture , 1 μL of primer, 1 μL of template DNA and 1 unit of Taq DNA polymerase. The PCR was run for 5 minutes at 94°C, 5 minutes at 36°C and 5 minutes at 72°C, 35 cycles of 94°C for1 minute, 36°C for 1 minute, 72°C for 2 minutes, and a final extension of 72°C for 6 minutes. Standard strains of lactobacillus were used as positive controls. Bile and acid tolerance assays Tolerance to bile was assessed by investigating the ability of strains to grow in the presence of different concentrations of bovine bile (Oxiod), as previously described [15] . Freshly cultured lactobacilli (final concentration at 10 6 cell/mL) were inoculated in MRS broth containing 0, 0.3%, 1.0%, 2.0%, and 3.0% (w/ v) bovine bile and incubated anaerobically at 37°C. 
Bacterial growth was monitored on MRS agar (Oxoid) by viable count every 1 hour for 5 hours. Acid tolerance assay of freshly cultured lactobacilli (at a final concentration of 10 6 cell/mL) was performed in MRS broth at different pH values of 2.0, 3.0, 4.0 and incu- PBMCs isolation PBMCs were isolated from peripheral blood of healthy donors. Briefly, after a Histopaque 1077 (Sigma) gradient centrifugation, mononuclear cells were collected, washed and adjusted to 1×10 6 cells/mL in RPMI 1640 medium supplemented with 10% fetal bovine serum (Sigma). Cytokine production from PBMCs PBMCs (2×10 5 cells/mL) were plated in duplicate in a 96-well culture plate and stimulated with overnight cultured viable lactobacilli at a multiple of infection (MOI) of 1 or heat-killed lactobacilli at an MOI of 10 or bacterial genonomic DNA (3 μg/mL) in 200 μL RPMI 1640. C11 (E. faecalis) was used as a Gram-positive opportunistic pathogenic bacteria control. C30 (B. bifidum), the most traditionally used microorganism as probiotic, was selected as a probiotic control because of its promotion of Th1 polarization [17] . LPS (10 μg/mL, Sigma) as an inflammatory positive control, and unstimulated PBMCs were used as a negative control. PBMCs were also activated with anti-CD3 antibody (5 μg/mL) and stimulated with viable lactobacilli or heat-killed lactobacilli or genomic DNA of both stains. After 24 or 72 hours of stimulation at 37°C in an atmosphere of air with 5% CO 2 , the supernatants were collected, clarified by centrifugation and stored at -20°C until cytokine analysis. Cytokine determination by ELISA Cytokine concentrations in culture supernatants were assayed by sandwich ELISA using BD kits (BD Biosciences, USA) for tumor necrosis factor (TNF)-α, interleukin (IL)-10, IL-6, IL-8, IFN-γ, IL-12p70 and transforming growth factor (TGF)-β in the supernatant of cells stimulated for 24 hours and IL-4, IL-5, IL-13 in the supernatant of cells stimulated for 72 hours according to the manufacturer's recommendations. Statistical analysis All the data were expressed as mean±SD. Comparison between groups were made using one-way analysis of variance (one-way ANOVA) and Student's t test. Differences were considered statistically significant for P value < 0.05. RESULTS Categorization and microbiological characters of potential probiotic lactobacilli Colonies of presumptive lactobacilli were small, circular with a smooth edge, and convex with a glis-bated anaerobically at 37°C [16] . Bacterial growth was monitored on MRS agar (Oxoid) by viable count at 1, 2 and 3 hours. Three independent experiments were carried out in triplicate. Pathogen inhibition experiments Cell-free supernatant from overnight cultured lactobacilli was obtained by centrifugation at 2000 rpm for 10 min and filtered to remove the bacteria, then divided into two groups: the high pH group neutralized with NaOH and the low pH group (without neutralization). Normal MRS broth was used as a control. Different pathogens were cultured in LB broth including C2 (E. coli),C5 (S. typhimurium), C11 (E. faecalis), C14 (K. pneumoniae), C25 (S. flexneri) and QC8 (P. aeruginosa) at 37°C overnight and re-cultured in the two groups mentioned above at 37°C and viable count was performed at 5 hours and 24 hours on LB agar. Increasing or reducing percentage of pathogens were calculated using the following formula: (viable bacteria in the high pH group or in the low pH groupviable bacteria in normal MRS group)/viable bacteria in normal MRS group×100%. 
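A minimal sketch of the percentage calculation quoted above; the minus sign between the two viable counts appears to have been lost in extraction, and the counts in the example are made up. Positive values mean the pathogen grew relative to the normal MRS control, negative values mean it was inhibited.

def percent_change(viable_in_supernatant, viable_in_normal_mrs):
    """(treated - control) / control * 100, in percent."""
    return (viable_in_supernatant - viable_in_normal_mrs) / viable_in_normal_mrs * 100.0

# Example with hypothetical counts (CFU/mL): a pathogen inhibited in the low-pH
# supernatant and slightly increased in the neutralized one.
print(percent_change(2.0e5, 1.0e6))   # -80.0  (inhibition)
print(percent_change(1.3e6, 1.0e6))   #  30.0  (growth)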
Preparation of stimulus Lactobacilli were cultured overnight at 37°C in MRS broth, collected by centrifugation, and washed several times with sterile PBS and diluted in RPMI 1640 medium as viable lactobacilli; viable bacteria were killed by heating at 60°C for 1 hour and viable count was carried out to make sure no viable bacteria survived. Bacteria were stored at -80°C as heat-killed lactobacilli. Genomic DNA of lactobacilli was extracted as described before. tening translucent appearance. Gram staining showed slender Gram-positive rods in single cells. On the basis of carbohydrate utilization profiles of the isolates, two strains among the isolated colonies were selected, named and identified: C44, L. acidophilus, the degree of identity (ID) was 94.5%; C48, L. paracacei, the ID was 95.5%. RAPD analysis showed that the strains had the same fingerprints as the standard strains ( Fig. 1). Tolerance to bile salt and low pH The two isolated strains survived in either acidified MRS broth or MRS broth with bile salt ( Table 1 and Table 2). After 3 hours of exposure, C44 and C48 survived at pH 2.0 and even grew in MRS broth with higher pH (pH 3.0, pH 4.0). After 5 hours of exposure, both isolates survived in 3% bile salt and multiplied in lower concentrations (2%, 1%, and 0.03%). Inhibitory effects of C44 and C48 on pathogenic bacteria All pathogens (E. coli, S. typhimurium, E. faecalis, K. pneumoniae, S. Flexneri and P. aeruginosa) were inhibited after incubation for 5 and 24 hours in culture medium of C44 and C48 under low pH (Fig. 2 Table 1 Tolerance of lactobacilli to different bile concentrations (×10 6 CFU/mL) (n = 3) Lactobacilli were cultured in RPMI 1640 at initial concentrations of 10 7 , 10 6 and 10 5 CFU/mL with or without antibiotics. Data of viable count are expressed as mean ±SD for n = 3. P. aeruginosa and S. flexneri after 5 hours of culture. The same results were obtained after 24 hours of culture except a slight increase in the number of E. coli, S. typhimurium and E. faecalis. In neutralized MRS broth of C48, the number of pathogens slightly increased (increase ≤ 50%) after 5 hours of culture, but after 24 hours of culture, most pathogens were inhibited except E. coli and S. typhimurium. Effects of lactobacillus concentrations on the secretion of IL-10 by PBMCs with and without antibiotics To determine the viability of lactobacilli in RPMI 1640 was influenced by antibiotics, we grew C44 and C48 in different concentrations in RPMI 1640 with or without antibiotics. The results revealed that C44 and C48 grew at high concentration (10 7 CFU/mL) but did not grow at lower concentrations (10 6 CFU/mL and 10 5 CFU/mL) in RPMI 1640 in the absence of antibiotics. However, in the presence of antibiotics, lactobacilli at all concentrations died within 4 hours ( Table 3). The results illustrated that the effects of viable lactobacilli were abolished in the presence of antibiotics in culture medium. To determine if living lactobacilli influence cytokine production in human PBMCs, we stimulated cells with different doses of living lactobacilli (at an MOI of 1 and 10) in RPMI 1640 with or without antibiotics for 24 hours. As shown in Fig. 3, all lactobacilli at 10 7 CFU/mL and 10 6 CFU/mL induced low level of IL-10 with or without antibiotics. All the results indicated that viable C44 and C48 at 10 6 CFU/mL had no influence on cytokine production of PBMCs. 
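As a minimal sketch of the comparisons described under Statistical analysis above (one-way ANOVA and Student's t test with P < 0.05 as the significance threshold), using made-up cytokine values rather than the study's data:

from scipy import stats

il12_blank = [12.0, 15.0, 11.0]        # pg/mL, hypothetical triplicates
il12_c44 = [85.0, 92.0, 78.0]
il12_c48 = [60.0, 55.0, 71.0]

f_stat, p_anova = stats.f_oneway(il12_blank, il12_c44, il12_c48)
t_stat, p_ttest = stats.ttest_ind(il12_c44, il12_blank)

print(f"ANOVA: F = {f_stat:.2f}, P = {p_anova:.4f}")
print(f"C44 vs blank t test: t = {t_stat:.2f}, P = {p_ttest:.4f}")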
Profiles of cytokine secretion in PBMCs stimulated with viable and heat-killed lactobacilli and their DNA As shown in Fig. 4, viable and heat-killed C44 and C48 induced the secretion of IL-12p70 and IFN-γ, which are critical for promoting cell immunity. Specifically, viable C44 significantly induced IL-12p70 (P < 0.05). Heat-killed C44 and viable C48 had remarkably potent activities on IFN-γ production (P < 0.05), while viable C44 and heat-killed C48 had even more potent effects (P < 0.01). On the contrary, genomic DNA of both lactobacilli did not induce the secretion of IFN-γ and IL-12p70. TNF-α, IL-6 and IL-8, considered as proinflammatory cytokines, were induced by the lactobacilli, although they were lower than those induced by LPS in PBMCs. In general, DNA of both lactobacilli induced significantly lower TNF-α, IL-6 and IL-8 secretion than LPS did (P < 0.05). Viable C44 and C48 induced lower levels of IL-6 and IL-8 (P < 0.05) compared with LPS. IL-10 and TGF-β are important anti-inflammatory cytokines that suppress cell immunity. Viable C44 and its DNA, as well as viable C48, induced lower IL-10 levels than LPS did (P < 0.05). But no obvious effects were observed on TGF-β production by either lactobacillus (data not shown). As IL-10 and IL-12 appeared to be the most discriminative cytokines, the IL-10/IL-12 ratio was used to distinguish between strains exhibiting a "pro-" vs "anti-inflammatory" profile [18], as shown in Fig. 4G. Heat-killed lactobacilli and their DNA evoked a high IL-10/IL-12p70 ratio, and the notable inducer was heat-killed C44 (P < 0.05). In general, C44 induced low levels of proinflammatory and anti-inflammatory cytokines and a high ratio of IL-10/IL-12. This indicated that C44 has anti-inflammatory potential. Profiles of cytokine secretion in anti-CD3 antibody activated PBMCs stimulated with viable and heat-killed lactobacilli and their genomic DNA In an attempt to investigate whether lactobacilli regulated T cell polarization, we activated T cells in PBMCs by anti-CD3 antibody and then stimulated these cells with viable, heat-killed and genomic DNA of C44 and C48. IFN-γ and IL-12p70 are the key cytokines that promote naive T cells to differentiate into Th1 cells [19,20]. Our results showed that IFN-γ secretion was promoted by viable and heat-killed C44 (P < 0.05); similarly, IL-12p70 was triggered by viable C44 (P < 0.01). But genomic DNA of C44 and C48 failed to induce IFN-γ and IL-12p70 production. Production of Th2 cytokines (IL-4, IL-5, IL-6, IL-10 and IL-13) was suppressed by C44 and C48 [21]. In particular, viable and heat-killed C48 significantly suppressed the secretion of IL-5, IL-6 and IL-10 induced by anti-CD3 antibody (P < 0.05). But genomic DNA of C48 and C44 failed to do so. As a suppressive factor, TGF-β secretion remained invariable when anti-CD3 antibody activated PBMCs were treated with C44 and C48. Only viable C48 downregulated the production of TGF-β (Fig. 5A-H). In brief, lactobacillus C44 can skew T cell polarization toward Th1 by enhancing the production of Th1 cytokines (IFN-γ and IL-12p70) as well as inhibiting Th2 cytokine (IL-5, IL-6 and IL-10) secretion. The Th1/Th2 balance is commonly monitored by the IFN-γ/IL-4 ratio because IFN-γ and IL-4 are classical antagonistic signature cytokines for Th1 and Th2 activity. In this study, among the stimuli, only viable and heat-killed C44 induced a high ratio of IFN-γ/IL-4 (P < 0.05) in anti-CD3 activated PBMCs.
As controls, C11 (an opportunistic pathogen [22]) induced a low ratio, while C30 (Bifidobacteria) induced an obviously high ratio of IFN-γ/IL-4 (P < 0.01) (Fig. 5I). Differences in their effects on cytokine production between viable, heat-killed and genomic DNA of lactobacilli Viable and heat-killed lactobacilli were similar in their effects on cytokine secretion, while apparent differences were observed between the bacteria and their genomic DNA. C44 DNA induced low levels of IL-12p70 (P < 0.01) and IFN-γ (P < 0.05), both from PBMCs (Fig. 4) and anti-CD3 antibody activated PBMCs, compared with viable C44 (Fig. 5). On the other hand, DNA induced low levels of proinflammatory cytokines, including IL-6, IL-8 and TNF-α, in PBMCs (Fig. 4). DNA of C44 triggered lower levels of IL-6 than viable C44 (P < 0.05) and heat-killed C44 (P < 0.05), and simultaneously induced lower levels of IL-8 than heat-killed C44 (P < 0.01). Similar to C44, DNA of C48 evoked lower levels of IL-6 (P < 0.05) and IL-8 (P < 0.01) than heat-killed C48, and similar results were obtained for TNF-α production upon stimulation by viable C48 (P < 0.05) and heat-killed C48 (P < 0.01). These results indicated that DNA of lactobacilli, especially of C44, induced weaker cellular immunity and inflammation than the bacteria themselves. DISCUSSION To elucidate the effects of candidate probiotic lactobacilli on immuno-regulation, especially on T cell polarization, we isolated C44 and C48, identified as L. acidophilus and L. paracasei, from healthy human feces. They have probiotic properties including acid and bile tolerance, which allow them to survive and enter the intestine, and inhibition of some pathogens. Thus, C44 and C48 are able to localize in the digestive tract and restore intestinal homeostasis, thereby improving mucosal barrier functions. Given the widely known strain-specific ability of lactobacilli to modulate immune responses, it is necessary to investigate the characteristics of candidate strains for their potential therapeutic applications. In Pochard's study, lactobacillus-exposed MDCs secreted bioactive IL-12, a critical factor in switching naive or memory T cells to a Th1 response [23]. In agreement with this result, the data presented here suggested that C44 and C48 induced low levels of proinflammatory cytokines (TNF-α, IL-6 and IL-8), while inducing high levels of IFN-γ and IL-12p70 but not IL-10; the former are effective factors that promote cellular immunity and enhance the clearance of pathogens. Meanwhile, C44 is more suitable than C48 in the context of allergic diseases, based on its attenuated Th2 (IL-5, IL-6 and IL-10) potential. Th1 responses, as reflected by IFN-γ and IL-12p70 production, were strongly induced by C44 in anti-CD3 antibody activated PBMCs [12]. These results suggested that C44 and C48, especially C44, may be of potential application in allergic diseases.
It is reported that lactobacilli activate innate immune cells such as APCs via pattern recognition receptors (PRRs) and induce the secretion of cytokines that influence the polarization of activated T cells [9,10] instead of interacting with lymphocytes directly. Commensal microflora are not normally found in extra-intestinal sites such as mesenteric lymphoid nodules, spleen, liver or blood in mice. However, commensal bacteria are likely to be continuously traversing the mucosal epithelium at a very low rate and are processed by the host immune cells (DCs) associated with the gut. If the intact mucosal barrier is disrupted by inflammation or injury, indigenous bacteria can easily pass through the ulcerated areas of the mucosa, and perhaps even through loosened tight junctions [24] ; as a result, immune competent cells have the chances to contact with commensal bacteria. In the food allergy mouse model, L. casei administration skewed the pattern of cytokine production by splenocytes toward Th1 dominance, and suppressed IgE and IgG1 secretion by splenocytes [3] . We presumed that as commensal bacteria, lactobacilli had regulatory effects usually under abnormal conditions when the immune system is activated such as allergy. Thus, different with non-activated PBMCs, anti-CD3 antibody activated PBMCs were utilized in vitro in our study, and monocytes were stimulated by lactobacilli. In other words, anti-CD3 activated PBMCs are ready models for screening the regulation on T cell polarization by lactobacilli. The components of probiotics that are responsible for modulation of cytokine induction are largely not known but might be involved in modification of microbe associated molecular patterns (MAMPs) such as lipoteichoic acids (LTA) and (lipo) proteins localized on the bacterial cell surface and interacting with toll like receptors (TLRs), especially TLR2 combining with TLR6 [25][26][27] . Additionally, muramyl dipeptide (MDP), the degradation products of G+ bacteria cell wall, may interact with other host pattern recognition receptors named nucleotide-binding oligomerization domain 2 (NOD2) in plasma of APCs [28] . These products by probiotic cells are the likely targets for strain-dependent interactions with host cells and have been the focuses of several recent reviews [29][30][31] . In the present study, the effects of viable and heat-killed lactobacilli on cytokine production were not the same. As reported, viable lactobacilli may contact with monocytes in PB-MCs via components on complete cell wall including LTA and peptidoglycan (PGN). Although most heatkilled lactobacilli had integral cell wall, degradation was inevitable; thus, MDP may be produced. These components binding different PRRs resulted in different signal transductions. This may explain the analogical effect of viable and heat-killed lactobacilli. Unlike viable and heat-killed lactobacilli, genomic CpG DNA of lactobacilli led to mild inflammation via inducing lower levels of proinflammatory cytokines (IL-6, IL-8 and TNF-α) and cellular immunity (IL-12 and IFN-γ). The results suggest that DNA of lactobacilli is not suitable for the prevention and treatment of allergic disease compared with viable and heat-killed lactobacilli. This behavior may be caused by different receptors. DNA of lactobacilli induced cytokines in a TLR9-dependent manner that may lead to different signal transductions induced by complete and degradative components of the cell wall [32] . 
In conclusion, the lactobacilli isolated here have the probiotic characteristics not only of microbiological properties but also immuno-regulation by enhancing cellular immunity and reducing Th2 differentiation in vitro. Anti-CD3 antibody activated PBMCs is an effective model that allows pre-selection of probiotics to modulate the host immune system in vitro while reducing considerably the use of animals for screening purposes. Besides, candidate lactobacilli need to be assessed in animal tests and clinical trials for the prevention and treatment of allergic diseases.
5,696.6
2012-12-24T00:00:00.000
[ "Biology", "Medicine" ]
Optical Tweezers Objective An optical tweezers apparatus uses a tightly focused laser to generate a trapping force that can capture and move small particles under a microscope. Because it can precisely and non-destructively manipulate objects such as individual cells and their internal components, the optical tweezers is extremely useful in biological physics research. In this experiment you will use optical tweezers to trap small silica spheres immersed in water. You will learn how to measure and analyze the frequency spectrum of their Brownian motion and their response to hydrodynamic drag in order to characterize the physical parameters of the optical trap with high precision. The apparatus can then be used to measure a microscopic biological force, such as the force that propels a swimming bacterium or the force generated by a transport motor operating inside a plant cell. Introduction The key idea of optical trapping is that a laser beam brought to a sharp focus generates a restoring force that can pull particles into that focus. Arthur Ashkin demonstrated the principle in 1970 and reported on a working apparatus in 1986. The term optical trapping often refers to laser-based methods for holding neutral atoms in high vacuum, while the term optical tweezers (or laser tweezers) typically refers to the application studied in this experiment: a microscope is used to bring a laser beam to a sharp focus inside an aqueous sample so that microscopic, non-absorbing particles such as small beads or individual cells can become trapped at the beam focus. Optical tweezers have had a dramatic impact on the field of biological physics, as they allow experimenters to measure non-destructively and with high precision the tiny forces generated by individual cells and biomolecules. This includes propulsive forces generated by swimming bacteria, elastic forces generated by deformation of biomolecules, and the forces generated by processive enzyme motors operating within a cell. Experimenting with an apparatus capable of capturing, transporting, and manipulating individual cells and organelles provides an intriguing introduction to the world of biological physics. A photon of wavelength λ and frequency f = c/λ carries an energy E = hf and a momentum of magnitude p = h/λ in the direction of propagation (where h is Planck's constant and c is the speed of light). Note that our laser power (up to 30 mW), focused down to a few square microns, implies laser intensities over 10^6 W/cm^2 at the beam focus. Particles that absorb more than a tiny fraction of the incident beam will absorb a large amount of energy relative to their volume rather quickly. In fact, light-absorbing particles can be quite rapidly vaporized (opticuted) by the trapping laser. (Incidentally, your retina contains many such particles; see Laser Safety below.) While the scatterer and surrounding fluid always absorb some energy, our infrared laser wavelength (λ = 975 nm) is specifically chosen because it is where absorption in water and most biological samples is lowest. The absorption rate is also near a minimum for the silica spheres you will study. You should keep an eye out for evidence of heating in your samples, but because of the relatively low absorption rate and because the particles have good thermal conductance with the surrounding water, effects of heating should be modest.
The theory and practice of laser tweezers are highly developed and numerous excellent reviews, tutorials, simulations, and other resources on the subject are easy to find online. Physics of the trapped particle The design, operation, and calibration of our laser tweezers draws on principles of optics, mechanics and statistical physics.We begin with an overview of the physics relevant to generating the trapping force and for calibrating the restoring and viscous damping forces associated with its operation. The laser force arises almost entirely from the elastic scattering of laser photons whereby the particle alters the direction of the photon momentum without absorbing any of its energy.It is typically decomposed into two components: (1) a gradient force that everywhere points toward higher laser intensities and (2) a weaker scattering force in the direction of the photon flow.For the sharply focused laser field of an optical tweezers, the gradient force points toward the focus and provides the Hooke's law restoring force responsible for trapping the particle.The scattering force is in the direction of the laser beam and simply shifts the trap equilibrium position slightly downstream of the laser focus. The origin of both forces is similar: the particle elastically scatters a photon and alters its momentum.Momentum conservation implies that the scattered photon imparts an equal and opposite momentum change to the particle.The net force on the particle is a vector equal and opposite the net rate of change of momentum of all the scattered laser photons. For particles with diameters d large com- pared to λ, the ray optics of reflection and refraction at the surface of the sphere provide a good model for the laser forces.The ray drawings in Figure 1 illustrate how laser beam refraction generates a trapping force.The laser beam is directed in the positive z-direction and brought to a focus by a microscope objective.Note that, owing to wave diffraction, the focal region has nonzero width in the xy direction.Near the beam focus, a spherical dielectric particle alters the direction of a ray by refracting it as shown in 1A.Momentum conservation implies that the particle experiences a force, indicated by F in the figure, that is directed toward the beam focus.If the particle is located below the focus, it refracts the con-verging rays (such as rays 1 and 2) as shown in 1B.The corresponding reaction forces F 1 and F 2 acting on the particle give a vector sum F that is again directed toward the laser focus.The net result of all the refractive scattering at any location in the vicinity of the focus results in the gradient force that pulls the particle into the beam focus.Reflection at the boundaries between the sphere and the medium results in the scattering force in the direction of the laser photons. For smaller particles of diameter d λ, Rayleigh scattering describes the interaction: The particle acts as a point dipole, scattering the incident beam in a spatially dependent fashion that depends on the particle's location in the laser field.The result is a net force F given by where p = αE gives the particle's induced dipole moment.The first term is in the direction of the gradient of the field intensity, i.e., the trapping force directed toward the laser focus.The second term gives the weaker scattering force-in the direction of the field's Poynting vector E × B. 
The center of the trap will be taken as r = 0.For any small displacement (any direction) away from the trap center, the particle is subject to a Hooke's law restoring force, i.e., proportional to and opposite the displacement.Detailed calculations show that the force constant is sensitive to the shape and intensity of the laser field, the size and shape of the trapped particle, and the optical properties of the particle and surrounding fluid.Consequently, the Hooke's law force is difficult to predict.Furthermore, our apparatus operates in an intermediate regime of particle sizes where neither the ray optics nor Rayleigh models are truly appropriate.The diameter d of the silica spheres (SiO 2 ) range in size from 0.5-5 µm.Thus with the laser wavelength of λ = 975 nm, we have d ∼ λ.Fortunately, we do not need to calculate or predict the Hooke's law force constants based on these scattering models.Instead, you will learn how to determine them in situ-from measurements made with the particle in the trap. Consider the motion and forces in terms of their components.The laser beam in our apparatus is directed vertically upward, which will be taken as the +z direction so that the x and y coordinates then describe the horizontal plane.Because the laser beam and focusing optics are cylindrically symmetric around the z-axis, the trap has the same properties in the x-direction as in the y-direction.We need only consider the equations for the x motion of the particle, and a similar set of equations will describe the motion in the y-direction.However, the trapping force that acts along the z direction is different than for x and y, as the laser intensity in the focal region is clearly not a spherically symmetric pattern.The width of the beam focus in its radial (xy) dimension is very narrow.It is limited by wave diffraction to roughly one wavelength (λ ∼ 1µm), whereas this is not the case in z.Hence the restoring force in z is not necessarily as strong as in xy. If the focal "cone" has too shallow an angle (technically, a large f -number or small numerical aperture), particles may be trapped in the xy direction but not trapped along z.The laser beam will tend to pull small particles in toward the central optical axis and then push them up and out of the trap.By employing a large numerical aperture, our apparatus provides excellent trapping in all three directions. 
We will investigate the motions of the particle in the xy directions only. Consequently, in the discussion that follows, when forces, impulses, velocities or other vector quantities are written without vector notation (e.g., F instead of the vector F) and without explicit directional subscripts (e.g., F_z = −k_z z), they represent the x-component of the corresponding vector quantity. For example, the trapping force in the x-direction is simply F_x = −kx. What other forces act on the particle? The laser in our apparatus is directed vertically upward, along the same axis as the gravitational and buoyant forces. Both silica spheres and bacteria are more dense than water and thus experience a net downward force from these sources. A constant force in the z-direction shifts the equilibrium position along the z-axis but leaves the force constant unmodified. For example, the gravitational force on a mass m hanging from a mechanical spring of force constant k shifts the equilibrium by an amount −mg/k, but the net force F = −kz still holds with z now the displacement from the new equilibrium point. Thus, the F_x = −kx, F_y = −ky, F_z = −k_z z "trapping force" can and will be taken as relative to the final equilibrium position and includes not only the true trapping force centered at the laser focus, but also the laser scattering force and the forces due to gravity and buoyancy. Keep in mind that these other forces are relatively weak compared to the true trapping force and so the shift in the equilibrium position from the laser focus is rather small. The fluid environment supplies two additional and significant forces to the particle. The particles that we study with our laser tweezers are suspended in water, where molecules are in constant thermal motion, i.e., they are moving with a range of speeds in random directions. For still water with no bulk flow, the x-component of velocity (or the component along any axis) is equally likely to be positive as negative and will have an expectation value of zero: ⟨v_x⟩ = 0. Its mean squared value is nonzero, however, as the average kinetic energy of the molecules is determined by the temperature T. More precisely, the equipartition theorem states that the mean squared value of any component of the velocity, e.g. ⟨v_x^2⟩, is related to the temperature T by (1/2) m ⟨v_x^2⟩ = (1/2) k_B T, (2) where temperature is measured in Kelvin and k_B = 1.38 × 10^−23 J/K is Boltzmann's constant. The value of v_x for any given particle is a random variable whose probability distribution is known as the Maxwell-Boltzmann distribution: f(v_x) dv_x = (m/(2π k_B T))^{1/2} exp(−m v_x^2 / (2 k_B T)) dv_x, (3) which gives the probability that the velocity component v_x for a given particle lies in the range between v_x and v_x + dv_x. The Maxwell-Boltzmann distribution is a Gaussian distribution whose variance σ_v^2 = ⟨v_x^2⟩ = k_B T/m makes it satisfy the equipartition theorem. Likewise the other velocity components v_y and v_z obey the same distribution (Eq. 3) with the same variance σ_v^2. Exercise 1 (a) Find the root-mean-square (rms) velocity in three dimensions for water molecules near room temperature (23 °C). (b) Find the rms x-component of velocity, sqrt(⟨v_x^2⟩) = σ_v, and the number density of water molecules (per unit volume). Use them to estimate the rate at which molecules cross through (in either direction) a 1 µm diameter disk oriented with its normal along the x-direction.
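A rough numerical sketch of the estimates Exercise 1 asks for is given below; the constants and the order-of-magnitude flux estimate (number density times σ_v times disk area) are illustrative, not the manual's worked solution.

import math

k_B = 1.381e-23                # J/K
T = 296.0                      # ~23 C in kelvin
m_water = 18.0e-3 / 6.022e23   # mass of one water molecule, kg

# (a) rms speed in three dimensions: sqrt(3 k_B T / m)
v_rms_3d = math.sqrt(3.0 * k_B * T / m_water)

# (b) rms of one velocity component, and number density of liquid water
sigma_v = math.sqrt(k_B * T / m_water)
n = 1000.0 / m_water           # molecules per m^3, from density ~1000 kg/m^3

# Order-of-magnitude crossing rate through a 1 micron diameter disk (both directions)
area = math.pi * (0.5e-6) ** 2
rate = n * sigma_v * area

print(f"v_rms (3D) ~ {v_rms_3d:.0f} m/s, sigma_v ~ {sigma_v:.0f} m/s")
print(f"crossing rate ~ {rate:.1e} per second")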
Therefore, even if there is no bulk movement of the water, a small particle immersed in water is continuously subject to collisions from moving water molecules. The collisional force F_i(t) exerted on the particle during the i-th collision delivers an impulse J_i = ∫ F_i(t) dt to the particle over the duration of the collision. By the impulse-momentum theorem, this impulse changes the particle momentum by the same amount, ∆p_i = J_i; impulse is momentum change, and the two can be used somewhat interchangeably. For a one-micron particle in water at room temperature, such collisions occur at a rate of roughly 10¹⁹ per second. Over some short time interval ∆t, the total impulse ∆p delivered to the particle is the sum of the individual impulses, ∆p = Σ_i J_i, and the average collisional force exerted on the particle over this interval is then F_c(t) = ∆p/∆t.

Theory cannot predict the individual impulses. A head-on collision with a high-velocity water molecule delivers a large impulse, while a glancing collision with a low-velocity molecule delivers a smaller one. Depending on the direction of the collision, J_i can point in any direction. Even when summed over an interval ∆t, ∆p will include a random component.

When no other forces act on the particle, the impulses push the particle slowly through the fluid along a random, irregular trajectory. This random motion is known as Brownian motion and is readily observed under a microscope when any small (micron-sized or smaller) particle is suspended in a fluid. When the particle is trapped in an optical tweezers, the impulses continually push the particle in random directions. Because of the random component of the force, the particle motion is said to be stochastic (governed by probability distributions), and only probabilities or average behavior can be predicted.

Exercise 2 You can estimate the average speed of Brownian motion from the fact that the speed of the microscopic particle at temperature T must also satisfy the equipartition theorem (2). For a silica sphere of diameter 1 µm and a density of 2.65 g/cm³, what is its rms velocity at room temperature? Is your result still valid if the particle is in an optical trap?

Note that if a particle moves through the fluid at a velocity v, collisions are not equally likely in all directions. More collisions will occur on the side of the particle heading into the fluid than on the trailing side, and the total impulse ∆p acquired by the particle will have a non-zero mean. The direction of this impulse must tend to oppose the motion of the particle through the fluid. Macroscopically, we describe this effect by saying that the particle experiences a viscous drag force F_drag that is proportional to (and in the opposite direction from) its velocity,

    F_drag = −γv,    (4)

where γ is the drag coefficient. As they have the same microscopic origin, there must be a connection between the magnitude of the small impulses ∆p and the strength of the macroscopic drag force. We can find this connection by noting that while the microscopic collisions deliver momentum to the particle and drive its Brownian motion, the overall drag force tends to slow the particle down. On average these two effects must balance each other exactly, so that the particle neither slows to a halt nor accelerates indefinitely. Rather, the particle maintains an average kinetic energy in accord with the equipartition theorem (Eq. 2). In the following we investigate this force balance in order to relate the magnitude of the microscopic impulses to the drag coefficient γ.
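The impulse-summation picture described above is easy to explore numerically. The sketch below is illustrative only: the uniform single-collision impulse distribution and the collision counts are arbitrary choices (far smaller than the real rate of ~10¹⁹ per second). It sums many random impulses over an interval and shows that the total ∆p averages to zero for a particle at rest while its variance grows in proportion to the number of collisions, i.e., to ∆t; the Gaussian character of this sum is taken up more carefully in the next section.

import numpy as np

# Toy illustration of the impulse picture: sum many random single-collision
# impulses J_i over an interval and examine the statistics of the total Delta-p.
rng = np.random.default_rng(1)
n_trials = 4_000                          # number of independent intervals

for n_collisions in (1_000, 2_000):       # doubling the interval doubles the collision count
    J = rng.uniform(-1.0, 1.0, size=(n_trials, n_collisions))   # arbitrary units
    dp = J.sum(axis=1)                    # total impulse in each interval
    print(f"{n_collisions} collisions:  <dp> = {dp.mean():+6.2f} (consistent with zero),  "
          f"var(dp) = {dp.var():7.1f}  ~  n * var(J) = {n_collisions / 3:7.1f}")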
Therefore suppose that a micron-sized particle is moving through a fluid. For clarity we consider only one component (say x) of its motion, although exactly the same arguments apply to its motion in y and z. Let ∆p represent the x-component of the net vector impulse ∆p that is delivered to the particle during an interval ∆t. Likewise, v and p represent the x-components of the velocity and momentum, and J_i is the x-component of the impulse from a single collision. Because of the high collision rate, ∆t can be assumed short enough that the particle velocity over this interval is effectively constant, but still long enough to allow, say, a few thousand collisions or more, enough to apply the central limit theorem, which says that the sum ∆p = Σ_i J_i will be a random variable with a Gaussian probability distribution no matter what probability distribution governs J_i. Moreover, the Gaussian distribution will have a mean µ_p = ⟨∆p⟩ equal to the number of collisions times the mean of the contributing J_i, and it will have a variance σ_p² = ⟨(∆p − µ_p)²⟩ equal to the number of collisions times the variance of the J_i. Because the number of collisions is proportional to ∆t, both the mean µ_p and the variance σ_p² should be proportional to ∆t.

The total impulse ∆p is therefore a random variable that can be expressed as

    ∆p = µ_p + δp,    (5)

where µ_p is a constant (associated with the mean of the impulse distribution) and δp is a zero-mean Gaussian random variable, ⟨δp⟩ = 0, with a non-zero variance ⟨δp²⟩ (associated with the variance of the impulse distribution).

In a still fluid with no bulk flow, a particle at rest (v = 0) experiences collisions from all directions equally. A collision delivering an impulse J_i is exactly as likely as a collision delivering −J_i. Consequently, ∆p is equally likely to be positive as negative and its expectation value is zero: µ_p = 0. However, if the particle is moving through the fluid at velocity v, the impulses tend to oppose the motion (as discussed above) and we expect the average impulse µ_p will be proportional to v and opposite in sign. This tells us that the average collisional force µ_p/∆t is the x-component of the viscous drag force. Then from Eq.
4 we have

    µ_p = −γ v ∆t.    (6)

How does a Brownian particle slow down or speed up due to µ_p and δp? How does this produce an average kinetic energy in agreement with the equipartition theorem? The particle's kinetic energy changes because the final momentum p_f = p + ∆p = p + µ_p + δp differs from the initial momentum p = mv. The energy change is given by

    ∆E = (p_f² − p²)/(2m) = (µ_p² + δp² + 2pδp + 2µ_p p + 2µ_p δp)/(2m).    (7)

This energy change can be non-zero over any interval ∆t; the particle can gain or lose energy in the short term. However, if the particle is to remain in thermal equilibrium over the long term, the average energy change should be zero. Applying the equilibrium condition ⟨∆E⟩ = 0 will allow us to relate the variance of δp to factors associated with µ_p. Therefore we need to evaluate the expectation value of the right side of this expression, which is simply the sum of the expectation values of each term:

    ⟨∆E⟩ = (⟨µ_p²⟩ + ⟨δp²⟩ + 2⟨pδp⟩ + 2⟨µ_p p⟩ + 2⟨µ_p δp⟩)/(2m).    (8)

Note first of all that the third term on the right side is 2⟨pδp⟩ = 2m⟨vδp⟩. Because the particle velocity v and the random part of the collisional impulse δp are statistically independent, the expectation value of their product is the product of their expectation values: ⟨vδp⟩ = ⟨v⟩⟨δp⟩. This term is zero because δp is a zero-mean random variable. The same applies to the last term in the parentheses, which contains the product 2⟨µ_p δp⟩. Because δp is random and uncorrelated with µ_p, this term will also be zero.

The first term on the right side involves µ_p², where we have already noted that µ_p is proportional to the time interval ∆t. Therefore this is the only term in the expression that varies as ∆t², while every other term is proportional to ∆t. Since we can choose ∆t as small as we like, we can make this term arbitrarily small in comparison to the other terms. We can safely discard this term as an insignificant contribution to ⟨∆E⟩. Now we can use Eq. 6 relating the viscous drag behavior and µ_p. Making the substitution µ_p = −γv∆t, we have the expectation value of the fourth term in Eq. 8: 2⟨µ_p p⟩ = −2γ∆t⟨vp⟩ = −2mγ∆t⟨v²⟩. This term and the remaining ⟨δp²⟩ term then give

    ⟨∆E⟩ = (⟨δp²⟩ − 2mγ∆t⟨v²⟩)/(2m) = ⟨δp²⟩/(2m) − γ∆t k_B T/m,    (9)

where in the last line we used the equipartition theorem applied to the particle's mean square velocity: m⟨v²⟩ = k_B T. Setting ⟨∆E⟩ = 0 and solving for ⟨δp²⟩ then gives

    ⟨δp²⟩ = 2γ k_B T ∆t.    (10)

Note this agrees with the prior assertion that ⟨δp²⟩ should be proportional to ∆t. Moreover, it gives the proportionality constant, 2γk_B T, that is needed to keep the velocity distribution in agreement with the equipartition theorem. Although derived for a particle moving along the x-axis, this same expression will apply to each of the three dimensions, and so we find the desired connection between the viscous drag coefficient, the mean squared random impulse (along any axis), and the temperature (i.e., the thermal equilibrium condition).

Equation 10 leads directly to a version of the fluctuation-dissipation theorem, which says that the variance of the fluctuating force must be proportional to the dissipative drag coefficient γ and to k_B T. To see this, write the total collisional force as

    F_c(t) = µ_p/∆t + δp/∆t = −γv + F(t),    (11)

where F(t) = δp/∆t and, since δp is a zero-mean Gaussian random variable with a variance given by Eq. 10, F(t) will be a zero-mean Gaussian random variable with a variance

    ⟨F²⟩ = ⟨δp²⟩/∆t² = 2γ k_B T/∆t.    (12)

F(t) is called the Brownian force. In equilibrium, and on average, the energy lost by the particle to the fluid via the drag force F_drag(t) is balanced by the energy gained by the particle from the fluid via the fluctuating Brownian force F(t).
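A quick numerical check of Eq. 10 is sketched below: if the momentum is updated with the mean impulse µ_p = −γv∆t plus a Gaussian δp of variance 2γk_B T∆t, the resulting mean squared velocity settles near k_B T/m, as the equipartition theorem requires. The sphere radius, density, viscosity, and time step are assumed values chosen only for illustration.

import numpy as np

# Check of Eq. 10: update the momentum with dp = -gamma*v*dt + delta_p, where
# delta_p is Gaussian with variance 2*gamma*k_B*T*dt, and confirm that <v^2>
# settles near k_B*T/m. Particle and fluid parameters are assumed values.
k_B, T = 1.380649e-23, 295.0
a, rho, eta = 0.5e-6, 2650.0, 1.0e-3      # radius (m), silica density (kg/m^3), water viscosity (Pa s)
m = rho * (4.0 / 3.0) * np.pi * a**3      # particle mass
gamma = 6.0 * np.pi * eta * a             # Stokes drag coefficient (Eq. 15 below)

dt = 2e-9                                 # time step, much shorter than m/gamma
steps = 500_000
rng = np.random.default_rng(2)

v = 0.0
v2_sum = 0.0
for _ in range(steps):
    delta_p = rng.normal(0.0, np.sqrt(2.0 * gamma * k_B * T * dt))   # Eq. 10
    v += (-gamma * v * dt + delta_p) / m
    v2_sum += v * v

print(f"m <v^2> = {m * v2_sum / steps:.3e} J  (simulation)")
print(f"k_B T   = {k_B * T:.3e} J  (equipartition)")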
Note that values of δp over any non-overlapping time intervals arise from different sets of collisions and thus will be statistically independent. For example, even for adjacent time intervals, each of the two δp values is equally likely to be positive or negative, regardless of the other. This independence implies that F(t) is uncorrelated in time, with ⟨F(t)F(t′)⟩ = 0 for t ≠ t′ (or, at least, for |t − t′| > ∆t). Thus, F(t) is a very odd force that fluctuates virtually instantaneously on all but the shortest time scales.

The local environment may produce other forces on a small particle. The silica particles in our experiment can adhere to a glass coverslip. A vesicle in a plant cell may be pulled through the cell by a molecular motor, while a swimming bacterium generates its own propulsion force by spinning its flagella. These additional forces compete with the trapping and fluid forces. If these forces are known, measurements of the displacements they cause can be used to determine the strength of the trap. If the trap strength is known, measured displacements can be used to determine these additional forces. Subsequent sections describe how to use the physics of Brownian motion and viscous drag to determine the strength of the trapping and drag forces.

We will need to know the position x of the particle with respect to the trap. In principle we could calculate x by analyzing microscope images collected with a camera. In practice this does not work well because the displacements are very small and fluctuate rapidly. We can obtain higher precision and faster time resolution if we detect the particle's displacement indirectly, by measuring the laser light that the particle deflects from the beam focus. Light scattered by the particle travels downstream (along the laser beam axis) and, in our apparatus, is measured on a quadrant photodiode detector (QPD). The QPD is discussed in the experimental section. Here we merely note that as the particle moves within the trap in either the +x or −x direction, it deflects some of the laser light in the same direction, and the QPD reports this deflection by generating a positive or negative voltage V.

For small displacements x of the particle from the beam focus, the QPD voltage is linear in the displacement (V ∝ x). Consequently, we can write

    V = βx.    (13)

We will refer to β (units of volts/meter) as the detector constant. Because the voltage generated by the QPD depends on the total amount of scattered light, β depends on the laser power as well as the shape and size of the particle and other optical properties of the particle and liquid.

Analysis of Trapped Motion

How can we measure the strength of the trap? Suppose that a particle, suspended in water, is held in the optical trap. If we move the microscope stage (which holds the sample slide) in the x direction at a velocity ẋ_drive, the water (sealed in the slide) will move at that same velocity. The water moves with the slide and does not slosh around because it is confined in a thin channel and experiences strong viscous forces with the channel walls. On the other hand, the trap (whose position is determined by the beam optics) will remain fixed, so the fluid and the trapped particle will then be in relative motion. The drag force is opposite the relative velocity and is thus given by −γ(ẋ − ẋ_drive). Like the Brownian force, the viscous force is well characterized, and together they will serve as calibration forces for the trap, as described next.
Together with the viscous force above, the trapping force −kx, and the Brownian force F(t), Newton's 2nd law then takes the form

    m ẍ = −γ(ẋ − ẋ_drive) − kx + F(t),    (14)

where m is the particle mass and x is its displacement with respect to the equilibrium position of the trap. Macroscopically, the drag coefficient γ is related to the viscosity of the fluid and the size and shape of the moving particle. For a sphere of radius a, γ is given by the Stokes drag formula

    γ = 6πηa,    (15)

where η is the dynamic viscosity of the fluid. While this equation is accurate for a spherical particle in an idealized fluid flow environment, the damping force is influenced by proximity to surfaces (the microscope slide) and is sensitive to temperature and fluid composition through the viscosity η. Thus it is appropriate to determine γ experimentally and compare it with the Stokes prediction. A complete calibration includes a determination of the trap stiffness k, the detector constant β, and the drag coefficient γ.

We use the calibration method designed by Tolic-Norrelykke et al. The basic idea is to drive the stage back and forth sinusoidally with a known amplitude and frequency and measure (via the QPD detector voltage V) the particle's response to the three forces. Because the physics of heavily damped motion of a particle in a fluid is well understood, the frequency characteristics of V(t) will reveal the parameters k, β, and γ with good precision.

You are probably familiar with underdamped oscillators, for which the drag term −γẋ in Newton's law is small in comparison to the acceleration ("inertial") term mẍ. For such oscillators the acceleration is largely determined by the other (nonviscous) forces acting on the particle. However, the drag coefficient γ in a fluid generally scales as the radius a of the particle, whereas the mass m scales with the particle's volume, m ∝ a³. Consequently, for sufficiently small particles (a ∼ µm), the inertial term is far smaller than the drag term, |mẍ| ≪ |γẋ|. Under such conditions, the oscillator is strongly overdamped and (to an excellent approximation) we may drop the inertial term from Eq. 14. The particle velocity is then determined by the balance between the viscous force and the other forces acting on the particle. Physically this means that, if any force is applied to the particle, the particle "instantly" (see Exercise 3 below) accelerates to its terminal velocity in the direction of the applied force. When we drop the mẍ term, the equation of motion becomes quite a bit easier to work with:

    γ ẋ = −kx + γ ẋ_drive + F(t).    (16)

Exercise 3 Suppose that the drag force −γẋ is the only force acting on the particle, so that the equation of motion becomes mẍ = −γẋ. Solve this equation for ẋ(t) for a particle with an initial velocity v₀. Show that the velocity decays exponentially to zero and give an expression for the time constant involved. (This would also be the time constant for reaching terminal velocity when there are additional forces acting on the particle.) What is the time constant for a 1 µm diameter silica sphere moving through water (η ≈ 10⁻³ N·s/m²)? Integrate your solution for ẋ(t) (assuming x₀ = 0) to determine x(t). If the sphere has an initial velocity v₀ = 1 cm/s, approximately how far does it travel before coming to rest? Give your answer in microns (µm).

Dropping the mẍ(t) term in Eq. 14 is equivalent to assuming that the time constant for reaching terminal velocity is negligible.
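The scale of the terms in Eq. 14 can be checked with a few lines of Python. In the sketch below the sphere size, density, and viscosity are assumed, illustrative values; the ratio printed in the last loop, 2πf m/γ, compares the inertial term to the drag term for motion at frequency f and shows why the overdamped approximation of Eq. 16 is so good for micron-sized particles.

import numpy as np

# Order-of-magnitude sketch of the overdamped limit for an assumed 1 um silica
# sphere in water. The values of eta, a, and rho are illustrative assumptions.
eta = 1.0e-3                 # water viscosity, Pa s
a = 0.5e-6                   # sphere radius, m
rho = 2650.0                 # silica density, kg/m^3

gamma = 6.0 * np.pi * eta * a            # Stokes drag coefficient (Eq. 15)
m = rho * (4.0 / 3.0) * np.pi * a**3     # particle mass

print(f"gamma     = {gamma:.2e} N s/m")
print(f"m         = {m:.2e} kg")
print(f"m / gamma = {m / gamma:.2e} s   (inertial relaxation time scale)")

# Compare the two terms of Eq. 14 for motion at a frequency f typical of the
# trap: |m xddot| / |gamma xdot| = 2 pi f m / gamma.
for f in (1e2, 1e3, 1e4):
    print(f"f = {f:8.0f} Hz : |m xddot| / |gamma xdot| ~ {2 * np.pi * f * m / gamma:.1e}")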
To solve the resulting Eq. 16, first collect the x and ẋ terms on the right side,

    ẋ_drive(t) + F(t)/γ = ẋ + (k/γ)x,

and multiply throughout by e^{kt/γ}:

    [ẋ_drive(t) + F(t)/γ] e^{kt/γ} = d/dt [x(t) e^{kt/γ}].    (17)

Recognizing the right-hand side as a derivative, we can integrate and find

    x(t) = x_T(t) + x_resp(t),    (18)

    x_T(t) = (1/γ) ∫_{−∞}^{t} F(t′) e^{−2πf_c(t−t′)} dt′,    (19)

    x_resp(t) = ∫_{−∞}^{t} ẋ_drive(t′) e^{−2πf_c(t−t′)} dt′,    (20)

where

    f_c = k/(2πγ)    (21)

and has units of frequency (oscillations per unit time).

Equations 18-21 show that the motion x(t) has two components due to two sources. x_T(t) is the response to the random Brownian force F(t), and x_resp(t) is the response to the motion of the surrounding fluid. They are integrals of the past values of the source terms with an exponentially decreasing weighting factor having a damping time, 1/(2πf_c), determined by the ratio of the damping constant to the spring constant. This time constant is typically in the millisecond range, and thus only recent past values contribute.

Applying a constant velocity flow (via a flow cell) so that ẋ_drive = v₀ creates a constant drag force γv₀ and causes a shift in the particle position x_resp = γv₀/k. This is one common way to get information about the trap parameters γ and k. Our apparatus uses an oscillatory flow ẋ_drive and looks for the predictable oscillatory response in x(t) to provide the same information.

Thus, the microscope stage (i.e., the fluid) will be driven back and forth sinusoidally with a known amplitude A and frequency f_d. The location of the stage x_drive (with respect to the trap) is then given by

    x_drive(t) = A sin(2πf_d t),    (22)

and the fluid has a velocity ẋ_drive

    ẋ_drive(t) = 2πf_d A cos(2πf_d t).    (23)

Exercise 4 Derive Eqs. 18-20 above. Evaluate the integral for x_resp(t) given a constant velocity flow ẋ_drive = v₀ and show that it produces the expected shift: x_resp(t) = γv₀/k. Also evaluate the integral given the drive velocity of Eq. 23 and show that x_resp(t) will be a sinusoidal oscillation at the same frequency with an amplitude given by

    A′ = A/√(1 + f_c²/f_d²).    (24)

Because F(t) is random, x_T(t) is random, non-periodic, and noisy. To characterize such signals, a statistical approach is typically used in which the frequency components of x(t) are analyzed. For that we need to return to Eq. 16 and investigate the Fourier transform of the motion.

Consider the Fourier transforms of a trajectory x(t) and of the Brownian force F(t):

    x̃(f) = ∫ x(t) e^{−2πift} dt,    (25)

    F̃(f) = ∫ F(t) e^{−2πift} dt.    (26)

The Fourier transform is evaluated for frequencies f covering both halves of the real axis, −∞ < f < ∞, so that the inverse Fourier transform properly returns the original function. For example, x(t) is recovered from the inverse Fourier transform of x̃(f):

    x(t) = ∫ x̃(f) e^{2πift} df.    (27)

Note that x̃ has units of m/Hz and F̃ has units of N/Hz. A relationship between x̃ and F̃ is readily obtained by taking the Fourier transform of the equation of motion, Eq. 16. That is, multiply both sides by exp(−2πift) and integrate over dt. The result is

    2πiγf x̃(f) + k x̃(f) = γπf_d A [δ(f − f_d) + δ(f + f_d)] + F̃(f).    (28)

To get Eq. 28, the Fourier transform of ẋ(t) has been replaced by 2πif times the Fourier transform of x(t), as can be demonstrated by evaluating ẋ(t) starting from Eq. 27. The explicit form of ẋ_drive as given by Eq. 23 has been used, and the Fourier transform of sin(2πf_d t), which is given by [δ(f − f_d) − δ(f + f_d)]/2i, has been applied. Solving for x̃ then gives

    x̃(f) = [F̃(f)/γ + πf_d A (δ(f − f_d) + δ(f + f_d))] / [2π(f_c + if)],    (29)

where we have replaced k by 2πγf_c (Eq. 21). Equation 29 is a perfectly good description of the particle response x̃; it just happens to be Fourier transformed. We will use it to extract information from measurements of x(t).
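Equation 16 is also easy to integrate numerically. The sketch below is a minimal illustration: the stiffness, drag coefficient, drive amplitude, and drive frequency are assumed values. It steps the overdamped equation of motion forward in time with a Brownian kick of variance 2γk_B T∆t per step and compares the simulated motion with the thermal excursion expected from equipartition and the driven amplitude of Eq. 24.

import numpy as np

# Euler-type integration of the overdamped equation of motion (Eq. 16):
#   gamma * xdot = -k*x + gamma*xdot_drive + F(t),
# with each step's Brownian kick drawn from a Gaussian of variance
# 2*gamma*k_B*T*dt (Eq. 10). All parameter values below are assumed.
k_B, T = 1.380649e-23, 295.0
gamma = 9.4e-9               # drag coefficient, N s/m
k = 5.9e-6                   # trap stiffness, N/m (gives f_c = k/(2 pi gamma) ~ 100 Hz)
A, f_d = 100e-9, 23.2        # stage oscillation amplitude (m) and drive frequency (Hz)

dt, steps = 2e-6, 500_000    # 1 s of simulated motion
rng = np.random.default_rng(3)
t = np.arange(steps) * dt
xdot_drive = 2 * np.pi * f_d * A * np.cos(2 * np.pi * f_d * t)   # Eq. 23
noise = rng.normal(0.0, np.sqrt(2 * gamma * k_B * T * dt), size=steps)

x = np.empty(steps)
x[0] = 0.0
for n in range(steps - 1):
    x[n + 1] = x[n] + (-k * x[n] * dt + gamma * xdot_drive[n] * dt + noise[n]) / gamma

f_c = k / (2 * np.pi * gamma)
print(f"f_c = {f_c:.0f} Hz")
print(f"rms thermal excursion sqrt(k_B T / k) = {np.sqrt(k_B * T / k) * 1e9:.1f} nm")
print(f"predicted drive amplitude A' (Eq. 24) = {A / np.sqrt(1 + f_c**2 / f_d**2) * 1e9:.1f} nm")
# The simulated rms combines the thermal and driven contributions in quadrature.
print(f"simulated rms x = {x.std() * 1e9:.1f} nm")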
Discrete Fourier transforms

Although we have treated time t as a continuous variable that spans the range −∞ → +∞, in actual experiments we collect a finite number of data values over a finite time interval τ. A typical data set is a discrete sampling of the QPD voltage V(t) = βx(t) over a time interval τ ≈ 1-2 s, with measurements acquired at a uniform digitizing rate R around 100,000 samples per second, i.e., with a time spacing between data points ∆t = 1/R. For this discussion, we can consider β as given, so that the data consist of values of x(t_m) at a set of uniformly spaced sampling times t_m.

Let's assume that measurements of x(t) are made during the time interval −τ/2 < t < τ/2. The integration in Eq. 25 needs to be truncated so that t falls within this interval only. Of course, we expect to recover the predicted results in the limit as τ → ∞.

To analyze finite, discrete data sets, we need to define the discrete Fourier transform (DFT). The DFT of x(t) is the version of the Fourier transform that is comparable to Eq. 25 but applies to a large (but finite) number L of discretely sampled x(t_m) values. If the measurement times t_m are spaced ∆t = τ/L apart in time and the integration is over the range −τ/2 ≤ t ≤ τ/2, then we can write t_m = m∆t with −L/2 ≤ m ≤ L/2. The finite integration corresponding to Eq. 25 is performed according to the rectangle rule and becomes

    x̃(f) = ∆t Σ_m x(t_m) e^{−2πif t_m}.    (30)

The DFT is expected to accurately reproduce the true Fourier transform, with some well understood limitations discussed shortly.

The DFT is evaluated at the fixed frequencies

    f_j = j/τ,    (31)

with −L/2 ≤ j ≤ L/2. That is, both x(t) and its DFT x̃(f_j) contain the same number of points, but each of the x̃(f_j) has both a real and an imaginary part. However, the two parts are not independent. If the x(t) are real (as is the case here), it is easy to demonstrate (from Eq. 25) that x̃(−f) = x̃*(f). That is, for opposite frequencies, f and −f, the real parts are equal and the imaginary parts are negatives of one another. Thus x(t_m) and x̃(f_j) both contain the same number of independent quantities. The two sets are just different ways of representing the same data.
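The following sketch carries out the discrete transform of Eqs. 30-31 with numpy's FFT routines on a synthetic record; the sampling rate, record length, and test signal are assumed values, not the defaults of the apparatus. It also illustrates the conjugate symmetry x̃(−f) = x̃*(f) discussed above.

import numpy as np

# Discrete Fourier transform of a finite, real record (Eqs. 30-31).
R = 100_000                    # assumed sampling rate, samples per second
tau = 1.0                      # assumed record length, s
L = int(R * tau)
dt = 1.0 / R

t = np.arange(L) * dt
x = 5e-9 * np.sin(2 * np.pi * 50.0 * t)     # a 5 nm, 50 Hz test trajectory

X = np.fft.fft(x) * dt                      # rectangle-rule integral, Eq. 30
f = np.fft.fftfreq(L, d=dt)                 # frequency grid f_j = j/tau, Eq. 31

j = np.argmin(np.abs(f - 50.0))             # index of the +50 Hz component
print("X(+50 Hz) =", X[j])
print("X(-50 Hz) =", X[-j], " (complex conjugate of the above)")
print("frequency spacing 1/tau =", f[1], "Hz;  Nyquist frequency =", R / 2, "Hz")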
The power spectrum

Another issue arises because the theory of Brownian motion does not specify F̃(f). At any frequency, F̃(f) is complex (since e^{−2πift} = cos 2πft − i sin 2πft). For any complex number z = x + iy = re^{iθ}, x and y are the real and imaginary parts of z, r is the modulus, and θ = arctan(y/x) is the argument or phase of z. The theory only predicts the intensity given by the modulus squared, |z|² = zz* = r², where z* = x − iy is the complex conjugate of z. It does not predict the real or imaginary parts of z individually, or the phase. Moreover, the theory predicts that the Fourier intensities ⟨F̃F̃*⟩ obtained from a finite Fourier transform will be proportional to the integration interval τ. The theory thus gives a result that is independent of τ only if the intensities are divided by τ. The traditional characterization of the strength of a real, fluctuating function of time, such as the Brownian force F(t), is its (two-sided) power spectrum or power spectral density (PSD), defined as

    P_F(f) = F̃(f)F̃*(f)/τ,    (32)

defined for both positive and negative frequencies. As with x(t), F(t) is real and therefore F̃(−f) = F̃*(f). This implies that P_F(−f) = P_F(f), and for this reason the power spectrum at f and −f is often added together to create the one-sided power spectrum. The power spectrum at f = 0 is left unmodified. It arises from any nonzero (DC) offset in the corresponding quantity. For a Brownian force, P_F(0) is expected to be zero, as there is no long-term average force in any direction. For f ≠ 0, the power spectrum of the Brownian force is actually expected to be a constant, independent of f. That P_F(f) is flat and extends out to high frequencies is a result of the collisional origin of the Brownian force, as described previously. Furthermore, in order that the average speed of the particle obeys the equipartition theorem (Eq. 2), the one-sided PSD must depend directly on both the temperature T and the viscous drag coefficient γ:

    P_F(f) = 4γ k_B T.    (33)

Equation 33 is another way of expressing the fluctuation-dissipation theorem of Eq. 10. Here, it gives the relationship between γ and the PSD for the fluctuating Brownian force.

For any frequency component f of a given trajectory x(t), x̃(f) is also a complex random variable with a mean of zero. The square of its Fourier transform, x̃x̃*, will have a non-zero mean and, as with F̃F̃*, is also proportional to the integration time τ. Thus the power spectral density for x is

    P(f) = x̃(f)x̃*(f)/τ,    (34)

and is also independent of τ. Again, because x̃(−f) = x̃*(f), P(−f) = P(f) and, as with the power spectrum P_F(f), we add the components at f and −f (and leave the component at f = 0 as is) to create the one-sided power spectrum defined for positive f only. This one-sided power spectrum, which we still call P(f), is then fit to the predictions for f > 0 given next. (We don't fit at f = 0, as this component arises from any DC component in x(t) and is typically an artifact of imperfect positioning of the QPD.)

To derive the predicted relationship between the one-sided power spectra for x(t) and F(t), consider the case where the stage oscillations are turned off; A = 0 and the delta functions in Eq. 29 are gone. With only the Brownian force contributing, multiply each side of Eq. 29 by its complex conjugate, divide by τ, and add negative and positive frequency components to get

    P_T(f) = k_B T / [π²γ(f_c² + f²)],    (35)

where Eq. 33 was used to eliminate P_F(f).
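A one-sided PSD consistent with Eqs. 30, 32, and 34 can be computed from a sampled record with a few lines of numpy, as sketched below. The sampling parameters and the test signal are assumed values; the self-test at the end confirms the normalization by recovering the mean squared amplitude A²/2 of a pure sine, as described above.

import numpy as np

def one_sided_psd(x, dt):
    """One-sided PSD of a real record x(t_m) sampled at spacing dt (Eqs. 30, 34)."""
    L = len(x)
    tau = L * dt
    X = np.fft.rfft(x) * dt            # rectangle-rule Fourier integral, Eq. 30
    P = np.abs(X) ** 2 / tau           # two-sided PSD, Eq. 34
    P[1:] *= 2.0                       # fold negative frequencies onto positive ones
    f = np.fft.rfftfreq(L, d=dt)
    return f, P

# Self-test with a pure sine of amplitude A at f0: integrating P(f) across the
# peak should return the mean squared amplitude A^2/2. All values are assumed.
dt, L, A, f0 = 1e-5, 100_000, 3e-9, 200.0
t = np.arange(L) * dt
f, P = one_sided_psd(A * np.sin(2 * np.pi * f0 * t), dt)
df = f[1] - f[0]
k0 = np.searchsorted(f, f0)
print(f"integral of P(f) around f0 = {P[k0 - 2:k0 + 3].sum() * df:.3e} m^2")
print(f"A^2 / 2                    = {A ** 2 / 2:.3e} m^2")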
(From here on, all power spectra are the one-sided variety.) Notice that P_F(f) has units of N²/Hz and P(f) has units of m²/Hz. It makes sense to consider these functions as a squared amplitude per unit frequency. For example, if we integrate P(f) over a sufficiently small interval ∆f centered around a frequency f₀, we obtain P(f₀)∆f. Using the one-sided PSD means this value would represent the mean squared amplitude A²/2 of the oscillatory component of x(t) at the frequency f₀.

If the stage oscillations are turned back on, how do they affect the power spectrum? We can refer to Eq. 29 and see how the two delta-function terms (resulting from the stage motion of amplitude A at the drive frequency f_d) contribute. The inverse Fourier transform of the delta-function term in Eq. 29 shows that it represents oscillations at the drive frequency f_d with an amplitude

    A′ = A / √(1 + f_c²/f_d²).    (36)

(This result was derived in Exercise 4 from the response integral of Eq. 20. Here we see it can be obtained using Fourier transforms as well.) The case f_c ≪ f_d corresponds to a weak trap or a high drive frequency and gives A′ = A; the amplitude of the particle oscillation equals the amplitude of the stage oscillation. For stronger traps or lower drive frequencies, Eq. 36 shows how the trap attenuates the oscillation of the particle relative to that of the stage; A′ is smaller than A by the factor √(1 + f_c²/f_d²).

Therefore the power spectrum of the particle in the trap is the sum of two terms,

    P(f) = P_T(f) + P_resp(f) = k_B T / [π²γ(f_c² + f²)] + (A′²/2) δ(f − f_d),    (37)

where P_T(f) is the first term, the power spectrum without stage oscillations (Eq. 35), and P_resp(f) is the second term, the δ-function term. These two terms have the noteworthy behaviors discussed next.

P_resp is such that its integral over any frequency interval that includes f_d gives the mean squared amplitude A′²/2 of the particle's sinusoidal response to the applied stage oscillations.

With the trap off, k = 0 and f_c = 0, and so P_T(f) = k_B T/(π²γf²), i.e., it falls off as 1/f². With the trap on (f_c ≠ 0), f_c plays the role of a "cutoff frequency." At high frequencies, f ≫ f_c, f_c can be neglected compared to f and once again P_T(f) = k_B T/(π²γf²), the same as for the trap off; high-frequency oscillations are unaffected by the trap. At low frequencies, f ≪ f_c, f can be neglected compared to f_c and P_T(f) = k_B T/(π²γf_c²). The power spectrum goes flat (becomes independent of f) and does not continue increasing as f decreases. Moreover, this low-frequency amplitude decreases as 1/f_c², i.e., the amplitude of the motion at low frequencies decreases as the trap strength increases. Finally, P_T(f) increases with temperature and decreases with γ; fluctuations in the position of the particle are larger at higher temperatures and are suppressed by the viscous drag.

Equation 37 is for the particle's position x, while we will actually measure the QPD voltage V(t) = βx(t). Our experimentally determined power spectral density will be that of the voltage, ṼṼ*/τ, not the position, x̃x̃*/τ. As the Fourier transform is linear, the Fourier transform of V(t) is related to that of x(t) by the calibration factor β:

    Ṽ(f) = β x̃(f).    (38)

Accordingly, if we experimentally measure V(t) and then calculate P_V(f), the PSD of the voltage data, then we expect

    P_V(f) = Ṽ(f)Ṽ*(f)/τ = β² x̃(f)x̃*(f)/τ,    (39)

    P_V(f) = β² P(f).    (40)

Keep in mind that the main prediction, Eq.
37, for P(f) was derived from continuous Fourier transforms assuming an infinite measurement time, whereas our data are collected in a discrete sampling over a finite interval τ. Because the discrete power spectrum P_V(f_j) is derived from a finite set of V(t_m), collected over a time interval τ and spaced ∆t apart, it is not expected to perfectly reproduce that prediction. However, the differences due to the finite acquisition time and sampling rate are well understood and predictable.

One aberration is aliasing. The highest frequency represented in P_V(f_j) is at j = L/2, or f_j = ∆f·L/2, which is just half the sampling rate and is called the Nyquist frequency f_Ny. If the true power spectrum is zero for all frequencies above f_Ny, then P_V(f_j) should agree well with the true P_V(f) at all f_j. However, if the true P_V(f) has components above f_Ny, these components show up as artifacts in P_V(f_j). Components in the true P_V(f) at frequencies near f = f_Ny + δf show up in the discrete version P_V(f_j) at frequencies f_j near f_Ny − δf; the true components are reflected about the Nyquist frequency. For example, for a 200 kHz sampling rate, the Nyquist frequency is 100 kHz, and oscillations in V(t) at 104 kHz show up in P_V(f_j) near f_j = 96 kHz. The effects of aliasing will be apparent in your data and can be dealt with easily.

The randomness of the trajectory over the finite time interval leads to power spectra that have random variations from the predictions; the PSDs will be noisy. The noise would decrease as we work toward the limit τ → ∞. However, it is not practical to take ever longer measurements, with correspondingly larger data sets. Data sets larger than a few hundred thousand data points are tedious to manage and analyze; the improvement in the result does not justify the extra effort of handling and processing such large data arrays. A far better way to approach the limit of τ → ∞ experimentally is just to collect a number of data sets of duration τ ∼ 1 s and then average the P(f) obtained from each set.

After each τ-sized V(t) is measured, its discrete Fourier transform Ṽ(f_j) is calculated and then used to determine its power spectral density P_V(f_j). After sufficient averaging of such P_V(f_j), the predicted behavior will begin to appear: a continuous part from P_T(f) and a sharp peak at f_d due to P_resp(f). This averaged PSD is fit to the prediction of Eq. 37 (with Eq. 40) to determine the parameters of the optical trap: the detector constant β, the drag coefficient γ, and the force constant k = 2πγf_c.

Exercise 5 In the optical trapping literature, typical reported values for the cutoff frequency are in the range f_c ≈ 10²-10³ Hz. Assuming that these correspond to 1 µm diameter spherical particles in water at room temperature (295 K), estimate the magnitude of the trap stiffness constant k. For f_c = 100 Hz, what displacement would result if the full weight of a 1 µm diameter silica sphere hung from a spring with this force constant? The equipartition theorem also applies to the average potential energy of a harmonic oscillator:

    (1/2) k ⟨x²⟩ = (1/2) k_B T.    (41)

Use this relation to find the rms deviation of the particle from its equilibrium position, √⟨x²⟩. Compare this rms displacement, and the size of the shift in the equilibrium position due to gravity/buoyancy, with the particle diameter.

Exercise 6 Make two sketches of the P_T(f) term in Eq. 37 for a particle in a trap with f_c = 100 Hz. The first sketch should use linear scales (P_T vs.
f), while the second should use a log-log scale (log P_T vs. log f).

Comparing the predicted P_V(f) with one actually determined from the measured QPD voltage vs. time data is done in two steps: one for the thermal component P_T(f) and one for the delta-function response P_resp(f). We will begin with a discussion of the latter.

The main theoretical feature of a delta function is that its integral over any region containing the delta function is one. Thus, the predicted integral W of the P_V(f) of Eq. 40 associated with the delta function in Eq. 37 is easily seen to be

    W = β² A² / [2(1 + f_c²/f_d²)].    (42)

In the experiment, the drive frequency f_d will be chosen so that there will be an exact integer number of complete drive oscillations over the measurement interval τ. This makes f_d one of the frequencies at which the P_V(f_j) is evaluated and should produce one high point in this PSD. You will determine the height of that point above the thermal background and multiply by the spacing ∆f between points to get the experimental equivalent of integrating P_V(f) over the delta function. In rare cases, you may see the experimental delta function spread over several f_j centered around f_d. In these cases, the experimental integral is the sum of the amounts by which these points exceed the thermal background, multiplied by ∆f.

The experimental value of W obtained this way is then used with Eq. 42 and the known stage oscillation amplitude A, the drive frequency f_d, and the value of f_c (determined in the next step) to determine the detector constant β.

The force constant k and the drag coefficient γ are found by fitting the non-δ-function portion of the experimental P_V(f) (f ≠ f_d) to the prediction of Eq. 37 (with Eq. 40). That is, for all values of f except f = f_d, the predicted PSD can be written

    P_V(f) = β² k_B T / [π²γ(f_c² + f²)].    (43)

For fitting purposes, this equation is more appropriately expressed

    P_V(f) = B / (f_c² + f²),    (44)

where B is predicted to be

    B = β² k_B T / (π²γ).    (45)

The experimental P_V(f_j) is then fit to Eq. 44 over a range of f (not including the point at f = f_d), which then determines the fitting parameters B and f_c. With f_c determined directly from this fit, the experimental W is used with Eq. 42 to determine β. Then, if we assume T is equal to the measured room temperature, the fitted B can be used in Eq. 45 with f_c and β to determine the value of γ. Finally, the force constant k = 2πγf_c (Eq. 21) is determined, and the three trap parameters γ, β, and k are then known.

Apparatus Overview

Our optical trap is based on the design of Appleyard et al. The design uses an inverted microscope to focus an infrared diode laser beam onto the sample and detects the deflection of that beam with a quadrant photodiode detector (QPD). The design also illuminates the sample with white light and generates an image of the sample on a video camera. The details are somewhat complex, as the same optical elements perform several functions simultaneously. The layout is described below. Refer to Fig.
2 while considering the following two optical paths:

The optical path for the infrared laser: The diode laser is a semiconductor device that outputs its (λ = 975 nm) infrared beam to a single-mode optical fiber. A converging lens (#1) receives the diverging light exiting the fiber and collimates it to a beam with a diameter of ∼10 mm, sufficient to fill the back aperture of the trapping objective (#3). A pair of mirrors and the dichroic mirror (#2, infrared-reflecting) are used to steer the laser beam vertically upward, along the central axis of the objectives. The beam enters the back aperture of the lower microscope objective (#3) (100× Nikon 1.25 NA, oil-immersion), which brings the beam to a focus at the sample, forming the optical trap. The upper microscope objective (#4) captures and re-collimates the infrared light that has passed through the sample and directs this energy upward. A dichroic (infrared-reflecting) mirror (#5) then deflects the beam toward a converging lens (#6), which focuses the beam onto the quadrant photodiode detector (#7, QPD).

The optical path for visible light: An LED (#8) generates white light that passes through the dichroic mirror (#5) and is focused by the upper objective (#4) onto the sample. Transmitted light from the sample area near the trap is gathered by the lower objective (#3) and, with lens (#9), is brought to an image at the camera.

In this design the infrared laser serves two roles. It traps the particle at the focus, and it is also used to detect the motion of the particle within the trap. If there is no particle in the trap, the infrared laser beam propagates along the optical axis of the instrument (i.e., along the common cylindrical axis of the microscope objectives). The recollimated beam exiting the upper objective travels parallel to the optical axis, and converging lens #6 brings this beam to a focus just a bit in front of the center of the QPD. However, if a small particle is near the laser focus, the beam is refracted away from the optical axis. The collimated beam leaving the upper objective will then propagate at an angle to the optical axis, and so it is focused by converging lens #6 to a spot that is displaced from the center of the QPD. The QPD reports this displacement as a voltage V, which is proportional to the particle's displacement x from the laser focus (see Eq. 39). The QPD actually detects deflections in both the x and y directions, reporting two independent voltages V_x and V_y that you will measure.

Hardware

Data acquisition board

The computer communicates with the tweezers apparatus via a USB connection or through a multifunction data acquisition board (DAQ, National Instruments PCI-MIO-16E-4) located inside the computer. See Figure 3. The DAQ board supplies voltages that move the positioning stage in the xy plane and it reads voltages from the quadrant photodiode, the raw data for analyzing particle motion in the trap. Two components of the DAQ board are used to do these tasks: an analog-to-digital converter (ADC) and two digital-to-analog converters (DACs).
The ADC and both DACs are 12-bit versions, meaning they have a resolution of 1 part in 2¹² = 4096 of their full-scale range. For example, on a ±10 V range setting, voltages are read or written to the nearest 4.88 mV. An amplifier in the DAQ allows for full-scale ranges on the ADC from ±10 V to ±50 mV. The DAC range is ±10 V. The ADC can read analog voltages at speeds up to 500,000 readings per second, and the DACs can write output voltages at similar speeds. The ADC has a high-speed switch called a multiplexer that allows it to read voltages on up to eight different inputs.

A cable connects the DAQ card in the PC to an interface box (National Instruments BNC-2090) that has convenient BNC jacks for connecting coaxial cables between the various apparatus components and the DAQ input and output voltages.

Laser

The laser diode package (Thorlabs, PL980P330J) is premounted to a single-mode fiber which brings the laser light to the apparatus. The package is mounted on a temperature-stabilized mount (Thorlabs, LM14S2) kept at constant temperature by the (Thorlabs, TED200C) temperature controller. An interlock requires the temperature controller to be on before the laser current controller will operate. The laser current is adjusted and stabilized by a current controller (Thorlabs, LDC210C). The laser current can be read off the controller. The laser turns on at a threshold current around 70 mA and then the laser power increases approximately linearly with current over threshold.

Internal to the laser diode package, a small, constant fraction of the laser beam is made to fall on a photodiode which generates a current proportional to the laser beam power. This current is measured in the laser current controller and can be read if you set the front panel meter to display I_PD. The supply allows you to scale I_PD with any proportionality constant for display as P_LD. By independently measuring the actual laser power P out of the 100× objective as the laser current is varied, the proportionality between P and I_PD was confirmed, and the proportionality constant has been adjusted so that P_LD gives the laser power P out of the objective. Of course, P will not be P_LD if the beam path is blocked or if the alignment of the laser is changed. The instructor should be involved if a new calibration is deemed necessary.

Controller hub

There are six Thorlabs "T-Cube" electronic modules mounted in the (Thorlabs, TCH-002) T-Cube controller hub. The modules, described below, are used to electronically control the position of the microscope slide and to control and read the quadrant photodiode detector. The hub supplies a signal path between different modules and between all six modules and the computer's USB bus.

Quadrant Photodiode Detector

A quadrant photodiode detector (Thorlabs, PDQ80A) is used to produce voltages that are linearly related to the position of a particle in the neighborhood of the laser focus. It has four photodiode plates arranged as in Fig.
4 around the origin of the xy-plane. The plates are separated from one another by a fraction of a millimeter and extend out about 4 mm from the origin. The QPD receives the infrared light from the laser and outputs a current from each quadrant proportional to the power on that quadrant. The Thorlabs TQD-001 module powers the QPD and processes the currents. It does not output the currents directly. Instead, it converts them to proportional voltages V₁-V₄ by additional electronics and produces the following three output voltages. The x-diff voltage is the difference between the summed voltages of the two quadrants on the +x side and the two quadrants on the −x side; the y-diff voltage is the analogous difference for y; and the sum voltage is the sum of all four. V_x is thus proportional to the excess power on the two quadrants where x is positive compared to the two quadrants where x is negative, and similarly for the y-diff voltage. The sum voltage is proportional to the total laser power on all four photodiodes.

With no scattering, the light that is brought to a focus by the 100× objective diverges from there and is refocused by the 10× objective and lens #6 so that it again comes to a focus just a bit in front of the QPD. The rays diverge from this focus before impinging on the photodiodes, so that by the time they get there, the spot is a millimeter or two in diameter and, when properly centered, will hit all four quadrants equally.

With a particle in the trap, the scattered and unscattered light interfere and produce an interference pattern on the QPD that depends on the location of the scatterer. For small variations of the particle's position from equilibrium, the QPD voltages V_x and V_y produced by these patterns are proportional to the particle's x and y positions. That is, V_x = βx and V_y = βy. While the range of linearity between V and x is quite small, on the order of a few microns, it is still large compared to the typical motions of a particle in the trap (see Exercise 5). Significantly, this voltage responds very quickly to the particle's position, so that high-frequency motion (to 100 kHz or more) is accurately represented by V(t).

The QPD module has buttons for control of its function and it has an array of LEDs that show whether the beam intensity pattern is striking the QPD roughly in the center (center LED lit) or off-center (off-center LEDs lit). The QPD is mounted on a manually controlled, relatively coarse xy stage that will be centered by hand during calibration.

Microscope stage and piezoelectric control

The microscope stage is the component that supports the microscope slide between the 100× and 10× objectives. It is built around the Thorlabs MAX311D 3-axis flexure stage, which provides three means for positioning the slide in the trapping beam.

First, and very crudely, you can manually slide the stage across the table for coarse positioning in the x- and y-directions. You will need to do this to put your slide into the beam, but you will find it difficult to position the sample to better than about ±1 mm using this method.

Second, the stage has a set of micrometers that can be turned manually to move the stage.
Over a range of about 300 µm, the micrometers operate as differential screws, in which two internal threads with slightly different pitches turn simultaneously, producing a very fine translating motion of around 50 µm/revolution. As you continue turning the micrometer spindle, the differential operation runs out and the motion switches to a coarser control in which the stage moves around 500 µm/revolution. The coarse control can be obtained directly by turning the micrometers at the knurled ring up from the spindle, which bypasses the differential screw.

Third, inside the stage there are three piezoelectric stacks that allow the computer to move the stage along each of the x, y, and z axes. Piezoelectrics ("piezos") are crystals that expand or contract when voltages are applied across two electrodes, which are deposited on opposite sides of the crystal. The Thorlabs TPZ-001 piezo controller modules supply these control voltages (up to about 75 V). The piezos provide very fine and precise control of stage motion, but only over short distance ranges: the full 75 V range generates only about 20 µm of stage motion. The piezo voltage can be read from an LED indicator on the face of the module, which also has control buttons and a knob for the various modes of controlling this voltage.

There are several ways to use the piezo controller. Manual mode, in which the voltage is controlled via the knob on the module, will be disabled and is not used in our setup. (The differential micrometers are far more convenient manual controls.) Only the following two electronically controlled methods will be used.

One method is to use the DACs to supply analog voltages in the range of 0-10 V to the Thorlabs piezo-control module. The Thorlabs module amplifies these voltages to the 0-75 V scale and sends them to the piezo. This is the fastest method and is the main one used in our apparatus. Alternatively, the computer can communicate with the control module over the USB bus to request a desired piezo voltage.

Unfortunately, piezos have strong hysteresis effects. Their length, i.e., how far they will move the stage, depends not only on the present electrode voltage, but also on the recent history of this voltage. One method to deal with piezo hysteresis is to obtain feedback data from a strain gauge mounted alongside the piezo. The stage has one strain gauge for each of the three axes. They are read by the Thorlabs TSG-001 strain gauge modules, which are placed next to the matching piezo modules in the controller hub.

The strain gauge is a position transducer with an output voltage that is very linear in the displacement caused by the corresponding piezo. The output voltage from the strain gauge module is internally wired to its corresponding piezo module through the controller hub. The displacement of the strain gauge caused by the piezo is indicated on a scale on top of the strain gauge module in units of percentage of the full scale: 0-100% for motion of about 20 µm.

Using strain gauge feedback, the controller allows you to supply USB commands requesting stage positions as a percentage (0-100) of the full-scale motion (i.e., 0-100% of 20 µm). This is the second mode of motion control used in this experiment. Electronic feedback circuitry adjusts the actual voltage sent to the piezo to achieve that percentage on the strain gauge. Our setup has only two strain gauge controllers, which are used only on the x and y piezos. (We do not use the z-piezo.)
Another issue with the stage is cross talk between the x, y, and z motions of the stage due to its flexure design. The stage is capable of roughly 4 mm of travel in each direction, but the motions can couple to one another. Right around the middle position of the stage, changing the x, y, or z piezo or micrometer should only move the stage in the x, y, or z direction. However, as you move away from this central position, changing the x-piezo or micrometer, for example, will not only change the x-position of the stage; the flexure design causes small changes in the y- and z-positions as well. In addition, the motion calibration factors (how much stage motion corresponds to a given micrometer or piezo change) will also change. For example, when the stage is near the limit in one or more of the three directions (±2 mm), changing the x piezo, say, will move the stage in the y- and z-directions by as much as 30% of the amount moved in the x-direction. Consequently, it is worthwhile to try to operate the stage near the middle of its x, y, and z ranges.

Camera

A Thorlabs DCU-224C color video camera is used to observe and monitor the happenings in the trap. It is also the means for transferring a length scale from a calibration slide to the motion caused by the piezo; a short worked example of this conversion is sketched below. The camera has a rectangular CCD sensor with pixels arranged in a 1280 × 1024 Cartesian grid with 5.3 µm spacing. Thus distances measured on the image in pixels will scale the same way in x and y with real distances on the slide.

Software

UC480

The camera is controlled and read using the UC480 software program. This program has features for drawing or making measurements on the images, and for storing frames or video sequences. Select the Optimal Colors option at load time, then hit the Open camera button, the upper-left item on the upper toolbar. The default camera settings generally work fine, but if there are image problems, many camera settings can be adjusted to improve image quality.

Note that there is a bad light path in our apparatus that throws some non-image light onto the camera sensor. This artifact can be eliminated by partially closing the adjustable aperture directly under the camera.

Become familiar with the measurement tools and the drawing tools on the utility toolbar arrayed along the left edge of the screen. In particular, you will use the Draw circle, Draw line, and Measure tools. Other settings and features can be found on the upper toolbar or the menu system. Start with the contrast and white balance set for automatic optimization. Learn how to set and clear an AOI, or area of interest (a rectangular area on the sensor), so that only the data from that area is sent from the camera to the program. This increases the frame rate compared to using the full sensor.

Initialize program

This program sets up all the T-Cube modules to run in the appropriate modes used in other programs. It sets the piezo and strain gauge feedback channel, zeros the piezos' outputs, and then zeros the strain gauges. Finally, it sets the x- and y-piezos near their midpoint voltages of 37.5 V, and sets the operating mode to add this 37.5 V to the voltages generated by signals applied to the external input. In this way, the piezo is near the middle of its extension, and so both positive and negative translation in x and y can be generated by supplying positive or negative DAC voltages to the back of the piezo controller.
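As an illustration of the length-scale transfer mentioned in the Camera section above, the sketch below converts a pixel measurement made with the UC480 tools into microns on the slide. The calibration-slide spacing and the measured pixel counts are hypothetical numbers; use your own measurements.

# Hypothetical example of transferring a length scale from a calibration slide.
# A feature of known physical size is measured in camera pixels; the resulting
# image scale then converts any later pixel measurement (for example, the stage
# travel produced by the Oscillate Piezo program) into microns on the slide.
known_feature_um = 10.0        # assumed spacing of a calibration-slide feature
measured_pixels = 182.0        # assumed pixel measurement from the UC480 line tool

scale_um_per_pixel = known_feature_um / measured_pixels
print(f"image scale = {scale_um_per_pixel:.4f} um/pixel")

stage_motion_pixels = 50.0     # assumed stage motion seen on the camera, in pixels
print(f"{stage_motion_pixels:.0f} pixels corresponds to "
      f"{stage_motion_pixels * scale_um_per_pixel:.2f} um on the slide")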
Oscillate Piezo program

This program creates sinusoidal waveforms from the two DACs for driving the x- and y-inputs of the piezo controllers. It is used in conjunction with the UC480 Camera program to calibrate the amplitude of the stage motion when driven by a waveform of a given amplitude and frequency.

Raster Scan program

This program is used to scan the x- and y-piezos in a slow scan mode, using strain gauge feedback, while time averaging the signals from the QPD. The raster scan starts with a fixed voltage applied to the x-piezo while the y-piezo is scanned back and forth over a user-defined range. Then the x-piezo is moved a small amount in one direction and the y-piezo is scanned again. This move-x-and-scan-y process is repeated until the x-piezo has also scanned over the user-defined range. At each xy value, the program digitizes the V_x and V_y signals from the QPD module and displays the results for each of these signals.

This program is used to see how the QPD works, give a sense for the intensity pattern on the QPD, determine an approximate detector constant β, and see how β depends on both the laser intensity and the objective focusing.

Tweezers program

The main measurements are made from this program. It has two tabbed pages along the right. One is labeled Acquire and is for setting the data acquisition parameters, measuring the V_x and V_y signals from the QPD, and computing and averaging the PSD. The other tab is labeled Fit and is for fitting the PSD to the predictions of Eq. 37.

The default parameters for data acquisition should work fine. The number of points in each scan of V_x and V_y vs. t is forced to be a power of 2 (2¹⁸ = 262144 is the default) so that fast Fourier transforms can be used. The sampling rate (number of readings per second) for the ADC is determined by dividing down a 20 MHz clock on the DAQ board. The divisor is the number of 20 MHz clock pulses between each digitization. The maximum speed of the ADC is around 250 kHz when reading two channels (V_x and V_y). The default value of 105 for this divisor leads to a sampling rate around 190 kHz. With 2¹⁸ samples in each scan, each scan lasts 2¹⁸ × 105/(20 × 10⁶ Hz) = 1.38 s. The inverse of this time (0.73 Hz) is the frequency spacing between points in the PSD.

The ADC has an instrument amplifier that allows bipolar full-scale (F.S.) voltages from ±50 mV up to ±10 V. The F.S. range control should be set as small as possible without letting the V_x or V_y signals hit the range limits.

The two DACs used to drive the stage piezos send discretized sinusoidal waveforms with adjustable amplitudes and with an adjustable phase between them. You can set the amplitude A_x or A_y to zero to get one-dimensional back-and-forth stage motion. However, it is recommended that the amplitudes be set equal, with a 90° phase difference, so that the stage will move with nearly circular motion. This way, no matter what direction the QPD's x and y responses are aligned to, the stage motion will be sinusoidal with the chosen amplitude in those directions.

Recall that the drive frequency for the stage must be made equal to one of the discrete points in the QPD power spectrum. For this to happen there must be an exact integer number M of stage oscillations spanning the data acquisition time. The default setting for this number is 32 and, with a data acquisition time of 1.38 s, gives a drive frequency f_d = 23.2 Hz (point 32 of the PSD).
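The timing numbers quoted above follow directly from the clock arithmetic. A short sketch of the calculation, using only the default settings stated in this section, is:

# Acquisition timing used by the Tweezers program, reproduced from the default
# settings quoted above (20 MHz clock, divisor 105, 2^18 samples, 32 drive periods).
clock = 20e6            # DAQ timebase, Hz
divisor = 105           # clock pulses between ADC conversions
n_samples = 2**18       # samples per scan
n_periods = 32          # stage oscillations per scan

rate = clock / divisor                 # ADC sampling rate
tau = n_samples / rate                 # duration of one scan
df = 1.0 / tau                         # frequency spacing of the PSD
f_d = n_periods / tau                  # drive frequency (PSD point number n_periods)

print(f"sampling rate   = {rate / 1e3:.1f} kHz (Nyquist {rate / 2e3:.1f} kHz)")
print(f"scan duration   = {tau:.3f} s")
print(f"PSD spacing     = {df:.3f} Hz")
print(f"drive frequency = {f_d:.1f} Hz")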
The output waveform is constructed with 512 (= 2⁹) points per period of the sinusoid. This is the maximum on-board buffer size for each DAC and is not adjustable. The program must then calculate a separate (integer) divisor of the 20 MHz clock that determines the output rate for each point on this output waveform. In order for M periods of the output waveform to be exactly equal to the total sampling time for the ADC, M must have common factors with the clock divisor for the ADC. For example, if the default divisor (for the ADC) of 3 × 5 × 7 = 105 is used, allowed values for M would be any that can be made with single factors of 3, 5, and 7 and any number of factors of two. As a second example, an ADC divisor of 5 × 5 × 4 = 100 gives a sampling rate of 200 kHz, and allowed values of M will be any that can be made with one or two factors of 5 and any number of factors of two. Selecting disallowed values for M (those that would produce a non-integer clock divisor) will disable the continue button.

Once the data acquisition parameters have been accepted, by hitting an enabled continue button, they cannot be changed without restarting the program. One exception is the amplitude and phase of the drive waveforms. They can be adjusted by setting the new values in the controls for them and hitting the change amplitude button.

The fitting routine, accessed from the fit tab, has several features designed for the data from this apparatus. First note the channel selector just above the graph. It is used to switch between the two channels (0 or 1, i.e., the QPD x- or y-directions). The two cursors on the graph must be set to determine the points in between that will be used in the fit to Eq. 44. The PSD is normally displayed on a log-log scale, but this can be changed using the tools in the scale legend at the lower left of the graph. Our PSDs show that many high-frequency and some low-frequency noise components are being picked up in the V_x and V_y signals. They might originate from external light sources, electrical interference, table and apparatus vibrations, etc. These unwanted signals typically appear as spikes on top of the normal Lorentzian shape of the PSD.

Spikes at the high-frequency end of the PSD can be eliminated from consideration by setting the second cursor below them. Set the high cursor to include enough points above f_c, but below most of the high-frequency spikes. Spikes between the cursors can still be eliminated from the fit by setting their weighting factors to zero. This is done programmatically, by telling the program how to distinguish these spikes from the normal Lorentzian data. The criterion for eliminating the spikes thus requires an understanding of the normal and expected noise in the PSD.

Ordinary random variations in V(t) over any finite time interval lead to noise in the Lorentzian PSD that becomes smaller as more data are averaged. Watch P_V(f) as you average 50 scans and then stop the acquisition. Note that the size of the noise (not the unwanted spikes) on the vertical log scale is nearly constant. While the band of noise may appear a bit wider at higher frequencies, this is at least partially an artifact of the log f scale for the horizontal axis; at higher frequencies the points are more closely spaced, so that 2-sigma and 3-sigma variations appear more often per unit length along the f-axis.
Uniformly sized noise on a log scale implies the fractional uncertainty in P V (f) is constant. Estimate the ±1-sigma fractional uncertainty that would include about 68% of the data points in any small region of frequency. As you should have noticed above, this fraction becomes smaller as more data is averaged. Check that it is roughly constant for all f, even as P V (f) varies by one or more orders of magnitude. Enter this fraction in the control for frac. unc. (fractional uncertainty). Then enter the rejection criterion in the reject control. For example, setting the frac. unc. control to 0.1 indicates that near any f, 68% of the P V (f) data points should be within ±10 percent of the middle value. Setting the reject control to 3 would then exclude from the fit (set the weights to zero) any points more than 30% "off."

The program uses the fitted PSD at any f as the central value for the rejection. For example, with the settings given above, any points more than 30% from the current estimate of the fitted curve would be thrown out. The initial guess parameters define the current estimate of the fitted P V (f) according to Eq. 44, and these estimates must be set close enough that good points are not tossed. Click on the show guess button to see the current estimate of the fitted P V (f) and the resulting rejected points, which are shown with overlying ×'s. Clicking on the do fit button initiates a round of nonlinear regression iterations excluding the rejected points. After the fitting routine returns, click on the copy button to transfer the ending parameter values from the fit to the initial guess parameters and display the new points that would be rejected in another round of fitting. Continue clicking the copy and then the do fit buttons until there are no further changes in the fit.

P V (f) varies over several orders of magnitude, and the fact that the fractional uncertainty is roughly constant over this wide range indicates that even the points out in the tails of the Lorentzian contain statistically significant information. If an equally weighted fit were used, the points in the tails would not contribute to the fitting parameters, as their contribution to the chi-square would be too small compared to the points at lower frequencies where P V (f) is much larger. Consequently, the fit should not be equally weighted. Because the data-point y-uncertainties σ i are proportional to y i , the fit uses weights 1/σ i ^2 proportional to 1/y i ^2 . If the fitting function accurately describes the data and the correct fractional uncertainty is provided, the normalized deviations between the data and the fit, (y i − y i fit )/σ i , should be approximately Gaussian-distributed with a mean of zero and a variance of one, and the reduced chi-square for the fit should be about one.
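The same weighting and rejection scheme can be sketched offline. The snippet below (Python with NumPy/SciPy; it is not the LabVIEW fitter, and a plain Lorentzian stands in for Eq. 44) uses σ i proportional to y i , zeroes out points far from the current fit to mimic the frac. unc. and reject controls, and finishes with the reduced chi-square check described above. The synthetic data and the 10% fractional uncertainty are illustrative values only.

```python
# Hedged sketch of the weighted Lorentzian fit with spike rejection.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, p0, fc):
    return p0 / (1.0 + (f / fc) ** 2)

def fit_psd(f, psd, guess, frac_unc=0.1, reject=3.0, rounds=5):
    params = np.asarray(guess, dtype=float)
    for _ in range(rounds):                      # mimic repeated copy / do fit
        model = lorentzian(f, *params)
        keep = np.abs(psd - model) <= reject * frac_unc * model   # spike cut
        sigma = frac_unc * psd[keep]             # sigma_i proportional to y_i
        params, _ = curve_fit(lorentzian, f[keep], psd[keep],
                              p0=params, sigma=sigma, absolute_sigma=True)
    return params, keep

# Synthetic example: Lorentzian with 10% fractional noise plus one "spike".
rng = np.random.default_rng(0)
f = np.arange(1, 5000, 0.73)
psd = lorentzian(f, 1e-3, 300.0) * rng.normal(1.0, 0.1, f.size)
psd[np.argmin(np.abs(f - 500))] *= 10            # unwanted interference spike

(p0_fit, fc_fit), kept = fit_psd(f, psd, guess=(1e-3, 200.0))
print(f"fc ≈ {fc_fit:.1f} Hz, rejected {np.count_nonzero(~kept)} points")

# Reduced chi-square as a check on the assumed fractional uncertainty.
model = lorentzian(f[kept], p0_fit, fc_fit)
chisq = np.sum(((psd[kept] - model) / (0.1 * psd[kept])) ** 2)
print(f"reduced chi-square ≈ {chisq / (kept.sum() - 2):.2f}")
```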
Check the graph of normalized deviations to verify the expected behaviors and check for systematic deviations. (Click on the alternate tab control for the graphs to find this graph.) This graph also shows excluded points and can be used to make sure valid points are not being rejected. Even though the rejection criterion depends only on the product of the frac. unc. and reject controls, the frac. unc. control should be adjusted to get a reduced chi-square around one and the reject control should be adjusted so that only undesired points are rejected from the fit. Correctly setting both controls really only matters if you are interested in determining the fitting-parameter uncertainties. Recall that the fitting-parameter covariance matrix scales with the assumed covariance matrix for the input y i . With σ i set proportional to y i , setting the fractional uncertainty to force a reduced chi-square of one determines the proportionality factor to use in order to get the best estimates for the true input and output covariance matrices.

Experimental overview

The basic tasks are to measure the trap strength, calibration constant, and drag coefficient for small particles of silica (SiO 2 ) roughly 0.5-1.5 µm in diameter. Having gained this experience with the apparatus, you can then experiment with biological trapping by measuring, e.g., the force generated by a swimming bacterium.

Note that it takes a couple of days to prepare the bacterial culture for this experiment, so you will need to plan ahead by notifying your instructor of the date when you plan to perform the bacterial study.

Laser Safety

Note that although this experiment is not dangerous, any eye exposure to the infrared laser beam would be very dangerous. The beam is very intense, with a power of several hundred mW, and it is invisible. Serious and permanent eye injury could result if the beam enters your eye. Proper laser eye safety precautions must be used at any time that the laser is running.

The apparatus is designed to keep the infrared laser beam enclosed within its intended optical path and away from your eyes. The instrument is safe to use as long as the laser remains enclosed. Therefore, laser safety means that you should not operate the laser when the beam enclosure is open or any portion of the optical pathway has been opened or disassembled. If you open or disassemble any components while the laser is powered, you could expose yourself to the IR beam and suffer a potentially severe injury. Do not attempt to align or adjust any part of the infrared laser optical path.

The only point in the apparatus where the beam leaves its confining path is at the sample slide, between the two microscope objectives. In this region the beam is strongly converging/diverging and is not likely to present a hazard to the user. However, you should use common sense and avoid diverting the beam out of this region. Do not place shiny, metallic, or reflective objects like mirrors or foil into that region. Do not put your face close to the slide if the laser is on.
General concerns

In addition to laser safety issues, please take care to observe the following precautions.

• Alignment of the optical system: All optical elements have already been carefully aligned and optimized. The only optical adjustments you will need to make involve the xyz positioning of the microscope stage and the xy positioning of the QPD. Do not attempt to move, disassemble, or adjust the optical fiber or any of the mirrors, lenses, and other optical components. If you disturb the laser alignment, the optical trap will cease to function and it will require tedious and time-consuming realignment. Any disassembly of the apparatus could also lead to accidental and very dangerous eye exposure to the laser beam.

• The 100× objective: Please take care that nothing (except immersion oil and lens paper) ever touches the lens of the lower microscope objective. In focusing or adjusting the stage, you should not crash or scrape the slide against the lens.

• The laser optical fiber: Please do not touch or handle the optical fiber. It is extremely delicate and costly to repair.

• The laser settings: The laser beam power is adjustable up to a maximum current of I LD = 650 mA. The laser also has a temperature controller that has been programmed to maintain the laser at its optimum temperature. You can adjust the laser current right up to the maximum limit value, but please do not attempt to change the limit or the laser control temperature.

Procedures

The following procedures should probably be done in the order outlined below. They will take more than one day. Be sure to follow the procedures in the Cleaning Up section before leaving.

Initialization

Turn on the power supply for the controller hub. Wait a few seconds for the controllers' firmware to initialize and then run the Initialize program. Check that the LED light source is on.

Camera calibration

Find the Thorlabs R1L3S3P grid slide and determine which side has the grid patterns. Place a small drop of immersion oil over the smallest (10 µm) grid pattern and place the slide on the sample stage with the calibration markings facing downward (oil side down). Then carefully slide the stage into position over the objective, watching that you do not crash the slide into it: the bottom of the slide should be above the objective.

Start the UC480 camera program. Using the manual z micrometer, lower the slide down while watching the camera image for the grid to come into focus. You will need to get the slide quite close to the objective lens (less than a mm) to get into focus.

[Putting a drop of oil on the coverslip, putting the slide onto the stage (coverslip down) and into the area just above the 100× objective will henceforth be referred to as "installing" the slide. "Uninstalling" will mean raising the stage, sliding it away from the trap, and removing the slide.]
The 10 µm grid is rather small, so coarse and fine adjustments in the x- and y-directions may also be needed just to get it into view. If you are having trouble finding it, be sure the slide is correctly oriented with the grid side down. You may want to find the focus with one of the larger grid patterns first. Now you can determine the pixel calibration constant: how many microns at the sample area correspond to one pixel on the camera image? Note this is not the actual pixel size (5.3 µm/pixel), but rather that size divided by the magnification, or roughly 0.05 µm/pixel. Use the camera software measuring tool to determine the separation in pixels of known lengths on the grid slide. (The grid squares are 10 × 10 µm.) Our camera pixels are square and you should find the same values in the x- and y-directions. Determine the camera calibration constant in µm/pixel.

Next use the x- and y-micrometers on the stage to determine their sensitivity on the fine (differential) operation. The micrometer fine-control spindles are marked with 50 divisions per rotation. Because the distance moved for each division is somewhat variable, we will call them m-units; 50 m-units per rotation. Use the camera and grid markings to determine these fine-control m-units per actual distance moved. This calibration constant should be near 1 m-unit per µm of real motion. However, remember that this calibration can change a bit depending on how far the stage is from its central position.

Uninstall the grid slide, clean it with alcohol, wiping it gently with a sheet of lens paper, and place it back in its protective case.

Sample sphere preparation

You will need to prepare two solutions of 1.2 µm diameter silica beads. Since you will need to make measurements on a single sphere, getting the concentration correct is very important. Too few spheres and it will be difficult to find any. Too many and the spheres will interfere with one another during the measurements.

Have the instructor show you the proper use of the pipettors and vortex mixer. Be sure to use the vortex mixer just before sampling from the stock solution, any intermediate solutions, and just before loading your final solution into the slide. The spheres tend to settle, and the vortex mixer is needed to get them uniformly distributed in the suspending liquid. If you do not mix, the density of spheres will be wrong. Moreover, if you don't mix the main stock solution before taking a sample, you would be changing the concentration of the remaining stock solution.

Prepare approximately 1.5 ml of a 150:1 dilution of the stock solution of the 1.2 µm spheres in deionized (DI) water. Even this diluted solution is still much too dense for measurement and another 150:1 dilution is needed. For stuck spheres, this second dilution should be into 1 M NaCl water, which makes them stick to the slide. For free spheres, use DI water again. Only make the free-sphere dilution at this point. Be sure to mark the vials with the sphere size, dilution factor, date, and whether it is in water or a salt solution. At a dilution of 150^2 = 22,500, there should be an average of a few beads in the camera image.
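As a rough check of the numbers in this step, a short script can turn a measured pixel separation across a known grid length into a calibration constant and confirm the overall dilution factor. The 200-pixel reading below is a made-up example; substitute your own measurement from the camera software.

```python
# Illustrative arithmetic for the camera calibration and the serial dilution.
# The measured pixel separation (200 px across one 10 µm grid square) is a
# hypothetical example value, not a measured one.
grid_square_um = 10.0
measured_px = 200.0                       # example measuring-tool reading
cal_um_per_px = grid_square_um / measured_px
print(f"camera calibration ≈ {cal_um_per_px:.3f} µm/pixel")   # ~0.05 expected

# Serial dilution: two successive 150:1 dilutions of the bead stock.
dilution_total = 150 * 150
print(f"total dilution = 1:{dilution_total}")                 # 1:22,500
```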
The Ibidi slide has wells on each side where the solution is introduced. The first sample needed will be the 1:22,500 dilution in DI water. Put about 50 µl in one well and use a syringe to suck it through the channel, taking care not to suck air into the channel. (Add another 50 µl as the well empties.) It is easier to see the liquid coming into the channel if the slide is placed on a dark background. When filled, add or remove solution from the wells as necessary to get it about half-way up in each well. If the heights are unequal, there will be a pressure difference which will drive the fluid from one well to the other until the pressure difference is eliminated. Even if you get the well heights equal by eye, small differences can still drive the fluid, and it can take several minutes for the motion to cease. It can be very difficult to see spheres if they are moving with anything but the smallest flow velocity.

Initial observation of a trapped sphere

Make sure the laser is off. Install the slide prepared above. As you bring the slide down, look for individual spheres undergoing Brownian motion. Spheres and small dirt particles will often become stuck to the coverslip at the bottom of the channel or to the glass at the top of the channel. Find these surfaces and measure their separation in m-units to be sure they are, in fact, the top and bottom of the 100 µm channel. Being certain the focus is in the channel and just above the top of the coverslip is often helpful in the hunt for free spheres. Spheres will be more dense at the bottom of the channel, but should be found higher up as well.

When you see spheres, turn on the laser to a power of approximately 15 mW and move the stage around manually as you try to capture a sphere in the trap. You will know that a particle is trapped because it will remain in the same location and same focus, even as you adjust the stage from side to side as well as up and down. Mark the trap position on the video image with the circle tool on the camera software and save this drawing. If there are too many spheres to catch only one in the laser focus, dilute the 150^2 solution by another factor of 10 or more and try again.

A particle trapped in the z-direction will not change its focus (appearance on the image) when you adjust the stage up and down in the z-direction. While you are moving the slide, the trapped particle's position is fixed because the laser focus is fixed relative to the 100× objective. If you raise the slide enough, however, sooner or later the sphere will hit the bottom of the channel and then go out of focus if you continue raising the slide. Similarly, if you lower the slide enough, the sphere will hit the top of the channel. If the trap is weak, you may have problems keeping a sphere trapped near the top of the channel. Because of the optical properties of the objective and sample, the trap force in the z-direction is expected to weaken as the sphere height increases.
Play around with this configuration a bit. Is the sphere density about right? Can you keep a single sphere trapped for many minutes, or do other spheres often wander in? While you can compare measurements from different spheres with nearly the same diameter, small variations in their trap constants will affect the comparisons. Their dependence on laser power, for example, is smoother when all measurements are from the exact same sphere. To keep spheres from wandering into an already filled trap, reduce the sample concentration. Setting it so there is about one sphere per camera image is typically about right. Working a bit higher in the channel helps in this regard as well.

Another effect arises because spheres resting near the bottom of the channel and near the laser beam (and thus directly below the trapped sphere) tend to get drawn even nearer the beam, i.e., they preferentially collect at the bottom of the channel just under the trapped sphere. These spheres foul the predicted behavior of the QPD signals. In particular, the spheres moving around near the bottom of the channel add mostly low-frequency components (below f c ) where the PSD spectrum is predicted to be constant. If your PSD spectrum shows this anomalous behavior, lowering the sphere concentration usually fixes the problem.

Check the top and bottom z-micrometer positions as you demonstrate that a trapped sphere can be moved from the top to the bottom of the channel. Save a short video sequence of an isolated, trapped sphere. Be sure you have recorded the trap position with a circle and have saved it as a drawing. It will be needed in later procedure steps.

Always be sure to make measurements at least 20 µm from the bottom of the channel. Viscosity effects cause the motion of the liquid around the spheres to change when the spheres are close to the bottom or top surface of the channel. Beyond 20 µm or so, the surfaces are effectively infinitely far away as far as viscosity effects go.

Uninstall the slide, empty it, and refill it with an appropriate dilution of spheres in salt water to get, at most, a few per screen. This "stuck sphere" slide will be used in the next procedure.

Piezo calibration

You will next determine motion calibrations involving the use of the piezo controls on the stage. Doing a piezo calibration requires observing a small object, such as a sphere, stuck to the slide. Install the slide prepared above and find a relatively isolated single sphere stuck to the coverslip.

The direct DAC method of driving the piezo is used in the main Tweezers program, where the stage is set into sinusoidal oscillations of known frequency and amplitude. Consequently, a calibration constant is needed from the amplitude of the DAC drive voltage to the amplitude of the stage motion. To perform this calibration, use the Oscillate Piezo program, which allows for convenient adjustments of the two DAC sinusoidal voltages. Their amplitudes as well as their common frequency and their phase difference are adjustable.

Because of the nonlinear piezo behavior, an applied sinusoidal voltage of amplitude V DAC will cause nearly sinusoidal oscillations of the position with an amplitude A that depends nonlinearly on V DAC . Run the Oscillate Piezo program while viewing a stuck sphere. With V DAC at zero, the piezo doesn't move and the amplitude of the stage motion is zero. As you increase V DAC , the stage motion amplitude increases in a near-linear fashion with a small quadratic component.
If you apply equal-amplitude oscillations to both the x- and y-piezos and set them 90° out of phase with one another, the stage should move in a circle with a radius given by Eq. 46. Or, you can set either the x- or y-oscillation amplitude to zero so that the stage moves back and forth in only one dimension. Use either method. Measure the amplitude A versus V DAC in the range from 1 to 3.5 V at a 1 Hz frequency and fit that data to Eq. 46 to determine a 1 and a 2 . Be sure that there is no constant term in the fit (as in Eq. 46), because the amplitude of the motion must be zero with no drive voltage. The peak-to-peak amplitude (2A) can be measured (in pixels) from camera images where you try to see and measure either the diameter of the circular motion or the extrema of the linear oscillations. Be sure to measure to the center of the spheres, a task more difficult than it sounds as the extrema are often faint and blurred. These measurements are then converted to real stage motion by the pixel-to-stage-distance factor determined from the previous grid pattern measurements. Setting a small AOI (area of interest) around the sphere will speed up the frame rate, which can be quite useful in this step.

Next, measure the stage amplitude with V DAC = 1 V at several drive frequencies up to 40 Hz. Then try it at a 3 V drive amplitude. At higher frequencies, the stage accelerations for a given amplitude are larger and the stage inertia can affect the motion.

When measuring the PSD for particles in the trap, the stage oscillations will be in the 10-30 Hz range, but will be at very low amplitudes: a few tenths of a micron driven by a V DAC of a few tenths of a volt. These oscillations are a bit too small to measure accurately with the camera. Instead, the calibration performed in this step should be extrapolated to these low amplitudes.

Leave the stuck-sphere slide mounted as it will be used in the next procedure.

QPD calibration

For the in situ calibration described in the theory section, the detector constant β will be determined from the PSD of a trapped sphere on an oscillating stage. It is nonetheless worthwhile to look at another method, described here, for determining β using a stuck sphere. This method shows why an in situ calibration is so much better and demonstrates some of the limitations involved in either calibration method.

Load the trap position drawing into the UC480 image and install the stuck-sphere slide. Move the slide so that there is a relatively isolated single sphere in the vicinity of the trap circle. Run the raster program. It starts in the calibration tab. Set the x and y strain gauge percentages in various combinations from 20 to 80% and measure the particle position on the camera for each x, y percentage. [...] affect how well it will be focused on the camera. While your initial raster scan was at roughly the same focus as a trapped sphere, do another with the sphere moved slightly higher and/or lower (by changing the z-focusing) such that there is a modest change in the appearance of the image. Note how much the stage was moved, run another raster scan, and check how this affects β. What does this say about the assumption that β is a constant? How would a z-dependence of β affect the analysis?
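A small script can carry out the amplitude-versus-drive fit described above. The sketch below (Python) assumes Eq. 46 has the no-constant-term form A = a 1 V DAC + a 2 V DAC ^2 suggested by the text (a near-linear response with a small quadratic component), and the amplitude values listed are made-up placeholders for your own camera measurements.

```python
# Sketch of the piezo amplitude calibration fit: A(V) = a1*V + a2*V^2, with no
# constant term so that A(0) = 0. The data below are hypothetical placeholders.
import numpy as np

v_dac = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5])         # drive amplitude (V)
amp_um = np.array([0.52, 0.80, 1.10, 1.42, 1.76, 2.12])   # stage amplitude (µm), example

# Least-squares fit with basis {V, V^2}; omitting a constant column forces A(0) = 0.
design = np.column_stack([v_dac, v_dac**2])
(a1, a2), *_ = np.linalg.lstsq(design, amp_um, rcond=None)
print(f"a1 ≈ {a1:.3f} µm/V, a2 ≈ {a2:.4f} µm/V²")

# Extrapolate to the small drive amplitudes used during PSD measurements.
v_small = 0.3
print(f"A({v_small} V) ≈ {a1 * v_small + a2 * v_small**2:.3f} µm")
```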
Full trap calibration

Adjusting the piezos with the strain gauge feedback cannot always be done fast enough, particularly when applying an oscillatory motion to the stage as for the in situ calibration method. In this case, the computer's two DACs will be used to apply sinusoidal voltages directly to the input of the x and y piezo modules. The piezo module amplifies those voltages by about 7.5, adds them to the 37.5 V offset, and sends them on to the actual piezo transducers in the stage.

Reuse or make a new slide with 1.2 µm spheres in DI water at an appropriate dilution to get about one sphere per CCD image. Install it between the objectives, find a trapped sphere, and adjust the stage's z-position to get it about 30 µm above the coverslip.

Start the Tweezers program. Zero the x and y V DAC amplitudes so the stage does not oscillate. Set the acquisition and timing parameters. Begin acquiring the QPD signal and averaging the calculated PSD P V (f j ). When it is sufficiently smooth, stop the averaging, switch over to the Fit tab, and do a fit of the PSD to Eq. 44.

Turn on the piezo oscillation of the stage and set the V DAC that would give a stage oscillation amplitude A = 0.1-0.2 µm. Begin averaging the PSD and perform a full analysis to determine β, γ, and k. Repeat at different laser powers. Plot the trap strength k, the calibration constant β, and the drag coefficient γ as a function of laser power. Discuss the results. Are k and β directly proportional to laser power? Is γ constant? Can you see any systematic behavior with power? Why might this be reasonable?

Possible additional studies

Repeat the calibration procedure for other sphere sizes. Our largest are 5.1 µm in diameter and present several difficulties associated with their large size; they are about 75 times heavier than the 1.2 µm spheres. We have spheres of diameter 0.5, 0.75, 1.0, 1.21, 1.5 and 5.1 µm. Most have not been studied. Except for the 1 µm spheres, the stock solutions are all 10% spheres by weight. Thus, to get the same concentration in particles per unit volume, the dilutions must scale in proportion to the sphere volume: twice the sphere diameter, 1/8 as much dilution. Scaling laws for the parameters can be investigated. (A short dilution-scaling sketch follows at the end of this section.)

Investigate vesicle transport in onion cells. You will have to bring in your own onion. Be sure it is fresh (hard and tight, not mushy). Be sure to take any leftover onion home. Do not dispose of it in the lab trash. It stinks up the room rather quickly.

Prepare a slide of onion epidermal cells in 0.1 M salt solution. Look for vesicles (bags of nutrients, waste, or other cell material) floating in the cytoplasm and others traveling along specialized filaments. Find one and trap it. Move the slide to see if it is freely floating or stuck to a filament. How much can a filament stretch? What happens when you turn off the trap?

Trap a vesicle on a filament and watch as other vesicles back up along that filament. Turn off the trap and describe your observations. Trap an isolated vesicle on a filament and lower the laser power until it breaks free. Repeat for other vesicles on filaments. Is the minimum laser power the same every time? What might affect the distribution of minimum laser powers? Are there other quantitative measurements you can make?
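For the sphere-size study mentioned above, the dilution scaling can be computed directly. The sketch below (Python) starts from the 1:22,500 dilution used for the 1.2 µm spheres and assumes equal weight-percent stock solutions (true for all but the 1 µm spheres, as noted above), so the particle number density of a stock scales as 1/d^3.

```python
# Sketch of the sphere-size dilution scaling: equal weight-percent stocks, so
# the dilution needed for the same particle number density scales as 1/d^3.
reference_d_um = 1.2
reference_dilution = 150 * 150            # the 1:22,500 used for 1.2 µm spheres

for d_um in [0.5, 0.75, 1.0, 1.21, 1.5, 5.1]:
    dilution = reference_dilution * (reference_d_um / d_um) ** 3
    print(f"{d_um:4.2f} µm spheres: dilute roughly 1:{dilution:,.0f}")
```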
Cleaning Up

When finished for the day, shut off the laser temperature and current controllers. Close all open LabVIEW programs and then turn off power to the T-Cube hub. Most importantly, this turns off all voltages to the piezos. Leaving a voltage on the piezos over long periods can change their properties. Turn off the power strip so the LED will turn off as well. (The computer and monitors are on a separate power strip.)

Uninstall the slide and use a syringe to run DI water through the channel two or three times. Then fill it with DI water and leave it in a 200 ml cylinder also filled with DI water. This storage technique will help prevent breakage of the fragile coverslip. (If it is left full, as the water evaporates, the coverslip will crack.) The Ibidi slides can be reused, but check the coverslip and dispose of the slide if it has any cracks.

Use a single sheet of lens paper (not a Kim-Wipe, which is very abrasive) to wipe the oil from the 100× objective. Do not scrub. Wipe gently once in one direction.

Clean up the apparatus and sample preparation area. Dispose of tissues in the trash can and dispose of glass or plastic slide material or pipettor tips in the disposal box by the sink.

Figure 1: Ray model for the trapping force at the focus of a laser beam. A particle displaced horizontally (A) or vertically (B) from the focus (at x = y = z = 0) refracts the light away from the focus, leading to a reaction force that pulls the particle toward the focus; (C) schematic of the restoring forces F x and F z versus displacement x and z of the particle from the trap center. Near the beam focus, F x ≈ −kx and F z ≈ −k z z.

Figure 2: A and B show two views of the optical system, illustrating the paths of the infrared trapping rays (red arrows in A) and the visible illumination rays (blue arrows in B) as they pass through the same optical elements. The sample is contained in a 100 µm-deep microchannel slide C, at the focus D of the 100× oil-immersion lens.

Figure 3: Schematic of the electronic interface between the computer and the tweezer apparatus. The DAQ board in the PC has an analog-to-digital converter that reads data from the QPD, as well as digital-to-analog converters that supply control voltages for the xy positioning of the microscope stage.

Figure 4: The QPD measures the intensity on four separate quadrant photodiodes.